This research investigates the effect of topic sensitivity on panelists' motivations and data quality. An Internet survey in which topic sensitivity varied (high, low) was conducted with panelists using the Survey Participation Inventory (SPI). A two-factor structure based on intrinsic versus extrinsic motivations was used to cluster respondents. A two-way factorial MANOVA between the sensitivity conditions and clusters assessed self-report data quality, completion time, extreme response style, and response dispersion. Panelists' motivations decreased in the high sensitivity topic condition. However, extrinsic rewards appeared to fortify intrinsic motives without seriously compromising data quality for panelists asked to respond to sensitive questions. This article considers the theory-practice gap in professional sales. Scholars note a discrepancy between scholarly knowledge and the practice of selling. We study three exemplar gaps using an extensive qualitative dataset, mainly in-depth interviews, in order to better understand why these gaps exist. Theory-practice gaps in listening, follow-up, and adaptability have not been empirically confirmed in light of rapid change in the sales field. After confirming these gaps, we explore antecedents, uncovering several underlying reasons for gap formation. We consider theoretical and managerial implications. In particular, we elaborate on the need for theory to be more relevant and contextualized. This study examines consumer-based brand equity of private-label branding and relative significance of its dimensions in creating a strong private-label brand. Based on brand equity theory and private-label branding research, a survey instrument was developed, scale measures were pretested, and the final purified survey was administered online to Wal-Mart shoppers. 
The study found that awareness/familiarity and perceived quality are key to reducing the perceived risk and increasing the perceived value of private-label brands in building brand equity. Also, perceived risk, perceived value, and brand loyalty for Wal-Mart have significant mediating roles in creating private-label (i.e., Great Value) brand equity. The roles of employees' emotions are examined as influencers of job satisfaction, affective organizational commitment, and turnover intentions within a frontline employee context. Using a sample of 126 retail employees, structural equation modeling is used to test the theoretically developed model. Findings suggest that deep acting emotions and emotional exhaustion significantly impact job satisfaction, whereas both deep acting emotions and job satisfaction predict affective organizational commitment. When predicting turnover intentions, surface and deep acting emotions, along with emotional exhaustion, each significantly impact turnover intentions; however, neither job satisfaction nor affective organizational commitment is a significant predictor. In this article we posit that firms signal resource availability (stock returns and risk) and favorable reputation (network of ties) to attract alliance partners. We use stock market and network characteristics to predict a firm's propensity to engage in product alliances. Using 1,877 observations of 302 biopharmaceutical firms over a twenty-year period, we find that an increase in stock returns and a decrease in stock risk are associated with an increase in firms' product alliances. The position of the firm in its network improves product alliance formation, whereas the structure of the overall network (density) has no such effect. 
Counterproductive Work Behaviors (CWBs) are based on the harm, or intended harm, they cause to organizations and/or stakeholders (Spector and Fox 2005), while sales deviance is based on the violation of organizational norms (Robinson and Bennett 1995). Utilizing this definitional difference, this article explores a gap in the sales deviance literature that allows potentially unidentified, intentionally harmful behaviors that do not violate organizational norms to exist. In addition, we propose strategic motivations for such behavior. Results suggest that motivations include long-term attitudes toward CWBs, moral obligation, consensus beliefs, productive equity, self-image congruence, and impression management. In the nineteenth century the British repeatedly attempted to improve the quality of Indian cotton. This was a major enterprise involving the importation of thousands of pounds of exotic seeds, the establishment of experimental farms and outreach programs, and the hiring of American overseers to transfer American methods to the subcontinent. The British failed due to their inability to overcome bioclimatic challenges and to replicate the American South's efficient marketing structures. There is little evidence to support the recent claims that the British sought to import slave management methods or that more coercion was needed for success. Based on daily journals and personal interviews with surviving family members, this article examines the life of Ed Robinson, a renting farmer in the Blackland Prairie of Texas, who succeeded financially despite the inherent inequities of the crop-lien system and the economic crises in agriculture between the 1920s and 1938. Robinson (1886-1958) and his landlord conducted farm business outside the parameters of typical landlord-tenant relations, and while many renters faced eviction for non-payment of their debts, the Robinsons' landlord allowed them to be virtually self-sufficient. 
Not even the evictions of tenants associated with the New Deal's crop-reduction payments posed a real threat to the Robinsons. Although few renters achieved the Robinsons' success, historians have since discovered a class of landless farmers who survived the depression. The small island nation of Sao Tome and Principe was one of the world's leading producers of cocoa beans in the early twentieth century. The tropical climate, the abundant precipitation, and the fertile volcanic soils of the islands contributed to a rapid development of cocoa estates, but only the interaction of myriad other factors can explain the quick rise of the islands' cocoa production from a few hundred kilograms in 1878 to nearly twelve thousand tons in 1900. This paper explores the development of Sao Tome and Principe cocoa production from its beginnings to a position as a global leader in the cocoa market. This article highlights the important function of family and kinship networks in the pastoral industry of the Port Phillip District and Victoria, Australia, during the nineteenth century. Using the core case study of the extended Cameron family, or the Cameron "clan," from the Scottish Highlands in the Western District of Victoria, it demonstrates how family networks assisted in the accumulation and consolidation of large pastoral properties and enterprises and thus aided the agricultural entrepreneurialism of migrants who saw greater commercial opportunities throughout the Empire than at home. Although prior research has found clear impacts of schools and school quality on property values, little is known about whether charter schools have similar effects. Using sale price data for residential properties in Los Angeles County from 2008 to 2011, we estimate the neighborhood-level impact of charter schools on housing prices. 
Using an identification strategy that relies on census-block fixed effects and variation in charter penetration over time, we find little evidence that the availability of a charter school affects housing prices on average. We do find, however, that when restricting to districts other than Los Angeles Unified and counting only charter schools located in the same school district as the household, housing prices fall in response to an increase in nearby charter penetration. High school graduation rates are a central policy topic in the United States and have been shown to be stagnant for the past three decades. Using student-level administrative data from New York City Public Schools, I examine the impact of compulsory school attendance on high school graduation rates and grade attainment, focusing the analysis on ninth and tenth grade cohorts. I exploit the interaction between the school start-age cutoff and compulsory attendance age requirement to identify the effect of compulsory schooling. I find that an additional year in compulsory attendance leads to an increase of 9 to 12 percent in the probability of progressing to grades 11 and 12, and raises the probability of graduating from high school by 9 to 14 percent, depending on the specification. This paper investigates how accountability pressures under No Child Left Behind (NCLB) may have affected students' rate of overweight. Schools facing pressure to improve academic outcomes may reallocate their efforts in ways that have unintended consequences for children's health. To examine the impact of school accountability, we create a unique panel dataset containing school-level data on test scores and students' weight outcomes from schools in Arkansas. We code schools as facing accountability pressures if they are on the margin of making Adequate Yearly Progress, measured by whether the school's minimum-scoring subgroup had a passing rate within 5 percentage points of the threshold. 
We find evidence of small effects of accountability pressures on the percent of students at a school who are overweight. This finding is little changed if we control for the school's lagged rate of overweight or use alternative ways to identify schools facing NCLB pressure. The No Child Left Behind Act (NCLB) has been criticized for encouraging schools to neglect students whose performance exceeds the proficiency threshold or lies so far below it that there is no reasonable prospect of closing the gap during the current year. We examine this hypothesis using longitudinal data from 2002-03 through 2005-06. Our identification strategy relies on the fact that as NCLB was phased in, states had some latitude in designating which grades were to count for purposes of a school making adequate yearly progress. We compare the mathematics achievement distribution in a grade before and after it became a high-stakes grade. We find in general no evidence that gains were concentrated on students near the proficiency standard at the expense of students scoring much lower, though there are inconsistent signs of a trade-off with students at the upper end of the distribution. Several papers have proposed that the grading system affects students' incentives to exert effort. In particular, the previous literature has compared student effort under relative and absolute grading systems, but the results are mixed and the implications of the models have not been empirically tested. In this paper, I build a model where students maximize their utility by choosing effort. I investigate how student effort changes when there is a change in the grading system from absolute grading to relative grading. I use a unique dataset from college students in Chile who faced a change in the grading system to test the implications of my model. 
My model predicts that, for low levels of uncertainty, low-ability students exert less effort with absolute grading, and high-ability students exert more effort with absolute grading. The data confirm that there is a change in the distribution of effort. The fin de siecle American fascination with electricity has been well documented. David E. Nye, Carolyn Marvin, and other historians have explored the hopes and fears of the new technology in the nineteenth and early twentieth centuries, and they have brought laboratories, living rooms, and world's fairs back to life in the process. But turn-of-the-century writers and inventors sparked another fantasy that remains unaccounted for in these histories: the dream of enjoying electricity without the machinery or the corporations that generate it. This article recovers that dream of a wireless future. By reading Tesla in tandem with Edward Bellamy, Charlotte Perkins Gilman, and other electrical utopians, this paper illuminates the utopian dimension of a major inventor; it challenges the conventional interpretation of the utopian novel as a vehicle for economic and political concerns; and it expands the history of electricity to account for a provocative and underexamined fantasy of wirelessness. Most importantly, it argues for the inextricable interrelationship of technological and literary production at the turn of the twentieth century. This article outlines the process that Thomas Mann followed to create an imaginary experiment that was able to demonstrate the relativity of time. Einstein's thought experiments, among other sources, served as inspiration for the author; however, even though Mann followed the physicist's methods and applied the same initial conditions, which he learned through newspaper articles, his results were different. 
As the author continually emphasized in The Magic Mountain, the main reason for this was that science based its claims on the idea that time can be measured; however, from the author's point of view, humans have no way to corroborate this. This essay offers a Deleuzean reading of desire in the relationship between the eponymous protagonist of Zakes Mda's The Whale Caller (2005) and a whale named Sharisha. In the setting of a highly stratified ecotourist village in South Africa where most characters relate to marine animals only through consumption and capitalization, the human-whale relationship between the protagonist and Sharisha offers a different mode of comportment. While some Animal Studies scholars read the novel as evidence of animal subjectivity and call for a recognition of animal rights in South African law, this essay contends that the novel's more significant contribution to ecocritical thought is its insistence on positing nonhuman desire as a mode of resistance to neocolonial capitalist violence. The essay also engages this discussion of nonhuman desire as resistance with postcolonial critiques of both resistance literature and posthuman accounts of subjectivity. H. D.'s prose novel Asphodel has often been discussed in terms of its codes of lesbian desire. A posthumanist lens, however, can instead illuminate the ways that informational code operates throughout the novel. In this article, I use N. Katherine Hayles's theories on the dialectics of pattern/randomness to parse how Asphodel's Hermione engages in a machinic identity with Morse code's technologies and universal language. Although this language ultimately fails, it nonetheless becomes a heteroglossia that allows Hermione to communicate and express herself in varied ways. I thus posit that, in using the technologies and literary techniques of the early twentieth century to initiate these engagements, Asphodel exhibits a particularly modernist posthumanism. 
The purpose of this work is to investigate the convergence of the solutions of the following max-type difference equation $z_n = \max\{1/z_{n-s},\, P_n/z_{n-t}^{\alpha_n}\}$, $n = 0, 1, 2, \ldots$, where $s, t \in \{1, 2, 3, \ldots\}$ with $s \neq t$, $\alpha_n \in (0, 1)$ is an $s$-periodic sequence, and $\{P_n\}_{n=0}^{+\infty}$ is a sequence satisfying $P_n \in (0, 1]$ for every $n \geq 0$. We show that if $\{z_n\}_{n=-r}^{+\infty}$ ($r = \max\{s, t\}$) is a positive solution of the above equation with the initial conditions $z_{-r}, z_{-r+1}, \ldots, z_{-1} \in (0, +\infty)$, then $\lim_{n \to \infty} z_n = 1$ or $\{z_{2sn+k}\}_{n=0}^{+\infty}$ is eventually monotone for every $0 \leq k \leq 2s - 1$. Further, we show that if $P_n$ is a periodic sequence, $s = 1$ and $t$ is even, then $\lim_{n \to \infty} z_n = 1$ or $\{z_n\}_{n=-t}^{+\infty}$ is eventually periodic with period 2. In this paper we introduce a class (F mu.bC0)-C-m,k (alpha) of concave functions by using the generalized Srivastava-Attiya operator. Also, we get distortion bounds for this class. In this paper, we prove analogues of the classical Sturm comparison and oscillation theorems for the Sturm-Liouville problem together with boundary-transmission conditions on two disjoint intervals. We present a new version of Sturm's comparison and oscillation theorems. The obtained results generalize the recently obtained oscillation and comparison theorems for the regular Sturm-Liouville problem containing transmission conditions. A mapping $f: X \times X \to Y$ is called additive-quadratic if $f$ satisfies the system of equations $f(x + y, z) = f(x, z) + f(y, z)$, $f(x, y + z) + f(x, y - z) = 2f(x, y) + 2f(x, z)$. In this paper, using the fixed point method, we prove the Hyers-Ulam stability in matrix fuzzy normed spaces associated to the following additive-quadratic functional equation $f(x + y, z + w) + f(x + y, z - w) = 2f(x, z) + 2f(x, w) + 2f(y, z) + 2f(y, w)$ for all $x, y, z, w \in X$. 
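As a numerical illustration of the max-type recurrence $z_n = \max\{1/z_{n-s},\, P_n/z_{n-t}^{\alpha_n}\}$ discussed above, the following sketch iterates the equation for assumed parameters ($s = 1$, $t = 2$, constant $\alpha_n = 0.5$ and $P_n = 0.8$; these values and the initial conditions are illustrative choices, not taken from the paper). Since here $s = 1$ and $t$ is even, the orbit must either converge to 1 or become periodic with period 2; for these parameters it settles into the period-2 regime with $z_{n+1} = 1/z_n$:

```python
def iterate_max_type(z_init, s, t, alpha, P, n_steps):
    """Iterate z_n = max(1/z_{n-s}, P_n / z_{n-t}^{alpha_n}).

    z_init: list of r = max(s, t) positive initial values z_{-r}, ..., z_{-1}.
    alpha, P: callables n -> alpha_n in (0, 1) and n -> P_n in (0, 1].
    """
    r = max(s, t)
    z = list(z_init)                       # z[i] holds z_{i - r}
    for n in range(n_steps):
        i = n + r                          # list index of z_n
        z.append(max(1.0 / z[i - s], P(n) / z[i - t] ** alpha(n)))
    return z[r:]                           # the values z_0, z_1, ...

# Illustrative parameters (assumed, not from the paper):
# s = 1, t = 2, alpha_n = 0.5, P_n = 0.8, z_{-2} = 2.0, z_{-1} = 0.5.
z = iterate_max_type([2.0, 0.5], s=1, t=2,
                     alpha=lambda n: 0.5, P=lambda n: 0.8, n_steps=200)
# The tail alternates between two values with z_{n+1} = 1/z_n,
# i.e. the solution is eventually periodic with period 2.
print(z[-2], z[-1])
```

Changing the initial conditions to $z_{-2} = z_{-1} = 1$ gives the other alternative of the theorem: the solution is identically 1.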
In this paper we give closed-form expressions for some two-dimensional systems of nonlinear rational partial difference equations of second order. We use a new method to prove the results by (odd-even) double mathematical induction. As a direct consequence, we investigate and derive the explicit solutions of some partial difference equations and some (systems of) ordinary difference equations. In this paper, two-dimensional Chlodowsky variant q-based Bernstein-Schurer-Stancu operators are introduced. Korovkin-type approximation theorems in different function spaces are studied. The error of approximation by using the full modulus of continuity and the partial moduli of continuity is given. Moreover, we introduce a generalization of our operators and investigate its approximation in a more general weighted space. In this paper, we investigate behaviors of well-defined solutions of the following system $x_{n+1} = \frac{A_1 y_{n-(3k-1)}}{B_1 + C_1 y_{n-(3k-1)} x_{n-(2k-1)} y_{n-(k-1)}}$, $y_{n+1} = \frac{A_2 x_{n-(3k-1)}}{B_2 + C_2 x_{n-(3k-1)} y_{n-(2k-1)} x_{n-(k-1)}}$, where $n \in \mathbb{N}_0$, $k \in \mathbb{Z}^+$, the coefficients $A_1, A_2, B_1, B_2, C_1, C_2$ and the initial conditions are arbitrary real numbers. This paper presents an iterative collocation numerical approach based on interpolating subdivision schemes for the solution of non-linear fourth order boundary value problems involving ordinary differential equations. Numerical evidence suggests that the scheme converges to a smooth approximate solution of the non-linear fourth order boundary value problem. The convergence of the approach is also discussed. The main purpose of this article is to explore applications of subdivision schemes in the fields of physics and engineering. 
In this paper, using the direct and fixed point methods, we investigate the generalized Hyers-Ulam stability of the quintic functional equation $2f(2x + y) + 2f(2x - y) + f(x + 2y) + f(x - 2y) = 20[f(x + y) + f(x - y)] + 90f(x)$ in random normed spaces under the minimum t-norm. In this paper, we investigate the boundedness and compactness of generalized composition operators on Zygmund type spaces and Bloch type spaces with normal weight. Little work on the convergence and error estimates of approximate series solutions exists in the literature. For general nth-order linear differential equations with initial conditions, a rigorous proof of convergence for the series solutions given by the homotopy analysis method is first presented in this paper. Furthermore, an upper bound for the absolute error of these approximations is obtained. For any d-dimensional neutral stochastic functional differential equation with infinite delay and m-dimensional Brownian motion, we introduce a sequence of approximate equations and offer sufficient conditions such that the approximate solutions converge with probability one to the solution of the given equation. This iterative method, called the generalized Z-algorithm, is a generalization of many well-known analytic iterative methods. In this paper, based on the generalized parameterized inexact Uzawa method (GPIU) presented by Zhang and Wang [Applied Mathematics and Computation, 219 (2013) 4225-4231], we introduce and study an improved generalized parameterized inexact Uzawa method (IGPIU) for singular saddle point problems. Moreover, theoretical analysis shows that the semi-convergence of the IGPIU method can be guaranteed by suitable choices of the iteration parameters. 
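The quintic functional equation quoted above admits quintic monomials as solutions; as a quick sanity check, the following sketch verifies numerically that the assumed candidate $f(x) = x^5$ (an illustrative solution, not a characterization of the general solution) satisfies the equation:

```python
def f(x):
    return x ** 5  # candidate quintic solution

def quintic_defect(x, y):
    """Difference between the two sides of the quintic functional equation
    2f(2x+y) + 2f(2x-y) + f(x+2y) + f(x-2y) = 20[f(x+y) + f(x-y)] + 90f(x)."""
    lhs = 2 * f(2 * x + y) + 2 * f(2 * x - y) + f(x + 2 * y) + f(x - 2 * y)
    rhs = 20 * (f(x + y) + f(x - y)) + 90 * f(x)
    return lhs - rhs

# The defect vanishes (up to floating point rounding) on a grid of test points.
worst = max(abs(quintic_defect(x / 7.0, y / 3.0))
            for x in range(-10, 11) for y in range(-10, 11))
print(worst)
```

The same check with, say, $f(x) = x^4$ produces a large defect, which is what the Hyers-Ulam stability theory then controls for approximate solutions.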
Finally, numerical experiments are carried out, which show that the IGPIU method with appropriate parameters improves the convergence of the iteration efficiently when solving singular saddle point problems arising from classic incompressible steady-state Stokes problems. In this paper, we study linear differential equations arising from Bessel polynomials and their applications. From these linear differential equations, we give some new and explicit identities for Bessel polynomials. With the advancement of stochastic calculus, stochastic differential equations have become very common in different fields such as engineering, population dynamics, physics, system sciences, ecological sciences, medicine and financial mathematics. In several stochastic dynamic systems, one assumes that the future state of the system does not depend on its past states. However, under close analysis, it becomes evident that most realistic models would contain some of the past states of the system, and one would require stochastic functional differential equations in order to study such systems. This paper presents the existence theory for stochastic functional differential equations in the G-framework (in short, G-SFDEs). A comparison theorem has been developed in a bid to obtain the required results. It is ascertained that G-SFDEs, whose coefficients may be discontinuous functions, have more than one continuous and bounded solution. Traditional multiple decision making (MDM) requires the assumption of additivity and independence among decision-making criteria, with the weights given by decision makers based on an additive measure. However, most criteria have inter-dependent or interactive characteristics in real decision-making problems. 
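For readers unfamiliar with Uzawa-type methods, here is a minimal sketch of the classical (exact) Uzawa iteration for a nonsingular saddle point system; the GPIU and IGPIU methods discussed above are parameterized, inexact refinements of this prototype. The matrices, right-hand sides and relaxation parameter below are illustrative assumptions:

```python
import numpy as np

def uzawa(A, B, f, g, omega, iters=100):
    """Classical Uzawa iteration for the saddle point system
        [A  B] [x]   [f]
        [B' 0] [y] = [g],
    with A symmetric positive definite. Each step solves the A-block
    exactly and updates y by a relaxed gradient step on the constraint."""
    x = np.zeros_like(f)
    y = np.zeros_like(g)
    for _ in range(iters):
        x = np.linalg.solve(A, f - B @ y)   # x-update: A x = f - B y
        y = y + omega * (B.T @ x - g)       # y-update: relaxation step
    return x, y

# Illustrative 2x2 example (assumed data, not from the paper).
A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [0.0]])
f = np.array([1.0, 1.0])
g = np.array([0.5])

x, y = uzawa(A, B, f, g, omega=1.0)
print(x, y)
```

The iteration converges when $\omega$ is small relative to the Schur complement $B^T A^{-1} B$; the singular case treated in the abstract requires the more careful semi-convergence analysis the authors describe.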
Furthermore, with respect to multiple attribute group decision making (MAGDM) problems in which the attribute weights and the expert weights take the form of real numbers and the attribute values take the form of interval-valued intuitionistic fuzzy sets, we propose interval-valued intuitionistic fuzzy Choquet integral operators based on the Archimedean t-norm and discuss their calculation in this paper. First, we introduce some concepts of fuzzy measures, interval-valued intuitionistic fuzzy sets and Archimedean t-norms. Then, the representations and transformations of Archimedean t-norms and Archimedean t-conorms are obtained, and the operational rules of interval-valued intuitionistic fuzzy sets based on Archimedean t-norms are presented under an intuitionistic fuzzy environment. Finally, some fuzzy Choquet integral operators aggregating interval-valued intuitionistic fuzzy sets based on Archimedean t-norms are given. In this paper, we generalize the concept of homomorphisms and derivations in intuitionistic fuzzy normed algebras to 2-dimensional functional equations. Furthermore, we investigate the Hyers-Ulam stability of bi-homomorphisms and bi-derivations in intuitionistic fuzzy ternary normed algebras concerning a 2-dimensional bi-additive functional equation. In this paper, new efficient quadrature rules are established using a newly developed special type of kernel for n-times differentiable mappings, having five steps. Some previous inequalities are also recaptured as special cases of our main inequalities. At the end, the efficiency of the newly developed quadrature rules is discussed. In the present paper, second order duality for multiobjective non-linear programming is investigated under second order generalized $(F, b, \phi, \rho, \theta)$-univex functions. The weak, strong and converse duality theorems are proved. Further, we also illustrate an example of $(F, b, \phi, \rho, \theta)$-univex functions. 
Results obtained in this paper extend some previously known results of multiobjective nonlinear programming in the literature. In this paper, we prove the Hyers-Ulam stability of a fractional differential equation of order $\alpha \in (1, 2)$ with certain boundary conditions. In recent years there has been increasing interest in modifying linear operators so that the new versions reproduce some basic functions. This idea motivated us to modify the sequence of linear Bernstein-Stancu type operators. Using numerical examples, we show that these operators present a better degree of approximation than the original ones. In this note the modified Bernstein-Stancu operators are studied in regard to uniform convergence and global smoothness preservation. In this paper, we consider linear differential equations satisfied by the generating function for Hermite polynomials and derive some new identities involving those polynomials. In this paper, we introduce the spaces $c(\lambda^2, \Delta)$ and $c_0(\lambda^2, \Delta)$, which are BK-spaces of non-absolute type, and we prove that these spaces are linearly isomorphic to the spaces $c$ and $c_0$, respectively. Moreover, we give some inclusion relations and compute the $\alpha$-, $\beta$- and $\gamma$-duals of these spaces. We also determine the Schauder bases of $c(\lambda^2, \Delta)$ and $c_0(\lambda^2, \Delta)$. Lastly, we give some matrix transformations between these spaces and others. The notions of (almost) stable cubic set, stable element, evaluative set and stable degree are introduced, and related properties are investigated. Regarding internal (external) cubic sets and the complement of a cubic set, their (almost) stableness and unstableness are discussed. Regarding the P-union, R-union, P-intersection and R-intersection of cubic sets, their (almost) stableness and unstableness are investigated. In this paper, we investigate some properties of Chebyshev polynomials arising from non-linear differential equations. 
From our investigation, we derive some new and interesting identities on Chebyshev polynomials. In this paper, we are interested in the blowup behavior of the solution to a degenerate and singular parabolic equation $u_t = (x^{\alpha} u_x)_x + \int_0^l u^p\,dx - ku^q$, $(x, t) \in (0, l) \times (0, +\infty)$, with the nonlocal boundary conditions $u(0, t) = \int_0^l f(x) u(x, t)\,dx$, $u(l, t) = \int_0^l g(x) u(x, t)\,dx$, $t \in (0, +\infty)$, where $p, q \in [1, \infty)$, $\alpha \in [0, 1)$ and $k \in (0, \infty)$. In view of the comparison principle, we investigate the conditions for global existence and blowup of the solutions. Moreover, under some suitable hypotheses, we discuss the global blowup and the uniform blowup profile of the blowup solution. In this paper, we introduce Kantorovich-type Bernstein-Stancu-Schurer operators $K_{n,p,q}^{\alpha,\beta}$ based on the concept of q-integers. We investigate statistical approximation properties and establish a local approximation theorem; we also give a convergence theorem for Lipschitz continuous functions. Finally, we give some graphics to illustrate the convergence properties of the operators for some functions. In this paper, we study the exact values of the generalized von Neumann-Jordan constant $C_{NJ}^{(p)}(X)$ for $X$ being the $\ell_{\infty}$-$\ell_1$ and $\ell_q$-$\ell_1$ spaces. Moreover, we show some new conditions for the uniform normal structure of a Banach space $X$. In this paper the iteration of soft continuous functions is investigated and their discrete dynamical systems in soft topological spaces are defined. Some basic concepts related to discrete dynamical systems (such as soft omega-limit set, soft invariant set, soft periodic point, soft nonwandering point, and soft recurrent point) are introduced into soft topological spaces. 
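The Chebyshev polynomials mentioned above satisfy the classical three-term recurrence $T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)$ with $T_0 = 1$, $T_1 = x$, together with the defining identity $T_n(\cos\theta) = \cos(n\theta)$. A quick numerical check of this standard relation (a background fact, not one of the paper's new identities):

```python
import math

def chebyshev_T(n, x):
    """Evaluate T_n(x) via the recurrence T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t_curr = 1.0, x          # T_0, T_1
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

# Check against the defining identity T_n(cos t) = cos(n t).
theta = 0.7
for n in range(8):
    assert abs(chebyshev_T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-12
print("recurrence matches cos(n*theta)")
```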
Soft topological mixing and soft topological transitivity are also studied. At last, soft topological entropy is defined and several of its properties are discussed. In this paper, we prove the generalized Hyers-Ulam stability of the additive functional inequality $\|f(ax + by + cz) + f(bx + ay + bz) + f(cx + cy + az)\| \leq \|(a + b + c)f(x + y + z)\|$ in Banach spaces, where $a \neq b \neq c \in \mathbb{R}$ are fixed real numbers with $|a + b + c| < 3$. In this manuscript, we give coupled fixed point results for generalized $(\psi, \phi)$-weak contractions satisfying a rational type expression in the context of partially ordered G-metric spaces. The derived results generalize the result of K. Chakrabarti (K. Chakrabarti, Coupled fixed point theorems with rational type contractive condition in a partially ordered G-metric space, Journal of Mathematics, Volume 2014, Article ID 785357, 7 pages). To demonstrate our result, and to distinguish it from the previous one, we give a suitable example. We introduce the concept of intuitionistic fuzzy BCK-submodules of a BCK-module with respect to a t-norm and an s-norm, and present some basic properties. The purpose of this paper is to introduce some new sequence spaces of fuzzy numbers defined by lacunary ideal convergence using a generalized difference matrix and Orlicz functions. We also study some algebraic and topological properties of these classes of sequences. Moreover, some illustrative examples are given in support of our results. 
In the paper, the authors establish an exponential representation for a function involving the gamma function and originating from an investigation of the Catalan numbers in combinatorics, find necessary and sufficient conditions for the function to be logarithmically completely monotonic, introduce a generalization of the Catalan numbers, derive an exponential representation for the generalization, and present some of its properties. The notion of the meet set based on two subsets of a lower BCK-semilattice X is introduced, and related properties are investigated. Conditions for the meet set to be a (positive implicative, commutative, implicative) ideal are discussed. The meet ideal based on subsets and the plus ideal of two subsets in a lower BCK-semilattice X are also introduced, and related properties are investigated. Using the meet operation and addition, a semiring structure is induced. In this paper, we investigate some properties of solutions of some types of q-shift difference differential equations. In addition, we also generalize the Rellich-Wittich-type theorem about differential equations to the case of q-shift difference differential equations. Moreover, we give some examples to show the existence and growth of solutions of some q-shift difference differential equations. In this paper, we consider a matrix inequality constrained linear matrix operator minimization problem with a particular structure, some of whose reduced versions are applicable to image restoration. We present an efficient iteration method to solve this problem. The approach belongs to the category of Powell-Hestenes-Rockafellar augmented Lagrangian methods, and combines a nonmonotone projected gradient type method to minimize the augmented Lagrangian function at each iteration. Several propositions and one theorem on the convergence of the proposed algorithm are established. 
Numerical experiments are performed to illustrate the feasibility and efficiency of the proposed algorithm, both when the algorithm is tested with randomly generated data and on image restoration problems with some special symmetry pattern images. In this paper, we deal with the uniqueness problem of two non-admissible functions sharing some values and sets in the unit disc, and also investigate the problem of an admissible function and a non-admissible function sharing some values and sets. Some theorems of this paper improve the results given by Fang. In addition, the results in this paper are analogous versions of the uniqueness theorems of meromorphic functions sharing some sets on the whole complex plane given by Yi and Cao. In this paper, we solve the additive $(\alpha, \beta)$-functional equation $f(x) + f(y) + 2f(z) = \alpha f(\beta(x + y + 2z))$, (0.1) where $\alpha, \beta$ are fixed real or complex numbers with $\alpha \neq 4$ and $\alpha\beta = 1$. Using the fixed point method and the direct method, we prove the Hyers-Ulam stability of the additive $(\alpha, \beta)$-functional equation (0.1) in Banach spaces. By introducing the concept of $\beta_U$-order functions, we study the error in approximating Dirichlet series of infinite order in the half plane by Dirichlet polynomials. Some necessary and sufficient conditions on the error and regular growth of finite $\beta_U$-order of these functions have been obtained. In the present paper we establish several fuzzy differential subordinations regarding the operator $I(m, \lambda, l)$, given by $I(m, \lambda, l): A \to A$, $I(m, \lambda, l)f(z) = z + \sum_{j=2}^{\infty} \left(\frac{1 + \lambda(j-1) + l}{l + 1}\right)^m a_j z^j$, where $A = \{f \in H(U) : f(z) = z + a_2 z^2 + \cdots,\ z \in U\}$ is the class of normalized analytic functions. A certain fuzzy class, denoted by $SIF_{\delta}(m, \lambda, l)$, of analytic functions in the open unit disc is introduced by means of this operator. 
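Any additive linear map satisfies the $(\alpha, \beta)$-functional equation (0.1) above when $\alpha\beta = 1$, since both sides reduce to $c(x + y + 2z)$ for $f(x) = cx$. A quick numeric check with the assumed example $f(x) = cx$ and illustrative values of $\alpha$, $c$ (not drawn from the paper):

```python
def check_additive_alpha_beta(alpha, c, samples):
    """Check f(x) + f(y) + 2 f(z) = alpha * f(beta * (x + y + 2 z))
    for f(x) = c x, where beta = 1/alpha (so alpha * beta = 1)."""
    beta = 1.0 / alpha
    f = lambda t: c * t
    return max(abs(f(x) + f(y) + 2 * f(z) - alpha * f(beta * (x + y + 2 * z)))
               for (x, y, z) in samples)

samples = [(0.3, -1.2, 2.5), (1.0, 1.0, 1.0), (-4.0, 0.5, 3.25)]
# alpha = 2.5 (so alpha != 4, beta = 0.4): the defect is zero up to rounding.
defect = check_additive_alpha_beta(2.5, c=1.7, samples=samples)
print(defect)
```

The stability results in the abstract then bound how far an approximate solution of (0.1) can be from such an exact additive solution.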
By making use of the concept of fuzzy differential subordination, we derive various properties and characteristics of the class $SIF_\delta(m, \lambda, l)$. Also, several fuzzy differential subordinations are established regarding the operator $I(m, \lambda, l)$. In this paper we obtain some subordination and superordination results for the operator $IR_{\lambda,l}^{m,n}$ and we establish differential sandwich-type theorems. The operator $IR_{\lambda,l}^{m,n}$ is defined as the Hadamard product of the multiplier transformation $I(m, \lambda, l)$ and the Ruscheweyh derivative $R^n$. In this paper, we consider the functional equation $af(x + y) + bf(x - y) + cf(y - x) = (a + b)f(x) + cf(-x) + (a + c)f(y) + bf(-y)$ for fixed real numbers $a, b, c$ with $a = b + c$ and $a \neq 0$. We study the fuzzy version of the generalized Hyers-Ulam stability for it in the sense of Mirmostafaee and Moslehian. In this paper, we study exact controllability for fuzzy differential equations with the control function in credibility spaces. Moreover, we study exact controllability for every solution of fuzzy differential equations. The result is obtained by using extremal solutions. In this paper, by integrating interval-valued intuitionistic fuzzy soft sets with rough set theory, the concept of generalized interval-valued intuitionistic fuzzy soft rough sets is proposed, which is an extension of generalized intuitionistic fuzzy soft rough sets. Then the properties of this model are investigated. Furthermore, classical representations of generalized interval-valued intuitionistic fuzzy soft rough approximation operators are also introduced. Finally, an approach to decision making based on generalized interval-valued intuitionistic fuzzy soft rough sets is developed, and we provide a practical example to illustrate the validity of this approach. In this paper, we study Heinz mean inequalities for two positive operators involving a positive linear map. 
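For positive scalars, the Heinz mean $H_\alpha(a,b) = \frac{a^\alpha b^{1-\alpha} + a^{1-\alpha} b^\alpha}{2}$ interpolates between the geometric mean (at $\alpha = 1/2$) and the arithmetic mean (at $\alpha \in \{0,1\}$). A quick numeric sanity check of that scalar fact (the operator and positive-linear-map refinements studied here are not modeled):

```python
# Scalar Heinz mean and the classical bounds sqrt(a*b) <= H_alpha(a,b) <= (a+b)/2.

def heinz_mean(a, b, alpha):
    """Heinz mean H_alpha(a, b) for positive scalars and alpha in [0, 1]."""
    return (a ** alpha * b ** (1 - alpha) + a ** (1 - alpha) * b ** alpha) / 2

a, b = 3.0, 7.0
geo, ari = (a * b) ** 0.5, (a + b) / 2
# Heinz inequality holds for every alpha sampled on a grid over [0, 1]
checks = all(geo <= heinz_mean(a, b, t / 10) <= ari for t in range(11))
```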
We obtain a generalized conclusion based on an operator Diaz-Metcalf type inequality. The conclusion is presented as follows: let $\Phi$ be a unital positive linear map, and suppose $0 < m_1^2 \le A \le M_1^2$ and $0 < m_2^2 \le B \le M_2^2$ for some positive real numbers $m_1 \le M_1$, $m_2 \le M_2$; then for $\alpha \in [0, 1]$ and $p \ge 2$, the following inequality holds: $$\left(\frac{M_2 m_2}{M_1 m_1}\Phi(A) + \Phi(B)\right)^p \le 2^{-(p+4)} \left[\frac{M_2 m_2 (M_1^2 + m_1^2) + M_1 m_1 (M_2^2 + m_2^2)}{\min\left\{(M_1 m_1)^{\frac{3-\alpha}{2}} (M_2 m_2)^{\frac{1+\alpha}{2}},\ (M_1 m_1)^{\frac{2+\alpha}{2}} (M_2 m_2)^{\frac{2-\alpha}{2}}\right\}}\right]^{2p} \Phi^p(H_\alpha(A, B)).$$ In this article, we study some new existence results for a nonlinear fractional difference equation with fractional sum-difference boundary conditions. Our problem contains sequential fractional difference operators of different orders. The existence and uniqueness results are based on the Banach contraction mapping principle and Schaefer's fixed point theorem. Finally, we present some examples to show the importance of these results. The notion of a hesitant fuzzy mighty filter of a BE-algebra is introduced and related properties are investigated. We provide conditions for a hesitant fuzzy filter to be a hesitant fuzzy mighty filter. We construct a new quotient structure of a transitive BE-algebra using a hesitant fuzzy filter and study some of its properties. In this paper, we introduce and study a class of new general iteration processes for two finite families of total asymptotically nonexpansive mappings (a class that includes asymptotically nonexpansive and (generalized) nonexpansive mappings) in hyperbolic spaces, which include all normed linear spaces, Hadamard manifolds and CAT(0) spaces as special cases. Some important properties related to the new general iteration processes are also given and analyzed, and Delta-convergence and strong convergence of the iterations in hyperbolic spaces are proved. 
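The simplest member of this family of iteration processes is the Mann scheme $x_{n+1} = (1-t)\,x_n + t\,T(x_n)$, a convex combination of the current point and its image. A one-dimensional Euclidean sketch with the nonexpansive map $T(x) = \cos x$ (the hyperbolic-space and total-asymptotically-nonexpansive generality of the paper is not modeled):

```python
import math

# Mann-type iteration x_{n+1} = (1 - t)*x_n + t*T(x_n) for a nonexpansive map T.
# T(x) = cos(x) is nonexpansive on the reals since |cos(u) - cos(v)| <= |u - v|.

def mann_iteration(T, x0, t=0.5, steps=200):
    x = x0
    for _ in range(steps):
        x = (1 - t) * x + t * T(x)   # convex combination of x and T(x)
    return x

fixed = mann_iteration(math.cos, 1.0)  # converges to the fixed point of cos
```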
Furthermore, some meaningful illustrations clarifying our results and two open questions are proposed. The results presented in this paper extend and improve the corresponding results in the current literature. In the present article, we establish an integral identity for Riemann-Liouville fractional integrals. Some Simpson type integral inequalities utilizing this integral identity are obtained. It is worth mentioning that the presented results have a close connection with those in [M. Z. Sarikaya, E. Set, M. E. Ozdemir, On new inequalities of Simpson's type for s-convex functions, Computers and Mathematics with Applications, 60 (2010), 2191-2199]. In this paper, a nonautonomous delayed Gilpin-Ayala competition system without instantaneous negative feedbacks (i.e., a pure-delay-type system) is investigated. By the techniques of comparison arguments and the construction of Lyapunov functionals somewhat different from the usual ones, several results guaranteeing the permanence of the system are derived by means of Ahmad and Lazer's definitions of lower and upper averages of a function. Moreover, sufficient conditions for the global attractivity of the positive solution are also obtained, in which the exponent of nonlinear intraspecific interference is not required to exceed that of nonlinear interspecific interactions. These results are more general and practical, possess a wide range of applications, and extend many existing conclusions for nonlinear competitive systems. In the paper, we present a family $M(\mu, x)$ of approximations of the Bateman function $G(x)$. For each fixed $x$ there is a certain $\mu$ with $M(\mu, x) = G(x)$, and the family provides an asymptotic approximation of Bateman's G-function as $x \to \infty$. We study the order of convergence of the approximations $M(\mu, x)$ to the function $G(x)$. Some properties and bounds of the error are deduced. 
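Bateman's G-function has the standard alternating-series representation $G(x) = 2\sum_{k\ge 0} \frac{(-1)^k}{x+k}$ (equivalently $G(x) = \psi(\frac{x+1}{2}) - \psi(\frac{x}{2})$). The sketch below evaluates it numerically, accelerating the alternating series by averaging two consecutive partial sums, and compares it with the simple approximation $1/x + 1/(2x^2)$; this is an independent sanity check, not the paper's family $M(\mu, x)$:

```python
# Bateman's G-function via its alternating series G(x) = 2 * sum_{k>=0} (-1)^k/(x+k).
# Averaging the last two partial sums accelerates the slowly converging series.

def bateman_G(x, terms=200000):
    s = 0.0
    sign = 1.0
    for k in range(terms):
        prev = s                     # partial sum before the k-th term
        s += sign * 2.0 / (x + k)
        sign = -sign
    return (s + prev) / 2.0          # average of the last two partial sums

x = 5.0
G5 = bateman_G(x)
approx = 1.0 / x + 1.0 / (2.0 * x * x)  # the simple approximation 1/x + 1/(2x^2)
```

At x = 5 the series gives G(5) ≈ 0.21963, so the approximation 1/x + 1/(2x²) = 0.22 is already accurate to about 4·10⁻⁴.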
We present a new sharp double inequality for $G(x)$ with upper and lower bounds $M(1, x)$ and $M(4/(e^2 - 4), x)$, respectively. Also, we show that the approximations $M(\mu, x)$ are better than the approximation $1/x + 1/(2x^2)$ for any $\mu$ in an open subinterval of $[1, 4/(e^2 - 4)]$. In this paper, we consider some ordinary differential equations associated with modified degenerate Euler and Bernoulli numbers and give some new identities for these numbers arising from our differential equations. Let $M_1 f(x, y) := \frac{3}{4}f(x + y) - \frac{1}{4}f(-x - y) + \frac{1}{4}f(x - y) + \frac{1}{4}f(y - x) - f(x) - f(y)$ and $M_2 f(x, y) := 2f\left(\frac{x + y}{2}\right) + f\left(\frac{x - y}{2}\right) + f\left(\frac{y - x}{2}\right) - f(x) - f(y)$. We solve the additive-quadratic $\rho$-functional inequalities $\|M_1 f(x, y)\| \le \|\rho M_2 f(x, y)\|$, (0.1) where $\rho$ is a fixed complex number with $|\rho| < 1/2$, and $\|M_2 f(x, y)\| \le \|\rho M_1 f(x, y)\|$, (0.2) where $\rho$ is a fixed complex number with $|\rho| < 1$. Using the direct method, we prove the Hyers-Ulam stability of the additive-quadratic $\rho$-functional inequalities (0.1) and (0.2) in complex Banach spaces. Let $M_1 f(x, y) := \frac{3}{4}f(x + y) - \frac{1}{4}f(-x - y) + \frac{1}{4}f(x - y) + \frac{1}{4}f(y - x) - f(x) - f(y)$ and $M_2 f(x, y) := 2f\left(\frac{x + y}{2}\right) + f\left(\frac{x - y}{2}\right) + f\left(\frac{y - x}{2}\right) - f(x) - f(y)$. We solve the additive-quadratic $\rho$-functional inequalities $\|M_1 f(x, y)\| \le \|\rho M_2 f(x, y)\|$, (0.1) where $\rho$ is a fixed complex number with $|\rho| < 1/2$, and $\|M_2 f(x, y)\| \le \|\rho M_1 f(x, y)\|$, (0.2) where $\rho$ is a fixed complex number with $|\rho| < 1$. Using the fixed point method, we prove the Hyers-Ulam stability of the additive-quadratic $\rho$-functional inequalities (0.1) and (0.2) in complex Banach spaces. 
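Any additive-quadratic function $f(x) = ax + bx^2$ annihilates both functionals $M_1 f$ and $M_2 f$, which is why solutions of the inequalities (0.1) and (0.2) are called additive-quadratic. A quick numeric check of that fact over real scalars (the complex Banach space setting is not modeled):

```python
# Verify that f(x) = a*x + b*x**2 satisfies M1 f = M2 f = 0 (real scalar case),
# with M1 and M2 as defined in the abstracts above.

def M1(f, x, y):
    return (0.75 * f(x + y) - 0.25 * f(-x - y)
            + 0.25 * f(x - y) + 0.25 * f(y - x) - f(x) - f(y))

def M2(f, x, y):
    return (2 * f((x + y) / 2) + f((x - y) / 2)
            + f((y - x) / 2) - f(x) - f(y))

f = lambda x: 3.0 * x + 2.0 * x ** 2   # an arbitrary additive-quadratic function
residual = max(abs(M1(f, x, y)) + abs(M2(f, x, y))
               for x in (-1.5, 0.3, 2.0) for y in (-0.7, 1.1))
```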
The main target of our study is to describe the behavior of solutions of the difference equation $x_{n+1} = a x_n + b x_{n-1} + \frac{c + d x_{n-2}}{e + f x_{n-2}}$, $n = 0, 1, \ldots$, where the parameters $a, b, c, d, e$ and $f$ are positive real numbers and the initial conditions $x_{-2}, x_{-1}$ and $x_0$ are positive real numbers. In this paper, we investigate stability theory for fuzzy differential equations in the quotient space of fuzzy numbers by means of Lyapunov-like functions. By using differential inequalities and the comparison principle for Lyapunov-like functions, we give some sufficient criteria for the asymptotic stability, equi-asymptotic stability and uniform asymptotic stability of the trivial solution of the fuzzy differential equations. In this paper, we investigate differential equations associated with squared Hermite polynomials and derive some new and explicit identities for these polynomials arising from the differential equations. We study quenching for the discrete semi-linear heat equation with singular absorption $u_t = \Delta_\omega u - \lambda u^{-p}$ on a finite graph with Dirichlet boundary condition and positive initial condition $u_0(x)$. When $\lambda^{-p} \ge \max_{x \in S} u_0(x)$, we prove that the solution will quench in finite time by the comparison principle. Meanwhile, we study the quenching rate. Moreover, we also prove that there exists a critical exponent $\lambda^*$ such that the problem admits a global solution for all $\lambda \le \lambda^*$. Finally, a numerical experiment on two finite graphs is given to illustrate our results. In this paper, we study existence and uniqueness of solutions for nonlocal boundary value problems of Caputo fractional differential equations equipped with generalized Riemann-Liouville integral boundary conditions. 
A variety of fixed point theorems, such as Banach's fixed point theorem, nonlinear contractions, Krasnoselskii's fixed point theorem, Schaefer's fixed point theorem, Leray-Schauder's nonlinear alternative and Leray-Schauder degree theory, are applied to obtain the desired results. Several examples are discussed to illustrate the obtained results. In this paper, we investigate the uniqueness of an entire function of finite order sharing a small entire function with its high-order forward difference operator. The results obtained extend some known theorems and also exhibit the exact solutions of certain difference equations. Consider the difference equation $\vec{x}_{n+1} = f(n, \vec{x}_n, \ldots, \vec{x}_{n-k})$, $n = 0, 1, \ldots$, where $k \in \{0, 1, \ldots\}$ and the initial conditions are real vectors. We investigate the asymptotic behavior of the solutions of the considered equation. We give some effective conditions for the global stability and global asymptotic stability of the zero or positive equilibrium of this equation. Our results are based on an application of the linearization technique. We illustrate our results with many examples that include some equations from mathematical biology. We compute the direction of the Naimark-Sacker bifurcation for the difference equation $x_{n+1} = p + \frac{x_n^2}{x_{n-1}^2}$, where $p$ is a positive number and the initial conditions $x_{-1}$ and $x_0$ are positive numbers. Furthermore, we give the asymptotic approximation of the invariant curve. In this paper, we study the reverse order law for the Moore-Penrose inverse of an operator product $T_1 T_2 T_3$. In particular, using the matrix form of a bounded linear operator, we derive some necessary and sufficient conditions for the reverse order law $(T_1 T_2 T_3)^\dagger = T_3^\dagger T_2^\dagger T_1^\dagger$. Moreover, some finite-dimensional results are extended to infinite-dimensional settings. 
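For invertible operators the Moore-Penrose inverse coincides with the ordinary inverse, and the reverse order law $(T_1 T_2 T_3)^\dagger = T_3^\dagger T_2^\dagger T_1^\dagger$ always holds. A minimal 2×2 numeric illustration of that special case (the genuinely nontrivial conditions studied in the paper concern non-invertible operators, which this sketch does not cover):

```python
# Reverse order law in the invertible 2x2 case, where dagger = ordinary inverse:
#   (A B C)^(-1) == C^(-1) B^(-1) A^(-1).   Plain-Python matrix helpers.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

A = [[2.0, 1.0], [0.0, 1.0]]
B = [[1.0, 0.0], [1.0, 3.0]]
C = [[1.0, 2.0], [0.0, 1.0]]

lhs = mat_inv(mat_mul(mat_mul(A, B), C))                     # (ABC)^(-1)
rhs = mat_mul(mat_mul(mat_inv(C), mat_inv(B)), mat_inv(A))   # C^(-1) B^(-1) A^(-1)
```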
In this paper, we study some differential equations arising from a certain Sheffer sequence and investigate some identities for the Sheffer sequence of polynomials which is related to the theory of hyperbolic differential equations. We prove Hyers-Ulam stability of the first order linear inhomogeneous matrix difference equation $\vec{x}_{i+1} = A_i \vec{x}_i + \vec{g}_i$ for all integers $i \in \mathbb{Z}$. Moreover, we show Hyers-Ulam stability of the $n$th order linear difference equation as a corollary. We present several self-adjoint operator Ostrowski type inequalities in all directions. These are based on the operator order over a Hilbert space. We present several integer and fractional self-adjoint operator Opial type inequalities in many directions. These are based on the operator order over a Hilbert space. In this paper, an approximate solution of the generalized Hirota-Satsuma (HS) coupled Korteweg-de Vries (KdV) equation by the use of the Fourier pseudospectral method is presented. A time-discrete scheme is constructed by approximating the time derivative using the forward difference formula, while the pseudospectral method is used in the space direction. The stability and convergence of the scheme are investigated using the energy method. The numerical results reveal that the Fourier pseudospectral method is a convenient, effective and accurate method for solving the generalized HS coupled KdV equation. This paper publishes for the first time the dedication to the Royal Society that John Webster wrote for his Displaying of Supposed Witchcraft (1677), but which failed to appear in the published work. It also investigates the circumstances in which the book received the Royal Society's imprimatur, in the light of the Society's ambivalent attitude towards witchcraft and related phenomena in its early years. 
The paper concludes that the role of Sir Jonas Moore as Vice-President in licensing the book was highly irregular, evidently reflecting the troubled state of the Society in the mid to late 1670s. This paper draws attention to the remarkable closing words of Isaac Newton's Optice (1706) and subsequent editions of the Opticks (1718, 1721), and tries to suggest why Newton chose to conclude his book with a puzzling allusion to his own unpublished conclusions about the history of religion. Newton suggests in this concluding passage that the bounds of moral philosophy will be enlarged as natural philosophy is 'perfected'. Asking what Newton might have had in mind, the paper first considers the idea that he was foreshadowing the 'moral Newtonianism' developed later in the eighteenth century; then it considers the idea that he was perhaps pointing to developments in natural theology. Finally, the paper suggests that Newton wanted to at least signal the importance of attempting to recover the true original religion, and perhaps was hinting at his intention to publish his own extensive research on the history of the Church. This paper proposes a fresh look at the 'Dissensions' that held up scientific business at the Royal Society during the spring of 1784. It focuses attention on the career and personal networks of Charles Hutton, whose dismissal from the role of Foreign Secretary ignited the row. It shows that the incident had no single cause but was the outcome of several factors that made Hutton intolerable to Joseph Banks, President of the Society, and of several factors that made Banks unpopular as President among a group of about 40 otherwise rather disparate Fellows. In 1978 M. J. Peterson examined the role played by the Royal College of Surgeons (RCS) in nineteenth-century dental reform, noting the establishment of its Licence in Dental Surgery (LDS) in 1859. 
In a paper published in Notes and Records in 2010, the present author described the influential role played by Fellows of the Royal Society during the nineteenth-century campaign for dental reform led by Sir John Tomes. Key players in this campaign, including the dentists Samuel Cartwright, Thomas Bell and James Salter, were members of the Athenaeum Club as well as Fellows of the Royal Society. The present research report indicates the roles played by those members of the Athenaeum Club who were also Fellows of the Royal Society in the scientific and professional reform of nineteenth-century dentistry. Although it does not attempt to document meetings at the Club, it suggests the potential for a symbiotic effect between the Royal Society and the Athenaeum. Where the previous paper proposed an active scientific role for the Royal Society in reforming dentistry, this paper presents the Athenaeum as a significant extension of the sphere of influence into the cultural realm for those who enjoyed membership of both organizations. This paper examines three female writers who chose to affiliate their educational scientific works with the 'domestic sphere': Priscilla Wakefield, Jane Marcet and Maria Edgeworth. It shows that within what is now broadly categorized as 'familiar science', differing motivations for writing, publishing and reading existed. Between 1790 and 1830 many educationalists claimed that the best way for children to learn was for them to exercise their memory on things encountered in everyday life. Religious allegiances, attitudes towards female science education and the utility of science in the home help to explain why these writers chose to introduce their readers to the illimitable world of science by setting their books in the seemingly restrictive domestic sphere. Furthermore, this paper argues that the three different authors envisioned subtly different domestic spheres as settings for their work. 
Rather than there being a single homogeneous domestic sphere in which women and children received their education, and about which such authors wrote, there existed a multiplicity of domestic spheres depicted across the genre of educational science texts. The history of science has many functions. Historians should consider how their work contributes to various functions, going beyond a simple desire to understand the past correctly. There are both internal and external functions of the history of science in relation to science itself; I focus here on the internal, as they tend to be neglected these days. The internal functions can be divided into orthodox and complementary. The orthodox function is to assist with the understanding of the content and methods of science as it is now practised. The complementary function is to generate and improve scientific knowledge where current science itself fails to do so. Complementary functions of the history of science include the raising of critical awareness, and the recovery and extension of past scientific knowledge that has become forgotten or neglected. These complementary functions are illustrated with some concrete examples. Freedom is a phenomenon in the natural world. This phenomenon-and indirectly the question of free will-is explored using a variety of systems-theoretic ideas. It is argued that freedom can emerge only in systems that are partially determined and partially random, and that freedom is a matter of degree. The paper considers types of freedom and their conditions of possibility in simple living systems and in complex living systems that have modeling (cognitive) subsystems. In simple living systems, types of freedom include independence from fixed materiality, internal rather than external determination, activeness that is unblocked and holistic, and the capacity to choose or alter environmental constraint. 
In complex living systems, there is freedom in satisfaction of lower level needs that allows higher potentials to be realized. Several types of freedom also manifest in the modeling subsystems of these complex systems: in the transcending of automatism in subjective experience, in reason as instrument for passion yet also in reason ruling over passion, in independence from informational colonization by the environment, and in mobility of attention. Considering the wide range of freedoms in simple and complex living systems allows a panoramic view of this diverse and important natural phenomenon. This paper presents a theory of scientific study which is regarded as a social learning process of (working) scientific knowledge creation, revision, application, monitoring (e.g., confirmation) and dissemination (e.g., publication) with the aim of securing good quality, general, objective, testable and complete scientific knowledge of the domain. The theory stipulates the aim of scientific study that forms the basis of its principles. It also makes seven assumptions about scientific study and defines the major participating entities (i.e., scientists, scientific knowledge and enabling technical knowledge). It extends a recent process model of scientific study into a detailed interaction model as this process model already addresses many issues of philosophy of science. The detailed interaction model of scientific study provides a common template of scientific activities for developing logical (data) models in different scientific disciplines (for physical database implementation), or alternatively for developing (domain) ontologies of different scientific disciplines. Differences between research and scientific studies are discussed, and a possible way to develop a scientific theory of scientific study is described. 
Big historians are attempting to construct a general holistic narrative of human origins enabling an approach to studying the emergence of complexity, the relation between evolutionary processes, and the modern context of human experience and actions. In this paper I attempt to explore the past and future of cosmic evolution within a big historical foundation characterized by physical, biological, and cultural eras of change. From this analysis I offer a model of the human future that includes an addition and/or reinterpretation of technological singularity theory with a new theory of biocultural evolution focused on the potential birth of technological life: the theory of atechnogenesis. Furthermore, I explore the potential deep futures of technological life and extrapolate towards two hypothetical versions of an 'Omega Civilization': expansion and compression. Foundations of Science recently published a rebuttal to a portion of our essay it published 2 years ago. The author, G. Schubring, argues that our 2013 text treated unfairly his 2005 book, Conflicts between generalization, rigor, and intuition. He further argues that our attempt to show that Cauchy is part of a long infinitesimalist tradition confuses text with context and thereby misunderstands the significance of Cauchy's use of infinitesimals. Here we defend our original analysis of various misconceptions and misinterpretations concerning the history of infinitesimals and, in particular, the role of infinitesimals in Cauchy's mathematics. We show that Schubring misinterprets Proclus, Leibniz, and Klein on non-Archimedean issues, ignores the Jesuit context of Moigno's flawed critique of infinitesimals, and misrepresents, to the point of caricature, the pioneering Cauchy scholarship of D. Laugwitz. I present a discussion of some issues in the ontology of spacetime. 
After a characterisation of the controversies among relationists, substantivalists, eternalists, and presentists, I offer a new argument for rejecting presentism, the doctrine that only present objects exist. Then, I outline and defend a form of spacetime realism that I call event substantivalism. I propose an ontological theory for the emergence of spacetime from more basic entities (timeless and spaceless 'events'). Finally, I argue that a relational theory of pre-geometric entities can give rise to substantival spacetime in such a way that relationism and substantivalism are not necessarily opposed positions, but rather complementary. In an appendix I give axiomatic formulations of my ontological views. This paper examines the concept of information in situation semantics. For this purpose the most fundamental principles of situation semantics are classified into three groups: (1) principles of the more fundamental kind, (2) principles related to regularity, and (3) principles governing incremental information. Fodor's well-known criticisms of situation semanticists' concepts of information target the first group. Interestingly, situation semanticists have been anxious to articulate either the principles of the second group or the principles of the third group in order to meet these criticisms. Based on these observations, I will launch a dilemma for situation semanticists. Either they fail to handle information about individuals or they fail to present any acceptable account of the laws of nature. Millikan's version of situation semantics, I shall argue, is not the exception to the rule. The question that is the subject of this article is not intended to be a sociological or statistical question about the practice of today's mathematicians, but a philosophical question about the nature of mathematics, and specifically the method of mathematics. 
Since antiquity, saying that mathematics is problem solving has been an expression of the view that the method of mathematics is the analytic method, while saying that mathematics is theorem proving has been an expression of the view that the method of mathematics is the axiomatic method. In this article it is argued that these two views of the mathematical method are really opposed. In order to answer the question whether mathematics is problem solving or theorem proving, the article retraces the Greek origins of the question and Hilbert's answer. Then it argues that, on the basis of Gödel's incompleteness results and for other reasons, only the view that mathematics is problem solving is tenable. The contemporary debate between scientific realism and anti-realism is conditioned by a polarity between two opposing arguments: the realist's success argument and the anti-realist's pessimistic induction. This polarity has skewed the debate away from the problem that lies at the source of the debate. From a realist point of view, the historical approach to the philosophy of science which came to the fore in the 1960s gave rise to an unsatisfactory conception of scientific progress. One of the main motivations for the scientific realist appeal to the success of science was the need to provide a substantive account of the progress of science as an increase of knowledge about the same entities as those referred to by earlier theories in the history of science. But the idea that a substantive conception of progress requires continuity of reference has faded from the contemporary debate. In this paper, I revisit the historical movement in the philosophy of science in an attempt to resuscitate the original agenda of the debate about scientific realism. I also briefly outline the way in which the realist should employ the theory of reference as the basis for a robust account of scientific progress which will satisfy realist requirements. 
The function and legitimacy of values in decision making is a critically important issue in the contemporary analysis of science. It is particularly relevant for some of the more application-oriented areas of science, specifically decision-oriented science in the field of regulation of technological risks. Our main objective in this paper is to assess the diversity of roles that non-cognitive values related to decision making can adopt in the kinds of scientific activity that underlie risk regulation. We start out, first, by analyzing the issue of values with the help of a framework taken from the wider philosophical debate on science and values. Second, we study the principal conceptualizations used by scholars who have applied them to numerous case studies. Third, we appraise the links between those conceptualizations and learning processes in decision-oriented science. In this, we recur to the concept of methodological learning, i.e., learning about the best methodologies for generating knowledge that is useful for science-based regulatory decisions. The main result of our analysis is that non-cognitive values can contribute to methodological improvements in science in three principal ways: (a) as basis for critical analysis (to differentiate "sound" from "bad" science), (b) for contextualizing methodologies (by identifying links between methods and objectives), and (c) for establishing the burden of proof (in order to generate data that otherwise would not be generated). This article analyzes the institutionalization of the global organic agriculture field and sheds new light on the conventionalization debate. The institutions that shape the field form a tripartite standards regime of governance (TSR) that links standard-setting, certification, and accreditation activities, in a layering of markets for services that are additional to (and inseparable from) the market for certified organic products. 
At each of the three poles of the TSR, i.e., for standard-setting, certification, and accreditation, we describe how the corresponding markets were constructed over time and the role of the different actors in their evolution. We analyze the politics at stake among the actors at each pole, their competing or cooperative interests and visions, and the tensions between them in the promotion of markets. Through the lens of the TSR heuristic, we show that the institutionalization of the organic field beginning in the 1990s and its de facto inclusion in the broader sustainability field beginning in the 2000s contribute to a progressive distancing between the organic movement and its initial political project of alterity, to which public and private actors both contribute actively. As a set of interlinked market institutions, the TSR orients and narrows the scope of debate, which becomes restricted to "market-compatible" dimensions and objects. We conclude that the TSR is a promising heuristic for analyzing contemporary global regulation. Agriculture plays a key role in national economies and individual livelihoods in many developing countries, and yet agriculture as a field of study and an occupation remain under-emphasized in many educational systems. In addition, working in agriculture is often perceived as being less desirable than other fields, and not a viable or compelling option for students who have received a post-secondary education. This article explores the historical and contemporary perceptions of agriculture as a field of study and an occupation globally, and applies themes from the literature to analyze primary data from focus groups with international students studying for university degrees in the United States. 
The article analyzes students' perceptions and experiences in four countries-Bangladesh, Nepal, Honduras and Haiti-in order to make recommendations about how best to address challenges and develop capacity in agricultural education and employment in low-income countries. Family farming, understood as a household which combines family, farm and commercial activity, still represents the backbone of the world's agriculture. On family farms, labour division has generally been based on complementarity between persons of different gender and generations, resulting in specific male and female spheres and tasks. In this 'traditional' labour division, gender inequality is inherent as women are the unpaid and invisible labour force. Although this 'traditional' labour division still prevails through time and space, new arrangements have emerged. This paper asks whether we are witnessing changes in the unequal structure of family farming and analyses the diversity of farming family configurations, using the Swiss context as a case study. The typology of farming-family configurations developed, based on qualitative data, indicates that inequalities are related to status on the farm and position in the configuration rather than to gender identity per se. This insight enables a discussion of equality and fairness in a new light. This paper shows that farming-family configurations are often pragmatic but objectively unequal. However, these arrangements might still be perceived as fair when mutual recognition exists, resulting in satisfaction among the family members. The paper concludes that although family farming presents challenges to gender equality, some types of farming-family configurations offer new pathways towards enhanced gender equality. Although analyses of large-scale land acquisitions (LSLA) often contain an explicit or implicit normative judgment about such projects, they rarely deduce such judgment from a nuanced balancing of pros and cons. 
This paper uses assessments about a well-researched LSLA in Sierra Leone to show that a utilitarian approach tends to lead to the conclusion that positive effects prevail, whereas deontological approaches lead to an emphasis on negative aspects. LSLA are probably the most radical land-use change in the history of humankind. This process of radical transformation poses a challenge for balanced evaluations. Thus, we lay out a framework that focuses on the options of local residents but sets boundaries of acceptability through the core contents of human rights. In addition, the systemic implications of a project need to be considered. Local food systems (LFSs) have grown in popularity around the world in recent years. Their framing often emphasizes the re-connection of producers and consumers against the "faceless" and "placeless" industrial agriculture. However, previous research suggests that such romanticized narratives may not keep up with reality. This relates to the transformative potential of LFSs and to whether they actually generate alternative modes of social organization that challenge problematic aspects of the food system. We place our focus on the practices and narratives that construct the producer/consumer relationship and show how these systems are governed. Our fieldwork was carried out in two LFSs in two distinct settings: community supported agriculture groups in NYC and responsible consumption communities in Catalonia, Spain. Three main types of practices and narratives are identified: sharing, negotiation, and utilization. Our findings reveal great heterogeneity between the two LFSs and show how intermediaries participate in the producer/consumer relationship. The large-scale, intensive production of meat and other animal products, also known as the animal-industrial complex, is our largest food system in terms of global land use and contribution to environmental degradation. 
Despite the environmental impact of the meat industry, in much of the policy literature on climate and environmental change, sustainability and food security, meat continues to be included as part of a sustainable food future. In this paper, I present outcomes of a discourse analysis undertaken on a selection of key international and Australian reports. After highlighting common themes in the ways that meat and animals are discussed, I draw on the animal studies literature to critically analyse the assumptions underpinning such policy documents. My analysis illustrates that animals are effectively de-animated and rendered invisible in these bodies of literature by being either aggregated (as livestock, units of production and resources) or materialised (as meat and protein). These discursive frames reflect implicit understandings of meat as necessary to human survival and animals as a natural human resource. A critical examination of these understandings illustrates their dual capacity to normalise and encourage the continuation of activities known to be seriously harming the environment, climate and human health, while at the same time obstructing and even denigrating alternative, less harmful approaches to food. In response, I offer some conceptual and analytical modifications that can be easily adopted by researchers on climate change, sustainability and food security with the aim of challenging dominant discourses on meat and animals. Golden Rice has played a key role in arguments over genetically modified (GM) crops for many years. It is routinely depicted as a generic GM vitamin tablet in a generic plant bound for the global South. But the release of Golden Rice is on the horizon only in the Philippines, a country with a storied history, a complicated present, and a contested future for rice production and consumption. 
The present paper corrects this blinkered view of Golden Rice through an analysis of three distinctive "rice worlds" of the Philippines: Green Revolution rice developed at the International Rice Research Institute (IRRI) in the 1960s, Golden Rice currently being bred at IRRI, and a scheme to promote and export traditional "heirloom" landrace rice. More than mere seed types, these rices are at the centers of separate "rice worlds" with distinctive concepts of what the crop should be and how it should be produced. In contrast to the common productivist framework for comparing types of rice, this paper compares the rice worlds on the basis of geographical embeddedness, or the extent to which local agroecological context is valorized or nullified in the crop's construction. The Green Revolution spread generic, disembedded high-input seeds to replace locally adapted landraces as well as peasant attitudes and practices associated with them. The disembeddedness of Golden Rice that boosts its value as a public relations vehicle has also been the main impediment to its reaching farmers' fields, as it has proved difficult to breed into varieties that grow well specifically in the Philippines. Finally, and somewhat ironically, IRRI has recently undertaken research and promotion of heirloom seeds in collaboration with the export scheme. The consumption of halal food may be seen as an expression of Muslim identity. Within Islam, different interpretations of 'halal' exist and the pluralistic Muslim community requests diverse halal standards. Therefore, adaptive governance arrangements are needed in the halal food market. Globalization and industrialization have complicated the governance of halal food. A complex network of halal governors has developed from the local to the global level. In this paper, we analyze to what extent halal certification bodies in the Netherlands address the needs of the Muslim community and how they are influenced by international halal governance. 
The Netherlands serves as a case study with its growing Muslim community and its central position in international trade. The data come from a literature review and eleven qualitative semi-structured interviews with the most prominent actors in the Dutch halal governance system. Our analysis shows that the halal governance system in the Netherlands is weakly institutionalized and hardly adaptive to the needs of a heterogeneous Muslim community. Improvements are needed concerning stakeholder engagement, transparency, accessibility, impartiality and efficiency. Alternative food networks (AFNs) have become a common response to the socio-ecological injustices generated by the industrialized food system. Using a political ecology framework, this paper evaluates the emergence of an AFN in Chiapas, Mexico. While the Mexican context presents a particular set of challenges, the case study also reveals the strength the alternative food movement derives from a diverse network of actors committed to building a "community economy" that reasserts the multifunctional values of organic agriculture and local commodity chains. Nonetheless, just as the AFN functions as an important livelihood strategy for otherwise disenfranchised producers, it simultaneously encounters similar limitations as those observed in other market-driven approaches to sustainable food governance. Civic agriculture is an approach to agriculture and food production that, in contrast with the industrial food system, is embedded in local environmental, social, and economic contexts. Alongside the proliferation of the alternative food projects that characterize civic agriculture, a growing literature critiques how their implementation runs counter to the ideal of civic agriculture. This study assesses the relevance of three such critiques to urban farming, aiming to understand how different farming models balance civic and economic exchange, prioritize food justice, and create socially inclusive spaces. 
Using a case study approach that incorporated interviews, participant observation, and document review, I compare two urban farms in Baltimore, Maryland: a "community farm" that emphasizes community engagement, and a "commercial farm" that focuses on job creation. Findings reveal the community farm prioritizes civic participation and food access for low-income residents, and strives to create socially inclusive space. However, the farmers' "outsider" status challenges community engagement efforts. The commercial farm focuses on financial sustainability rather than participatory processes or food equity, reflecting the use of food production as a means toward community development rather than propagation of a food citizenry. Both farms meet authentic needs that contribute to neighborhood improvement, though findings suggest a lack of interest by residents in obtaining urban farm food, raising concerns about its appeal and accessibility to diverse consumers. Though not equally participatory, equitable, or socially inclusive, both farms exemplify projects physically and philosophically rooted in the local social context, necessary characteristics for promoting civic engagement with the food system. There is growing recognition that land grabbing is a global phenomenon. In Canada, investors are particularly interested in farmland in Saskatchewan, the province where 40% of the country's agricultural land is situated. This article examines how the changing political, economic, and legal context under neoliberalism has shaped patterns of farmland ownership in Saskatchewan, between 2002 and 2014. Our research indicates that over this time, the amount of farmland owned by investors increased 16-fold. Also, the concentration of farmland ownership is on the rise, with the share of farmland owned by the largest four private owners increasing six-fold. Our methodology addresses some of the criticisms raised in the land grabbing literature. 
By using land titles data, we identified farmland investors and determined their landholdings very precisely, thus allowing us to provide a fine-grained analysis of the actual patterns of farmland ownership. Although the article analyzes changes to farmland ownership in a specific historical, cultural and legislative context, it serves as the basis for a broader discussion of the values and priorities that land ownership policies reflect. Namely, we contrast an 'open for business' approach that prioritizes financial investment with one based on a land sovereignty approach that prioritizes social investment. The latter has greater potential if the aim is ecological sustainability and food sovereignty. This is the story of Slow Food University of Wisconsin (SFUW), a student organization that grew from one woman's idea to a community of over 3200 people dedicated to making sustainable, fairly produced, delicious food accessible in a small city, with a big university, in the heart of the United States. Along the way SFUW has fostered new ideas, developed skills, and built relationships through conscious food procurement, cooking and eating. This essay describes the evolution of the organization and its four projects: Family Dinner Night, South Madison, Outreach and the Café. It hasn't always been easy, but it's always been delicious. The contemporary process of financialization has been a major driver of the remarkable changes witnessed in global food and agricultural markets over the past decade, contributing to the rise and subsequent volatility of food and agricultural commodity prices since 2006. In the wake of these developments it has become clear that the turmoil has intensified the relationship between agriculture and finance in ways that have profound and enduring implications for the sector, and the people whose lives and livelihoods depend upon it. 
This symposium brings together four original research articles that contemplate the contemporary relationship between the agrifood and financial sectors. They examine a variety of overlapping themes, including the creation of financial assets from farmland and agricultural commodities, the activities of different types of investors in these assets in specific geographic contexts, and the challenges of governing this activity at the global scale. These articles show that the period of market volatility that began a decade ago re-invigorated investor interest in financial products linked to agriculture and farming, and inspired the packaging of new forms of financial assets in ways that have affected politics and practice on the ground, and are likely to leave a lasting legacy. This article critically analyzes the assumption that land is becoming increasingly scarce and that, therefore, farmland values are bound to rise across the globe. It investigates the process of land value creation, as well as its flipside: value erosion and stagnation, looking at the various mechanisms involved in each. As such, it is a study of how the financialization of agriculture affects the process of land commoditization. I show that, for farmland to be turned into an asset, a whole range of conditions has to be fulfilled, and I present a typology of asset making in the context of farmland. Asset making, like commoditization, is a process of assemblage, and it is less straightforward and less stable than generally assumed. Further, I argue that 'asset making' is not a one-way process. The article is based on an analysis of global data on land values and the case of farmland investment in the post-Soviet context (Russia and Ukraine). According to portfolio managers, agriculture in general, and farmland in particular, can be considered an emerging asset class. 
Specialized financial vehicles, such as private equity and mutual funds, are emerging and competing to attract potential investment in this asset class. In recent years, there has been significant development of such vehicles targeting South Africa's farming sector. These innovations are led by a group of market intermediaries (e.g. asset managers or consultants) who endeavour to "re-shape" South African farmland as an opportunity for institutional investors. These "pioneers" engage in a multifaceted mediation process between global financial investors on the one hand, and the South African agricultural sector on the other. Drawing upon an empirical study of such intermediaries in South Africa, this paper analyses the concrete mechanisms that facilitate this particular form of commodification. The paper presents and compares the intermediaries, giving particular attention to their structure, governance mechanisms and asset allocations within this "market in the making". It describes how intermediaries develop different paths of asset valorization to unlock the "financial value" of South African farmlands (i.e. "liquifying", standardizing, neutralizing, and depoliticizing agriculture as an asset). But it also highlights some of the difficulties faced in the process of translating between international investors and local managers, questioning the "land-asset fiction" that is materializing through the subordination of farmland to the needs of financial society. This paper proposes two interrelated arguments: first, it is argued that agro-commodity traders are uniquely placed at the crossroads of agricultural trade to benefit from agricultural commodity speculation; and second, that the networks constituting their operations are central to their hedging activities. 
The case of Cargill, the largest privately owned company in the United States and one of the largest agricultural traders in the world, is used to support this argument by unpacking its operations, structure, and hedging strategies. In order to connect the operations of Cargill to its speculating strategies, this paper first traces how agriculture and finance have become increasingly intertwined, leading to heightened agricultural commodity speculation. Second, Cargill will be positioned within this process by analyzing how it has financialized its own strategies and its Corporate Platform. Third, Black River Asset Management, Cargill's private equity arm, will be analyzed to show how it uses the information moving through Cargill's Platform to engage in hedging and/or speculation. This paper examines the recent rise of initiatives for responsible agricultural investment and provides a preliminary assessment of their likely success in curbing the ecological and social costs associated with the growth in private financial investment in the sector over the past decade. I argue that voluntary responsible investment initiatives for agriculture are likely to face similar weaknesses to those experienced in responsible investment initiatives more generally. These include vague and difficult to enforce guidelines, low participation rates, an uneven business case, and confusion arising from multiple and competing initiatives. In addition, the large diversity of investors and high degree of complexity of financial investments further complicate efforts to discern who bears the burden of responsibility in practice. As a result, there is a strong likelihood that voluntary governance initiatives for responsible agricultural investment will shift discourse more than they will change practice. The internet has considerably changed epistemic practices in science as well as in everyday life. 
Apparently, this technology allows more and more people to get access to a huge amount of information. Some people even claim that the internet leads to a democratization of knowledge. In the following text, we will analyze this statement. In particular, we will focus on a potential change in epistemic structure. Does the internet change our common epistemic practice to rely on expert opinions? Does it alter or even undermine the division of epistemic labor? The epistemological framework of our investigation is a naturalist-pragmatist approach to knowledge. We take it that the internet generates a new environment to which people seeking information must adapt. How can they, and how should they, expand their repertory of social markers to continue the venture of filtering, and so make use of the possibilities the internet apparently provides? To find answers to these questions we will take a closer look at two case studies. The first example is about the internet platform WikiLeaks that allows so-called whistle-blowers to anonymously distribute their information. The second case study is about the search engine Google and the problem of personalized searches. Both instances confront a knowledge-seeking individual with particular difficulties which are based on the apparent anonymity of the information distributor. Are there ways for the individual to cope with this problem and to make use of her social markers in this context nonetheless? This paper puts forward a theoretical framework for the analysis of expertise and experts in contemporary societies. It argues that while prevailing approaches have come to see expertise in various forms and functions, they tend to neglect the broader historical and societal context, and importantly the relational aspect of expertise. This will be discussed with regard to influential theoretical frameworks, such as laboratory studies, regulatory science, lay expertise, post-normal science, and honest brokers. 
An alternative framework of expertise is introduced, showing the limitations of existing frameworks and emphasizing one crucial element of all expertise, which is its role in guiding action. This paper addresses the growing problem of retractions from the scientific literature of publications that contain bad data (i.e., data that are fabricated, falsified, or erroneous), also called "false science." While the problem is particularly acute in the biomedical literature because of the life-threatening implications when treatment recommendations and decisions are based on false science, it is relevant for any knowledge domain, including the social sciences, law, and education. Yet current practices for handling retractions are seen as inadequate. We use the metaphor of a virus to illustrate how such studies can spread and contaminate the knowledge system, when they continue to be treated as valid. We suggest drawing from public health models designed to prevent the spread of biological viruses and compare the strengths and weaknesses of the current governance model of professional self-regulation with a proposed public health governance model. The paper concludes by considering the value of adding a triple-helix model that brings industry into the university-state governance mechanisms and incorporates bibliometric capabilities needed for a holistic treatment of the retraction process. Research institutions and universities are positioned in a state of inherent struggle to reconcile the pressures and demands of the external environment with those of the scientific community. This paper is focused on one contested area, the division between basic and applied research, and explores how universities work to balance organizational legitimacy and scientific reputation. 
Building on an in-depth case study of the Weizmann Institute of Science, established as an institute of basic research in the context of the new Israeli state, I explore how managers and scientists at the Institute engaged in organizational experimentation to demarcate basic and applied research during the 1950s-1970s. In analyzing the case of the Weizmann Institute, the paper draws on the concept of boundary-work and explores organizational strategies of boundary-work focused on the demarcation of activities and units and creation of new organizational forms. This paper reports experiences from an art-science project set up in an educational context as well as in the tradition of placing artists in labs. It documents artists' and scientists' imaginations of their encounter and analyses them drawing on the concepts of "boundary object" and "boundary work". Conceptually, the paper argues for broadening the idea of boundary objects to include inhibitory boundary objects that hinder rather than facilitate communication across boundaries. This focus on failures to link social worlds brings the boundary object concept closer to Gieryn's boundary work and allows for a co-application of the two concepts in the analysis of cross-boundary communication. Empirically, the paper provides an in-depth ethnographic description of an art-science project as a resource for future practice. In conclusion, the art-science encounter included meeting points as well as multiple levels of boundary work which engaged the artists in a different way than as illustrators of scientific representations of climate change. The closer they got to the research practice the more the public and policy construct of climate change disappeared. Rather than political activism, the approach triggered explorations of the scientific context, including affirmative as well as critical re-imaginations of research practices. 
Artists and scientists acted as publics for one another, as resources to draw on for reflection and self-identification. But instead of cutting back or renegotiating standards of one's own practice, the artists especially engaged in boundary work, creating space to produce a piece of art according to their own criteria of quality and relevance. This article deals with the relationship between the creator of psychoanalysis, Sigmund Freud, and the Latvian-born Chilean professor of physiology, endocrinologist, and anthropologist Alejandro (or Alexander) Lipschutz. Up till now, the historiography of psychoanalysis in Chile has ignored the existence of this relationship, that is to say, the fact that there exists an interesting exchange of correspondence as well as references to Lipschutz in some important works published by Freud and in Freud's correspondence with the Hungarian psychoanalyst Sandor Ferenczi. There are also references to works on psychoanalysis carried out by Lipschutz in Chile. The Freud-Lipschutz relationship allows us to examine two interesting topics in contemporary historiographical approaches to psychoanalysis. First, it permits us to reflect on the connections that Freud and Ferenczi sought to establish between psychoanalysis and biology (endocrinology in particular) as a strategy to address criticism of the scientific foundations of psychoanalysis and, therefore, to help legitimize psychoanalysis in the field of science. Second, the relationship between Freud, working in a culturally influential city such as Vienna, and Lipschutz, working in a 'peripheral' country such as Chile, paves the way to reflect on the consequences of a history of psychoanalysis written from the perspective of the 'margins'. 
This is a history that focuses not on regions where early industrialization and modernization processes, along with an important academic and scientific tradition, help explain the interest in and reception of psychoanalysis, but on regions where different sets of conditions have to be examined to explain appropriation and dissemination processes. Today, complaints about information overload, associated with an overwhelming deluge of data, are commonplace. Early modernists have reacted to these concerns by showing that similar ones have arisen before. While this perspective is useful, it leaves out what was novel about the concept of information overload, which relied on a historically specific model of the human being. I trace the term's history back to 1960, when the American psychologist and systems theorist James Grier Miller published his article on 'information input overload and psychopathology'. In Grier Miller's usage, the idea of information overload signalled as much a reconceptualization of human beings as communication channels whose capacity could be overwhelmed as it did concerns about the volume of reading material to manage. Through his work, and the broader subsequent adoption of the term in academic and journalistic venues, I show how 'information overload' reflected intellectual and social trends specific to its time. Twentieth-century anthropology has been operating with the assumption of one nature and many cultures, one reality experienced and lived in many different ways. Its primary job, therefore, has been to render the otherness of the other understandable, to demonstrate that although different it is also the same; in short, to show that although other, others are people like us. The latest theoretical paradigm, known as the 'ontological turn', appears to reverse this assumption and to posit many natures and one culture. 
Whether it does in fact reverse it and constitutes a meta-ontology, as critics have pointed out, or it is only a heuristic, methodological device, as some of the proponents of the 'turn' have recently argued, the contention of my article is the same: first, this move, the ontological one, is made in the hope of doing a better job in redeeming otherness than earlier anthropological paradigms; second, it fails as they did, in the same way and for the same reasons. An early proponent of the social sciences, Frederic Le Play, was the occupant of senior positions within the French state in the mid- to late 19th century. He was writing at a time when science was ascending. There was for him no doubt that scientific observation, correctly applied, would allow him unmediated access to the truth. It is significant that Le Play was the organizer of a number of universal expositions because these expositions were used as vehicles to demonstrate the ascendant position of western civilization. The fabrication of linear time is a history of progress requiring a vision of history analogous to the view offered the spectator at a diorama. Le Play employed the design principles and spirit of the diorama in his formulations for the social sciences, and L'Exposition Universelle of 1867 used the technology wherever it could. Both the gaze of the spectators and the objects viewed are part and products of the same particular and unique historical formation. Ideas of perception cannot be separated out from the conditions that make them possible. Vision and its effects are inseparable from the observing subject who is both a product of a particular historical moment and the site of certain practices. 
The Cambridge Malting House, an experimental school, serves here as a case study for investigating the tensions within 1920s liberal elites between their desire to abandon some Victorian and Edwardian sets of values in favour of more democratic ones, and at the same time their insistence on preserving themselves as an integral part of the English upper class. Susan Isaacs, the manager of the Malting House, provided the parents, some of whom were the most famous scientists and intellectuals of their age, with an opportunity to fulfil their 'fantasy' of bringing up children in total freedom. In retrospect, however, she deeply criticized those from their milieu for not fully understanding the real socio-cultural implications of their ideological decision to make independence and freedom the core values in their children's education. Thus, 1920s progressive education is a paradigmatic case study of the cultural and ideological inner contradictions within liberal thought in the interwar era. The article also shows how psychoanalysis, which attracted many progressive educators, played a crucial role in providing liberals of all sorts with a new language to articulate their political visions, but, at the same time, explored the limits of the liberal discourse as a whole. There has been consistent interest in telepathy within psychoanalysis from its start. Relational psychoanalysis, which is a relatively new development in psychoanalytic theory and practice, seems more receptive to experiences between patient and analyst that suggest ostensibly anomalous communicative capacities. To establish this openness to telepathic phenomena with relational approaches, a selection of papers recently published in leading academic journals in relational psychoanalysis is examined. This demonstrates the extent to which telepathy-like experiences are openly presented and seriously considered in the relational community. 
The article then discusses those characteristics of the relational approach that may facilitate greater openness to telepathic experience. The argument is that relational psychoanalysis provides a coherent framework in which otherwise anomalous phenomena of patient-analyst interaction can be understood. Historically, American political science has rarely engaged popular culture as a central topic of study, despite the domain's outsized influence in American community life. This article argues that this marginalization is, in part, the by-product of long-standing disciplinary debates over the inadequate political development of the American public. To develop this argument, the article first surveys the work of early political scientists, such as John Burgess and Woodrow Wilson, to show that their reformist ambitions largely precluded discussion of mundane activities of social life such as popular culture. It then turns to Harold Lasswell, who produced some of the first investigations of popular culture in American political science. Ironically, however, his work, and the work of those who adapted similar ways of speaking about popular culture after him, only reinforced skepticism concerning the American public. It has thus helped keep the topic on the margins of disciplinary discourse. During the second half of the nineteenth century, land frontiers became areas of unique significance for surveyors in colonial India. These regions were understood to provide the most stringent tests for the men, instruments, and techniques that collectively constituted spatial data and representations. In many instances, however, the severity of the challenges that India's frontiers afforded stretched practices in the field and in the survey office beyond breaking point. Far from producing supposedly unequivocal maps, many involved in frontier surveying acknowledged that their work was problematic, partial, and prone to contrary readings. 
They increasingly came to construe frontiers as spaces that exceeded scientific understanding, and resorted to descriptions that emphasized fantastical and disorienting embodied experiences. Through examining the many crises and multiple agents of frontier mapping in British India, this article argues that colonial surveying and its outputs were less assured and more convoluted than previous histories have acknowledged. There has been an explosion of interest in innovation-oriented knowledge and utility in early modern knowledge economies. Despite this, a healthy skepticism surrounding the category of useful knowledge persists, at least in part because of its association with intentional concealment. Helpful in many ways, this skepticism has fostered a tendency to overlook a variety of efforts to teach useful knowledge in the period: efforts that were anchored in engagement with the real and involved the cultivation of an ability to direct the powers of the imagination. Indeed, for some the imagination served as a faculty central to an epistemology of use. This article takes as its example a handbook written by an early political economist (c. 1700) who endeavored to teach readers how to imagine uses for things they observed in collections while traveling so they would be better prepared to participate in a new, transnational culture of innovation. This essay challenges the dominance of the spherical earth model in fifteenth- and early-sixteenth-century Western European thought. It examines parallel strains of Latin and vernacular writing that cast doubt on the existence of the southern hemisphere. Three factors shaped the alternate accounts of the earth as a plane and disk put forward by these sources: (1) the unsettling effects of maritime expansion on scientific thought; (2) the revival of interest in early Christian criticism of the spherical earth; and (3) a rigid empirical stance toward entities too large to observe in their entirety, including the earth. 
Criticism of the spherical earth model faded in the decades after Magellan's crew returned from circumnavigating the earth in 1522. What did nineteenth-century chemists know? This essay uses Emil Fischer's classic study of the sugars in 1880s and 90s Germany to argue that chemists' knowledge was not primarily vested in the theories of valence, structure, and stereochemistry that have been the subject of so much historical and philosophical analysis of chemistry in this period. Nor can chemistry be reduced to a merely manipulative exercise requiring little or no intellectual input. Examining what chemists themselves termed the art of chemical experimentation reveals chemical practice as inseparable from its cognitive component, and it explains how chemists integrated theory with experiment through reason. In 1964, William Hamilton presented a mechanism for the evolution of altruism, which was perceived by its main promoters as an alternative to explanations based on "group selection," invoking advantage to the population to account for the evolution of such traits. Less than ten years later, Hamilton used the framework developed by George Price to model the evolution of an altruistic gene in a structured population, a result that has been interpreted as a spectacular conversion to group selection. This paper revisits the modeling research on altruism and the considerable semantic ambiguities concerning the levels of selection in the late 1960s and early 1970s, by studying in close detail the reflections and exchanges among Hamilton, Price, Robert Trivers, and Ilan Eshel. The challenge in this research was not simply to find and model robust mechanisms for the evolution of altruism, but to interpret their properties in unambiguous terms that could be accepted by other researchers. The continuing debate over the levels of selection results from the tension between the properties of the models and the words used in interpreting them. 
This article explores the emotional community of museum natural scientists in late nineteenth- and early twentieth-century Argentina, a context in which the growth of museum natural sciences and nation-state formation became closely intertwined. Influenced by powerful nineteenth-century notions of civilization and modernity, Argentine scientists and statemakers sought to create a distinctively Argentine science, which would emulate European science in form but also retain a uniquely national character. A small group of influential museum administrators and scientists consciously strove to strengthen science's influence in Argentine national society by creating communal norms among scientists that resonated with narratives about civilization and modernity, and that guided proper behavior and emotional expression. Scientists also challenged the expectations of their community, testing the strength of central emotional tenets such as patriotism and objectivity. This article uses emotional communities as a framework for exploring the push and pull between social patterns and individual choices in this critical moment in Argentina's history, when new and powerful ideas about science - as a modern, objective, and national practice - emerged in tandem with nation-state formation. In particular, this article explores museum natural scientists' emotional concerns with objectivity and patriotism through a small group of Argentine museum natural scientists: Francisco P. Moreno, Juan B. Ambrosetti, Hermann Burmeister, and Florentino Ameghino. This paper aims to contribute to a better understanding of the history of biology and forestry in Portugal. It will focus on the one state-owned cork oak station devoted to forestry research, showing how its foresters and scientists shaped, and relied on, the state-controlled unions, both for producing and distributing varieties of cork oak and for controlling the seeds and plants forest owners used. 
Portugal played a very special role in the international development of Mediterranean forest genetics during the first half of the twentieth century. Forestry genetics were decisive for the Estado Novo government, and the Alcobaça Station became a model for the future organization of other countries' applied forestry research centers. The paper shows how the milieu of forestry scientists and breeders played an important role in the development and institutionalization of genetics in Portugal. The paper will explore how these relationships made it possible for the scientists to test, multiply, and distribute the seeds and plants they produced at the laboratory throughout the Portuguese landscape, thus demonstrating the role of scientists as active agents of state formation and landscape transformation within a corporate political economy. The history of the Alcobaça Forest Station is an important example of fascist institution building. Nanomedicine offers remarkable options for new therapeutic avenues. As methods in nanomedicine advance, ethical questions conjunctly arise. Nanomedicine is an exceptional niche in several aspects as it reflects risks and uncertainties not encountered in other areas of medical research or practice. Nanomedicine partially overlaps, partially interlocks and partially exceeds other medical disciplines. Some interpreters agree that advances in nanotechnology may pose varied ethical challenges, whilst others argue that these challenges are not new and that nanotechnology basically echoes recurrent bioethical dilemmas. The purpose of this article is to discuss some of the ethical issues related to nanomedicine and to reflect on the question whether nanomedicine generates ethical challenges of a new and unique nature. Such a determination should have implications for regulatory processes and professional conducts and protocols in the future. 
We investigated family members' lived experience of Parkinson's disease (PD), aiming to identify opportunities for well-being. A lifeworld-led approach to healthcare was adopted. Interpretative phenomenological analysis was used to explore in-depth interviews with people living with PD and their partners. The analysis generated four themes: It's more than just an illness revealed the existential challenge of diagnosis; Like a bird with a broken wing emphasized the need to adapt to increasing immobility through embodied agency; Being together with PD explored the kinship within couples and the belonging experienced through support groups; and Carpe diem! illuminated the significance of time and the fractured future orientation created by diagnosis. Findings were interpreted using an existential-phenomenological theory of well-being. We highlighted how partners shared the impact of PD in their own ontological challenges. Further research with different types of families and in different situations is required to identify the services needed to facilitate the process of learning to live with PD. Care and support for the family unit need to provide emotional support to manage threats to identity and agency alongside problem-solving for bodily changes. Adopting a lifeworld-led healthcare approach would increase opportunities for well-being within the PD illness journey. Health care systems are studied intensively on various levels. Such studies are important because the systems suffer from numerous pathologies. Health care is analyzed primarily in economic terms, but its functionality is also studied within the framework of systems theory. There are also attempts to work out some general values on which health care systems should be based. Nevertheless, the aforementioned studies are fragmentary. In this paper a holistic approach to the philosophical basis of health care is presented. 
The levels on which the problem can be considered are specified explicitly, and the relations between them are analyzed. A philosophical basis on which national health care systems could be built is proposed, with personalism as its foundation. First, the values derived from personalistic philosophy are specified as the basic ones for health care systems. Then, general organizational and functional properties of the system are derived from the assumed values. The possibility of adapting solutions from other fields of social experience is also mentioned. The existing health care systems are analyzed within the frame of the introduced proposal. Guidelines orient best practices in medicine, yet, in health care, many real-world constraints limit their optimal realization. Since guideline implementation problems are not systematically anticipated, they will be discovered only post facto, in a learning curve period, while the already implemented guideline is tweaked, debugged and adapted. This learning process comes with costs to human health and quality of life. Despite such a predictable hazard, the study and modeling of medical guideline implementation is still seldom pursued. In this article we argue that to systematically identify, predict and prevent medical guideline implementation errors is both an epistemic responsibility and an ethical imperative in health care, in order to properly provide beneficence, minimize or avoid harm, show respect for persons, and administer justice. Furthermore, we suggest that implementation knowledge is best achieved technically by providing simulation modeling studies to anticipate the realization of medical guidelines, in multiple contexts, with system and scenario analysis, in alignment with the emerging field of implementation science and in recognition of learning health systems. 
It follows from both claims that it is an ethical imperative and an epistemic responsibility to simulate medical guidelines in context to minimize (avoidable) harm in health care, before guideline implementation. In biomedical research, lack of trust is seen as a great threat that can severely jeopardise the whole biomedical research enterprise. Practices such as informed consent, and also the administrative and regulatory oversight of research in the form of research ethics committees and Institutional Review Boards, are established to ensure the protection of future research subjects and, at the same time, restore public trust in biomedical research. Empirical research also testifies to the role of trust as one of the decisive factors in research participation and lack of trust as a barrier to consenting to research. However, what is often missing is a clear definition of trust. This paper seeks to address this gap. It starts with a conceptual analysis of the term trust. It compares trust with two other related terms, those of reliance and trustworthiness, and offers a defence of Baier's attribute of 'good will' as a basic characteristic of trust. It then proceeds to consider trust in the context of biomedical research by examining two questions: first, is trust necessary in biomedical research?; and second, do increases in regulatory oversight of biomedical research also increase trust in the field? This paper argues that regulatory oversight is important for increasing reliance in biomedical research, but it does not improve trust, which remains important for biomedical research. It finishes by pointing to professional integrity as a way of promoting trust and trustworthiness in this field. This paper invokes the conceptual framework of Bourdieu to analyse the mechanisms that help to maintain inappropriate authorship practices and the functions these practices may serve. 
Bourdieu's social theory, with its emphasis on mechanisms of domination, can be applied to the academic field, too, where competition is omnipresent, control mechanisms of authorship are loose, and the result of performance assessment can be a matter of symbolic life and death for researchers. This results in a problem of a game-theoretic nature, in which researchers' behaviour will be determined more by the logic of competition than by individual character or motives. From this it follows that changing this practice requires institutionalized mechanisms, and change cannot be expected from simply appealing to researchers' individual conscience. The article aims at showing that academic capital (administrative power, seniority) is translated into honorary authorship. With little control, undetected honorary authorship gives the appearance of possessing intellectual capital (scientific merit). In this way a dominant position is made to be seen as the natural result of intellectual ability or scientific merit, which makes it more acceptable to those in dominated positions. The final conclusion of this paper is that undemocratic authorship decisions and authorship-based performance assessment together are a form of symbolic violence. Evidence-based medicine (EBM) and medical professionalism are two prominent notions in current medical debates. However, proponents of professionalism fear a restriction of doctors' freedom to make their best decisions for individual patients, caused by the influence of EBM and highly standardised decision procedures. The challenge which EBM allegedly poses to physicians' discretion forms the starting point for an analysis of the relationship between professionalism, as an inherent value system of medical practice, and EBM, as an approach to optimising decision-making for individual patients. The analysis starts with a brief conceptual clarification of the ambiguous term "professionalism". 
It then focuses on three key aspects of medical professionalism which may come into conflict with the basic tenets of EBM. The potential tensions between (a) professional autonomy and clinical practice guidelines, (b) individualised care and standardisation, and (c) esoteric authority and public accountability are analysed, and a suggestion for reconciliation is made on each point. The article closes with a summary of how a better reflection on medical professionalism may help towards a fuller understanding of EBM and vice versa. Recent health legislation in Norway significantly increases access to specialist care within a legally binding time frame. The paper describes the contents of the new legislation and some of the challenges associated with the proliferation of legal rights to health care. It explains the benefits of assessing the new law in the light of a rights framework. It then analyses the problematic aspects of establishing additional priority rules as solutions to rights conflicts. It then defends adequacy criteria for acceptable priority rules when such rules are unavoidable. It finally defends our proposed method and explores concrete applications. In this study I explore from a phenomenological perspective the relationship between affectivity and narrative self-understanding in depression. Phenomenological accounts often conceive of the disorder as involving disturbances of the narrative self and suggest that these disturbances are related to the alterations of emotions and moods typical of the illness. In this paper I expand these accounts by advancing two sets of claims. 
In the first place, I suggest that, due to the loss of feeling characteristic of the illness, the narratives with which the patients identified prior to the onset of depression are altered in various ways, thus leading to the weakening or abandonment of the narratives themselves. I then move to show that these autobiographical narratives are replaced by new stories which possess a distinctive structure and I argue that this is dependent upon specific configurations of affective experience, such as existential feelings of guilt, hopelessness, and isolation, and particular forms of temporal and spatial experience. In this paper we explore the rise of 'the breast cancer gene' as a field of medical, cultural and personal knowledge. We address its significance in the Norwegian public health care system in relation to so-called biological citizenship in this particular national context. One of our main findings is that, despite its claims as a measure for health and disease prevention, gaining access to medical knowledge of BRCA 1/2 breast cancer gene mutations can also produce severe instability in the individuals and families affected. That is, although gene testing provides modern subjects with an opportunity to foresee their biological destiny and thereby become patients in waiting, it undoubtedly also comes with difficult existential dilemmas and choices, with implications that resonate beyond the individual and into different family and love relations. By elaborating on this finding we address the question of whether the empowerment slogan, which continues to be advocated through various health, BRCA and breast cancer discourses, reinforces a naïve or an idealized notion of the actively responsible patient: resourceful enough to seek out medical expertise and gain sufficient knowledge, on which to base informed decisions, thereby reducing the future risk of developing disease. 
In contrast to this ideal, our Norwegian informants tell a different story, in which there is no apparent heroic mastery of genetic fates, but rather a pragmatic attitude to dealing with a dire situation over which they have little control, despite having complied with medical advice through national guidelines and follow-up procedures for BRCA 1/2 carriers. In conclusion we claim that the sense of safety that gene testing and its associated medical solutions allegedly promise to provide proved illusory. Although BRCA-testing offers the potential for protection from adverse DNA-heritage, administered through possibilities for self-monitoring and self-management of the body, the feeling of 'being in good health' has hardly been reinforced by the emergence of gene technology. In public health, the issue of pharmaceutical pricing is a perennial problem. Recent high-profile examples, such as the September 2015 debacle involving Martin Shkreli and Turing Pharmaceuticals, are indicative of larger, systemic difficulties that plague the pharmaceutical industry with regard to drug pricing and the impact it has on the industry's reputation in the eyes of the public. For public health ethics, the issue of pharmaceutical pricing is crucial. Simply put, individuals within a population require pharmaceuticals for disease prevention and management. In order to be effective, these pharmaceuticals must be accessibly priced. This analysis will explore the notion of corporate social responsibility with regard to pharmaceutical pricing, with the aim of restoring a positive reputation to the pharmaceutical industry in the public eye. The analysis will utilize the 2005 United Nations Educational, Scientific, and Cultural Organization's Universal Declaration on Bioethics and Human Rights (UDBHR) to establish implications regarding the societal responsibilities of pharmaceutical companies in a global context. 
To accomplish this, Article 14 of the UDBHR - social responsibility and health - will be articulated in order to advocate a viewpoint of socially responsible capitalism in which pharmaceutical companies continue as profit-making ventures, yet establish moral concern for the welfare of all their stakeholders, including the healthcare consumer. Dementia is highly prevalent and, as yet, incurable. If we may believe the narrative that is currently dominant in dementia research, in the future we will not have to suffer from dementia anymore, as there will be a simple techno-fix solution. It is just a matter of time before we can solve the growing public health problem of dementia. In this paper we take a critical stance towards overly positive narratives of techno-fixes by placing our empirical analysis of dementia research protocols and political statements in a framework of technology assessment. From this perspective, it becomes obvious that a techno-fix is just one of many ways to approach societal problems and, more importantly, that technologies are far less perfect than they are presented to be. We will argue that this narrow scope, which focusses on the usual suspects for solving illnesses, reduces dementia to organismic aspects, and may be counterproductive in finding a cure for dementia. We conclude by outlining how the narrow scope can be balanced with other narratives and why we should have a reasonable scepticism towards the usual suspects. Slippery-slope arguments typically question a course of action by estimating that it will end in misery once the first unfortunate step is taken. Previous studies indicate that estimations of the long-term consequences of certain debated actions, such as legalizing physician-assisted suicide, may be strongly influenced by tacit personal values. In this paper, we suggest that to the extent that slippery-slope arguments rest on estimations of future events, they may be mere rationalizations of personal values. 
This might explain why there are proponents even for strikingly poor slippery-slope arguments. Which domains of biology do philosophers of biology primarily study? The fact that philosophy of biology has been dominated by an interest in evolutionary biology is widely admitted, but it has not been strictly demonstrated. Here I analyse the topics of all the papers published in Biology & Philosophy, just as the journal celebrates its thirtieth anniversary. I then compare the distribution of biological topics in Biology & Philosophy with that of the scientific journal Proceedings of the National Academy of Sciences of the USA, focusing on the recent period 2003-2015. This comparison reveals a significant mismatch between the distributions of these topics. I examine plausible explanations for that mismatch. Finally, I argue that many biological topics underrepresented in philosophy of biology raise important philosophical issues and should therefore play a more central role in future philosophy of biology. Despite its explanatory clout, the theory of evolution has thus far compiled a modest record with respect to predictive power - that other major hallmark of scientific theories. This is considered by many to be an acceptable limitation of a theory that deals with events and processes that are intrinsically random (and historic). However, whether this is an inherent restriction or simply the sign of an incomplete theory is an open question. In an attempt to help answer that question, we propose a classification scheme for several types of prediction that might occur with regard to evolutionary systems, then explore the nature of these predictions in a system that simulates the evolution of neural architectures. This provides a platform from which to consider the relevance of such observations for real biological systems and illuminates a variety of key issues pertaining to prediction in those environments. 
In a recent article in this Journal, Fumagalli (Biol Philos 26:617-635, 2011) argues that economists are provisionally justified in resisting prominent calls to integrate neural variables into economic models of choice. In other articles, various authors engage with Fumagalli's argument and try to substantiate three often-made claims concerning neuroeconomic modelling. First, the benefits derivable from neurally informing some economic models of choice do not involve significant tractability costs. Second, neuroeconomic modelling is best understood within Marr's three levels of analysis framework for information-processing systems. And third, neural findings enable choice modellers to confirm the causal relevance of variables posited by competing economic models, identify causally relevant variables overlooked by existing models, and explain observed behavioural variability better than standard economic models. In this paper, I critically examine these three claims and respond to the related criticisms of Fumagalli's argument. Moreover, I qualify and extend Fumagalli's account of how trade-offs between distinct modelling desiderata hamper neuroeconomists' attempts to improve economic models of choice. I then draw on influential neuroeconomic studies to argue that even the putatively best available neural findings fail to substantiate current calls for a neural enrichment of economic models. Robert Trivers has proposed perhaps the only serious adaptationist account of self-deception - that the primary function of self-deception is to better deceive others. But this account covers only a subset of cases and needs further refinement. A better evolutionary account of self-deception, and of cognitive biases more generally, will more rigorously recognize the various ways in which false beliefs affect both the self and others. 
This article offers formulas for determining the optimal doxastic orientation, giving special consideration to conflicted self-deception as an alternative to outright self-delusion. A novel taxonomy of self-deception, as it relates to the beliefs held by others, is also presented. While Trivers makes a plausible case for the adaptive value of certain cognitive biases, a more fragmented and nuanced account of the social forces impacting the evolution of self-deception is needed. 'Gouldian arguments' appeal to the contingency of a scientific domain to establish that domain's autonomy from some body of theory. For instance, pointing to evolutionary contingency, Stephen Jay Gould suggested that natural selection alone is insufficient to explain life on the macroevolutionary scale. In analysing contingency, philosophers have provided source-independent accounts, understanding how events and processes structure history without attending to the nature of those events and processes. But Gouldian arguments require source-dependent notions of contingency. An account of contingency is source-dependent when it is indexed to (1) some pattern (e.g., microevolution or macroevolution) and (2) some process (e.g., natural selection, species sorting, etc.). Positions like Gould's do not turn on the mere fact of life's contingency - that life's shape could have been different due to its sensitivity to initial conditions, path-dependence or stochasticity. Rather, Gouldian arguments require that the contingency is due to particular kinds of processes: in this case, those which microevolutionary theory cannot account for. This source-dependent perspective clarifies both debates about the nature and importance of contingency, and empirical routes for testing Gould's thesis. There have been periodic claims that evolutionary biology needs urgent reform, and this article tries to account for the volume and persistence of this discontent. 
It is argued that a few inescapable properties of the field make it prone to criticisms of predictable kinds, whether or not the criticisms have any merit. For example, the variety of living things and the complexity of evolution make it easy to generate data that seem revolutionary (e.g. exceptions to well-established generalizations, or neglected factors in evolution), and lead to disappointment with existing explanatory frameworks (with their high levels of abstraction, and limited predictive power). It is then argued that special discontent stems from misunderstandings and dislike of one well-known but atypical research programme: the study of adaptive function, in the tradition of behavioural ecology. To achieve its goals, this research needs distinct tools, often including imaginary agency, and a partial description of the evolutionary process. This invites mistaken charges of narrowness and oversimplification (which come, not least, from researchers in other subfields), and these chime with anxieties about human agency and overall purpose. The article ends by discussing several ways in which calls to reform evolutionary biology actively hinder progress in the field. In response to Germain's (Biol Philos 27: 785-810, 2012. doi: 10.1007/s10539-012-9334-2) argument that evolution by natural selection has a limited explanatory power in cancer, Lean and Plutynski (Biol Philos 31: 39-57, 2016. doi: 10.1007/s10539-015-9511-1) have recently argued that many adaptations in cancer only make sense at the tumor level, and that cancer progression mirrors the major evolutionary transitions. While we agree that selection could potentially act at various levels of organization in cancers, we argue that tumor-level selection (MLS2) is unlikely to actually play a relevant role in our understanding of the somatic evolution of human cancers. 
This article describes a medieval English astrolabe usually known as Mensing-26 and now in the Adler Planetarium and Astronomical Museum in Chicago. Details of its star positions and names, saints' feast days, metallurgy, construction, and general style (featuring a quatrefoil rete) are examined and used to place the instrument as one of a small group of astrolabes, epitomised by the great Sloane astrolabe, which has been associated with King Edward III. Using this hypothesis, potential original owners of the instrument in c. 1330-1340 are proposed. It is also shown how the astrolabe was later modified to have a second life as a working instrument in Renaissance Florence. A page of notes in Copernicus's hand shows the origin of the numerical parameters in the Commentariolus from the Alfonsine Tables and provides evidence for the derivation of the heliocentric theory from Regiomontanus's description of eccentric models of the second inequality for superior and inferior planets in the Epitome of the Almagest. This is an explanation of the derivations of the parameters and of the heliocentric theory, followed by comments on Professor Jamil Ragep's criticisms and his own derivation of the heliocentric theory directly from the planetary models of Ibn ash-Shatir. It is a long time since the last comprehensive compilation of meteoric observations from medieval European sources was published. Since then, the advances in information technology, search engines and, above all, the emergence and development of the Internet have facilitated the access to and search for these records for scholars, making their work easier and even avoiding the need to go to libraries that keep the documents. In this paper, we have significantly enlarged the list of reports of medieval European meteoric events, using mainly the current classical sources and also other local documents previously not considered by the authors that have dealt with this issue. 
The "lost" Yahya ibn Adi treatises recently discovered in the Tehran codex Marwi 19 include a record of a philosophical debate instigated by the Hamdanid prince Sayf al-Dawla. More precisely, Marwi 19 contains Yahya's adjudication of a dispute between an unnamed Opponent and Yahya's younger relative Ibrahim ibn Adi (who also served as al-Farabi's assistant), along with Ibrahim's response to Yahya's adjudication, and Yahya's final word. At issue was a problem of Aristotelian exegesis: should "body" be understood as falling under the category of substance or under the category of quantity? The unnamed Opponent argues that body is a species of substance; Ibrahim argues that, technically speaking, body is a species of quantity, and hence an accident; and Yahya judges that body is a species of substance, though for very different reasons than the Opponent gives. For the first time, the Arabic text of this exchange is edited and translated into English. Also provided is an Introduction that sets the debate in historical context and discusses in particular the possible influence of John Philoponus. The debate is interesting and important not only because of the philosophical ramifications of the issues under discussion, but also because it constitutes evidence of dialectical practice among Arabic-speaking philosophers from the middle of the 10th century. This article argues that a fragment from a lost treatise by Abu Bakr al-Razi (d. 925) is preserved in the Book on Morphology (Kitab al-Tasrif) by Ps-Gabir ibn Hayyan. Paul Kraus reached the conclusion that the collection to which this book belongs was written between the end of the ninth and the beginning of the tenth century AD. This fragment represents the first attempt, to our knowledge, to analyze the logical structure of sign-based inference in Arabic, which is known as istidlal bi-al-sahid ala al-gaib among theologians and philosophers. 
The author distinguishes between sign-inferences based on homogeneity (al-muganasa), course of habit (magra al-ada), and traces (atar). After providing a translation of the fragment, the first part of this paper argues that its author is Abu Bakr al-Razi. My argument is based on a comparison between this text and a passage from the Doubts About Galen, which is also by Abu Bakr al-Razi. I hypothesize that at least two other fragments from the same work, or from different works by Abu Bakr al-Razi, are preserved in the corpus attributed to Ps-Gabir. The second part of the paper aims to reconstruct Abu Bakr al-Razi's theory of sign-inference. In so doing, I show the historical influence that Hellenistic debates on sign-inference might have had on al-Razi, and I situate al-Razi's theory in the context of the prominent use that the theologians of the kalam made of the istidlal bi-al-sahid ala al-gaib. To offer a more comprehensive reconstruction of al-Razi's theory of sign-inference, this article compares the critical approach presented in the newly identified fragment with the epistemological framework outlined in the Doubts About Galen. Finally, this article shows that Abu Bakr al-Razi's theory of sign-inference had a strong influence on al-Farabi's logical developments, especially in his Epitome of the Prior Analytics, even if he does not acknowledge this intellectual debt. Abu Bakr al-Razi (d. 925) and al-Farabi (d. 950) both adopt the classical ideal of a philosophical way of life, in the sense that being a philosopher implies certain ethical guidelines to which the philosopher should adhere. In both cases, moreover, their ethical writings appear to reflect a certain tension with respect to what the ethical goal of the philosopher consists of. In this study, I will argue that this apparent tension is relieved when their ethics is understood as a progression in a double sense. 
In the first sense, both authors adopt the Neoplatonic distinction between pre-philosophical and philosophical ethics. The second aspect of the progression takes place within the degree of virtue required of the philosopher, which for al-Razi and al-Farabi proceeds in contrary directions. For al-Razi, the philosopher progresses from the moderately ascetic requirements of the Spiritual Medicine to the higher license present in the Philosophical Life, following the stages of the life of Socrates. In contrast, for al-Farabi the progression follows roughly the Neoplatonic grades of virtue, from Aristotelian moderation, which in the Exhortation to the Way to Happiness is connected with character training in a pre-philosophical sense, towards purely contemplative existence. It is known that Farabi, in his political program, philosophically takes up some Islamic sciences, such as kalam and fiqh. Focusing here on the case of fiqh, and within the limits of his K. al-Milla, we try to establish correspondences between his theory of legislation and references to historically attested sciences. Our purpose is to show that he was able to connect fiqh to political science through an undeclared use of ilm usul al-fiqh (the science of the principles of juridical science). He empties this science of its own material and preserves its form in order to fill it with philosophical material. This helps to clarify his conception, in this book, of the shift from the voluntary universal to the particular. This shift is governed by a set of formal rules. These guidelines, which are rather deliberative, take their place as a subject matter of political science, alongside universals, its classical subject matter. We think that these formal rules are borrowed from this typical Islamic science whose subject matter is the study of the principles and rules governing the inference, from their sources, of particular legal statuses, the subject matter of fiqh. 
If, from the strictly philosophical point of view, political science provides the sources or foundations from which are inferred both the original and primary legislation initiated by the founder of the religion (al-milla) and the derived and secondary legislation elaborated by his successors, the methodological part of this Islamic science would be equivalent to the general prescriptions necessary for the application of universals so that concrete cases can be determined. For this purpose, Farabi borrows freely from Shafii's famous Risala, the prototype of the treatises on usul al-fiqh. The contribution of the body to cognition and control in natural and artificial agents is increasingly described as offloading computation from the brain to the body, where the body is said to perform morphological computation. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the offloading perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as reservoir computing, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: the question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control, that is, how it contributes to the overall orchestration of intelligent behavior. It is well known that cooperation cannot be an evolutionarily stable strategy for a non-iterative game in a well-mixed population. In contrast, structured populations favor cooperation, since cooperators can benefit each other by forming local clusters. 
Previous studies have shown that scale-free networks strongly promote cooperation. However, little is known about the invasion mechanism of cooperation in scale-free networks. To study microscopic and macroscopic behaviors of cooperators' invasion, we conducted computational experiments on the evolution of cooperation in scale-free networks where, starting from all defectors, cooperators can spontaneously emerge by mutation. Since the evolutionary dynamics are influenced by the definition of fitness, we tested two commonly adopted fitness functions: accumulated payoff and average payoff. Simulation results show that cooperation is strongly enhanced with the accumulated payoff fitness compared to the average payoff fitness. However, the difference between the two functions decreases as the average degree increases. As the average degree increases, cooperation decreases for the accumulated payoff fitness, while it increases for the average payoff fitness. Moreover, for the average payoff fitness, low-degree nodes play a more important role in spreading cooperative strategies than for the accumulated payoff fitness. We develop and apply several novel methods quantifying dynamic multi-agent team interactions. These interactions are detected information-theoretically and captured in two ways: via (i) directed networks (interaction diagrams) representing significant coupled dynamics between pairs of agents, and (ii) state-space plots (coherence diagrams) showing coherent structures in Shannon information dynamics. This model-free analysis relates, on the one hand, the information transfer to responsiveness of the agents and the team, and, on the other hand, the information storage within the team to the team's rigidity and lack of tactical flexibility. The resultant interaction and coherence diagrams reveal implicit interactions, across teams, that may be spatially long-range. 
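The accumulated- versus average-payoff distinction in the scale-free cooperation study above can be illustrated with a minimal sketch. This is our own toy example, not the study's code: on a star graph (one hub, n leaves), a cooperating hub that accumulates payoff earns from every game it plays, while averaging by degree removes that advantage. The function name and parameters (b, c of a donation-type Prisoner's Dilemma) are illustrative assumptions.

```python
# Toy illustration (not the paper's model): accumulated vs. average payoff
# on a star graph where every node cooperates. A cooperator pays cost c
# to give benefit b to each neighbor.

def payoffs(n_leaves, b=3.0, c=1.0):
    """Return (hub accumulated, leaf accumulated, hub average, leaf average)."""
    # The hub plays one game per leaf: it receives b from each and pays c to each.
    hub_accumulated = n_leaves * (b - c)
    # Each leaf plays a single game, with the hub.
    leaf_accumulated = b - c
    # Average payoff divides the accumulated payoff by the node's degree.
    hub_average = hub_accumulated / n_leaves
    leaf_average = leaf_accumulated / 1
    return hub_accumulated, leaf_accumulated, hub_average, leaf_average

ha, la, hv, lv = payoffs(10)
# With accumulated payoff the hub earns 10x a leaf (20.0 vs 2.0), so a
# cooperating hub resists invasion; with average payoff the degree
# advantage disappears (2.0 vs 2.0).
```

This is the usual intuition for why high-degree hubs stabilize cooperation under accumulated payoff, consistent with the abstract's finding that the difference between the two fitness definitions shrinks as degree heterogeneity matters less (higher average degree).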
The analysis was verified with a statistically significant number of experiments (using simulated football games, produced during RoboCup 2D Simulation League matches), identifying the zones of the most intense competition, the extent and types of interactions, and the correlation between the strength of specific interactions and the results of the matches. We investigate a hierarchical approach to robot control inspired by joint-level control in animals. The method combines a high-level controller, consisting of an artificial neural network (ANN), with joint-level controllers based on digital muscles. In the digital muscle model (DMM), morphological and control aspects of joints evolve concurrently, emulating the musculoskeletal system of natural organisms. We introduce and compare different approaches for connecting outputs of the ANN to DMM-based joints. We also compare the performance of evolved animats with ANN-DMM controllers with those governed by only high-level (ANN-only) and low-level (DMM-only) controllers. These results show that DMM-based systems outperform their ANN-only counterparts while also exhibiting less complex ANNs in terms of the number of connections and neurons. The main contribution of this work is to explore the evolution of artificial systems where, as in natural organisms, some aspects of control are realized at the joint level. Evolutionary robotics using real hardware is currently restricted to evolving robot controllers, but the technology for evolvable morphologies is advancing quickly. Rapid prototyping (3D printing) and automated assembly are the main enablers of robotic systems where robot offspring can be produced based on a blueprint that specifies the morphologies and the controllers of the parents. This article addresses the problem of gait learning in newborn robots whose morphology is unknown in advance. 
We investigate a reinforcement learning method and conduct simulation experiments using robot morphologies of different sizes and complexity. We establish that reinforcement learning does the job well and that it outperforms two alternative algorithms. The experiments also give insights into the online dynamics of gait learning and into the influence of the size, shape, and morphological complexity of the modular robots. These insights can potentially be used to predict the viability of modular robotic organisms before they are constructed. Living systems such as gene regulatory networks and neuronal networks are thought to operate close to dynamical criticality, where their information-processing ability is optimal at the whole-system level. We investigate how this global information-processing optimality is related to the local information transfer at each individual-unit level. In particular, we introduce an internal adjustment process of the local information transfer and examine whether the former can emerge from the latter. We propose an adaptive random Boolean network model in which each unit rewires its incoming arcs from other units to balance the stability of its information processing, based on the measurement of the local information transfer pattern. First, we show numerically that random Boolean networks can self-organize toward near dynamical criticality in our model. Second, the proposed model is analyzed by a mean-field theory. We recognize that the rewiring rule has a bootstrapping feature. The stationary indegree distribution is calculated semi-analytically and is shown to be close to dynamical criticality in a broad range of model parameter values. We produce algorithms to detect whether a complex affine variety computed and presented numerically by the machinery of numerical algebraic geometry corresponds to an associated component of a polynomial ideal. (C) 2016 Elsevier Ltd. All rights reserved. 
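For the dynamical criticality discussed in the adaptive random Boolean network abstract above, the standard annealed (mean-field) criterion is a useful reference point: a one-bit perturbation spreads with branching factor 2p(1-p)K, where K is the in-degree and p the bias of the random Boolean functions, and the value 1 marks the critical line. The sketch below encodes this textbook criterion; it is not the rewiring rule of the model described in the abstract.

```python
# Annealed-approximation criticality criterion for random Boolean networks
# (classical RBN theory, offered here as background to the abstract above).

def branching_factor(K, p):
    """Expected number of nodes a one-bit flip perturbs per time step."""
    # A random Boolean function with bias p changes its output with
    # probability 2p(1-p) when one input flips; each node feeds K others
    # on average, giving branching factor lambda = 2 p (1 - p) K.
    return 2.0 * p * (1.0 - p) * K

def critical_K(p):
    """In-degree at which the annealed dynamics are critical (lambda = 1)."""
    return 1.0 / (2.0 * p * (1.0 - p))

# Unbiased functions (p = 0.5): ordered for K = 1, critical at K = 2,
# chaotic for K = 4.
assert branching_factor(1, 0.5) < 1.0
assert branching_factor(2, 0.5) == 1.0
assert branching_factor(4, 0.5) > 1.0
```

Self-organization toward criticality, as in the abstract's model, amounts to a local rule that drives the effective branching factor toward 1 without tuning K or p globally.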
An upper bound for the absolute values of the roots of x^d + a_1 x^(d-1) + ... + a_(d-1) x + a_d is given by the sum of the largest two of the terms |a_i|^(1/i). This bound, due to Lagrange, has recently gained attention from different sides, while a succinct proof seems to be missing. We present a short, original proof of Lagrange's bound. Our approach leads to some definite improvements. To benefit computationally from these improvements, we construct a modified Lagrange bound which, at the same asymptotic computational complexity, is at most 11 per cent from optimal for degrees d >= 16. We give an algorithm for computing Segre classes of subschemes of arbitrary projective varieties by computing degrees of a sequence of linear projections. Based on the fact that Segre classes of projective varieties commute with intersections by general effective Cartier divisors, we can compile a system of linear equations which determines the coefficients of the Segre class pushed forward to projective space. The algorithm presented here comes after several others which solve the problem in special cases, where the ambient variety is, for instance, projective space; to our knowledge, this is the first algorithm able to compute Segre classes in projective varieties with arbitrary singularities. In this paper, we consider the non-trivial problem of converting a zero-dimensional parametric Gröbner basis w.r.t. a given monomial ordering to a Gröbner basis w.r.t. any other monomial ordering. We present a new algorithm, the so-called parametric FGLM algorithm, that takes as input a monomial ordering and a finite parametric set which is a Gröbner basis w.r.t. a given set of parametric constraints, and outputs a decomposition of the given space of parameters as a finite set of (parametric) cells and, for each cell, a finite set of parametric polynomials which is a Gröbner basis w.r.t. the target monomial ordering and the corresponding cell. 
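The classical Lagrange root bound mentioned above is easy to compute: for a monic polynomial x^d + a_1 x^(d-1) + ... + a_d, sum the two largest values among |a_i|^(1/i). The sketch below is our own illustration of this classical bound; it does not reproduce the modified bound constructed in the abstract.

```python
# Sketch of the classical Lagrange root bound: the modulus of every root
# of a monic polynomial is at most the sum of the two largest values
# among |a_i|^(1/i).

def lagrange_bound(coeffs):
    """coeffs = [a_1, ..., a_d] of a monic polynomial, highest degree first."""
    terms = sorted((abs(a) ** (1.0 / i) for i, a in enumerate(coeffs, start=1)),
                   reverse=True)
    return sum(terms[:2])

# Example: x^2 - 5x + 6 = (x - 2)(x - 3), with roots 2 and 3.
bound = lagrange_bound([-5, 6])
# The terms are |a_1| = 5 and |a_2|^(1/2) = sqrt(6) ~ 2.449, so the
# bound ~ 7.449, which indeed dominates the largest root 3.
```

The bound is never worse than twice the largest term |a_i|^(1/i), which is itself within a factor 2 of the largest root modulus; this is why only an 11 per cent gap to optimality, as in the abstract's modified bound, is a meaningful improvement.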
For this purpose, we develop computationally efficient algorithms for parametric linear systems that are applicable to computing comprehensive Gröbner systems of parametric linear ideals, and also, in parametric linear algebra, to computing the Gaussian elimination and the minimal polynomial of a parametric matrix. All proposed algorithms have been implemented in MAPLE and their efficiency is discussed on a diverse set of benchmark polynomials. We propose a new hybrid symbolic-numerical approach to the center-focus problem. The method allowed us to obtain center conditions for a three-dimensional system of differential equations, which was previously not possible using traditional, purely symbolic computational techniques. We present an algorithm based on triangular sets to decide whether a given ideal in the polynomial ring contains a monomial. A computation method for algebraic local cohomology classes, associated with zero-dimensional ideals with parameters, is introduced. This computation method gives us, in particular, a decomposition of the parameter space depending on the structure of the algebraic local cohomology classes. This decomposition informs us about several properties of the input ideals, and the output of the proposed algorithm completely describes the multiplicity structure of the input ideals. An algorithm for computing a parametric standard basis of a given zero-dimensional ideal, with respect to an arbitrary local term order, is also described as an application of the computation method. The algorithm can always output a "reduced" standard basis of a given zero-dimensional ideal, even if the zero-dimensional ideal has parameters. 
Given two coprime polynomials P and Q in Z[x,y] of degree at most d and coefficients of bitsize at most τ, we address the problem of computing a triangular decomposition {(U_i(x), V_i(x, y))}_{i ∈ I} of the system {P, Q}. The state-of-the-art worst-case complexities for computing such triangular decompositions when the curves defined by the input polynomials do not have common vertical asymptotes are Õ(d^4) for the arithmetic complexity and Õ_B(d^6 + d^5 τ) for the bit complexity, where Õ means that polylogarithmic factors are omitted and the subscript B refers to bit complexity. We show that the same worst-case complexities can be achieved even when the curves defined by the input polynomials may have common vertical asymptotes. We actually present refined complexities, Õ(d_x d_y^3 + d_x^2 d_y^2) for the arithmetic complexity and Õ_B(d_x^3 d_y^3 + (d_x^2 d_y^3 + d_x d_y^4) τ) for the bit complexity, where d_x and d_y bound the degrees of P and Q in x and y, respectively. We also prove that the total bitsize of the decomposition is in Õ((d_x^2 d_y^3 + d_x d_y^4) τ). The class of objects we consider are algebraic relations between the four kinds of classical Jacobi theta functions θ_j(z|τ), j = 1, ..., 4, and their derivatives. We present an algorithm to prove such relations automatically in the case where the function argument z is zero but the parameter τ in the upper complex half-plane is arbitrary. There is research that supports the safety of planned home birth for healthy women, and more women in the United States are choosing to give birth at home. Strategic initiatives developed at the Home Birth Summit in 2011 address issues related to planned home birth, including integration into the health system. 
This editorial discusses the ongoing work on these initiatives, including the development and endorsement of best practice guidelines for safe transfer from home to hospital. The American College of Obstetricians and Gynecologists' revised policy statement on home birth calls for the integration of home birth into the health system. This is an important step in making home birth even safer for mothers and babies. The purpose of this interpretive study was to investigate planned home births that occurred in Washington State and to provide meaning. A Heideggerian phenomenological approach was chosen to investigate and interview a purposive sample of 9 childbearing women who experienced at least 1 home birth between 2010 and 2014 in Washington State. The results of this study suggest that childbirth education is an essential and valued aspect of birthing. Childbirth educators can use the findings from this investigation to increase their awareness of birthing in the home. This interpretive investigation gives "voice" to the compelling accumulating evidence on planned home birth as a sanctuary that allows physiological, low-intervention births to transpire. Stories have been used as a way to educate and inform. An educational activity was created for use with prelicensure nursing students in a maternal infant health course, in which students had the opportunity to hear the birth stories of older adults. These stories were transformative and brought new context to how the students understood current-day labor and birth practices. This activity allowed students to see how powerful the birth process is in a woman's life, in that these memories had the power to transcend time. Students were also able to build relationships and practice their communication skills with older adults, which in turn may benefit the older adults as well. 
This article presents a clinical project involving the development and evaluation of an educational intervention aimed at promoting a sense of mastery of the anticipated paternal role in soon-to-be fathers. The preventive role supplementation conceptual framework guided the development of 4 educational sessions that were delivered to 6 expectant fathers attending prenatal classes at a local community services center in the Greater Montreal area. The participants highly appreciated the content and format of the educational intervention. They also reported having developed a sense of mastery of the anticipated paternal role. This interactive educational intervention, which focused on the specific needs of expectant fathers, seems appropriate to support men in their transition to fatherhood. We evaluated a patient education pamphlet on vaginal birth after cesarean (VBAC). Focus groups with 17 women in 4 communities involved a 5-item knowledge pretest and a question on intention to plan VBAC, reading the pamphlet, a knowledge posttest, and a moderated discussion. Forming a preference for birth after cesarean was characterized by (a) consolidating information from social sources, (b) seeking certainty in one's next birth, and (c) questioning one's ability to have a vaginal birth. Participants preferred vaginal birth, but all feared the uncertainty of labor. Knowledge scores increased for all participants, but intentions to plan a VBAC did not change. Our findings may encourage the development of interventions to reduce women's fear of vaginal birth. The aim of this qualitative study was to explore women's perceptions of the long-term effects of childbirth education on future health-care decision making. This qualitative study used a purposive sample of 10 women who participated in facilitated focus groups. 
Analysis of focus group narratives provided themes in order of prevalence: (a) self-advocacy, (b) new skills, (c) anticipatory guidance, (d) control, (e) informed consent, and (f) trust. This small exploratory study does not answer the question of whether childbirth education influences future health-care decision making, but it demonstrates that the themes and issues from participants who delivered 15-30 years ago were comparable to current findings in the literature. The mixed three-moment hydrodynamic description of fermionic radiation transport based on the Boltzmann entropy optimization procedure is considered for the case of one-dimensional flows. The conditions for realizability of the mixed three moments chosen as the energy density and two partial heat fluxes are established. The domain of admissible values of those moments is determined and the existence of the solution to the optimization problem is proved. Here, the standard approaches related to either the truncated Hausdorff or Markov moment problems do not apply because the non-negative fermionic distribution function, denoted f, must satisfy the inequality f <= 1 and, at the same time, there are three different intervals of integration in the integral formulae defining the mixed moments. The hydrodynamic equations are obtained in the form of a symmetric hyperbolic system for the Lagrange multipliers of the optimization problem with constraints. The potentials generating this system are explicitly determined as dilogarithm and trilogarithm functions of the Lagrange multipliers. The invertibility of the relation between moments and Lagrange multipliers is proved. However, the inverse relation cannot be determined in a closed analytic form. Using the H-theorem for the radiative transfer equation, it is shown that the derived system of hydrodynamic radiation equations has as a consequence an additional balance law with a non-negative source term. 
We study weak solutions of the homogeneous Boltzmann equation for Maxwellian molecules with a logarithmic singularity of the collision kernel for grazing collisions. Even though in this situation the Boltzmann operator enjoys only a very weak coercivity estimate, it still leads to strong smoothing of weak solutions, in accordance with the smoothing expected by analogy with a logarithmic heat equation. We consider some extensions of the classical discrete Boltzmann equation to the cases of multicomponent mixtures, polyatomic molecules (with a finite number of different internal energies), and chemical reactions, but also general discrete quantum kinetic Boltzmann-like equations; discrete versions of the Nordheim-Boltzmann (or Uehling-Uhlenbeck) equation for bosons and fermions and a kinetic equation for excitations in a Bose gas interacting with a Bose-Einstein condensate. In each case we have an H-theorem, and so for the planar stationary half-space problem we have convergence to an equilibrium distribution at infinity (or at least to a manifold of equilibrium distributions). In particular, we consider the nonlinear half-space problem of condensation and evaporation for these discrete Boltzmann-like equations. We assume that the flow tends to a stationary point at infinity and that the outgoing flow is known at the wall, possibly also depending partly linearly on the incoming flow. We find that the systems we obtain have structures similar to those for the classical discrete Boltzmann equation (for single species), and that previously obtained results for the discrete Boltzmann equation can be applied after being generalized. The number of conditions on the assigned data at the wall needed for existence of a unique solution is then found. The number of parameters to be specified in the boundary conditions depends on whether we have subsonic or supersonic condensation or evaporation. All our results are valid for any finite number of velocities. 
In gas dynamics, the connection between the continuum physics model offered by the Navier-Stokes equations and the heat equation and the molecular model offered by the kinetic theory of gases has been understood for some time, especially through the work of Chapman and Enskog, but it has never been established rigorously. This paper establishes a precise bridge between these two models for a simple linear Boltzmann-like equation. Specifically, a special class of solutions of this kinetic model, the grossly determined solutions, are shown to exist and to satisfy closed-form balance equations representing a class of continuum model solutions. We consider a two-dimensional collisionless plasma interacting with a fixed background of positive charge, the density of which depends only upon the velocity variable v and decays as |v| -> infinity. Supposing that mobile negative ions balance the positive charge as the spatial variable |x| -> infinity, then on the mesoscopic level the system is characterized by the two-dimensional Vlasov-Poisson system with steady spatial asymptotics, whose total positive charge and total negative charge are both infinite. Smooth solutions with appropriate asymptotic behavior are shown to exist locally in time, and an "almost optimal" criterion for the continuation of these solutions is established. We study a Cucker-Smale-type system with time delay in which agents interact with each other through normalized communication weights. We construct a Lyapunov functional for the system and provide sufficient conditions for asymptotic flocking, i.e., convergence to a common velocity vector. We also carry out a rigorous limit passage to the mean-field limit of the particle system as the number of particles tends to infinity. For the resulting Vlasov-type equation we prove the existence, stability, and large-time behavior of measure-valued solutions. 
This is, to the best of our knowledge, the first such result for a Vlasov-type equation with time delay. We also present numerical simulations of the discrete system with few particles that provide further insights into the flocking and oscillatory behaviors of the particle velocities depending on the size of the time delay. This paper considers the initial boundary problem for the planar compressible magnetohydrodynamic equations with large initial data and vacuum. The global existence and uniqueness of large strong solutions are established when the heat conductivity coefficient κ(θ) satisfies C_1(1 + θ^q) <= κ(θ) <= C_2(1 + θ^q) for some constants q > 0 and C_1, C_2 > 0. In this paper, the applicability of the entropy method to the trend towards equilibrium for reaction-diffusion systems arising from first-order chemical reaction networks is studied. In particular, we present a suitable entropy structure for weakly reversible reaction networks without the detailed balance condition. We show, by deriving an entropy-entropy dissipation estimate, that for any weakly reversible network each solution trajectory converges exponentially fast to the unique positive equilibrium with computable rates. This convergence is shown to hold even in cases when the diffusion coefficients of all but one species are zero. For non-weakly reversible networks consisting of source, transmission, and target components, it is shown that species belonging to a source or transmission component decay to zero exponentially fast, while species belonging to a target component converge to the corresponding positive equilibria, which are determined by the dynamics of the target component and the mass injected from other components. The results of this work, in some sense, complete the picture of the trend to equilibrium for first-order chemical reaction networks. 
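The exponential trend to equilibrium described in the reaction-network abstract above can be seen in the simplest possible case. The sketch below is our own illustration, not the paper's computation: the two-species first-order network A <-> B without diffusion (this minimal network even satisfies detailed balance; it is meant only to show exponential decay to the unique positive equilibrium at rate k1 + k2). The function name and rate constants are illustrative assumptions.

```python
# Toy illustration of exponential trend to equilibrium for the first-order
# network A <-> B with rates k1 (A -> B) and k2 (B -> A), no diffusion.
# Mass M = a + b is conserved and the equilibrium is
# a* = k2 M / (k1 + k2), b* = k1 M / (k1 + k2).

def simulate(a0, b0, k1=2.0, k2=1.0, dt=1e-3, steps=5000):
    """Forward-Euler integration of a' = -(k1 a - k2 b), b' = k1 a - k2 b."""
    a, b = a0, b0
    for _ in range(steps):
        flux = k1 * a - k2 * b
        a -= dt * flux
        b += dt * flux
    return a, b

a, b = simulate(1.0, 0.0)
# The deviation from equilibrium decays like exp(-(k1 + k2) t); by time
# t = 5 the state is very close to (a*, b*) = (1/3, 2/3), and the total
# mass a + b stays at 1.
```

The entropy-dissipation estimates of the abstract generalize exactly this picture: a Lyapunov (entropy) functional decreasing at a rate proportional to itself, yielding exponential convergence with computable constants.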
It is interesting to analyze the mutual influence of the relativistic effect and the electrostatic potential force on the qualitative behaviors of charged particles simulated by the one-species relativistic Vlasov-Poisson-Landau (rVPL) system with the physical Coulombic interaction. In this paper, we first study the spectrum structure of the linearized rVPL system and obtain the optimal time decay rates of solutions to the linearized system, and then we construct global strong solutions to the nonlinear system around a global relativistic Maxwellian. Finally, we make use of the time decay rates of solutions to the linearized system and uniform energy estimates to establish the time decay of the global solution to the original Cauchy problem for the rVPL system to the absolute Maxwellian at the optimal convergence rate (1 + t)^(-3/4). This time rate is faster than the optimal rate (1 + t)^(-1/4) of the classical Vlasov-Poisson-Boltzmann [2,10] and Vlasov-Poisson-Landau systems [7,8,17], and this fast time decay rate is caused by the combined influence of the relativistic effect and the electrostatic potential force. Mixed-moment models, introduced in [18,44] for one space dimension, are a modification of the method of moments applied to a (linear) kinetic equation, obtained by choosing mixtures of different partial moments. They are well suited to handle equations where collisions of particles are modelled with a Laplace-Beltrami operator. We generalize the concept of mixed moments to two dimensions. In the context of minimum-entropy models, the resulting hyperbolic system of equations has desirable properties (entropy-diminishing, bounded eigenvalues), removing some drawbacks of the well-known M1 model. We furthermore provide a realizability theory for a first-order system of mixed moments by linking it to the corresponding quarter-moment theory. 
Additionally, we derive a type of Kershaw closure for mixed- and quarter-moment models, giving an efficient closure (compared to minimum-entropy models). The derived closures are investigated for different benchmark problems. We consider the diffusive limit of an unsteady neutron transport equation in a two-dimensional plate with one-speed velocity. We show that the solution can be approximated by the sum of an interior solution, an initial layer, and a boundary layer with geometric correction. Also, we construct a counterexample to the classical theory in [1], which states that the behavior of the solution near the boundary can be described by the Knudsen layer derived from the Milne problem. The Cauchy problem for the reduced gravity two and a half layer model in dimension three is considered. We obtain pointwise estimates of the time-asymptotic shape of the solution, which exhibits two kinds of generalized Huygens waves. This is a significantly different phenomenon from the Navier-Stokes system. Lastly, as a byproduct, we also extend the L^2(R^3) decay rate to an L^p(R^3) decay rate with p > 1. This paper is concerned with the planar magnetohydrodynamics with initial data whose behaviors at the far fields x -> +/-infinity are different. Motivated by the relationship between planar magnetohydrodynamics and Navier-Stokes, we prove that the solutions to the planar magnetohydrodynamics tend time-asymptotically to a viscous contact wave which is constructed from a contact discontinuity solution of the Riemann problem for the Euler system. This result is proved by the method of elementary energy estimates. We derive lower bounds on the resolvent operator for the linearized steady Boltzmann equation over weighted L^infinity Banach spaces in velocity, comparable to those derived by Pogan and Zumbrun in an analogous weighted L^2 Hilbert space setting. 
These show in particular that the operator norm of the resolvent kernel is unbounded in L^p(R) for all 1 < p <= infinity, resolving an apparent discrepancy in behavior between the two settings suggested by previous work. Using a CEO wealth decomposition method, I examine how each wealth component affects managerial incentives to raise external public funds. I find that the board compensation policy adjustment and the CEO's own portfolio adjustment account for a larger proportion of total wealth change in issuance firms than in non-issuance firms, and larger in equity issuance firms than in bond issuance firms. I provide evidence that those adjustments serve to weaken the shareholder-manager interest alignment in that they are insignificantly or even negatively sensitive to shareholder returns. I also show that these perceived wealth effects help explain a firm's ex ante financing choice. There is increasing public concern about climate change. As a response to such concern in the accounting field, in 2010 the Securities and Exchange Commission (SEC) announced the SEC 2010 Commission Guidance Regarding Disclosure Related to Climate Change (SEC 2010 Guidance), the first disclosure guidance issued by either the FASB or the SEC for U.S. listed companies. However, the publication provoked criticism and debate. Opponents point out that the SEC 2010 Guidance might have an adverse impact on corporate social responsibility (CSR) reporting "by registrants fearful of liability under securities laws for the contents of such disclosures" (Shorter, 2013). This study investigates (1) the relation between firms' climate change disclosures and corporate social responsibility (CSR) disclosures and (2) the impact of the passage of the SEC 2010 Guidance on CSR reporting. The analysis results suggest that climate change disclosures are positively associated with corporate social responsibility concerns, strengths, and overall disclosure. 
In addition, we do not find empirical evidence that the SEC 2010 Guidance discourages firms' overall environmental and corporate social responsibility disclosures. The purpose of this study was to determine whether employees in Ghana's banking sector perceive their leaders to be emotionally intelligent based on their style of leadership. The study was cross-sectional in nature and made use of structured questionnaires to collect quantitative data. Out of 300 questionnaires administered, 234 were returned (comprising 115 males and 119 females). The findings of the study revealed that a positive relationship exists between transformational leadership and emotional intelligence (EI), whereas a negative relationship was found between transactional leadership and the EI of leaders. The study also noted that transformational leaders are more emotionally intelligent; thus, it is recommended that EI, an attribute associated with leader effectiveness, be made part of leadership development in organizations. AFS (available-for-sale) and FVTOCI (fair value through other comprehensive income) securities have several similar characteristics, such that users may presume they are the same in accounting. Both require fair value measurement and recognition of changes in fair value in other comprehensive income (OCI). AFS was introduced in IAS 39, whereas FVTOCI securities have recently been finalized and published in IFRS 9. This paper identifies the similarities and differences between AFS and FVTOCI securities regarding classification and measurement, impairment, and hedge accounting. Empirical research on Thai banks and their AFS securities was conducted for 2012-2015. The findings of this study indicate that some AFS securities may not meet the criteria for FVTOCI classification. The impairment model is an overhaul of IAS 39: an expected loss approach is applied in IFRS 9. While both standards consider hedge accounting optional, IFRS 9 widens the choices of hedging instruments and is more principles-based. 
Traditionally, the literature has used the terms "browser" and "hunter" shopping styles interchangeably. In-depth interviews with 12 consumers revealed, however, that there is a distinction between the two. The results suggest that browsers may seek to surprise themselves by shopping secondhand, typically by finding a "treasure", but without knowing what they are looking for until they unexpectedly find a valuable product and actually have the means to acquire and store it. On the other hand, hunters may search to surprise themselves by shopping secondhand, aiming for a "rare find", but with full and conscious knowledge of what they are looking for, while having the means to acquire and store the product. Both may have space for storage and spend very little to very large amounts of time, but browsers try to find a random product that they like, whereas hunters aim at finding the specific gem that they are looking for. We provide necessary and sufficient conditions for the existence of common hypercyclic vectors for multiples of the backward shift operator along sparse powers. Our main result strongly generalizes corresponding results which concern the full orbit of the backward shift. Some of our results are valid in a more general context, in the sense that they apply to a wide class of hypercyclic operators. We investigate how a C*-algebra could consist of functions on a noncommutative set: a discretization of a C*-algebra A is a *-homomorphism A -> M that factors through the canonical inclusion C(X) subset of l^infinity(X) when restricted to a commutative C*-subalgebra. Any C*-algebra admits an injective but nonfunctorial discretization, as well as a possibly noninjective functorial discretization, where M is a C*-algebra. Any subhomogeneous C*-algebra admits an injective functorial discretization, where M is a W*-algebra. However, any functorial discretization, where M is an AW*-algebra, must trivialize A = B(H) for any infinite-dimensional Hilbert space H. 
We study hypercyclicity properties of a family of non-convolution operators defined on the spaces of entire functions on C^N. These operators are a composition of a differentiation operator and an affine composition operator, and are analogues of operators studied by Aron and Markose on H(C). The hypercyclic behavior is more involved than in the one-dimensional case, and depends on several of the parameters involved. In this paper we study two semigroups of completely positive unital self-adjoint maps on the von Neumann algebras of the free orthogonal quantum group O_N^+ and the free permutation quantum group S_N^+. We show that these semigroups satisfy ultracontractivity and hypercontractivity estimates. We also give results regarding spectral gap and logarithmic Sobolev inequalities. Let A and B be positive semidefinite matrices. It is shown that |Tr(A^w B^z A^{1-w} B^{1-z})| <= Tr(AB) for all complex numbers w, z for which |Re w - 1/2| + |Re z - 1/2| <= 1/2. This is a generalization of a trace inequality due to T. Ando, F. Hiai, and K. Okubo for the special case when w, z are real numbers, and of a recent trace inequality proved by T. Bottazzi, R. Elencwajg, G. Larotonda, and A. Varela when w = z with 1/4 <= Re z <= 3/4. As a consequence of our new trace inequality, we prove that ||A^w B^z + B^{1-\bar{z}} A^{1-\bar{w}}||_2 <= ||A^w B^z + A^{1-\bar{w}} B^{1-\bar{z}}||_2 for all complex numbers w, z for which |Re w - 1/2| + |Re z - 1/2| <= 1/2. This is a generalization of a recent norm inequality proved by M. Hayajneh, S. Hayajneh, and F. Kittaneh when w, z are real numbers. 
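The trace inequality |Tr(A^w B^z A^{1-w} B^{1-z})| <= Tr(AB) lends itself to a quick numerical sanity check (which, of course, is not a proof). The matrix size, random sampling scheme, and helper names below are illustrative assumptions; complex matrix powers are computed via the spectral decomposition, which is valid here because the matrices are positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_pd(n):
    # Random symmetric positive definite matrix.
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

def mpow(A, s):
    # A^s for Hermitian positive definite A and complex s,
    # via A = V diag(lam) V^T with lam > 0.
    lam, V = np.linalg.eigh(A)
    return (V * lam.astype(complex) ** s) @ V.conj().T

A, B = rand_pd(4), rand_pd(4)
rhs = np.trace(A @ B).real           # Tr(AB) is positive for PD A, B

ok = True
for _ in range(200):
    # Sample w, z with |Re w - 1/2| + |Re z - 1/2| <= 1/2;
    # the imaginary parts are unconstrained.
    u, v = rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)
    if abs(u) + abs(v) > 0.5:
        continue
    w = 0.5 + u + 1j * rng.uniform(-2, 2)
    z = 0.5 + v + 1j * rng.uniform(-2, 2)
    lhs = abs(np.trace(mpow(A, w) @ mpow(B, z)
                       @ mpow(A, 1 - w) @ mpow(B, 1 - z)))
    ok = ok and lhs <= rhs + 1e-8
```

Every sampled (w, z) in the admissible strip should leave `ok` true, matching the stated theorem.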
We first develop, in the context of complete metric spaces, a one-to-one correspondence between the class of means G = {G_n}_{n >= 2} that are symmetric, multiplicative, and contractive and the class of contractive (with respect to the Wasserstein metric) barycentric maps on the space of L^1 probability measures. We apply this equivalence to the recently introduced and studied Karcher mean on the open cone P of positive invertible operators on a Hilbert space equipped with the Thompson metric to obtain a corresponding contractive barycentric map. In this context we derive a version of earlier results of Sturm and of Lim and Palfia about approximating the Karcher mean with the more constructive inductive mean. This leads to the conclusion that the Karcher barycenter lies in the strong closure of the convex hull of the support of a probability measure. This fact is a crucial ingredient in deriving a version of Jensen's inequality, with which we close. In this paper, we generalize Young's inequality for locally compact quantum groups and obtain some results for extremal pairs of Young's inequality and extremal functions of the Hausdorff-Young inequality. We consider self-adjoint Dirac operators D = D_0 + V(x), where D_0 is the free three-dimensional Dirac operator and V(x) is a smooth, compactly supported Hermitian matrix. We define resonances of D as poles of the meromorphic continuation of its cut-off resolvent. An upper bound on the number of resonances in disks, an estimate on the scattering determinant, and the Lifshits-Krein trace formula then lead to a global Poisson wave trace formula for resonances of D. We introduce the fundamental group F(A) of a unital C*-algebra A with finite-dimensional trace space. The elements of the fundamental group are restricted by K-theoretical obstruction and positivity. 
Moreover, we show that there are uncountably many mutually non-isomorphic simple C*-algebras such that F(A) = {I_n}. Our study extends the results on the fundamental group due to Nawata and Watatani. Let N be a nest on a Hilbert space H and Alg N the corresponding nest algebra. We obtain a characterization of the compact and weakly compact multiplication operators defined on nest algebras. This characterization leads to a description of the closed ideal generated by the compact elements of Alg N. We also show that there is no non-zero weakly compact multiplication operator on Alg N / (Alg N ∩ K(H)). From a continuous field of Fourier-invariant projections of the continuous field of rotation C*-algebras, we obtain a characteristic equation which fully determines the orthogonality of naturally arising projections from the field. The continuous field turns out to be the support projection of a non-commutative version of a 2-dimensional theta function. Further, we compute the K-theoretical topological invariants of the projection field. The noncommutative Fourier transform is the canonical order-4 automorphism sigma of the rotation C*-algebra A_theta defined by the relations sigma(U) = V^{-1}, sigma(V) = U, where U, V are the canonical unitary generators of A_theta satisfying VU = e^{2 pi i theta}UV. We show that a semibounded Toeplitz quadratic form is closable in the space l^2(Z_+) if and only if its entries are the Fourier coefficients of an absolutely continuous measure. We also describe the domain of the corresponding closed form. This allows us to define semibounded Toeplitz operators under minimal assumptions on their matrix elements. Let (G, alpha) and (H, beta) be locally compact groupoids with Haar systems. We define a topological correspondence from (G, alpha) to (H, beta) to be a G-H-bispace X on which H acts properly and which carries a continuous family of measures that is H-invariant and such that each measure in the family is (G, alpha)-quasi-invariant. 
We show that a topological correspondence produces a C*-correspondence from C*(G, alpha) to C*(H, beta). We give many examples of topological correspondences. Fine and Antonelli introduce two generalizations of permutation invariance: internal invariance and simple/double invariance, respectively. After sketching reasons why a solution to the Bad Company problem might require that abstraction principles be invariant in one or both senses, I identify the most fine-grained abstraction principle that is invariant in each sense. Hume's Principle is the most fine-grained abstraction principle invariant in both senses. I conclude by suggesting that this partially explains the success of Hume's Principle, and the comparative lack of success in reconstructing areas of mathematics other than arithmetic based on non-invariant abstraction principles. Abstraction principles provide implicit definitions of mathematical objects. In this paper, an abstraction principle defining categories is proposed. It is unsatisfiable and inconsistent in the expected ways. Two restricted versions of the principle which are consistent are presented. The year 2015 marked 25 years since the creation of the Association Proyecto Hombre. In this period it has done a great deal of work in parallel with the evolution of the phenomenon of addictions, delivering different programs of treatment, prevention and social incorporation from a philosophy, mission and values centered on people, always working in collaboration with families and involving many people in a commendable volunteering effort. In addition, it has participated in international networks that address work on addictions in Europe and Latin America, and it has developed training programs. This article reviews the work done over these years as well as the methodologies used. Objective: To determine the relationship between family history of alcohol consumption and adolescent alcohol consumption. 
Material and methods: A correlational-descriptive study was conducted in 278 adolescents of a public institute of basic education in Ciudad del Carmen, Campeche, Mexico, using stratified random sampling; data were collected with the Alcohol Consumption Family History Inventory and the Alcohol Use Disorders Identification Test (AUDIT). Data capture and analysis were performed in SPSS v23. Results: A positive and significant relationship was identified between family history of alcohol consumption and the AUDIT sum (rs = .164, p = .025) and the amount of alcohol consumed by teens (rs = .181, p = .005). Conclusion: The study findings show that the family plays a primary role in the acquisition of healthy and unhealthy behaviors in adolescents. Therefore, nursing professionals should design and implement nursing interventions that include families and teenagers in promoting healthy lifestyles. Objective: To establish the therapeutic approach that has been used in patients diagnosed with insomnia in the town of Almazora (Castellon). Method: A descriptive, observational, longitudinal and retrospective study. Subjects were adult patients over 18 belonging to the Pio XII Health Center and the Almazora Health Center, both in the town of Almazora. Patients were diagnosed with insomnia (ICD 780.52 and 307.4) and/or presented a NANDA diagnosis associated with this problem (impaired sleep pattern: 00095, sleep deprivation: 00096 and/or disturbed sleep pattern: 00198) and were included in the computerized medical record (Abucasis). A sample stratified by sex, age and quota (312 patients) was randomly selected from the Almazora Abucasis database. Results: The prevalence of insomnia was 5.5% (95% CI 5.2 to 5.8). Only 10% (95% CI 5.40 to 13.93) of the patients had received some non-drug treatment. Conclusions: According to the data, underutilization of non-pharmacological measures is evident. 
Moreover, a very high percentage of patients received drug therapy on the first day of diagnosis without a previous attempt at non-drug treatment. Nursing care plays a fundamental role in the therapeutic approach to primary insomnia, since the first line of treatment is non-pharmacological therapy. The results suggest that an increased nursing role in combination with medical care would promote professional development and subsequently improve the treatment of patients presenting with this disease. The detection of dysphagia is an important part of the management of stroke in its acute phase, since it is a marker of poor prognosis in terms of morbidity and functional recovery. Nevertheless, screening tends to be carried out mainly in patients with more severe strokes. The volume-viscosity clinical examination method (MECV-V) allows us to confirm the presence of dysphagia and may establish individualized care plans. The aim of our study was to estimate the prevalence of dysphagia in patients who have had an episode of stroke, and the profile of the patient in relation to the type of stroke, dependency and family support. Methodology: A descriptive observational study of prevalence. Results: The prevalence of dysphagia estimated by frequentist analysis was 12.8% (95% CI 5.5 to 20.1); using Bayesian inference it was 20.9% (95% CrI 14.4 to 28.3). The mean age was 74.76 years. Eighty-eight patients (93.6%) presented ischemic stroke and 6 (6.4%) hemorrhagic stroke. One third of the study population was institutionalized, and 56.4% had some degree of dependence (Barthel index). The delay in detecting dysphagia was 28.3 days, versus 19.8 days when dysphagia was absent. Among patients with dysphagia, 63.6% were prescribed food thickener in the acute phase. Conclusions: The identification test for dysphagia (MECV-V) after stroke should be done early in the hospital, in order to provide an intervention appropriate to the patient's needs, with hygienic-dietary advice, and to prevent future complications. 
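The contrast between a frequentist confidence interval and a Bayesian credible interval for a prevalence, as reported in the dysphagia study above, can be illustrated with a generic calculation. The counts and the uniform Beta(1, 1) prior below are hypothetical stand-ins, not the study's actual data or prior, so the numbers will not reproduce the published intervals.

```python
import numpy as np

# Hypothetical counts: k cases observed among n patients.
k, n = 12, 94
p_hat = k / n

# Frequentist: normal-approximation (Wald) 95% confidence interval.
se = (p_hat * (1 - p_hat) / n) ** 0.5
wald = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: a Beta(1, 1) uniform prior gives a Beta(k + 1, n - k + 1)
# posterior; take the 95% credible interval from posterior samples.
rng = np.random.default_rng(0)
draws = rng.beta(k + 1, n - k + 1, size=200_000)
cred = (np.quantile(draws, 0.025), np.quantile(draws, 0.975))
```

With an informative prior (as Bayesian analyses often use) the credible interval can shift away from the raw proportion, which is one way the two estimates in the study can differ.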
Objectives: To review the available evidence on the effectiveness of topical treatments (chlorhexidine and 70-degree alcohol) in newborns versus dry cord care to prevent infection of the umbilical cord. Methods: A systematic review was performed of randomized clinical trials published in the last 5 years (2011-2015) addressing the topical treatment of the umbilical cord in healthy newborns of any gestational age in developed countries and non-hospital environments. The search was conducted in Spanish and English. Results: 118 records were found, which were narrowed down to 8 citations after reading the title and abstract. Five of them were included in the study. Conclusions: The available evidence shows no significant differences in omphalitis or signs of umbilical cord infection. Thus, clean and dry cord care does not pose a risk of increased omphalitis in the newborn. Antiseptics (chlorhexidine and 70-degree alcohol) increase the time to cord detachment. This paper presents a self-evolving fuzzy model-based controller for a generic hypersonic vehicle (HV). The self-evolving fuzzy model can dynamically evolve its rule base by evaluating the influence of each rule, and is introduced to reconstruct the unknown dynamics so that the changing dynamics and unknown uncertainties of the HV are captured. The control law is designed using the fuzzy identified model instead of the actual HV model. The stability analysis of the whole control system is carried out with a Lyapunov function and shows that the tracking errors converge to zero. The bounds of the control inputs are considered in this study and are ensured by means of stable projection-type parameter adaptation laws. To reduce the computational complexity, only the parameters of the 'winner' rule closest to the current state are adjusted, while those of the other rules remain unchanged. This differs from existing studies, in which the parameters of all rules need to be adjusted. 
The simulation results under the nominal conditions and parameter uncertainties demonstrate the superior performance of the proposed controller. (C) 2017 Elsevier Masson SAS. All rights reserved. A synthetic jet actuator-based output feedback control method is presented, which achieves asymptotic limit cycle oscillation regulation in small unmanned aerial vehicle wings, where the dynamic model contains uncertainty and unmodeled external disturbances. In addition, the proposed control method compensates for the parametric uncertainty and nonlinearity inherent in the synthetic jet actuator dynamics. Motivated by the limitations characteristic of small unmanned aerial vehicles, the control method is designed to be computationally inexpensive, eliminating the need for time-varying parameter update laws, function approximators, or other computationally heavy techniques. To this end, a computationally minimal robust-inverse control method is utilized, which is proven to compensate for the uncertainties in both the aerial vehicle dynamics and the synthetic jet actuator dynamics. By endowing the robust-inverse control law with a bank of dynamic filters, asymptotic limit cycle oscillation regulation is achieved using only pitching and plunging displacement measurements in the feedback loop. The result is an asymptotic synthetic jet actuator-based limit cycle oscillation regulation control method, which does not require velocity measurements, adaptive laws, or function approximators in the feedback loop. To achieve the result, a detailed mathematical model of the limit cycle oscillation dynamics is utilized, which includes nonlinear stiffness effects, unmodeled external disturbances, and dynamic model uncertainty, in addition to the parametric uncertainty in the synthetic jet actuator dynamic model. 
A rigorous Lyapunov-based stability analysis is utilized to prove asymptotic regulation of limit cycle oscillations, and numerical simulation results are provided to demonstrate the performance of the proposed control law. In this article, the low velocity impact response of circular clamped GLARE fiber-metal laminates is treated analytically using a linearized spring-mass model. Differential equations of motion which represent the physical impact phenomenon are formulated and the corresponding initial value problems are set up. Then, exact symbolic solutions of these problems are derived and expressions to calculate the impact load, position, velocity and kinetic energy time histories are given. Also, analytical equations to predict the coefficient of restitution and the energy restitution coefficient of the impact event are presented. The analytical formulas are applied in order to simulate published experiments concerning normal central low velocity impact of GLARE 4 and GLARE 5 circular laminates. The analytical and experimental impact load time histories of the two GLARE plates are found to be in good agreement. Apart from circular clamped GLARE plates, the equations of this article can also be employed to approximate the response of other GLARE or hybrid composite structures when subjected to similar low velocity impact damage phenomena. A helicopter flying through an atmosphere containing particulates may accumulate high electrostatic charges which can challenge its operational safety. In this paper we elaborate and validate a triboelectric charging model to predict the electrification experienced by a helicopter while hovering in dusty air. We employed large eddy simulations to describe the turbulent structures inherent in the flow around the rotorcraft. The dust particles were tracked individually in a Lagrangian framework. 
A model accounting for the triboelectric charge transfer when a particle hits the helicopter was introduced. The results demonstrate the accuracy of the proposed approach and allow a detailed analysis of the location of the charge accumulation. In aircraft structural integrity analysis, the damage tolerance and fatigue life are investigated against a cyclic loading spectrum. The particular spectrum includes the stress/loading levels counted during a flight of certain duration. The occurrences of load factors may include higher gravitational acceleration 'g' levels. While maintaining a certain g-level occurrence at a higher angle of attack, the wing structure vibrates with the amplitudes of its natural frequencies. The cyclic stress amplitudes of vibration depend upon the natural frequencies of the vibrating structure, i.e. a lower frequency gives higher amplitudes and vice versa. To improve the dynamic stability, the modal parameters of simple carbon fibre sandwich panels have been adjusted by tailoring the fibre orientation angles and stacking sequence. In this way, the effect of a change in structural dynamic characteristics on the fatigue life of this simplified structure has been demonstrated. The research methodology followed in this work consists of two phases. In the first phase, an aero-elastically tailored design was finalized using FEM-based modal analysis and unsteady aerodynamic analysis simulations, followed by experimental modal analysis. In the second phase, the fatigue and damage tolerance behaviour of the material was investigated using different fracture mechanics based techniques. ASTM standard practices were adopted to determine material allowables and fracture properties. Simulation work was performed after proper calibration and correlation of the finite element model with the experimentally determined static and dynamic behaviour of the panels. 
It has been observed that the applicable cyclic loading spectra, as a major input parameter of fatigue analysis, largely depend upon the natural frequencies, damping and stiffness of the structure. The results and discussions of the whole exercise may be beneficial while carrying out aero-elastic tailoring of composite aircraft wings. This research work also makes a positive contribution towards the multidisciplinary structural design optimization of aerospace vehicles. To make multi-objective reliability-based design optimization (MORBDO) more effective for complex structures with multiple failure modes and multi-physics coupling, a multiple response surface method (MRSM)-based artificial neural network (ANN-MRSM) and a dynamic multi-objective particle swarm optimization (DMOPSO) algorithm are proposed based on the MRSM and the MOPSO algorithm. The mathematical model of the ANN-MRSM is established by using an artificial neural network to fit the multiple response surface functions. The DMOPSO algorithm is proposed by designing a dynamic inertia weight and dynamic learning factors. The proposed approach is verified by the MORBDO of turbine blisk deformation and stress with respect to fluid-thermal-structure interaction from probabilistic analysis. The optimization design results show that the proposed approach has promising potential to improve computational efficiency with acceptable computational precision for the MORBDO of turbine blisk deformation and stress. Moreover, the Pareto front curve and a set of viable design values of the turbine blisk are obtained for the high-reliability, high-performance design of the turbine blisk. The presented efforts provide an effective approach for the MORBDO of complex structures, and enrich mechanical reliability design theory as well. 
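The dynamic inertia weight at the heart of the DMOPSO abstract above can be sketched in a much-simplified, single-objective form (the paper's algorithm is multi-objective and also uses dynamic learning factors). The test function, search bounds, and parameter values below are illustrative assumptions only.

```python
import numpy as np

def pso_dynamic_inertia(f, dim, n_particles=30, iters=200,
                        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Single-objective PSO with a linearly decreasing ("dynamic")
    inertia weight: broad exploration early, fine exploitation late."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # dynamic inertia weight
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

sphere = lambda p: float(np.sum(p ** 2))   # toy objective, minimum at 0
best_x, best_val = pso_dynamic_inertia(sphere, dim=4)
```

Decreasing the inertia weight over iterations is the classic Shi-Eberhart recipe; a multi-objective variant additionally maintains an archive of non-dominated solutions to build the Pareto front.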
This paper investigates the small- and large-amplitude vibrations of compressed and thermally postbuckled carbon nanotube-reinforced composite (CNTRC) plates resting on elastic foundations. For the CNTRC plates, uniformly distributed (UD) and functionally graded (FG) reinforcements are considered, where the temperature-dependent material properties of CNTRC plates are assumed to be graded in the thickness direction and estimated through a micromechanical model. The motion equations containing the plate-foundation interaction are derived based on a higher-order shear deformation plate theory and von Karman nonlinear strain-displacement relationships. The initial deflections caused by compressive or thermal postbuckling are included. The numerical illustrations concern the small- and large-amplitude vibration characteristics of compressed postbuckled CNTRC plates in thermal environments and of thermally postbuckled CNTRC plates under a uniform temperature field. The effects of CNT volume fraction and distribution patterns as well as foundation stiffness on the vibration characteristics of CNTRC plates are examined in detail. This paper investigates a velocity-free finite-time attitude control scheme for a rigid spacecraft with actuator saturation and external disturbances. Initially, a finite-time observer is designed to compensate for the unknown angular velocity information. With the estimated values, a finite-time controller is further developed under which the time-varying reference attitude trajectory can be tracked precisely. Moreover, the input saturation problem is solved by an adaptive method. Rigorous Lyapunov-based analysis shows that the states of the closed-loop system converge to a small neighborhood of the origin in finite time. Finally, the dynamic performance of the attitude control system is presented by numerical simulation examples and the superiority of the finite-time control scheme is verified. 
A multigrid pressure correction scheme suitable for high-order discretizations of the incompressible Navier-Stokes equations is developed and demonstrated. The pressure correction equation is discretized with fourth-order compact finite-difference approximations. Iterative methods based on multigrid techniques accelerate the most demanding part of the overall solution algorithm, which is the numerical solution of the arising large and sparse linear system. Geometric multigrid methods, using a partial semicoarsening strategy and zebra line Gauss-Seidel relaxation, are employed to efficiently approximate the solution of the resulting algebraic linear system. The effects of various multigrid components on the pressure correction procedure are evaluated, and new high-order transfer operators are developed for the case of cell-centered grids. Their convergence rates are also compared with those of commonly used intergrid transfer operators. Furthermore, numerical comparisons between different multigrid cycle approaches, such as the V-, W- and F-cycle, are presented. The performance tests demonstrate that the new pressure correction approach significantly reduces the computational effort compared to single-grid algorithms. Furthermore, it is shown that the overall high-order accuracy of the numerical method is retained in space and time with increasing Reynolds number. This paper investigates a robust dynamic output feedback non-fragile control (RDOFNFC) strategy for the spacecraft attitude stabilization problem with nonlinear perturbations. The spacecraft attitude dynamics model takes actuator saturation limits, external disturbances, controller perturbation and model parameter uncertainty into account. 
To make the spacecraft attitude system satisfy H∞ performance and quadratic stability with respect to the additive perturbation in the initial state and the multiplicative perturbation in the decay phase, the corresponding RDOFNFC is designed for each case. Based on Lyapunov theory, the controller design is transformed into a multi-objective convex optimization problem based on linear matrix inequalities (LMIs). Simulation results based on an on-orbit servicing spacecraft show good performance under external disturbances, model parameter uncertainty and controller perturbation, which validates the effectiveness and feasibility of the proposed control method. (C) 2017 Elsevier Masson SAS. All rights reserved. Radial basis function (RBF) interpolation is a robust mesh deformation method, whose main property is interpolating the displacements of mesh boundary points to the internal points through RBFs. However, this method is computationally intensive, especially for problems with a large number of grid points. To handle this problem, a data reduction RBF method has been developed in the literature. Using a greedy algorithm, only a small subset of mesh boundary points is selected as the control point set to perform mesh deformation. Consequently, far fewer boundary points are needed to approximate the shape of the geometry, and the computational cost of the data reduction RBF method is much lower than that of the original RBF method. Despite these benefits, the method incurs a loss of geometric precision, especially at boundaries where large deformations occur, which results in a decline of deformation capacity. To further improve the data reduction RBF method, a novel dynamic-control-point RBF (DCP-RBF) mesh deformation method is proposed in this paper, which employs a dynamic set of control points. 
In each time step of mesh deformation, the neighboring boundary point near the cell with the worst quality is added to the control point set, while the neighboring control point near the cell with the best quality is removed from it. In this way, more control points are placed around regions with lower mesh quality, where large deformations usually occur. Consequently, in contrast with the data reduction RBF method, DCP-RBF permits significantly larger mesh deformation with only a small increase in computational cost. The superiority of the proposed DCP-RBF method is demonstrated through several test cases including both 2D and 3D dynamic mesh applications. (C) 2017 Elsevier Masson SAS. All rights reserved. Many kinds of support systems, such as tail supports, external/balance supports, side wall supports and wing tip supports, are used for wind tunnel testing. The support system causes a difference between the flow around the test model and the flow around the real aircraft, and hence a difference between the aerodynamic characteristics of the test article and the actual aircraft; this difference is referred to as support interference. Support interference is an important topic in aerodynamic testing since it can have a significant influence on the accuracy of the test data, and support systems and their interference have become one of the main investigation areas of experimental aerodynamics. The results of an experimental investigation of the influence of the model support on the determination of aerodynamic coefficients of a wind tunnel model are presented. A discussion is given of the forms of interference occurring in the low speed wind tunnel of the Military Technical Institute due to the model support system. Two types of model attachments, a bent sting and an external/balance model support, are considered. The magnitude of the interference on the test results is given. 
The main interference is on the pitching moment coefficient Cm. The computational results for the interference-free aerodynamic coefficients of a Training Aircraft Model are also given and compared to experimental data. A procedure for eliminating the undesired effect of model support interference on the test results is presented. (C) 2017 Elsevier Masson SAS. All rights reserved. To mitigate the impact of cosmic impulse noise on the long-time epoch folding of X-ray pulsar profiles, an innovative denoising method using the wavelet packet transformation is proposed. First, the system model containing the noisy signal for the X-ray pulsar is established. The time series of the X-ray pulsar is sampled at a certain time interval and folded at the rotational period. Second, the wavelet packet transform of the signal is computed according to a given entropy, aiming to obtain the optimum basis. Finally, the impulse noise is filtered with a proper threshold. The experimental results show that the impulse noise can be filtered by the proposed method, and the algorithm improves the signal-to-noise ratio (SNR) by about 40% compared with other methods under low-SNR conditions. (C) 2017 Elsevier Masson SAS. All rights reserved. The effect of microwave and laser "heat spots" on a supersonic flow past a hemisphere-cylinder and a hemisphere-cone-cylinder at Mach 2.1 to 3.45 is studied. Drag forces are evaluated from the known experimental data on stagnation pressure dynamics. The energy deposition by laser and microwave discharges in the oncoming flow is approximated by heated rarefied layers ("filaments"). Approaches for decreasing the frontal drag force for the considered microwave and laser experiments are suggested. Complex conservative difference schemes are used in the simulations. (C) 2017 Elsevier Masson SAS. All rights reserved. 
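The epoch-folding step in the X-ray pulsar denoising method above can be sketched in a few lines. The function name, bin count and toy data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def epoch_fold(times, period, n_bins=32):
    """Fold photon arrival times at the pulsar's rotational period.

    Each arrival time is mapped to a phase in [0, 1) and the counts
    are accumulated into phase bins, building up the pulse profile.
    """
    phases = np.mod(times, period) / period            # phase of each photon
    profile, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return profile

# Toy example: a strictly periodic source, one photon per period,
# always arriving at phase 0.3 (the period value is illustrative).
period = 0.0337
times = np.arange(1000) * period + 0.3 * period
profile = epoch_fold(times, period, n_bins=4)
# With 4 bins, every photon falls in the bin covering [0.25, 0.5).
```

In the paper, the folded profile is then passed through a wavelet packet transform and thresholded; only the folding step is sketched here.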
Laminar incompressible flow around a NACA0012 airfoil placed in a free stream at various incidences has been revisited with the use of a fully-Lagrangian meshless method based on a Smoothed Particle Hydrodynamics (SPH) formulation. Spatial adaptivity has been incorporated in the scheme via splitting and merging of SPH particles employing zonal criteria. In addition, a novel algorithm has been proposed here so that particle merging may be achieved while ensuring the robustness, accuracy and efficiency of the computational simulations. The results obtained have been benchmarked against available data from mesh-based methods. Good agreement has been found in both steady and unsteady flow regimes. Overall, the present work demonstrates the effectiveness and competitiveness of this meshfree approach for detailed studies in aerodynamics. (C) 2017 Elsevier Masson SAS. All rights reserved. This paper presents aerodynamic calculations of the model-scale ERICA tiltrotor with high-fidelity computational fluid dynamics. The aim of this work is to assess the capability of the present CFD method in predicting airloads on the tiltrotor in different flight configurations. In this regard, three representative flight configurations of the ERICA were selected, corresponding to aeroplane, transition corridor, and helicopter modes, covering most modes of tiltrotor flight. The rotor blades were fully resolved, and uniform and non-uniform actuator disks were also put forward to quantify the effect of the rotor on the fuselage loads. A fundamental investigation of the effect of the aerodynamic interference between the rotor and wing of the tiltwing aircraft is also shown. The employed CFD method was able to capture the aerodynamics of the different configurations, and the overall agreement obtained with the experimental data demonstrates the capability of the present CFD method in accurately predicting tiltrotor flows. (C) 2017 Elsevier Masson SAS. All rights reserved. 
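The particle-merging operation used for spatial adaptivity in the SPH airfoil study above can be illustrated by a generic conservative pairwise merge. This sketch conserves mass and momentum and is a standard construction, not the paper's specific merging algorithm:

```python
import numpy as np

def merge_particles(m1, x1, v1, m2, x2, v2):
    """Merge two SPH particles into one, conserving mass and momentum.

    The merged position is the centre of mass; the merged velocity is
    the momentum-weighted average, so total momentum is unchanged.
    """
    m = m1 + m2
    x = (m1 * x1 + m2 * x2) / m          # centre of mass
    v = (m1 * v1 + m2 * v2) / m          # preserves m1*v1 + m2*v2
    return m, x, v

# Illustrative 2D example: a light moving particle absorbs into a
# heavier stationary one.
m, x, v = merge_particles(1.0, np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                          3.0, np.array([4.0, 0.0]), np.array([0.0, 0.0]))
# m = 4.0, centre of mass x = [3.0, 0.0], velocity v = [0.25, 0.0]
```

A practical scheme would additionally restrict merging to neighbouring particles in low-interest zones, per the zonal criteria the paper mentions.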
The optimization of wing geometry parameters (WGP) and wing kinematic parameters (WKP) to minimize the energy consumption of flapping-wing hovering flight is performed using a revised quasi-steady aerodynamic model and a hybrid genetic algorithm (hybrid-GA). A parametrization method for the dynamically scaled wing, based on the non-dimensional conformal features of the fruit fly's wing, is first developed for the optimization involving the WGP. The objective function of the optimization is formed on the basis of the power density model, with additional penalty terms for the lift-to-weight ratio, boundary constraints, aspect ratio (AR) and Reynolds number (Re). The obtained optimal WGP and WKP are separately substituted into the power density model to estimate the instantaneous forces and the power output again. A lower power density, a lower flapping frequency and larger WGP are obtained for the combined optimal WGP and WKP in comparison with the values estimated for the hovering fruit fly. These results might arise from the strong coupling between WGP and WKP, via AR and Re, in the minimization of power density under the condition that lift balances weight. Moreover, the optimal flapping angle exhibits a harmonic profile, and the optimal pitch angle possesses a rounded trapezoidal profile with a somewhat faster time scale of pitch reversal. The conceptual framework of combined optimization provides a useful way to design the fundamental parameters of biomimetic flapping-wing micro aerial vehicles. (C) 2017 Elsevier Masson SAS. All rights reserved. The acceleration autopilot with a rate loop is the most commonly implemented autopilot and has been extensively applied to high-performance missiles. Nevertheless, for spinning rockets, the design of the guidance and control modules is a challenging task because the rapid spinning of the body creates a heavy coupling between the normal and lateral rocket dynamics. 
Nonlinear modeling of the rocket dynamics, control design and guidance algorithms are presented in this paper. Moreover, discrete-time guidance and control algorithms for the terminal phase, based on proportional navigation, are developed. Finally, complete nonlinear simulations based on realistic scenarios demonstrate the robustness of the proposed solution with respect to uncertainty in launch, environment and rocket conditions. The performance of the proposed navigation, guidance and control system for a high-spin rocket leads to significant reductions in impact point dispersion. (C) 2017 Elsevier Masson SAS. All rights reserved. As part of our efforts to further improve the combustion performance of variable geometry dual-mode combustors, the flow field characteristics and the mechanisms of combustion performance loss, including compression and combustion losses resulting from heat addition, were investigated numerically and experimentally in a variable geometry dual-mode combustor at a Mach number of 3.0, with a divergent ratio ranging from 1.3 to 1.8 and a fuel equivalence ratio varying from 0.6 to 1.2. An irreversible entropy increase analysis was extended specifically in this paper to study the mechanism of combustion performance loss for the variable geometry dual-mode combustor. Numerical and experimental results indicated that, for a given fuel equivalence ratio, the wall static pressure, total pressure recovery coefficient, combustion efficiency and thrust of the variable geometry dual-mode combustor increased as the divergent ratio decreased, and the maximum thrust performance within the stability margin was obtained at a divergent ratio of 1.3. It is therefore strongly believed that the combustion performance of a variable geometry dual-mode combustor can be further improved by decreasing the divergent ratio. (C) 2017 Elsevier Masson SAS. All rights reserved. 
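The irreversible entropy increase analysis mentioned in the combustor study above rests on the standard stagnation-quantity relation for a calorically perfect gas. The sketch below evaluates it with illustrative station values; the gas properties and numbers are assumptions, not the paper's data:

```python
import math

def entropy_increase(Tt1, pt1, Tt2, pt2, cp=1004.5, R=287.0):
    """Specific entropy rise between two combustor stations of a
    calorically perfect gas, from stagnation quantities:
        ds = cp * ln(Tt2/Tt1) - R * ln(pt2/pt1)   [J/(kg K)]
    Heat addition raises Tt (first term), and total-pressure loss
    lowers pt (second term); both contribute a positive ds.
    """
    return cp * math.log(Tt2 / Tt1) - R * math.log(pt2 / pt1)

# Illustrative case: heat addition doubles Tt while pt drops to 50%.
ds = entropy_increase(Tt1=600.0, pt1=1.6e6, Tt2=1200.0, pt2=0.8e6)
```

Decomposing ds into the heat-addition term and the total-pressure-loss term is what allows the combustion and compression losses discussed above to be bookkept separately.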
A key to achieving reliable model-based engine control, diagnostics and prognostics resides in an in-flight engine model with a high confidence level. Presented here is a new lifecycle real-time model describing turbofan engine dynamic behavior, called the ALPVM (Adaptive Linear Parameter Varying Model), with a focus on engine/model mismatch compensation and adaptation to performance degradation. This methodology differs from the widely used STORM (Self Tuning On-board Real-time Model) presented by Pratt & Whitney, as the ALPVM is built on the linear parameter-varying framework. The system matrices of the ALPVM are computed using simultaneous step response data, and the polynomial LPV model is constructed from sets of polynomial fitting curves with scheduling parameters of engine operation in continuous form. The IR-KELM (independent reduction kernel extreme learning machine) is developed to reduce the computational effort without loss of prediction accuracy, and it serves as an empirical model for polynomial LPV model mismatch compensation. Prediction error control and linear dependency mechanisms are incorporated in the IR-KELM, which decreases the number of hidden nodes and simplifies the IR-KELM topology. A Kalman filter is employed to tune the health parameters of the LPV model over the engine's lifetime. Finally, the IR-KELM performance is confirmed with benchmark data, and simulation results from applying the ALPVM to track the dynamic behavior of a low-bypass turbofan engine across the flight envelope indicate the effectiveness and usefulness of the proposed approach. (C) 2017 Elsevier Masson SAS. All rights reserved. Blended Wing Body (BWB) aircraft are a relatively new concept offering advantages in aerodynamic performance and fuel economy. In BWB aircraft design, directional stability has been identified as one aspect that remains under-researched. 
This paper presents a design analysis of vertical stabilisers on a BWB aircraft to determine their suitability and effects on stability. Founded on an existing model [1], a baseline BWB aircraft model has been developed with vertical stabilisers designed using the volume coefficient method which, although not created for BWB aircraft, is used to aid the design. To ensure suitability for transonic flight, stabiliser dimensions were kept in proportion to those of the Airbus A380, which has a similar payload and cruise condition. Two BWB aircraft CAD models were developed: one with twin stabilisers mounted vertically and another with them inclined. CFD analyses were performed to assess stability with respect to rudder inputs and sideslip angle. The calculated stability derivatives were similar for both twin-stabiliser configurations; however, the inclined configuration gave a smoother response. Drag performance was also assessed, with the inclined stabilisers generating greater drag than the vertical stabilisers. This research has shown that a twin-stabiliser design is suitable for BWB aircraft. (C) 2017 Elsevier Masson SAS. All rights reserved. In 1867 James Lane and George Gascoyen, surgeons to the London Lock Hospital, compiled a report on their experiments with a new and controversial treatment. The procedure, known as "syphilization," saw patients inoculated with infective matter taken from a primary syphilitic ulcer or from the artificial sores produced in another patient. Each patient received between 102 and 468 inoculations to determine whether syphilization could cure syphilis and produce immunity against reinfection. This article examines the theory and practice of this experimental treatment. Conducted against the backdrop of the Contagious Diseases Acts, the English syphilization experiments have been largely forgotten. 
Yet they constitute an important case study of how doctors thought about the etiology and pathology of syphilis, as well as their responsibilities to their patients, at a crucial moment before the advent of the bacteriological revolution. This article examines how lobotomy came to be banned in the Soviet Union in 1950. The author finds that Soviet psychiatrists viewed lobotomy as a treatment of "last resort," and justified its use on the grounds that it helped make patients more manageable in hospitals and allowed some to return to work. Lobotomy was challenged by psychiatrists who saw mental illness as a "whole body" process and believed that injuries caused by lobotomy were therefore more significant than changes to behavior. Between 1947 and 1949, these theoretical and ethical debates within Soviet psychiatry became politicized. Psychiatrists competing for institutional control attacked their rivals' ideas using slogans drawn from Communist Party ideological campaigns. Party authorities intervened in psychiatry in 1949 and 1950, persecuting Jewish psychiatrists and demanding adherence to Ivan Pavlov's theories. Psychiatrists' existing conflict over lobotomy was adopted as part of the party's own campaign against harmful Western influence in Soviet society. This article traces the battle over Freud within Cuban psychiatry from its pre-1959 origins through the "disappearance" of Freud by the early 1970s. It devotes particular attention to the visit of two Soviet psychiatrists to Cuba in the early 1960s as part of a broader campaign to promote Pavlov. The decade-long controversy over Freud responded to both theoretical and political concerns. If for some Freud represented political conservatism and theoretical mystification, Pavlov held out the promise of a dialectical materialist future. Meanwhile, other psychiatrists clung to psychodynamic perspectives, or at least the possibility of heterogeneity. 
The Freudians would end up on the losing side of this battle, with many departing Cuba over the course of the 1960s. But banishing Freud did not necessarily make for stalwart Pavlovians or vanguard revolutionaries. Psychiatry would find itself relegated to a handmaiden position in the work of revolutionary mental engineering, with the government itself occupying the vanguard. This article reviews adoption debates about the disclosure of children's medical history in the twentieth century, noting shifts in prescriptions of how much and what to tell adoptive applicants. I look at how debates among adoption professionals throughout the twentieth century around the disclosure of a child's medical history reveal the ways in which these professionals tried to deal with issues of predictability, risk, adoptability, and acceptability when it came to the persistent question of disability in adoptive family making. I consider how this management is similar to and different from histories of reproduction. I argue that as child eligibility gradually expanded to include children labeled disabled, and as adoption moved from being a parent-centered practice to a child-centered one, professionals more intensely negotiated the management and communication of disability risk as a way both to mitigate the possibility of a failed placement and to facilitate a successful one. In the first half of the twentieth century, Japan and China, each for its own reasons, invited the famous physicist Niels Bohr to visit and give lectures. Bohr accepted their invitations and made the trip in 1937; however, the topics of his lectures in the two countries differed. In Japan, he mainly discussed quantum mechanics and philosophy, whereas in China, he focused more on atomic physics. This paper begins with a detailed review of Bohr's trip to Japan and China in 1937, followed by a discussion of the impact of each trip from the perspective of the social context. 
We conclude that the actual effect of Bohr's visits to China and Japan involved not only the spreading of Bohr's knowledge but also hinged clearly on the current status and social background of the recipients. Moreover, the impact of Bohr's trip to East Asia demonstrates that, as is the case for scientific exchanges at the international level, the international exchange of knowledge at the individual level is also powerful, and such individual exchange can even promote exchange at the international level. No direct evidence documents exactly how Jane Seymour gave birth on October 12, 1537. Several later commentators have raised cesarean birth as an option. This paper tries to establish the probable cause of Jane Seymour's death in accordance with present-day knowledge of obstetrics, and whether or not a cesarean section could actually have been performed in sixteenth-century England. It appears almost certain that there were no obstetrical indications that would have led the Queen's physicians to operate on her, that a surgeon was not present at her delivery, that cesarean section on a living woman was not regularly performed in England in 1537, that the events of the puerperium do not support surgery, and that the existing pro-cesarean confirmation was politically motivated. Therefore, the most likely mode of Jane Seymour's delivery was vaginal rather than cesarean. For decades creationists have claimed that Charles Darwin sought the skulls of full-blooded Aboriginal Tasmanian people when only four were left alive. It is said that Darwin letters survive which reveal this startling and distasteful truth. Tracing these claims back to their origins, however, reveals a different, if not unfamiliar, story. This article investigates the relationship between two evaluative claims about agents' degrees of belief: (i) that it is better to have more rather than less accurate degrees of belief and (ii) that it is better to have less rather than more probabilistically incoherent degrees of belief. 
We show that, for suitable combinations of inaccuracy measures and incoherence measures, both claims are compatible, although not equivalent; moreover, certain ways of becoming less incoherent always guarantee improvements in accuracy. Incompatibilities between particular incoherence and inaccuracy measures can be exploited to argue against particular ways of measuring either inaccuracy or incoherence. The first part of this article finds Craver's mutual manipulability theory (MM) of constitution inadequate, as it definitionally ties constitution to the feasibility of ideal experiments, which, however, are unrealizable in principle. As an alternative, the second part develops an abductive theory of constitution (NDC), which exploits the fact that phenomena and their constituents are unbreakably coupled via common causes. The best explanation for this fact is the existence of an additional dependence relation, namely, constitution. NDC has important ramifications for constitutional discovery, most notably that there is no experimentum crucis for constitution, not even under ideal discovery circumstances. We show that previous results from epistemic network models by Kevin J. S. Zollman and Erich Kummerfeld, showing the benefits of decreased connectivity in epistemic networks, are not robust across changes in parameter values. Our findings motivate discussion about whether and how such models can inform real-world epistemic communities. In many fields in the life sciences, investigators refer to downward or top-down causal effects. Craver and I defended the view that such cases should be understood in terms of a constitution relation between levels in a mechanism and intralevel causal relations (occurring at any level). We did not, however, specify when entities constitute a higher-level mechanism. 
In this article I appeal to graph-theoretic representations of networks, now widely employed in systems biology and neuroscience, and associate mechanisms with modules that exhibit high clustering. As a result of interconnections within clusters, mechanisms often exhibit complex dynamic behaviors that constrain how individual components respond to external inputs, a central feature of top-down causation. Clade selection is unpopular with philosophers who otherwise accept multilevel selection theory. Clades cannot reproduce, and reproduction is widely thought necessary for evolution by natural selection, especially of complex adaptations. Using microbial evolutionary processes as heuristics, I argue contrariwise that (1) clade growth (proliferation of contained species) substitutes for clade reproduction in the evolution of complex adaptation, (2) clade-level properties favoring persistence (species richness, dispersal, divergence, and possibly intraclade cooperation) are not collapsible into species-level traits, (3) such properties can be maintained by selection on clades, and (4) clade selection extends the explanatory power of the theory of evolution. Philosophical defenses of cognitive/evolutionary psychological accounts of racialism claim that classification based on phenotypical features of humans was common historically and is evidence for a species-typical, cognitive mechanism for essentializing. They conclude that social constructionist accounts of racialism must be supplemented by cognitive/evolutionary psychology. This article argues that phenotypical classifications were uncommon historically until such classifications were socially constructed. Moreover, some philosophers equivocate between two different meanings of "racial thinking." The article concludes that social constructionist accounts are far more robust than psychological accounts for the origins of racialism. 
Many have suggested that the transformation standardly referred to as 'time reversal' in quantum theory is not deserving of the name. I argue on the contrary that the standard definition is perfectly appropriate and is indeed forced by basic considerations about the nature of time in the quantum formalism. Constructive field theory aims to rigorously construct concrete, nontrivial solutions to Lagrangians used in particle physics. I examine the relationship of solutions in constructive field theory to both axiomatic and Lagrangian quantum field theory (QFT). I argue that Lagrangian QFT provides conditions for what counts as a successful constructive solution and other information that guides constructive field theorists to solutions. Solutions matter because they describe the behavior of QFT systems and thus what QFT says the world is like. Constructive field theory clarifies existing disputes about which parts of QFT are philosophically relevant and how rigor relates to these disputes. Many astronomers seem to believe that we have discovered that Pluto is not a planet. I contest this assessment. Recent discoveries of trans-Neptunian Pluto-sized objects do not militate for Pluto's expulsion from the planets unless we have prior reason for not simply counting these newly discovered objects among the planets. I argue that this classificatory controversy, which I compare to the controversy about the classification of the platypus, illustrates how our classificatory practices are laden with normative commitments of a distinctive kind. I conclude with a discussion of the relevance of such "norm-ladenness" to other controversies in the metaphysics of classification, such as the monism/pluralism debate. (C) 2017 The Author. Published by Elsevier Ltd. Quine is routinely perceived as having changed his mind about the scope of the Duhem-Quine thesis, shifting from what has been called an 'extreme holism' to a more moderate view. 
Where the Quine of 'Two Dogmas of Empiricism' argues that "the unit of empirical significance is the whole of science" (1951, 42), the later Quine seems to back away from this "needlessly strong statement of holism" (1991, 393). In this paper, I show that the received view is incorrect. I distinguish three ways in which Quine's early holism can be said to be wide-scoped and show that he never changed his mind about any one of these aspects of his early view. Instead, I argue that Quine's apparent change of mind can be explained away as a mere shift of emphasis. (C) 2016 Elsevier Ltd. All rights reserved. Contemporary scholars set the Greek conception of an immanent natural order in opposition to the seventeenth-century mechanistic conception of extrinsic laws imposed upon nature from without. By contrast, we argue that in the process by which the concept of a law of nature was made, forms and laws were coherently used in theories of natural causation. We submit that such a combination can be found in the thirteenth century. The heroes of our claim are Robert Grosseteste, who turned the idea of corporeal form into the common feature of matter, and Roger Bacon, who described the effects of that common feature. Bacon detached the explanatory principle from matter and rendered it independent and therefore external to natural substances. Our plausibility argument, anchored in a close reading of the relevant texts, facilitates a coherent conception of both 'natures' and 'laws'. (C) 2016 Elsevier Ltd. All rights reserved. As it is standardly conceived, Inference to the Best Explanation (IBE) is a form of ampliative inference in which one infers a hypothesis because it provides a better potential explanation of one's evidence than any other available, competing explanatory hypothesis. Bas van Fraassen famously objected to IBE, thus formulated, that we may have no reason to think that any of the available, competing explanatory hypotheses are true. 
While revisionary responses to the Bad Lot Objection concede that IBE needs to be reformulated in light of this problem, reactionary responses argue that the Bad Lot Objection is fallacious, incoherent, or misguided. This paper shows that the most influential reactionary responses to the Bad Lot Objection do nothing to undermine the original objection. This strongly suggests that proponents of IBE should focus their efforts on revisionary responses, i.e. on finding a more sophisticated characterization of IBE for which the Bad Lot Objection loses its bite. (C) 2017 Elsevier Ltd. All rights reserved. The work of Thomas Kuhn has been very influential in Anglo-American philosophy of science, and it is claimed to have initiated the historical turn. Although this might be the case for English-speaking countries, in France a historical approach has always been the rule. This article aims to investigate the similarities and differences between Kuhn and French philosophy of science, or 'French epistemology'. The first part will argue that he was influenced by French epistemologists, but by lesser-known authors than is often thought. The second part focuses on the reactions of French epistemologists to Kuhn's work, which were often very critical. It is argued that behind some superficial similarities there are deep disagreements between Kuhn and French epistemology. This is finally shown by a brief comparison with the reactions of more recent French philosophers of science, who distance themselves from French epistemology and are more positive about Kuhn. Based on these diverse appreciations of Kuhn, a typology of the different positions within the philosophy of science is suggested. (C) 2017 Elsevier Ltd. All rights reserved. The exploitation of space must benefit from the latest advances in robotics. On-orbit servicing is a clear candidate for the application of autonomous rendezvous and docking mechanisms. 
However, during the last three decades most of the trials took place combining extravehicular activities (EVAs) with telemanipulated robotic arms. The European Space Agency (ESA) considers that grasping and refuelling are promising near- to mid-term capabilities that could be performed by servicing spacecraft. Minimal add-ons on spacecraft to enhance their serviceability may prepare them for a changing future in which satellite servicing may become mainstream. ESA aims to conceive and promote standard refuelling provisions that can be installed in present and future European commercial geostationary orbit (GEO) satellite platforms and scientific spacecraft. For this purpose, ESA has started the ASSIST activity, addressing the analysis, design and validation of internal provisions (such as modifications to fuel, gas, electrical and data architecture to allow servicing) and external provisions (such as integrated berthing fixtures with peripheral electrical, gas and liquid connectors, leak check systems and corresponding optical and radio markers for cooperative rendezvous and docking). This refuelling approach is being agreed upon with European industry (OHB, Thales Alenia Space) and is expected to be consolidated with European commercial operators as a first step towards becoming an international standard; the approach is also being considered for on-orbit servicing spacecraft, such as the SpaceTug by Airbus DS. This paper describes in detail the operational means, structure, geometry and accommodation of the system. Internal and external provisions will be designed with the minimum possible impact on the current architecture of GEO satellites, without introducing additional risks in the development and commissioning of the satellite. The end-effector and berthing fixtures are being designed in the range of a few kilograms, with linear dimensions of around 15 cm. 
A central mechanical part is expected to perform first a soft docking, followed by a motorized retraction that ends in a hard docking phase using aligning pins. Mating and de-mating will be exhaustively analysed to ensure robustness of operations. Leakage-free valves would allow for the transfer of fuel to the serviced spacecraft. The validation of the ASSIST system through dedicated environmental tests in a vacuum chamber, together with dynamic testing using an air-bearing table, will allow for the demonstration of concept feasibility and its suitability for becoming a standard of the on-orbit space industry. Kerosene distribution before ignition in a scramjet combustor with a dual cavity was measured using kerosene-PLIF under transverse injection upstream of the cavity at different injection pressures. The simulated flight condition is Mach 5.5, and the isolator entrance has a Mach number of 2.52, a total pressure of 1.6 MPa and a stagnation temperature of 1486 K. The effects of injection pressure on fuel distribution characteristics were analyzed. The majority of the kerosene is present in the cavity shear layer and the region above it. Kerosene extends gradually into the cavity at an almost constant angle. Large-scale structures are evident on the windward side of the kerosene plume. The cavity shear layer plays an important role in determining the kerosene distribution and its entrainment into the cavity. The middle part of the cavity is the most suitable location for ignition as a result of a favorable local equivalence ratio. As the injection pressure increases, the penetration height increases, with the rate of increase slowing at higher injection pressures. Meanwhile, the fraction of kerosene entrained into the cavity through the shear layer becomes smaller as the injection pressure increases. However, the amount of kerosene entrained into the cavity still increases due to the increased kerosene mass flow rate.
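The diminishing growth of penetration height with injection pressure can be illustrated with a common power-law correlation form for transverse jets in crossflow; this is a hedged sketch with hypothetical coefficients, not the paper's measured correlation:

```python
# Illustrative sketch only: transverse-jet penetration is commonly fitted by
# a power law y/d = A * J**a * (x/d)**b with exponent a < 1, where J is the
# jet-to-crossflow momentum flux ratio (roughly proportional to injection
# pressure). A, a, b below are hypothetical, not values from the experiment.
def penetration_height(J, x_over_d, A=1.2, a=0.4, b=0.3):
    """Normalized penetration height y/d at streamwise station x/d."""
    return A * J ** a * x_over_d ** b

# Equal increments in J yield shrinking increments in height because a < 1,
# matching the reported slowing growth at higher injection pressures.
heights = [penetration_height(J, x_over_d=10.0) for J in (1.0, 2.0, 3.0)]
increments = [h2 - h1 for h1, h2 in zip(heights, heights[1:])]
```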
Long-term spaceflight needs reliable biological life support systems (BLSS) to supply astronauts with enough food and fresh air and to recycle wastes, but knowledge about their operation patterns and control strategies is scarce. For this purpose, a miniaturized enclosed aquatic ecosystem was developed and flown on the Chinese spaceship Shenzhou-8. The system, with a total volume of about 60 mL, was separated into two chambers by means of a gas-transparent membrane. The lower chamber was inoculated with Euglena gracilis cells, and the upper chamber was cultured with Chlorella cells and three snails. After a 17.5-day flight, the samples were analyzed. It was found that all snails in the ground module (GM) were alive, while in the flight module (FM) only one snail survived. The total cell numbers, assimilation of nutrients such as nitrogen and phosphorus, and soluble protein and carbohydrate contents were lower in FM than in GM. The correlation analysis showed that the upper chambers of both FM and GM had the same positive and negative correlation factors, while differential correlations were found in the lower chambers. These results suggested that primary productivity in the enclosed system decreased in microgravity, accompanied by decreased nutrient assimilation. The FM chamber lacked a dominant species to sustain system development, whereas the GM chamber maintained rich population abundance. These results implied that photosynthesis intensity should be reduced to keep the system healthy. More Chlorella but less Euglena might be a useful strategy to sustain system stability. This is the first systematic analysis of enclosed systems in microgravity.
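The kind of correlation analysis described above can be reproduced in a few lines; a minimal sketch with invented numbers, computing a pairwise Pearson coefficient between two ecosystem variables:

```python
from math import sqrt

# Illustrative sketch of the correlation analysis described in the study:
# a pairwise Pearson coefficient between ecosystem variables. All the data
# values below are invented for illustration.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

cells    = [5.1, 4.2, 3.9, 3.1]   # total cell number, arbitrary units
nitrogen = [2.0, 1.8, 1.6, 1.2]   # nitrogen assimilated, arbitrary units
r = pearson(cells, nitrogen)       # near +1: the variables decline together
```

A full analysis would assemble such coefficients for every variable pair in each chamber and compare the sign patterns between the flight and ground modules.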
After two decades of slightly declining growth rate, the population of cataloged objects around the Earth increased by more than 56% in just a couple of years, from January 2007 to February 2009, due to two collisions in space involving the catastrophic destruction of three intact satellites (Fengyun 1C, Cosmos 2251 and Iridium 33) in high-inclination orbits. Both events occurred in the altitude range already most affected by previous launch activity and breakups. In 2011 a detailed analysis was carried out to assess the consequences of these fragmentations, in particular the evolution of the collision risk for the Iridium and COSMO-SkyMed satellite constellations. Five years after that first assessment, the cataloged object environment affecting the two constellations was revisited to evaluate how the situation had evolved due to the varying contribution of the above-mentioned breakup fragments and the space activities carried out in the meantime. Being distributed, at 778 km, over six nearly polar orbit planes separated by just 30° at the equator, the Iridium satellites represent a very good gauge for checking the evolution of the environment in the most critical low Earth region. In approximately five years, from May 2011 to June 2016, the average flux of cataloged objects on the Iridium satellites increased by about 14%, to 1.59 × 10^-5 m^-2 per year. The cataloged fragments of Fengyun 1C, Cosmos 2251 and Iridium 33 still accounted for, on average, 54% of the total flux. More than 39% of the latter was associated with the Fengyun 1C fragments, about 11% with the Cosmos 2251 fragments and less than 4% with the Iridium 33 fragments.
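As a quick consistency check on the quoted Iridium figures (using the text's rounded shares, which sum to the stated 54%):

```python
# Back-of-the-envelope check of the quoted flux shares on the Iridium
# constellation. Values are the rounded numbers from the text: the three
# fragment clouds together account for 54% of the total cataloged flux.
total_flux = 1.59e-5          # m^-2 per year, all cataloged objects (Jun 2016)
shares = {"Fengyun 1C": 0.39, "Cosmos 2251": 0.11, "Iridium 33": 0.04}
fragment_flux = total_flux * sum(shares.values())  # flux from the breakups
```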
Specifically concerning the mutual interaction between the Iridium 33 debris and the parent constellation, the progressive dispersion and rather fast decay of the fragments below the Iridium operational altitude, coupled with a slow differential plane precession and low average relative velocities with respect to four of the six constellation planes, led in five years, on average, to a decline of the flux by about 31%, i.e. to about 5.75 × 10^-7 m^-2 per year. The decrease occurred in each constellation plane, though with different rates and percentages, due to the varying relative orbit geometry. From May 2011 to June 2016, the mean flux of cataloged objects on the COSMO-SkyMed satellites, at 623 km, increased by about 26%, to 7.24 × 10^-6 m^-2 per year. The Fengyun 1C, Cosmos 2251 and Iridium 33 cataloged fragments accounted for, on average, about one quarter of the total, with 12% due to Fengyun 1C, 8% to Cosmos 2251 and 4% to Iridium 33. In this work the electromagnetic characterization of composite materials reinforced with carbon and metallic nanoparticles is presented. In particular, the electric permittivity and the magnetic permeability as functions of frequency are used to evaluate the electromagnetic absorption capability of the nanocomposites. The aim is to study possible applications in advanced coatings able to tune the electromagnetic reflectivity of satellite surfaces in specific frequency ranges, especially for those surfaces that for some reason could be exposed to the antenna radiation pattern. In fact, the interference caused by spurious electromagnetic multipath due to highly conductive satellite surface components could in turn affect the main radiation lobe of TLC and telemetry antennas, modifying its main propagation directions and ultimately increasing the microwave channel path loss. The work reports the analysis of different nanostructured materials in the 2-10 GHz frequency range.
The nanopowders employed are carbon nanotubes, cobalt, silver, titanium, nickel, zinc, copper, iron, boron, bismuth and hafnium, in different weight percentages relative to the hosting polymeric matrix. The materials are classified as a function of their electromagnetic loss capability, taking into account both electric and magnetic properties. The possibility of designing multi-layered structures optimized to provide a specific microwave response is finally analyzed with the aid of a swarm intelligence algorithm. This technique is of general interest for metrological and remote sensing purposes, and can be effectively used in the aerospace field for frequency-selective material design, in order to reduce aircraft/spacecraft radar observability at certain frequencies. For over half a century space exploration has been dominated by engineering- and technology-driven practices. This paradigm leaves limited room for art and design. Yet in other parts of our lives, art and design play important roles: they stimulate new ideas and connect people to their experiences and to each other at a deeper level, while affecting our worldview as we evolve our cognitive models. We develop these models through circular conversations with our environment, perceiving and making sense through our sensory systems and responding through language and interactions. Artists and designers create artifacts through conversation cycles of sense-giving and sense-making, thus increasing variety in the world in the form of evolving messages. Each message becomes information when the observer decodes it, through multiple sense-making and re-sampling cycles. The messages form triggers to the cognitive state of the observer. Having a shared key between the artist/designer and the observer, for example in the form of language, gestures, and artistic/design styles, is fundamental to encoding and decoding the information in conversations.
Art, design, science, and engineering are all creative practices. Yet they often speak different languages, where some parts may correspond while others address a different variety in the cybernetic sense. These specialized languages within disciplines streamline communications, but limit variety. Thus, different languages between disciplines may introduce communication blocks. Nevertheless, these differences are desirable, as they add variety to the interactions and could lead to novel discourses and possibilities. We may dissolve communication blocks through the introduction of boundary objects at the intersection of multiple disciplines. Boundary objects can ground ideas and bridge language diversity across disciplines. These artifacts are created to facilitate circular cybernetic conversations, supporting convergence towards common shared languages between the actors. The shared language can also create new variety that evolves through conversations between the participants. Misunderstandings in conversations can also lead to new ideas, as they stimulate questions and may suggest novel solutions. In this paper we propose new categorizations for boundary objects, drawn from design and cybernetic approaches. We evidence these categories with a number of space-related object examples. Furthermore, we discuss how these boundary objects facilitate communications between diverse audiences, ranging from scientists and engineers to artists, designers, and the general public. Starting from the 1990s, mini/micro-satellites of around 50-200 kg became one of the research focuses of the space industry. Different from the mini-satellites developed in earlier decades, modern mini/micro-satellites widely incorporate micro-electronics and micro-mechanisms, emphasizing multi-functionality and system integration. As a result, they have a relatively high power/volume ratio.
Also, to reduce fuel consumption, the application of micro-electrical propulsion systems on mini/micro-satellites is increasing, which pushes the requirement for electrical power even higher. The surface-mounted solar cells and stationary solar arrays widely used by earlier micro-satellites can hardly satisfy the rising power requirements of modern mini/micro-satellites. In response, Solar Array Drive Assemblies (SADA), which used to be standard equipment on large spacecraft, have gradually been incorporated into mini/micro-satellites to rotate the solar arrays for maximum sunlight acquisition, greatly reducing the size and mass of the solar arrays. Lately, a new micro-SADA with integrated mechanisms and electronics has been developed by the Beijing Institute of Control Engineering. This SADA features mechanisms and electronics integrated in one capsule, instead of the separate mechanism and electronics boxes used on large spacecraft; a high torque-to-weight-ratio stepper motor with a high-precision mini-transmission; a long-life, high power-to-weight-ratio multi-channel slip ring; and high-reliability measures against severe mechanical, electrical and thermal conditions. This SADA has gone through functional testing, mechanical tests (vibration, acceleration, shock, etc.), thermal-vacuum cycling, and electromagnetic compatibility testing, and has demonstrated excellent functionality. The design, as well as the development and test processes, is summarized in this paper. Since the beginning of space exploration, probes have been sent to other planets and moons, with the associated challenge of landing on these bodies. For a soft landing, several damping methods such as landing legs or airbags have been used. A new and potentially less complex and lighter way to reduce the shock loads at touchdown is the use of a crushable shield underneath the lander platform.
This crushable shield could be made, for example, out of an energy-absorbing material such as an aluminum honeycomb core with a High Performance Polyethylene cover sheet. The design is particularly advantageous since neither moving parts nor other mechanisms are required, making the shield very robust and fail-safe. The only mission that has used this technique is the ESA/Roscosmos mission "ExoMars", which started in 2016. The development of such a crushable shock absorber requires the assessment of materials and manufacturing processes, the setup of a numerical simulation and experimental validation in a test lab. In an independent research project (Marslander), a representative engineering mockup of the landing platform was built and tested at the Landing & Mobility Test Facility (LAMA) to support the numerical simulation model with experimental data. This paper focuses on the hardware tests. Results of the development and testing processes described above are presented and discussed. A noncooperative target with large inertia grasped by a space robot may carry a large unknown initial angular momentum, which can render the compound system unstable. Unloading the unknown angular momentum of the compound system is a necessary and difficult task. In this paper, a coordinated stabilization scenario is introduced to reduce the angular momentum in two stages: Momentum Reduction and Momentum Redistribution. For Momentum Reduction, a modified adaptive sliding mode control algorithm is proposed to reduce the unknown angular momentum of the target; it uses a new signum function and time-delay estimation to ensure fast convergence and achieve good performance with a small chattering effect. Finally, a planar dual-arm space robot is simulated; the numerical simulations show that the proposed control algorithm is able to stabilize a noncooperative target with large inertia successfully, while the attitude disturbance of the base remains small.
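The two ingredients named in that abstract, a smoothed signum function and time-delay estimation (TDE), can be illustrated on a toy one-degree-of-freedom plant. This is a hedged sketch of the general technique, not the authors' algorithm; all gains and plant numbers are invented:

```python
import math

# Minimal sketch of sliding-mode control with time-delay estimation for a
# 1-DOF plant  m*q'' = u + d(t)  whose inertia m and disturbance d are
# unknown to the controller. The previous control sample and measured
# acceleration stand in for the unknown dynamics, and a smooth tanh()
# replaces the hard signum to reduce chattering. All numbers are invented.
m, dt = 4.0, 1e-3                  # true inertia (unknown), integration step
m_bar, lam, K = 3.0, 8.0, 5.0      # inertia guess, sliding slope, reaching gain
q, qdot = 1.0, 0.0                 # initial state; regulation target is q = 0
u_prev = acc_prev = 0.0
for k in range(6000):              # 6 s of simulated time
    t = k * dt
    s = qdot + lam * q                          # sliding variable
    v = -lam * qdot - K * math.tanh(10 * s)     # desired acceleration
    u = m_bar * v + u_prev - m_bar * acc_prev   # TDE cancels m and d(t)
    d = 2.0 * math.sin(3 * t)                   # unknown disturbance
    acc = (u + d) / m
    q, qdot = q + qdot * dt, qdot + acc * dt
    u_prev, acc_prev = u, acc
# q and qdot end up near zero despite the wrong inertia guess and disturbance
```

The TDE step works because the one-sample-old measurement `u_prev + d ≈ m * acc_prev` approximates the unknown dynamics, so the controller never needs `m` or `d` explicitly.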
The control algorithm also shows good robustness. A new thermal protection system for atmospheric Earth re-entry is proposed. This concept combines the advantages of both reusable and ablative materials to establish a new hybrid concept with advanced capabilities. The solution consists of the design and integration of a dual shield resulting from the overlapping of an external thin ablative layer with a Ceramic Matrix Composite (CMC) thermo-structural core. The low-density ablative material absorbs the short heat-flux peak encountered during re-entry, which the CMC alone is not able to bear. The CMC-based TPS, in turn, offers the great benefit of dealing with the high integral heat load over the longer portion of the re-entry. To verify the solution, a full testing plan is envisaged, including thermal shock tests by infrared heating (heat flux up to 1 MW/m^2) and vibration tests under launcher conditions (Volna and Ariane 5). Sub-scale tile samples (100 × 100 mm^2) representative of the whole system (dual ablator/ceramic layers, insulation, stand-offs) are specifically designed, assembled and tested (including the integration of thermocouples). Both the thermal and vibration tests are analysed numerically with simulation tools using Finite Element Models. The experimental results are in good agreement with the calculated parameters, and the solution is qualified according to the specified requirements. This exploratory study investigates (i) inter-individual variations of affective states before a parabolic flight (PF) on the basis of the quality of adaptation to physical demands, and (ii) intra-individual variations of affective states during a PF. Mood states, state anxiety and salivary cortisol were assessed in two groups with a different quality of adaptation (an Adaptive Group, AG, and a Maladaptive Group, MG) before and during a PF.
Before the PF, MG scored higher on mood states (Anger-Hostility, Fatigue-Inertia) than AG. During the flight, while AG seemed to present "normal" affective responses to the demanding environment (e.g., an increase in salivary cortisol), MG presented increases in mood states such as Confusion-Bewilderment and Tension-Anxiety. The findings suggest that the psychological states of MG could have disturbed their ability to integrate sensory information from an unusual environment, which led to difficulties in coping with the physical demands of the PF. In this paper, optimal two-impulse Earth-Moon transfers using the lunar gravity assist (LGA) are designed and investigated in the restricted four-body problem (RFBP) with the Sun, the Earth, and the Moon as primaries. Using the double LGA orbit as the initial guess, we propose an optimization method for the Earth-Moon transfer using the LGA in the RFBP. In our design process, the optimization problem in the RFBP is decomposed into several simpler optimization subproblems in simpler models, to reduce the difficulty of the overall problem. The optimal LGA transfers are then displayed and analyzed. The results indicate that the LGA is an important technique in Earth-Moon transfer design, which can efficiently save fuel cost, but the LGA transfer duration must be longer than one period of the Earth-Moon system. This paper discusses an approach to reducing design costs for subsequent missions by introducing modularity, commonality and multi-mission capability, and thereby the reuse of mission-specific investments, into the design of lunar exploration infrastructure.
The presented approach has been developed within the German Helmholtz Alliance on Robotic Exploration of Extreme Environments (ROBEX), a research alliance bringing together deep-sea and space research to jointly develop technologies and investigate problems for the exploration of highly inaccessible terrain, be it in the deep sea and polar regions or on the Moon and other planets. Although overall costs are much smaller for deep-sea missions than for lunar missions, much can be learned from modularity approaches in deep-sea research infrastructure design, which allow high operational flexibility in the planning phase of a mission as well as during its implementation. The research presented here is based on a review of existing modular solutions in Earth-orbiting satellites as well as science and exploration systems. This is followed by an investigation of lunar exploration scenarios, from which we derive requirements for a multi-mission modular architecture. After analyzing possible options, an approach using a bus-modular architecture for dedicated subsystems is presented. The approach is based on exchangeable modules, e.g. incorporating instruments, which are added to the baseline system platform according to the demands of the specific scenario. It is described in more detail, including problems arising, for example, in the power or thermal domain. Finally, the technological building blocks needed to put the architecture into practical use are described in more detail. In this study, the RNG Large Eddy Simulation (RNG-LES) methodology for synthesis gas turbulent combustion in a round jet burner is investigated using the OpenFOAM package. In this regard, the extended EDC extinction model of Aminian et al. for coupling the reaction and turbulent flow has been utilized, along with various reaction kinetics mechanisms such as skeletal and GRI-Mech 3.0.
To estimate precision and error accumulation, we used Smirnov's method, and the results are compared with the available experimental data under the same conditions. It was found that the GRI-Mech 3.0 reaction mechanism has the least computational error and was therefore taken as the reference reaction mechanism. We then investigated the influence of various working parameters, including the inlet flow temperature and inlet velocity, on the behavior of the combustion. The results show that the maximum burner temperature and pollutant emission are affected by changing the inlet flow temperature and velocity. Ballistically connecting halo orbits to science orbits in the circular restricted three-body problem is investigated. Two classes of terminal science orbits are considered: low-altitude, tight orbits that are deep in the gravity well of the secondary body, and high-altitude, loose orbits that are strongly perturbed by the gravity of the primary body. General analytic expressions are developed to provide a minimum bound on impulse cost in both the circular restricted and Hill's approximations. The equations are applied to a broad range of planetary moons, providing a mission design reference. Systematic grid search methods are developed to numerically find feasible transfers from halo orbits at Europa, confirming the analytical lower-bound formulas. The two-impulse capture options in the case of Europa reveal a diverse set of potential solutions. Tight captures result in maneuver costs of 425-550 m/s, while loose captures are found with costs as low as 30 m/s. The terminal orbits are verified to avoid escape or impact for at least 45 days. Solar oscillation, which causes the sunlight intensity and spectral frequency to change, has been studied in great detail, both observationally and theoretically.
In this paper, owing to the existence of solar oscillation, the time delay between the sunlight coming directly from the Sun and the sunlight reflected by another celestial body, such as a planetary satellite or an asteroid, can be obtained with two optical power meters. Because the solar oscillation time delay is determined by the relative positions of the spacecraft, the reflecting celestial body and the Sun, it can be adopted as a navigation measurement to estimate the spacecraft's position. The navigation accuracy of a single solar-oscillation time-delay navigation system depends on the time delay measurement accuracy, and is influenced by the distance between the spacecraft and the reflecting celestial body. In this paper, we combine it with the star angle measurement and propose a solar-oscillation time-delay-assisted celestial navigation method for deep space exploration. Since the measurement model of the time delay is an implicit function, the Implicit Unscented Kalman Filter (IUKF) is applied. Simulations demonstrate the effectiveness and superiority of this method. Reaction wheels, among the most commonly used actuators in satellite attitude control systems, are prone to malfunctions that could lead to catastrophic failures. Such malfunctions can be detected and addressed in time if proper analytical redundancy algorithms such as parameter estimation and control reconfiguration are employed. Major challenges in parameter estimation include the speed and accuracy of the employed algorithm. This paper presents a new approach for improving parameter estimation with an adaptive unscented Kalman filter. The enhancement in tracking speed of the unscented Kalman filter is achieved by systematically adapting the covariance matrix to the faulty estimates using innovation and residual sequences, combined with an adaptive fault annunciation scheme.
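The underlying covariance-adaptation idea, re-opening the filter's gain when the innovation sequence becomes statistically inconsistent, can be sketched on a scalar Kalman filter. This is a simplified stand-in for the full unscented filter, with invented noise levels and thresholds:

```python
import random

# Hedged sketch of innovation-based covariance adaptation on a scalar Kalman
# filter (not the authors' full unscented filter): when the normalized
# innovation exceeds a ~3-sigma test, the predicted covariance is inflated so
# the gain re-opens and the filter re-acquires the changed parameter.
random.seed(1)
x_true, x_est, P = 1.0, 1.0, 0.1   # true parameter, estimate, covariance
Q, R = 1e-5, 0.04                  # process / measurement noise variances
for k in range(400):
    if k == 200:
        x_true = 3.0               # abrupt parameter change (e.g. a fault)
    z = x_true + random.gauss(0.0, R ** 0.5)
    P_pred = P + Q
    nu = z - x_est                 # innovation
    S = P_pred + R                 # innovation covariance
    if nu * nu / S > 9.0:          # ~3-sigma innovation consistency test
        P_pred += 0.05             # inflate covariance: fast re-acquisition
        S = P_pred + R
    K = P_pred / S
    x_est += K * nu
    P = (1.0 - K) * P_pred
# x_est re-converges to the post-fault value within a few measurements
```

Without the inflation branch, the converged filter's tiny gain would take hundreds of samples to track the jump; the innovation test restores fast tracking only when needed.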
The proposed approach gives the filter the ability to track sudden changes in the system's non-measurable parameters accurately. Results showed successful detection of reaction wheel malfunctions without requiring a priori knowledge about system performance in the presence of abrupt, transient, intermittent, and incipient faults. Furthermore, the proposed approach resulted in superior filter performance, with smaller mean squared residual errors than generic and adaptive unscented Kalman filters, and thus it is a promising method for the development of fail-safe satellites. Solar energy collection and conversion is of great significance to the power transmission of the Space Solar Power Station (SSPS), and influences the overall system both technologically and economically. For the proposed SSPS-OMEGA concept, the original conceptual design had a non-uniform energy distribution and excessive energy density in local areas, which would degrade its optical and electric performance. To address this, the paper first evaluates the optical performance of the OMEGA concept via a ray-tracing technique. Second, the generatrix geometry of the photovoltaic (PV) cell array is designed based on Bezier curves to obtain the best optical performance available for an efficient response to sunrays. Numerical examples then achieve good collection efficiency and suitable energy distribution. Finally, modular construction of the main concentrator and its influence on optical performance are investigated, and the effects of orbital motion and tracking error are analyzed to provide a reference for the realization of the OMEGA concept. Direct numerical simulation of a three-dimensional spatially developing supersonic turbulent round jet flame in heated coflow has been conducted. The lifted flame is auto-ignited and stabilized with a flame base. The flame surface is quite corrugated due to interactions with turbulence.
The mean heat release rate conditioned on the curvature of the flame surface is larger for positive curvature than for negative. The supersonic jet spreads more slowly than in the subsonic case. The mean centerline velocity increases slightly in the far field due to heat release, in marked contrast to non-reacting flows, where the velocity decays monotonically. The large fluctuations of species mass fraction and temperature in mixture fraction space result in large discrepancies when using the first-order CMC closure, in which these fluctuations are neglected. Correcting the reaction source terms using a conditional PDF approach based on doubly conditioned moment closure proves effective; however, the accuracy of the singly conditioned chemical source terms depends to a large extent on an accurate assumption for the PDF of the progress variable. The beta PDF performs better in the far field of the jet flame, but very poorly in the region near the nozzle. The results of a lunar massdriver mission and system analysis are discussed and show a strong case for a permanent lunar settlement with a site near the lunar equator. A modular massdriver concept is introduced, which uses multiple acceleration modules to launch large masses onto a trajectory that can reach Earth. An orbital mechanics analysis concludes that the launch site will be in Oceanus Procellarum, a flat, titanium-rich lunar mare area. It is further shown that the bulk of the massdriver components can be manufactured by collecting lunar minerals and breaking them down into their constituent elements. The mass-to-orbit transfer rates of the massdriver case study are significant and can vary between 1.8 kilotons and 3.3 megatons per year, depending on the available power. Thus a lunar massdriver would act as a catalyst for any space-based activities and a game changer for the scale of feasible space projects.
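The link between throughput and available power can be sanity-checked with simple kinetic-energy arithmetic. The launch velocity and efficiency below are assumptions for illustration, not values from the study:

```python
# Hedged sanity check of the quoted throughput range: average electrical
# power needed to launch a given annual mass at roughly lunar escape speed.
# Launch velocity and efficiency are assumed, not taken from the study.
SECONDS_PER_YEAR = 3.156e7
v_launch = 2400.0                       # m/s, about lunar escape velocity
eta = 0.5                               # assumed massdriver efficiency
energy_per_kg = 0.5 * v_launch ** 2 / eta   # J of electricity per launched kg

def power_required(annual_mass_kg):
    """Average electrical power in watts for a given annual throughput."""
    return annual_mass_kg * energy_per_kg / SECONDS_PER_YEAR

p_low  = power_required(1.8e6)   # 1.8 kt/yr -> a few hundred kW
p_high = power_required(3.3e9)   # 3.3 Mt/yr -> hundreds of MW
```

Under these assumptions the quoted range indeed spans from a few hundred kilowatts to several hundred megawatts of average power, consistent with the claim that throughput is power-limited.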
In this work, a study of the influence of the Sun on optimal two-impulse Earth-to-Moon trajectories for interior transfers with moderate time of flight is presented, considering the three-body and four-body models. The optimization criterion is the total characteristic velocity, which represents the fuel consumption of an infinite-thrust propulsion system. The optimization problem has been formulated using the classic planar circular restricted three-body problem (PCR3BP) and the planar bi-circular restricted four-body problem (PBR4BP), and it consists of transferring a spacecraft from a circular low Earth orbit (LEO) to a circular low Moon orbit (LMO) with minimum fuel consumption. The Sequential Gradient Restoration Algorithm (SGRA) is applied to determine the optimal solutions. Numerical results are presented for several final altitudes of a clockwise or counterclockwise circular low Moon orbit, considering a specified altitude of a counterclockwise circular low Earth orbit. Two types of analysis are performed: in the first, the initial position of the Sun is taken as a parameter and the major parameters describing the optimal trajectories are obtained by solving an optimization problem with one degree of freedom. In the second analysis, an optimization problem with two degrees of freedom is considered, and the initial position of the Sun is taken as an additional unknown. Guidance, navigation, and control of a hypersonic vehicle landing on Mars rely on precise state feedback information, which is obtained from state estimation. The high uncertainty and nonlinearity of the entry dynamics make the estimation a very challenging problem. In this paper, a new adaptive cubature Kalman filter is proposed for state trajectory estimation of a hypersonic entry vehicle. This new adaptive estimation strategy is based on a measure of the nonlinearity of the stochastic system.
According to the severity of the nonlinearity along the trajectory, either the high-degree cubature rule or the conventional third-degree cubature rule is adaptively used in the cubature Kalman filter. This strategy has the benefit of attaining higher estimation accuracy only when necessary, without causing excessive computational load. The simulation results demonstrate that the proposed adaptive filter exhibits better performance than the conventional third-degree cubature Kalman filter, while maintaining the same performance as the uniform high-degree cubature Kalman filter at lower computational complexity. At present, very few CubeSats have flown in space featuring propulsion systems. For those that have, the literature is scattered across a variety of formats (conference proceedings, contractor websites, technical notes, and journal articles) and is often not available for public release. This paper seeks to collect the relevant publicly releasable information in one location. To date, only two missions have featured propulsion systems as part of a technology demonstration. The IMPACT mission from the Aerospace Corporation flew several electrospray thrusters from the Massachusetts Institute of Technology, and BricSAT-P from the United States Naval Academy carried four micro-cathode arc thrusters from George Washington University. Other than these two missions, propulsion on CubeSats has been used only for attitude control and reaction wheel desaturation via cold gas propulsion systems. As the desired capability of CubeSats increases and more complex missions are planned, propulsion is required to accomplish the science and engineering objectives. This survey includes propulsion systems that have been designed specifically for the CubeSat platform and systems that fit within CubeSat constraints but were developed for other platforms.
Throughout the survey, discussion of flight heritage and mission results is included where publicly released information and data are available. The major categories of propulsion systems in this survey are solar sails, cold gas propulsion, electric propulsion, and chemical propulsion systems. Only systems that have been tested in a laboratory or have some flight history are included. New ideas, technologies and architectural concepts are emerging with the potential to reshape the space enterprise. One of those new architectural concepts is the idea that, rather than aggregating payloads onto large, very high-performance buses, space architectures should be disaggregated, with smaller numbers of payloads (as few as one) per bus and the space capabilities spread across a correspondingly larger number of systems. The primary rationale is increased survivability and resilience. The concept of disaggregation is examined from an acquisition cost perspective. A mixed system dynamics and trade space exploration model is developed to look at long-term trends in the space acquisition business. The model is used to examine how different disaggregated GPS architectures compare in cost to the well-known current GPS architecture. A generation-over-generation examination of policy choices is made possible through the application of soft systems modeling of experience and learning effects. The assumptions that are allowed to vary are design lives, production quantities, non-recurring engineering, and time between generations. The model shows that there is always a first-generation premium to be paid to disaggregate the GPS payloads. However, it is possible to construct survivable architectures for which the premium after two generations is relatively low. The influence of gas flow, current level, and anode shape on the oscillation amplitude and the characteristics of the hollow cathode discharge was investigated.
The average plasma potential, ion density, and electron temperature, as well as temporal measurements and waveforms of the plasma potential, were measured for the test conditions. At the same time, time-resolved images of the plasma plume were also recorded. The results show that the potential oscillations appear at high discharge current or low flow rate. The potential oscillation boundaries, the position of the maximum amplitude of plasma potential, and the position where the highest ion density was observed were found. Both of these positions are affected by the anode configuration. These high-amplitude potential oscillations are ionization-like instabilities. Analysis indicates that ionization of xenon in the plume accounts for the fast potential rise, while spatial dissipation of the xenon ions is the reason for the gradual potential decay. Pulse excitation is key to measuring the pressure-coupling response function of composite propellant. It is also a key trigger factor for nonlinear combustion instability. This paper aims at understanding the characteristics of pulse excitation in T-burners. Pulse excitation is provided by black powder (BP). The D² law is used to calculate BP burning properties. Firstly, the experimental pressure history of a pulse excitation is analyzed. The pressure pulse and mean pressure increment are introduced to describe pulse excitation. Secondly, modified zero-dimensional and one-dimensional models of the pressure pulse are established based on energy conservation with corrections. The results of the models indicate that the modified zero-dimensional model can accurately predict the pressure pulse. The modified zero-dimensional model demonstrates that the pressure pulse is determined by the pulse build-up time threshold, the volume coefficient, the effective weight fraction of BP, the weight of BP, and other factors. When the burning time of BP is larger than the threshold, the volume coefficient is equal to 2 and the effective weight fraction of BP is less than 1. 
The pressure pulse correlates approximately linearly with the weight and effective weight fraction of BP. Otherwise, the volume coefficient is larger than 2 and the effective weight fraction of BP is equal to 1, and the pressure pulse correlates approximately linearly with the volume coefficient and BP weight. Thirdly, a zero-dimensional prediction model of mean pressure is established based on conservation of energy and mass. The prediction models of pressure pulse and mean pressure are validated by T-burner experiments. Finally, the effects of BP burning properties on the pressure pulse and mean pressure increment are studied. The results show that both the pressure pulse and the mean pressure increment increase linearly with increasing BP weight. The pressure pulse is more sensitive to variations in the burning time of BP. As the burning time of BP decreases, the mean pressure increment gradually increases to a maximum, and the pressure pulse can become very large. Star Identification (Star-ID) is a complex problem, mainly because some of the observations are not generated by actual stars, but by reflecting debris, other satellites, visible planets, or electronic noise. For this reason, the capability to discriminate stars from non-stars is an important aspect of Star-ID robustness. Usually, the Star-ID task is performed by first attempting identification on a small group of observed stars (a kernel) and, in case of failure, replacing that kernel with another, until a kernel made only of actual stars is found. This work performs a detailed analysis of kernel generator algorithms, suitable for on-board implementation in terms of speed and robustness, for kernels of three (triad) and four (quad) stars. Three new kernel generator algorithms are proposed, along with three new robustness metrics in addition to the existing expected time to discovery. The proposed algorithms are fast, robust in finding good kernels, and require no pre-stored data. 
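The kernel-sweep strategy summarized above (attempt identification on a small kernel and, on failure, replace it with another) can be sketched in a few lines. The generator ordering and the `try_kernel` interface below are illustrative assumptions for exposition, not the algorithms proposed in the paper:

```python
from itertools import combinations

def triad_kernels(n_obs):
    """Yield 3-observation kernels (index triads), sweeping triads whose
    largest index is small first; if observations are sorted by brightness,
    early kernels use the most reliable detections (illustrative ordering)."""
    for k in range(2, n_obs):              # largest index in the triad
        for i, j in combinations(range(k), 2):
            yield (i, j, k)

def identify(observations, try_kernel):
    """Sweep kernels until try_kernel succeeds; try_kernel is a hypothetical
    callback that attempts Star-ID on one kernel and reports success."""
    for kernel in triad_kernels(len(observations)):
        if try_kernel(kernel):
            return kernel
    return None                            # no all-star kernel found
```

Real on-board generators use smarter sweep patterns so that a single spurious observation is cycled out of the kernel quickly; the point here is only the overall sweep-until-success structure.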
This paper presents the photometric solutions of two marginal contact binaries in the Small Magellanic Cloud (SMC). The data were acquired from the Optical Gravitational Lensing Experiment (OGLE) catalogue, and the photometric solutions were obtained using the Wilson-Devinney (WD) v2003 program. Results reveal that these systems belong to the short-period contact binaries of W-subtype. The high mass ratio and low contact factor of both systems indicate that they have recently evolved into a marginal contact stage. Estimated absolute dimensions and evolutionary stages of these variables are in good agreement with the physical properties of Low Mass Contact Binaries (LMCB). In this paper, we present a study of the effects generated by exposure to UV-C radiation on nanocomposite films made of graphene nanoplatelets dispersed in an epoxy matrix. The nanocomposite films, with different nanoparticle sizes and concentrations, were fabricated on Mylar substrates using the spin coating process. The effects of UV-C irradiation on the surface hydrophobicity and on the electrical properties of the epoxy/graphene films were investigated using contact angle measurements and electrical impedance spectroscopy, respectively. According to our results, UV-C irradiation selectively degrades the polymer matrix of the nanocomposite films, giving rise to more conductive and hydrophobic layers due to exposure of the graphene component of the composite material. The results presented here have important implications for the design of spacecraft components and structures destined for long-term space missions. The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. 
This validation has been performed through a set of different experiments under microgravity conditions in which a net was launched to capture and wrap a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator, together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool able to reproduce different scenarios for Active Debris Removal missions. The experiment was performed over thirty parabolas, each offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launch angles using a dedicated pneumatic mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching, and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to accurately determine the initial conditions and to generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been configured according to the parabolic flight scenario and executed in order to generate the validation data. Both datasets have been compared according to different metrics in order to perform the validation of the PATENDER simulator. 
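The ICP step mentioned above for knot tracking can be illustrated with a minimal 2D rigid variant (nearest-neighbour matching plus a closed-form Kabsch fit). This is a toy sketch of the general technique only, not the PATENDER pipeline:

```python
import math

def icp_2d(source, target, iters=10):
    """Minimal 2D ICP: at each iteration, match each source point to its
    nearest target point, then fit the best rigid transform in closed form
    (Kabsch: theta = atan2(S_cross, S_dot) after centering) and apply it."""
    src = [list(p) for p in source]
    for _ in range(iters):
        # nearest-neighbour correspondences
        pairs = [(p, min(target, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2))
                 for p in src]
        # centroids of the matched sets
        cx = sum(p[0] for p, _ in pairs) / len(pairs)
        cy = sum(p[1] for p, _ in pairs) / len(pairs)
        tx = sum(q[0] for _, q in pairs) / len(pairs)
        ty = sum(q[1] for _, q in pairs) / len(pairs)
        # closed-form rotation from the centered cross-covariance
        s_dot = sum((p[0]-cx)*(q[0]-tx) + (p[1]-cy)*(q[1]-ty) for p, q in pairs)
        s_cross = sum((p[0]-cx)*(q[1]-ty) - (p[1]-cy)*(q[0]-tx) for p, q in pairs)
        th = math.atan2(s_cross, s_dot)
        c, s = math.cos(th), math.sin(th)
        # rotate about the source centroid, translate onto the target centroid
        src = [[c*(p[0]-cx) - s*(p[1]-cy) + tx,
                s*(p[0]-cx) + c*(p[1]-cy) + ty] for p in src]
    return src
```

Given two frames of tracked knot positions, a few such iterations align the first frame onto the second; the resulting per-point correspondences give the tracked motion.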
The gas-liquid interaction process of a liquid jet in a supersonic crossflow with a Mach number of 1.94 was investigated numerically using the Eulerian-Lagrangian method. The KH (Kelvin-Helmholtz) breakup model was used to calculate the droplet stripping process, and the secondary breakup process was simulated by the competition of the RT (Rayleigh-Taylor) breakup model and the TAB (Taylor Analogy Breakup) model. A corrected drag coefficient was proposed that accounts for compressibility effects and the deformation of droplets. The location and velocity models of child droplets after breakup were improved according to droplet deformation. The calculated spray features, including spray penetration, droplet size distribution, and droplet velocity profile, were found to agree reasonably well with experiment. Numerical results revealed that the streamlines of the air flow can intersect with the trajectory of the droplets and are deflected towards the near-wall region after they enter the spray zone around the central plane. The analysis of gas-liquid relative velocity and droplet deformation suggested that the breakup of droplets mainly occurs around the front region of the spray, where a large number of droplets of different sizes gather. The liquid trailing phenomenon of the jet spray, which had been discovered in a previous experiment, was successfully captured, and a reasonable explanation was given based on the analysis of the gas-liquid interaction process. Streaked point sources are a common occurrence when imaging unresolved space objects from both ground and space-based platforms. Effective localization of streak endpoints is a key component of traditional techniques in space situational awareness related to orbit estimation and attitude determination. To further that goal, this paper derives a general detection and localization method for streak endpoints based on the cornerness metric. Corner detection involves searching an image for strong bi-directional gradients. 
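A common way to realize such a bi-directional gradient search is a Harris-style structure-tensor response. The sketch below is a generic toy implementation, assuming the cornerness measure R = det(M) - k*trace(M)^2 (the paper's specific metric is not reproduced here):

```python
def harris_response(img, k=0.04):
    """Cornerness R = det(M) - k*trace(M)^2 of the 3x3-windowed structure
    tensor M, using central-difference gradients (toy implementation)."""
    h, w = len(img), len(img[0])
    # per-pixel gradients, clamped at the image borders
    ix = [[(img[r][min(c+1, w-1)] - img[r][max(c-1, 0)]) / 2.0
           for c in range(w)] for r in range(h)]
    iy = [[(img[min(r+1, h-1)][c] - img[max(r-1, 0)][c]) / 2.0
           for c in range(w)] for r in range(h)]
    resp = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            a = b = d = 0.0                    # M = [[a, b], [b, d]]
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    gx, gy = ix[r+dr][c+dc], iy[r+dr][c+dc]
                    a += gx*gx; b += gx*gy; d += gy*gy
            resp[r][c] = a*d - b*b - k*(a + d)**2
    return resp
```

On a synthetic one-pixel-wide streak, this response is positive at the two endpoints (both gradient directions present in the window) and negative along the streak interior (only one direction present), which is the property exploited for endpoint localization.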
These locations typically correspond to robust structural features in an image. In the case of unresolved imagery, regions with a high cornerness score correspond directly to the endpoints of streaks. This paper explores three approaches for global extraction of streak endpoints and applies them to an attitude and rate estimation routine. The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in lunar vicinity, to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location in the Earth-Moon system is suggested for a manned cis-lunar infrastructure, a proposal which opens an interesting field of study from the astrodynamics perspective. The literature offers a wide set of scientific research on orbital dynamics under the Three-Body Problem modelling approach, while less of it includes attitude dynamics modelling as well. However, whenever a large space structure (ISS-like) is considered, not only should the coupled orbit-attitude dynamics be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the flexibility of the large structure. In addition, the most relevant perturbing phenomena, such as Solar Radiation Pressure (SRP) and fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements, such as beams, rods, or lumped masses linked by springs, build up the space segment. 
To better investigate the relevance of the flexibility effects, the lumped-parameter approach is compared with a distributed-parameter semi-analytical technique. A sensitivity analysis of the system dynamics, with respect to different configurations and mechanical properties of the extended structure, is also presented in order to highlight drivers for the lunar outpost design. Furthermore, a case study for a large and flexible space structure in Halo orbits around one of the Earth-Moon collinear Lagrangian points, L1 or L2, is discussed to point out some relevant outcomes for the potential implementation of such a mission. The International Academy of Astronautics (IAA) Symposia on the History of Rocketry and Astronautics have been held annually at the International Astronautical Congresses since 1967. During these past 50 years nearly 800 papers have been presented and subsequently published in the proceedings. With a 20-year rule imposed for historical presentations, the first 10 symposia concentrated on pre-World War II and early 1950s activities. A surprisingly large number of papers on early, less well-known Soviet Russian contributions to rocketry and astronautics were presented in the first symposia, despite the ongoing Space Race between the U.S. and USSR. Another important element in these symposia involved memoir papers offered by pre- and post-war rocket and astronautics pioneers from many countries, and the participation of many of these pioneers in person. In sum, the histories of national space and rocket projects from some 40 countries were presented over the years in IAA History Symposia. These 50 symposia have provided a platform for scholars and professional and non-professional historians to meet and discuss the history of rocketry and astronautics, and to personally interview many space pioneers, most of whom are today deceased. Their personal recollections have since been shared with a large audience. 
Over time, IAA history papers came to be divided into recognizable periods: ancient times through the 19th century, and the 20th and 21st centuries, the latter further separated into events that took place before 1945, from 1945 to 1957, and after 1957 (which marked the beginning of the space age). Proceedings of the IAA History Symposia have been published in English, ultimately in the History Series of the American Astronautical Society (AAS) and its publishing arm, Univelt Inc., under an agreement secured with the IAA. This paper presents an overview of the IAA History Symposia. It examines the early years of the history committee and its first symposium, the evolution of subsequent symposia, and it recognizes those individuals who shaped these symposia and the publication of their proceedings. We describe an error type that we call the naturalizing error: an appeal to nature as a self-justified description dictating or limiting our choices in moral, economic, political, and other social contexts. Normative cultural perspectives may be subtly and subconsciously inscribed into purportedly objective descriptions of nature, often with the apparent warrant and authority of science, yet not be fully warranted by a systematic or complete consideration of the evidence. Cognitive processes may contribute further to a failure to notice the lapses in scientific reasoning and justificatory warrant. By articulating this error type at a general level, we hope to raise awareness of this pervasive error type and to facilitate critiques of claims that appeal to what is "natural" as inevitable or unchangeable. The paper analyzes the philosophical consequences of the recent discovery of direct violations of the time-reversal symmetry of weak interactions. 
It shows that although we have here an important case of time asymmetry in one of the fundamental physical forces, one which could have had a great impact on the form of our world, with its excess of matter over antimatter, this asymmetry cannot be treated as an asymmetry of time itself but rather as an asymmetry of some specific physical process in time. The paper also analyzes the consequences of the new discovery for the general problem of possible connections between the direction (arrow) of time and time-asymmetric laws of nature. These problems are analyzed in the context of the argumentation of Horwich's Asymmetries in Time: Problems in the Philosophy of Science (1987), which tries to show that the existence of a time-asymmetric law of nature is a sufficient condition for time to be anisotropic. Instead of Horwich's sufficient condition for the anisotropy of time, it is stressed that for a theory of the asymmetry of time to be acceptable it should explain all the fundamental time asymmetries: the asymmetry of traces, the asymmetry of causation (which holds although the electrodynamic, strong, and gravitational interactions are invariant under time reversal), and the asymmetry between the fixed past and the open future. This is so because the problem of the direction of time originated from our attempts to understand these asymmetries, and every plausible theory of the direction of time should explain them. In this paper, I present a philosophical analysis of interdisciplinary scientific activities. I suggest that it is a fruitful approach to view interdisciplinarity in light of the recent literature on scientific representations. For this purpose I develop a meta-representational model in which interdisciplinarity is viewed in part as a process of integrating distinct scientific representational approaches. 
The analysis suggests that present methods for the evaluation of interdisciplinary projects place too much emphasis on non-epistemic aspects of disciplinary integration while more or less ignoring whether specific interdisciplinary collaborations put us in a better, or worse, epistemic position. This leads to the conclusion that there are very good reasons for recommending a more cautious, systematic, and stringent approach to the development, evaluation, and execution of interdisciplinary science. The notion of truthlikeness (verisimilitude, approximate truth), coined by Karl Popper, has very much fallen into oblivion, but this paper defends it. It can be regarded in two different ways: either as a notion that is meaningful only if some formal measure of degree of truthlikeness can be constructed, or as a merely non-formal comparative notion that nonetheless has important functions to fulfill. It is the latter notion that is defended; it is claimed that such a notion is needed for both a reasonable backward-looking and a reasonable forward-looking view of science. On the one hand, it is needed in order to make sense of the history of science as containing a development; on the other, it is needed in order to understand present-day sciences as containing knowledge-seeking activities. The defense of truthlikeness also requires a defense of two other notions, quasi-comparisons and regulative ideas, which is supplied in this paper as well. Observation and experiment as categories for analysing scientific practice have a long pedigree in writings on science. There has, however, been little attempt to delineate observation and experiment as categories for analysing scientific practice, in particular scientific experimentation, in a systematic manner. Someone who has presented a systematic account of observation and experiment as categories for analysing scientific experimentation is Ian Hacking. 
In this paper, I present a detailed analysis of Hacking's observation-versus-experiment account. Using a range of cases from various fields of scientific enquiry, I argue that the observation-versus-experiment account is not an adequate framework for delineating scientific experimentation in a systematic manner. I argue that the Carroll-Chen cosmogonic model does not provide a plausible scientific explanation of the past hypothesis (the thesis that our universe began in an extremely low-entropy state). I suggest that this counts as a welcome result for those who adopt a Mill-Ramsey-Lewis best-systems account of laws and maintain that the past hypothesis is a brute fact that is a non-dynamical law. In this paper we discuss three interrelated questions. First: is explanation in mathematics a topic that philosophers of mathematics can legitimately investigate? Second: are the specific aims that philosophers of mathematical explanation set themselves legitimate? Finally: are the models of explanation developed by philosophers of science useful tools for philosophers of mathematical explanation? We argue that the answer to all these questions is positive. Our views are completely opposite to the views that Mark Zelcer has put forward recently. Throughout this paper, we show why Zelcer's arguments fail. This paper examines the rotational motion of a nearly axisymmetric rocket-type system with uniform burn of its propellant. The asymmetry comes from a slight difference in the transverse principal moments of inertia of the system, which results in a set of nonlinear equations of motion even when no external torque is applied to the system. It is often difficult, or even impossible, to generate analytic solutions for such equations; closed-form solutions are even more difficult to obtain. In this paper, a perturbation-based approach is employed to linearize the equations of motion and generate analytic solutions. 
The solutions for the variables of transverse motion are analytic, and a closed-form solution for the spin rate is suggested. The solutions are presented in a compact form that permits rapid computation. The approximate solutions are then applied to the torque-free motion of a typical solid rocket system, and the results are found to agree with those obtained from the numerical solution of the full nonlinear equations of motion of the mass-varying system. We unify and extend classical results from function approximation theory and consider their utility in astrodynamics. Least-squares approximation, using the classical Chebyshev polynomials as basis functions, is reviewed for discrete samples of the to-be-approximated function. We extend the orthogonal approximation ideas to n dimensions in a novel way, through the use of array algebra and Kronecker operations. Approximation of test functions illustrates the resulting algorithms and provides insight into the errors of approximation, as well as the associated errors arising when the approximations are differentiated or integrated. Two sets of applications are considered that are challenges in astrodynamics. The first application addresses local approximation of high-degree and high-order geopotential models, replacing the global spherical harmonic series by a family of locally precise orthogonal polynomial approximations for efficient computation. A method is introduced which adapts the approximation degree radially, consistent with the fact that the highest-degree approximations (to ensure a maximum acceleration error < 10^-9 m/s^2 globally) are required near the Earth's surface, whereas lower-degree approximations suffice as the radius increases. We show that a four-order-of-magnitude speedup is feasible, with efficiency optimized using radial adaptation. Trajectory design in chaotic regimes allows for the exploitation of system dynamics to achieve certain behaviors. 
For example, for the Transiting Exoplanet Survey Satellite (TESS) mission, the selected science orbit represents a stable option well-suited to meet the mission objectives. Extended analysis of particular solutions nearby in the phase space reveals transitions into desirable terminal modes induced by the natural dynamics. This investigation explores the trajectory behavior and borrows from flow-based analysis strategies to characterize modes of such motion. Perturbed initial states from a TESS-like orbit are evolved to supply motion suitable for contingency analysis. Through the associated analysis, mechanisms are identified that drive the spacecraft into particular modes and supply the conditions necessary for such transitions. Autonomous, low-thrust guidance for active disposal of geosynchronous debris, subject to collision avoidance with the local debris population, is studied. A bisection method is employed to determine trajectory modifications that avoid a conjuncting debris object by a range of distances, assuming a range of collision lead times. A parametric study is performed, in which re-orbit thrust accelerations are varied from 10^-6 to 10^-3 m/s^2, to demonstrate how the continuous-thrust level impacts the lead time required to achieve a desired debris miss distance. The lowest thrust levels considered show that a 6-12 hour lead time is required to achieve a 1-10 km debris separation at the predicted collision time. This paper proposes a novel adaptive hierarchical sliding mode control for the attitude regulation of a multi-satellite inline tethered system in which input saturation is taken into account. The governing equations for the attitude dynamics of the three-satellite inline tethered system are first derived using Lagrangian mechanics. 
Considering that the attitude of the central satellite can be adjusted using a simple exponential stabilization scheme, the decoupling of the central satellite from the terminal ones is presented; in addition, the new adaptive sliding mode control law is applied to stabilize the attitude dynamics of the two terminal satellites based on synchronization and partial contraction theory. In the adaptive hierarchical sliding mode control design, the input is modeled as saturated because the flywheel torque is bounded, and an adaptive update law is introduced to eliminate the effects of the saturated input and the external perturbation. The proposed control scheme can be applied to the two-satellite system to achieve fixed-point rotation. Numerical results validate the effectiveness of the proposed method. Scientific models are frequently discussed in philosophy of science. A great deal of the discussion is centred on approximation, idealisation, and on how these models achieve their representational function. Despite the importance, distinct nature, and prevalence of toy models, they have received little attention from philosophers. This paper hopes to remedy this situation. It aims to elevate the status of toy models: by distinguishing them from approximations and idealisations, by highlighting and elaborating on several ways the Kac ring, a simple statistical mechanical model, is used as a toy model, and by explaining why toy models can be used to successfully carry out important work without performing a representational function. (C) 2016 Elsevier Ltd. All rights reserved. Objective probability in quantum mechanics is often thought to involve a stochastic process whereby an actual future is selected from a range of possibilities. Everett's seminal idea is that all possible definite futures on the pointer basis exist as components of a macroscopic linear superposition. 
I demonstrate that these two conceptions of what is involved in quantum processes are linked via two alternative interpretations of the mind-body relation. This leads to a fission, rather than divergence, interpretation of Everettian theory and to a novel explanation of why a principle of indifference does not apply to self-location uncertainty for a post-measurement, pre-observation subject, just as Sebens and Carroll claim. Their Epistemic Separability Principle is shown to arise out of this explanation, and the derivation of the Born rule for Everettian theory is thereby put on a firmer footing. (C) 2017 Elsevier Ltd. All rights reserved. This paper traces the emergence, evolution, and subsequent entrenchment of the historical style in the shifting scene of modern cosmological inquiry. It argues that the historical style in cosmology was forged in the early decades of the 20th century and continued to evolve in the century that followed. Over time, the scene of cosmological inquiry has gradually become dominated and entirely constituted by historicist explanations. Practices such as forwards and backwards temporal extrapolation (thinking about the past evolutionary history of the universe with different initial conditions and other parameters) are now commonplace. The non-static geometrization of the cosmos in the early 20th century led to inquirers thinking about the cosmos in evolutionary terms. Drawing on the historical approach of Gamow (and contrasting this with the ahistorical approach of Bondi), the paper then argues that the historical style became a major force as inquirers began scouring the universe for fossils and other relics as a new form of scientific practice: cosmic palaeontology. By the 1970s the historical style had become the bedrock of the discipline and the presupposition of new lines of inquiry. 
By the end of the 20th century, the historical style was pushed to its very limits as temporal reasoning began to occur beyond a linear historical narrative. With the atemporal 'ensemble' type multiverse proposals, a certain type of ahistorical reasoning has been reintroduced into cosmological discourse, which, in a sense, represents a radical de-historicization of the historical style in cosmology. Some are now even attempting to explain the laws of physics in terms of their historicity. (C) 2017 Elsevier Ltd. All rights reserved. I point out a radical indeterminism in potential-based formulations of Newtonian gravity once we drop the condition that the potential vanishes at infinity (as is necessary, and indeed celebrated, in cosmological applications). This indeterminism, which is well known in theoretical cosmology but has received little attention in foundational discussions, can be removed only by specifying boundary conditions at all instants of time, which undermines the theory's claim to be fully cosmological, i.e., to apply to the Universe as a whole. A recent alternative formulation of Newtonian gravity due to Saunders (Philosophy of Science 80 (2013), pp. 22-48) provides a conceptually satisfactory cosmology but fails to reproduce the Newtonian limit of general relativity in homogeneous but anisotropic universes. I conclude that Newtonian gravity lacks a fully satisfactory cosmological formulation. (C) 2017 Elsevier Ltd. All rights reserved. I argue that some important elements of the current cosmological model are 'conventionalist' in the sense defined by Karl Popper. These elements include dark matter and dark energy; both are auxiliary hypotheses that were invoked in response to observations that falsified the standard model as it existed at the time. The use of conventionalist stratagems in response to unexpected observations implies that the field of cosmology is in a state of 'degenerating problemshift' in the language of Imre Lakatos. 
I show that the 'concordance' argument, often put forward by cosmologists in support of the current paradigm, is weaker than the convergence arguments that were made in the past in support of the atomic theory of matter or the quantization of energy. (C) 2017 The Author. Published by Elsevier Ltd. We investigate Maxwell's attempt to justify the mathematical assumptions behind his 1860 Proposition IV, according to which the velocity components of colliding particles follow the normal distribution. Contrary to the commonly held view, we find that his molecular collision model plays a crucial role in reaching this conclusion, and that his model assumptions also permit inference to the equalization of mean kinetic energies (temperatures), which is what he intended to prove in his discredited and widely ignored Proposition VI. If we take a charitable reading of his own proof of Proposition VI, then it was Maxwell, and not Boltzmann, who gave the first proof of a tendency towards equilibrium, a sort of H-theorem. We also call attention to a potential conflation of notions of probabilistic and value independence in relevant prior works of his contemporaries and of his own, and argue that this conflation might have impacted his adoption of the suspect independence assumption of Proposition IV. (C) 2017 Elsevier Ltd. All rights reserved. We present a reconstruction of the studies on the Foundations of Quantum Mechanics carried out in Italy at the turn of the 1960s. In fact, they preceded the revival of interest among American physicists in the foundations of quantum mechanics around the mid-1970s, recently reconstructed by David Kaiser in a 2011 book. An element common to both cases is the role played by the young generation, even though the respective motivations were quite different. In the US they reacted to research cuts after the war in Vietnam, and were inspired by the New Age mood. 
In Italy the dissatisfaction of the young generations was rooted in the student protests of 1968 and the subsequent labour and social struggles, which challenged the role of scientists. The young generations of physicists searched for new scientific approaches and challenged their own scientific knowledge and role. Criticism of the foundations of quantum mechanics, and the prospect of submitting them to experimental tests, was perceived as an innovative research field, and this attitude was directly linked to the search for an innovative and radical approach in the history of science. All these initiatives gave rise to booming activity throughout the 1970s, helping to shape scientific attitudes and teaching approaches. (C) 2016 Elsevier Ltd. All rights reserved. No-conspiracy is the requirement that measurement settings should be probabilistically independent of the elements of reality responsible for the measurement outcomes. In this paper we investigate what role no-conspiracy generally plays in a physical theory; how it influences the semantical role of the event types of the theory; and how it relates to such other concepts as separability, compatibility, causality, locality and contextuality. (C) 2016 Elsevier Ltd. All rights reserved. The (Strong) Free Will Theorem (FWT) of Conway and Kochen (2009) on the one hand follows from uncontroversial parts of modern physics and elementary mathematical and logical reasoning, but on the other hand seems predicated on an undefined notion of free will (allowing physicists to 'freely choose' the settings of their experiments). This makes the theorem philosophically vulnerable, especially if it is construed as a proof of indeterminism or even of libertarian free will (as Conway & Kochen suggest). 
However, Cator and Landsman (Foundations of Physics 44, 781-791, 2014) previously gave a reformulation of the FWT that does not presuppose indeterminism, but rather assumes a mathematically specific form of such "free choices" even in a deterministic world (based on a non-probabilistic independence assumption). In the present paper, which is a philosophical sequel to the one just mentioned, I argue that the concept of free will used in the latter version of the FWT is essentially the one proposed by Lewis (1981), also known as 'local miracle compatibilism' (of which I give a mathematical interpretation that might be of some independent interest also beyond its application to the FWT). As such, the (reformulated) FWT in my view challenges compatibilist free will à la Lewis (albeit in a contrived way via bipartite EPR-type experiments), falling short of supporting libertarian free will. (C) 2016 Elsevier Ltd. All rights reserved. I defend the idea that objects and events in three-dimensional space (so-called local beables) are part of the derivative ontology of quantum mechanics, rather than its fundamental ontology. The main objection to this idea stems from the question of how it can endow local beables with physical salience, as opposed to mere mathematical definability. I show that the responses to this objection in the previous literature are insufficient, and I provide the necessary arguments to render them successful. This includes demonstrating the legitimacy of dynamical considerations in the derivation of local beables and responding to the threat stemming from the availability of different sets of local beables in the context of the GRW theory. (C) 2016 Elsevier Ltd. All rights reserved. David Deutsch (forthcoming) offers a solution to the Epistemic Problem for Everettian Quantum Theory. In this note I raise some problems for the attempted solution. Crown Copyright (C) 2016 Published by Elsevier Ltd. All rights reserved. 
The apparent dichotomy between quantum jumps on the one hand, and continuous time evolution according to wave equations on the other, posed a challenge to Bohr's proposal of quantum jumps in atoms. Furthermore, Schrödinger's time-dependent equation seemed to require a modification of the explanation for the origin of line spectra, owing to the apparent possibility of superpositions of energy eigenstates for different energy levels. Indeed, Schrödinger himself proposed a quantum beat mechanism for the generation of discrete line spectra from superpositions of eigenstates with different energies. However, these issues between old quantum theory and Schrödinger's wave mechanics were correctly resolved only after the development and full implementation of photon quantization. The second-quantized scattering matrix formalism reconciles quantum jumps with continuous time evolution through the identification of quantum jumps with transitions between different sectors of Fock space. The continuous evolution of quantum states is then recognized as a sum over continually evolving jump amplitudes between different sectors in Fock space. In today's terminology, this suggests that linear combinations of scattering matrix elements are epistemic sums over ontic states. Insights from the resolution of the dichotomy between quantum jumps and continuous time evolution therefore hold important lessons for modern research both on interpretations of quantum mechanics and on the foundations of quantum computing. They demonstrate that discussions of interpretations of quantum theory necessarily need to take into account field quantization. They also demonstrate the limitations of the role of wave equations in quantum theory, and caution us that superpositions of quantum states for the formation of qubits may be more limited than usually expected. (C) 2016 The Author. Published by Elsevier Ltd. 
The Intergovernmental Panel on Climate Change (IPCC) has, in its most recent Assessment Report (AR5), articulated guidelines for evaluating and communicating uncertainty that include a qualitative scale of confidence. We examine one factor included in that scale: the "degree of agreement." Some discussions of the degree of agreement in AR5 suggest that the IPCC is employing a consensus-oriented social epistemology. We consider the application of the degree of agreement factor in practice in AR5. Our findings, though based on a limited examination, suggest that agreement attributions do not so much track the overall consensus among investigators as the degree to which relevant research findings substantively converge in offering support for IPCC claims. We articulate a principle guiding confidence attributions in AR5 that centers not on consensus but on the notion of support. In concluding, we tentatively suggest a pluralist approach to the notion of support. (C) 2016 Elsevier Ltd. All rights reserved. The article deals with manuals for travelers who went to Africa and Asia for the sake of geographic exploration. These are widely neglected sources for the history of European exploration and the emergence of geography as an academic discipline. The article argues that these manuals are essential for an understanding of the travelers' socialization as members of the scientific project of geography and their contributions to geographical knowledge production. This paper analyses the interplay between knowledge production, space and colonial claims from translocal, actor-based perspectives. Owing to its 'thickness', the examined material, found particularly in the Perthes collection (Gotha, Germany), allows multifaceted views on a topic that influences our scientific knowledge-based world views. In his writings the Swiss naturalist Emil Göldi underlined a point of view that was both 'Brazilianized' and scientific. 
Especially his letters and articles between the 1880s and the 1900s are testimony to his positioning in situ ('an Ort und Stelle') against the cartographic 'alienated'-colonial perspectives of French scientists, especially those of his counterpart Henri Coudreau. Sometimes it reads more like an adventure story than a scholarly debate. By working with and using the internationally renowned periodical Petermanns Geographische Mitteilungen (Perthes), Göldi managed to project an image of the spatial situation that convinced the decision-makers, and thus to obtain for Brazil a huge territory in the eastern part of the Guyanas. The English whaler William Scoresby, Jr. (1790-1857) made use of his annual voyages to the Greenland Sea for distinguished scientific work, detailed records and the production of remarkable maps. Thanks to his close contacts with scientists such as Robert Jameson and politicians such as Joseph Banks and John Barrow, his research attracted a great deal of attention and set a benchmark for at least half a century. Scoresby combined the adventurous world of Arctic fishery with academic science. He attained the northernmost point reached by anybody in his time, extended cartographic knowledge, and advanced the conquest and utilisation of the oceans for commercial fishing. But his biography also raises the question of who got an opportunity for research, and for what purpose. In particular, it demonstrates the strong impact that the practical knowledge of a whaler could have on geographic research and Arctic cartography. This essay deals with the medical recipe as an epistemic genre that played an important role in the cross-cultural transmission of knowledge. The article first compares the development of the recipe as a textual form in Chinese and European premodern medical cultures. It then focuses on the use of recipes in the transmission of Chinese pharmacology to Europe in the second half of the seventeenth century. 
The main sources examined are the Chinese medicinal formulas translated, presumably, by the Jesuit Michael Boym and published in Specimen Medicinae Sinicae (1682), a text that introduced Chinese pulse medicine to Europe. The article examines how the translator rendered the Chinese formulas into Latin for a European audience. Arguably, the translation was facilitated by the fact that the recipe as a distinct epistemic genre had developed, with strong parallels, in both Europe and China. Building on these parallels, the translator used the recipe as a shared textual format that would allow the transfer of knowledge between the two medical cultures. The debate about the superiority of ancient versus modern culture, known as the Querelle des anciens et des modernes, also found expression in conflicting positions about the developing mathematical methods of natural philosophy. Isaac Newton explicitly referred to the authority of Euclidean geometry as a justification for the conservative form of the proofs in his Principia Mathematica, where he avoided the use of analytic geometry and infinitesimal calculus, the central innovations of seventeenth-century mathematics, as much as possible. Rather, he modeled his proofs, just like the overall structure of the treatise, as closely as possible on Euclid's geometry. A century later, however, Joseph-Louis Lagrange announced in the introduction to his Mechanique Analytique that no geometrical diagrams would be found there and that Newtonian mechanics was presented exclusively in the form of analytic equations. This essay analyzes the relationship of this radical change in the theoretical methodology of mechanics to the actors' ideas about ancient science and its authority. It also discusses the consequent development of a conception of ancient science as distinct from modern science and the relation of this conception to a history of science in our contemporary sense. 
The United States Patent Office of the 1850s offers a rare opportunity to analyze the early gendering of science. In its crowded rooms, would-be scientists shared a workplace with women earning equal pay for equal work. Scientific men worked as patent examiners, claiming this new occupation as scientific in opposition to those seeking to separate science and technology. At the same time, in an unprecedented and ultimately unsuccessful experiment, female clerks were hired to work alongside male clerks. This article examines the controversies surrounding these workers through the lens of manners and deportment. In the unique context of a workplace combining scientific men and working ladies, office behavior revealed the deep assumption that the emerging American scientist was male and middle class. In the late nineteenth century, Argentine intellectual elites turned to world's fairs as a place to contest myths of Latin American racial inferiority and produce counternarratives of Argentine whiteness and modernity. This essay examines Argentine anthropological displays at three expositions between 1878 and 1892 to elucidate the mechanisms and reception of these projects. Florentino Ameghino, Francisco Moreno, and others worked deliberately and in conjunction with political authorities to erase the indigenous tribes from the national identity, even while using their bodies and products to create prehistory and garner intellectual legitimacy. Comparison of the three fairs also demonstrates how the representation of Amer-Indians and their artifacts shifted in accordance with local political needs and evolving international theories of anthropogenesis. The resulting analysis argues for the importance of considering the former colonies of the Global South in understanding the development of pre-twentieth-century anthropology and world's fairs, particularly when separating them from their imperial context. 
This essay investigates a hitherto-unexamined collaboration between two of the founders of modern history of science, Henry Guerlac and I. Bernard Cohen, and two economists, Paul Samuelson and Rupert Maclaurin. The arena in which these two disciplines came together was the Bowman Committee, one of the committees that prepared material for Vannevar Bush's Science, the Endless Frontier. The essay shows how their collaboration helped to shape the committee's recommendations, in which different models of science confronted each other. It then shows how, despite this success, the basis for long-term collaboration of economists and historians of science disappeared, because the resulting linear model of science and technology separated the study of scientific and economic progress into noncommunicating boxes. This paper explores the social, medical, institutional and enumerative histories of blindness in British India from 1850 to 1950. It begins by tracing the contours and causes of blindness using census records, and then outlines how colonial physicians and observers ascribed both infectious aetiologies and social pathologies to blindness. Blindness was often interpreted as the inevitable consequence of South Asian ignorance, superstition and backwardness. This paper also explores the social worlds of the Blind, with a particular focus on the figure of the blind beggar. This paper further interrogates missionary discourse on 'Indian' blindness and outlines how blindness was a metaphor for the perceived civilisational inferiority and religious failings of South Asian peoples. This paper also describes the introduction of institutions for the Blind in addition to the introduction of Braille and Moon technologies. The influence of a range of actors is discernible in nutrition projects during the period after the Second World War in the South Pacific. 
Influences include: international trends in nutritional science; changing ideas within the British establishment about state responsibility for the welfare of its citizens and the responsibility of the British Empire for its subjects; and the mixture of outside scrutiny and support for projects from post-war international and multi-governmental organisations, such as the South Pacific Commission. Nutrition research and projects conducted in Fiji for the colonial South Pacific Health Service and the colonial government also sought to address territory-specific socio-political issues, especially Fiji's complex ethnic politics. This study examines the subtle ways in which nutrition studies and policies reflected and reinforced these wider socio-political trends. It suggests that historians should approach health research and policy as a patchwork of territorial, international, and regional ideas and priorities, rather than looking for a single causality. In recent years there has been growing acknowledgement of the place of workhouses within the range of institutional provision for mentally disordered people in nineteenth-century England. This article explores the situation in Bristol, where an entrenched workhouse-based model was retained for an extended period in the face of mounting external ideological and political pressures to provide a proper lunatic asylum. It signified a contest between the modernising, reformist inclinations of central state agencies and local bodies seeking to retain their freedom of action. The conflict exposed contrasting conceptions regarding the nature of services to which the insane poor were entitled. Bristol pioneered establishment of a central workhouse under the old Poor Law; 'St Peter's Hospital' was opened in 1698. As a multi-purpose welfare institution its clientele included 'lunatics' and 'idiots', for whom there was specific accommodation from before the 1760s. 
Despite an unhealthy city centre location and crowded, dilapidated buildings, the enterprising Bristol authorities secured St Peter's Hospital's designation as a county lunatic asylum in 1823. Its many deficiencies brought condemnation in the national survey of provision for the insane in 1844. In the period following the key lunacy legislation of 1845, the Home Office and Commissioners in Lunacy demanded the replacement of the putative lunatic asylum within Bristol's workhouse by a new borough asylum outside the city. The Bristol authorities resisted stoutly for several years, but were eventually forced to succumb and adopt the prescribed model of institutional care for the pauper insane. In recent decades, historians of English psychiatry have shifted their major concerns away from asylums and psychiatrists in the nineteenth century. This is also seen in the studies of twentieth-century psychiatry where historians have debated the rise of psychology, eugenics and community care. This shift in interest, however, does not indicate that English psychiatrists became passive and unimportant actors in the last century. In fact, they promoted Lunacy Law reform for a less asylum-dependent mode of psychiatry, with a strong emphasis on professional development. This paper illustrates the historical dynamics around the professional development of English psychiatry by employing Andrew Abbott's concept of professional development. Abbott redefines professional development as arising from both abstraction of professional knowledge and competition regarding professional jurisdiction. A profession, he suggests, develops through continuous re-formation of its occupational structure, mode of practice and political language in competing with other professional and non-professional forces. 
In early twentieth-century England, psychiatrists promoted professional development by framing political discourse, conducting a daily trade and promoting new legislation to defend their professional jurisdiction. This professional development story began with the Lunacy Act of 1890, which caused a professional crisis in psychiatry and led to inter-professional competition with non-psychiatric medical service providers. In response, psychiatrists devised a new political rhetoric, 'early treatment of mental disorder', in their professional interests and succeeded in enacting the Mental Treatment Act of 1930, which re-instated psychiatrists as masters of English psychiatry. In 2014 the World Health Organization (WHO) was widely criticised for failing to anticipate that an outbreak of Ebola in a remote forested region of south-eastern Guinea would trigger a public health emergency of international concern (PHEIC). In explaining the WHO's failure, critics have pointed to structural restraints on the United Nations organisation and a 'leadership vacuum' in Geneva, among other factors. This paper takes a different approach. Drawing on internal WHO documents and interviews with key actors in the epidemic response, I argue that the WHO's failure is better understood as a consequence of Ebola's shifting medical identity and of triage systems for managing emerging infectious disease (EID) risks. Focusing on the discursive and non-discursive practices that produced Ebola as a 'problem' for global health security, I argue that by 2014 Ebola was no longer regarded as a paradigmatic EID and potential biothreat so much as a neglected tropical disease. The result was to relegate Ebola to the fringes of biosecurity concerns just at the moment when the virus was crossing international borders in West Africa and triggering large urban outbreaks for the first time. 
Ebola's fluctuating medical identity also helps explain the prominence of fear and rumours during the epidemic and social resistance to Ebola control measures. Contrasting the WHO's delay over declaring a PHEIC in 2014 with its rapid declaration of PHEICs in relation to H1N1 swine flu in 2009 and polio in 2014, I conclude that such 'missed alarms' may be an inescapable consequence of pandemic preparedness systems that seek to rationalise responses to the emergence of new diseases. Within the colonial setting of the Belgian Congo, the process of cutting the body, whether living or dead, lent itself to conflation with cannibalism and other fantastic consumption stories by both Congolese and Belgian observers. In part this was due to the instability of the meaning of the human body and the human corpse in the colonial setting. This essay maps out different views of the cadaver and personhood through medical technologies of opening the body in the Belgian Congo. The attempt to impose a specific reading of the human body on the Congolese populations through anatomy and related Western medical disciplines was unsuccessful. Ultimately, practices such as surgery and autopsy were reinterpreted and reshaped in the colonial context, as were the definitions of social and medical death. By examining the conflicts that arose around medical technologies of cutting human flesh, this essay traces multiple parallel narratives on acceptable use and representation of the human body (Congolese or Belgian) beyond its medical assignation. James Thomson's The Seasons is arguably a poem about seeing: its practices, theories, and connotations. Much lauded for its visual qualities, The Seasons reflects a unique adaptation of the natural philosophical discourse of putrefaction. Thomson's cautiously optimistic view of experimental philosophy is evident in portions of The Seasons devoted to optical technology, such as the microscope and the prism. 
In addition to this focus on the material practices of experiment, Thomson investigates the intellectual and imaginative visual possibilities afforded by the putrefaction that haunts the landscape of Summer. The putrefying body is at once decaying and a remnant of its former self. Putrefaction is most vividly apprehended by experimenters through the olfactory sense (rotting things stink), but Thomson converts putrefaction into an exclusively visual metaphor. Putrefaction offers Thomson the language and framework to visualize the landscape and its fraught relationship to commercial and political interests. The discourse of putrefaction as an optical technology in Summer opens up the space to register the losses and costs of Britannia's "progress." This essay focuses on the wax tableaus of Gaetano Giulio Zumbo (1656-1701), which represent the human body in various stages of putrefaction. Focusing on descriptions of these works in travel accounts and in fiction, this essay discusses the eighteenth-century viewer's fascination with wax's peculiar properties as a sculptural medium. Paradoxically, wax's "lifelike" qualities seemed most powerful when depicting decay's destruction of the human form. Zumbo's work, therefore, inspires a contemplation of the relationship between a realist aesthetics and the spectacle of corruption. From 1880 to 1882, two impresarios toured the decaying body of a dead whale throughout the United States as a sideshow exhibition called the "Prince of Whales." The dead whale show lay at the intersection of two important vectors in environmental humanities research: oceans and energy. This essay explores some of the epistemological and ontological problems posed by the whale show, demonstrating that methods and theories drawn from the history of the senses, media studies, and infrastructure studies can enrich the environmental humanities by offering new approaches to materiality and putrefaction. 
This essay examines debates about carrion eating in late nineteenth- and early twentieth-century India. Although proscriptions against carrion eating among the noncaste Hindus were entangled in Indian anticolonial, nationalist, and cow-protection movements, "Gut Ecology" places the subject in the material contexts of bacteriology, the study of zoonotic disease, and the emergence of meat science. The essay focuses on an exchange of letters (1933) between M. K. Gandhi and Dr. G. V. Deshmukh, the first president of the Indian Medical Association, in order to explore historical and theoretical relationships among affective, political, and scientific culture. Southern California scholars have been engaged in criticizing and analyzing the myth of the California dream for decades, yet none of these scholars have analyzed the importance of smell to that myth. Drawing from written historical accounts of the smells from two Southern California locales, this essay argues that odor has been important in the production, maintenance, and imagination of California as paradise. "Bad odours make myth and life possible, but they also threaten to undo both" (Nils Bubandt). This essay presents a model of a scientific field, as constituted by a domain of objective phenomena and a community of practitioners, interfaced by laboratory instrumentation and machinery. The relations between items in the domain, as well as those between the cognitive tools (concepts, statements, problems, classifications) that shape the practices of the community, are postulated to be relations of exteriority, that is, relations that do not determine the identity of what they relate. This move allows the model to avoid holism. The essay then goes on to contrast the proposed model with Kuhn's holistic theory of paradigms to highlight its advantages. The aim of this paper is to engage with the interplay between representational content and design in chemistry and to explore some of its epistemological consequences. 
Constraints on representational content arising from the aspectual structure of representation can be manipulated by design. Designs are epistemologically important because representational content, hence our knowledge of target systems in chemistry, can change with design. The significance of this claim is that while it has been recognised that the way one conveys information makes a difference to the inferences one can draw from representations in spite of the invariance of informational content, the present paper argues that in chemistry and biochemistry it is often the case that designs have cognitive priority relative to informational content. In this paper we wish to raise the following question: which conceptual obstacles need to be overcome to arrive at a scientific and theoretical understanding of the mind? In the course of this examination, we shall encounter methodological and explanatory challenges and discuss them from the point of view of the philosophy of chemistry and quantum mechanics. This will eventually lead us to a discussion of emergence and metaphysics, thereby focusing on the status of objects. The question remains whether this could be interpreted in terms of a re-description or dissolution of seemingly troubling problems in the philosophy of mind, or whether it further emphasizes the problematic: the ubiquitous and irreducible role of mind and consciousness in scientific (and other) activities. Although during the last decades the philosophy of chemistry has greatly extended its thematic scope, the main difficulties appear in the attempt to link the chemical description of atoms and molecules and the description supplied by quantum mechanics. The aim of this paper is to analyze how the difficulties that threaten the continuous conceptual link between molecular chemistry and quantum mechanics can be overcome or, at least, moderated from the perspective of Bohmian mechanics (BM). 
With this purpose, in "The quantum-mechanical challenges" section the foundational incompatibility between chemical descriptions and those of standard quantum mechanics (SQM) will be briefly recalled. The "Bohmian mechanics" section will be devoted to explaining the main features of BM. In the "Empirical equivalence and underdetermination" section, the consequences of the empirical equivalence between SQM and BM will be discussed. Finally, in the Conclusion, we will stress the scope of the obtained conclusions and the philosophical difficulties that still remain even after adopting BM for foundational purposes. Electronegativity is a quantified, typical chemical concept, which correlates the ability of chemical species (atoms, molecules, ions, radicals, elements) to attract electrons during their contact with other species with measurable quantities such as dissociation energies, dipole moments, ionic radii, ionization potentials, electron affinities and spectroscopic data. It is applied to the description and explanation of chemical polarity, reaction mechanisms, other concepts such as acidity and oxidation, the estimation of types of chemical compounds, and periodicity. Although this concept is very successful and widely used, and in spite of the fact that it is still subject to scientific investigation, neither a more than intuitive definition nor a generally accepted, logically clear and standardized quantification model has been developed. In the present work, electronegativity is presented and discussed with respect to its main conceptual and operational continuities and discontinuities. We try to analyze the epistemological status of electronegativity, conceived as a typical notion of the chemical sciences. Under 'epistemological status' we subsume the issues of its reference, its historical persistence, and the relationship between its measurement and quantification. 
The concept of the living has changed over the course of the history of biology: its specificity has been attributed either to a particular kind of active matter, such as chemical matter, or regarded as a product of the spatial organization of passive matter. Today, these two paths can be merged in a chemical perspective that takes account of general reflections on complexity and on systemics, in the "systemic complexity" approach. Rural practices of silk cultivation came under increasing scrutiny from French savants, state administrators and agricultural improvers during the eighteenth century. The lacklustre performance of the French sericultural industry greatly concerned state administrators at mid-century, who viewed dependence on foreign supplies of raw silk as a major source of political and economic weakness. Practitioners of natural history came to champion the disciplined observation of silkworms as vital to the reform of the domestic sericultural industry. Amateur naturalists and provincial savants turned their attention to the anatomical structure and behavior of silkworms, in an attempt to codify sericultural practices that were consistent with the "natural oeconomy" of these precious insects. The various techniques that constituted the rapidly expanding field of natural history (dissection, microscopic observation, experimental manipulation and classification) provided a set of tools for making the French sericultural industry materially viable in the face of stiff international competition. Encouraged and rewarded by various bodies of the royal administration, namely the Bureau de Commerce and the Intendance de Languedoc, these investigations were configured as civic-minded and patriotic pursuits that would contribute to the regeneration of the patrie. By enhancing domestic productivity and public welfare, natural history seemed to offer state administrators a solution to the problems that racked French state and society under the ancien regime. 
Patriotism provided a powerful social and moral validation for the practice of scientific observation, at a time when natural history was still widely ridiculed as an antisocial pastime for the idle and the eccentric. Richard Owen (1804-1892) is one of the most important British biologists of the nineteenth century, having made significant contributions in the field of comparative anatomy. However, one aspect of his scientific output continues to be overlooked, namely his contributions to parasitology and the influence parasites had on the formulation of his ideas on comparative anatomy and sexual reproduction. An overview of Owen's writings on parasites is presented, delineated into three phases: a primary research phase during the 1830s, including descriptions of the important human parasite Trichinella spiralis; a secondary phase during the 1840s and 1850s, in which he used parasites as models for his biological theories; and a later phase dominated by the controversy surrounding priority for the discovery of T. spiralis. Owen is considered a pioneer of parasitology research whose popularization of the field provided a receptive environment in the UK that facilitated the groundbreaking research in tropical parasitology undertaken by other researchers during the late nineteenth century. A bibliography of Owen's publications on parasites is included as an appendix. Seven species of mesoplodont whales (genus Mesoplodon Gervais, 1850) named after the nineteenth century are based on valid descriptions. A checklist listing the original description and type material for each of these species is provided. Additional data given include the type locality and illustrative sources, the institution holding the type material, and the type registration number. External morphology was recorded for all type specimens except Andrews' Beaked Whale (Mesoplodon bowdoini) and the Pygmy Beaked Whale (Mesoplodon peruvianus). 
Augustin Augier's "Arbre botanique" ("botanical tree") (1801), a diagram representing the natural order of plants in the shape of a family tree, is today a standard reference in histories of systematics and phylogenetic trees. The previously unidentified author was a nobleman from Saint-Tropez, a schoolteacher and a priest in the Societe de l'Oratoire de Jesus et de Marie immaculee. His biography and two previously unnoticed publications, as well as his correspondence with the Institut national in Paris, are discussed. Knowledge of Augier's identity, his life and works sheds new light upon his taxonomic theories, and helps us to understand his "Arbre botanique". Long before the tree was made into an icon of evolutionism, Augier used it to demonstrate the beauty and perfect order of divine creation. Little is known of Henry (Harry) Macdonald Kyle and his scientific contributions, even within some fisheries research circles. A graduate of St Andrews University and a protege of William Carmichael McIntosh, in 1903, the year after its inception, he was appointed as Biological Secretary to the International Council for the Exploration of the Sea (ICES), based in Copenhagen. An expert on flatfishes (notably Plaice), he worked with Walter Garstang at Plymouth but latterly, in extensive collaboration with Ernst Ehrenbaum at Hamburg, he produced definitive works analysing the fishery statistics of European fisheries, addressing, in particular, the issue of overfishing. In the late 1920s and early 1930s, however, he became alienated from his wife and children, even referring to himself in correspondence as an outcast (though later he was reconciled to at least one of his sisters). Little is known of the final decades of his life, apart from the fact that he died in his native Scotland. An accomplished linguist, he translated the works of many of his Danish and German colleagues into English. He published his magnum opus on the fisheries of Great Britain and Ireland in German. 
His earlier book, The biology of fishes, while published in English, was dedicated to his German friend Ehrenbaum. For whatever reasons, he found life in Germany more conducive to his work and, in some recent literature, he has even been assumed to have been German. An analysis is presented of the material, some 308 items, mainly British birds and mammals, submitted for preservation by account holders of R. & G. W. Raine Brothers, taxidermists in Carlisle, between 1918 and 1943. Possible impacts on local bird and mammal populations are discussed. Early reports of large bones from slate mines in the Middle Jurassic rocks at Stonesfield, Oxfordshire are reviewed, along with previously unpublished accounts of the workings. The material that formed the basis for publication of the genus Megalosaurus Buckland and Conybeare, 1824 is documented. The lectotype, a partial right lower jaw, was acquired by Sir Christopher Pegge, Dr Lee's Reader in Anatomy at Christ Church, Oxford, in 1797. The paralectotype sacrum was acquired by an Oxford undergraduate, Philip Barker Webb, sometime prior to 1814, as revealed by a letter to William Buckland from George Griffin, a Stonesfield well-sinker and mason, in which this specimen is mentioned. Another letter to Buckland from David Oliver, also of Stonesfield, records the discovery of further large bones, and annotations by Buckland indicate their purchase. The reptilian nature of the bones was confirmed during a visit to Oxford by the great French comparative anatomist Georges Cuvier in 1818. The presence of a giant reptile in the Stonesfield Slate became widely known in the English geological community. The six-year delay between recognition and publication probably reflects Buckland's other commitments and priorities. 
Although Buckland largely disappears from the record at the end of 1849, we note one final reference to Megalosaurus in 1854, in the form of a letter to Buckland from Benjamin Waterhouse Hawkins, in which he requests dimensions of Megalosaurus bones to aid construction of the life-sized model of Megalosaurus that can still be seen at Crystal Palace Park in south London. A manuscript memoir of Hugh Miller (1802-1856), geologist, writer and newspaper editor, is attributed to his son Hugh Miller FGS (1850-1896). It is published here, apparently for the first time. It was written sometime in 1881-1896, more probably 1882-1895. Its intended place of publication is discussed. It is an interesting contribution to Miller biography, written by a family member and providing some new information and anecdotes. Nothing was known about the source of the specimen that was the basis for Richard Harlan's description of the tortoise Testudo elephantopus beyond the facts that it was from the Galapagos and was alive in the possession of Philadelphia businessman Whitton Evans before 5 September 1826. From published and archival sources we propose that there is compelling circumstantial evidence that this tortoise was taken on Charles (Floreana, Santa Maria) Island, Galapagos, in September 1825 by Evans's ship America, Isaiah Eldredge master, on its way to Honolulu and Canton. The historically important name Testudo elephantopus Harlan 1826 should therefore take precedence for the extinct tortoise of Charles Island. In the kitchen record books of the L'Estrange family in the sixteenth and seventeenth centuries, there are references to a bird, widely shot on the Norfolk coast, called a Spowe. On the basis of the similarity to the Icelandic name, J. H. Gurney (sen.) and Fisher (in their "An account of birds found in Norfolk" published in 1846) assumed this to be the Whimbrel (Numenius phaeopus) as have all ornithological texts ever since. 
Internal evidence from the kitchen records strongly suggests that the Spowe was a winter visitor, not a passage migrant, thus throwing considerable doubt on Gurney and Fisher's ascription. We suggest that it is much more likely that the Spowe was the Bar-tailed Godwit (Limosa lapponica). Maison Verreaux was one of the longest-established businesses dealing in natural history specimens, with a catalogue offering thousands of species. Polish naturalists were major contributors to Maison Verreaux, particularly of neotropical and Siberian specimens. This article presents an account of the decline and end of the enterprise through the letters of Wladyslaw Taczanowski to Antoni Waga, Benedykt Dybowski, Konstanty Branicki and Konstanty Jelski. In 1887 Dutch archivist A. J. Servaas van Rooijen published a transcript of a hand-written copy of an anonymous missive, dated 1631, about a horrific famine and epidemic in Surat, India, which also contains an important description of the fauna of Mauritius. The missive may have been written by a lawyer acting on behalf of the Dutch East India Company (VOC). It not only gives details about the famine, but also provides a unique insight into the status of endemic and introduced Mauritius species, at a time when the island was mostly uninhabited and used only as a replenishment station by visiting ships. Reports from this period are very rare. Unfortunately, Servaas van Rooijen failed to mention the location of the missive, so its whereabouts remained unknown; as a result, it has only been available as a secondary source. Our recent rediscovery of the original hand-written copy provides details about the events that took place in Surat and Mauritius in 1631-1632. A full English translation of the missive is appended. 
Wladyslaw Emanuel Lubomirski (1824-1882) was a Polish amateur naturalist who amassed a large collection of molluscs; this included specimens collected in part by Konstanty Roman Jelski (1837-1896) and Jan Stanislaw Sztolcman (Stolzmann) (1854-1928) in the Neotropics. Jelski travelled through French Guiana and Peru between 1865 and 1879. Sztolcman joined him in 1875 and worked in Peru and Ecuador until 1881. Graphium weiskei goodenovii Rothschild, 1915 (Lepidoptera: Papilionidae) has been known for over a century only from two male specimens: one in the Natural History Museum, London; the other in the Oxford University Museum of Natural History (OUMNH). Endemic to Goodenough Island, in the D'Entrecasteaux group, Papua New Guinea, it was first collected on the summit of 'Oiamadawa'a (Mount Madawaa, Mount Madara' a) in 1912 by New Zealand anthropologist Diamond Jenness. The second specimen, which became the holotype, was collected in mountains in the south of the island by Albert Stewart Meek, one of Walter, Lord Rothschild's most prolific collector-explorers for his museum at Tring in Hertfordshire. In each case, capture of specimens was sufficiently notable to be recorded contemporaneously by the captors. These data, and the maps and photographs made by the collectors, suggest that the butterfly was widespread at moderate to high elevations on Goodenough Island. The authors climbed 'Oiamadawa'a in 2015 and collected further specimens, now deposited in OUMNH. Leaders and their teams often differ in their perceptions of organizational issues, and such differences have been suggested to influence both employee well-being and performance. The present study examined leader-team perceptual distance regarding organizational learning and its consequences for employee work performance. Sixty-eight leaders and their teams from the Swedish forest industry participated in the study. 
Polynomial regression with response surface analyses revealed that the perceptual distance between leaders and their teams regarding organizational learning was related to lowered work performance, beyond the influence of employee ratings alone. The analyses also indicated that work performance tended to decrease when the leader rated organizational learning as higher than the team. Our findings suggest that it is important for organizations to minimize the perceptual distance between the leaders and their teams and that further research on the construct of leader-team perceptual distance is warranted. (C) 2017 Elsevier Inc. All rights reserved. We investigate how family involvement in the ownership, management, or governance of a business affects its engagement in earnings management both directly and indirectly through its corporate social responsibility (CSR) activities. Using a sample of S&P 500 companies, we find that family firms tend to have higher CSR performance, which can help them to maintain legitimacy and preserve socio-emotional wealth. Family firms also engage in less accrual-based earnings management, although they are indistinguishable from non-family firms in terms of real earnings management. In contrast to previous research, we find that CSR performance is not significantly associated with either accrual-based or real earnings management behavior after we account for the effect of family involvement. Our findings suggest that the association between CSR performance and family involvement is the primary driver of the relation between CSR performance and earnings management documented in previous research. (C) 2017 Elsevier Inc. All rights reserved. This study examines the impact of psychological contract violation (PCV) on customer intention to reuse online retailer websites via the mediating mechanisms of trust and satisfaction. The moderating role of perceived structural assurance (SA) is also investigated. 
An empirical study conducted among online shoppers confirms the indirect effects of PCV on customers' intention to reuse via trust and satisfaction. The findings also support the moderating impact of perceived SA in the network of relationships. The study underscores the importance of SA as a trust-building mechanism for mitigating the deleterious effects of PCV among online customers, although the role of SA in preserving satisfaction is found to be limited. The findings suggest that online retailers may benefit by investing in SA and addressing the negative effects of PCV proactively rather than simply relying on post-failure service recovery mechanisms. (C) 2017 Elsevier Inc. All rights reserved. Given the size of the Hispanic bilingual market in the United States, it is important to understand the relative effectiveness of using English versus Spanish when advertising to these consumers. This research proposes that Hispanic bilinguals' cultural stereotypes about the users of Spanish living in America are a potent determinant of which language is most effective in advertising. Depending on the favorableness of these cultural stereotypes, our results show that Spanish may be persuasively superior, inferior, or functionally equivalent to English in creating favorable attitudes toward the advertised product. The uniqueness of cultural stereotypes about Spanish users in shaping the influence of an ad's language is underscored by our findings that cultural stereotypes about English users do not exert similar effects in determining the relative persuasiveness of advertising in English or Spanish. The paper offers suggestions for advertising practice and future research. (C) 2017 Elsevier Inc. All rights reserved. Although consumers do not usually take kindly to price increases, their perceptions of the fairness of price increases are contingent on relevant factors. 
This study investigates consumers' perceptions of the fairness of a retail price increase by a domestic versus a foreign brand, as moderated by consumers' ethnocentricity, their bias toward inferring a profit motive from a price increase (i.e., "profit stickiness"), and relevant contextual information. Over the course of two sets of experiments, the authors find that ethnocentricity does not necessarily lead to the intuitively expected favorable (unfavorable) bias toward (against) a domestic (foreign) brand's decision to raise prices, subject to profit stickiness and contextual information. These findings have implications for theory, practice, and further research. (C) 2017 Published by Elsevier Inc. This study examines whether consumers' perceptions of corporate social responsibility (CSR) activities can predict behavioral loyalty, and how two attitudinal constructs drawn from the means-end chain model, involvement and commitment, mediate this relationship. A field study of 634 customers of an Australian professional football team was conducted by combining attitudinal surveys with actual behavioral data collected one year later. The results revealed a positive mediating effect of involvement on the relationship between perceived CSR and behavioral loyalty. However, when the effect of involvement on behavioral loyalty was mediated by commitment, the indirect effect of perceived CSR turned negative. The findings of this study indicate that the contribution of CSR initiatives to behavioral loyalty is not as robust as past research suggests, and is also contingent upon specific psychological states activated by consumers' perceptions of such initiatives. (C) 2017 Elsevier Inc. All rights reserved. Previous work has conceptualized workplace pro-environmental behaviors within the organizational citizenship behavior framework, and a scale to measure these behaviors has been developed. 
The goal of the present research was to address conceptual and psychometric issues of this scale by: (a) conceptualizing organizational environmental citizenship behavior within the dominant target-based framework, (b) developing and refining a new, more comprehensive measure of organizational environmental citizenship behavior and (c) validating this new measure by providing evidence for its content, construct, convergent, discriminant, concurrent, incremental concurrent and nomological validity, and its internal and temporal stability. To this end, six separate studies (N = 652) were conducted, which together produced a psychometrically acceptable measure of organizational environmental citizenship behavior. Theoretical and practical implications from this research and directions for future research are discussed. (C) 2017 Elsevier Inc. All rights reserved. We investigate the impact of a novel method called "virtual mirroring" designed to promote employee self-reflection and improve customer satisfaction. The method is based on measuring communication patterns through social network and semantic analysis, and mirroring them back to the individual. Our goal is to demonstrate that self-reflection can trigger a change in communication behaviors that leads to increased customer satisfaction. We illustrate and test our approach by analyzing e-mails of a large global services company, comparing changes in customer satisfaction associated with team leaders exposed to virtual mirroring (the experimental group). We find an increase in customer satisfaction in the experimental group and a decrease in the control group (team leaders not involved in the virtual mirroring process). With regard to the individual communication indicators, we find that customer satisfaction is higher when employees are more responsive, use a simpler language, are embedded in less centralized communication networks, and show more stable leadership patterns. (C) 2017 Elsevier Inc. All rights reserved. 
Unlike established firms, new ventures often lack the resources and structure necessary to simultaneously pursue exploration and exploitation activities in the process of developing and introducing new products into markets. Thus, it remains unclear whether and how ambidexterity (i.e., simultaneous pursuit of exploration and exploitation activities) can develop in new ventures. This study posits that product development alliances and the transactive memory systems of entrepreneurial teams contribute to new venture ambidexterity. Moreover, we propose that the two mechanisms reinforce one another. Data collected from 148 new Chinese ventures support these hypotheses. (C) 2017 Elsevier Inc. All rights reserved. Central consumers in a group often are influential, because their social prominence commands conformity from other members. Yet, there can be another contradictory effect of centrality, such that other members regard it as a threat to their attitudinal freedom and express reactance instead of conformity. Whether a group member conforms or reacts to the evaluation of a more central member might depend on the strength of their relationship, which determines the social cost of disagreeing. We provide evidence of such an interaction between centrality and relational strength with an experiment where participants with preexisting affective ties of varying strengths taste a snack in groups (Study 1) and a field study where participants connected by instrumental ties consume a complex service (Study 2). A scenario-based experiment manipulating centrality and strength of ties provides further evidence that reactance underlies the observed effects (Study 3). (C) 2017 Elsevier Inc. All rights reserved. The extent of the economic distance between a firm's home origin and its foreign direct investment (FDI) is an important strategic decision for the investing firm. 
This study fills an important knowledge gap by investigating the home institutional antecedents of FDI economic distance. Drawing insights from comparative institutionalism, we argue that home-country states vary in both their power to coordinate the economy and the external and internal channels through which they exercise that power. These variations have implications for a firm's motivation and capability to escape external dependencies on the home-country state by investing in economically distant foreign locations. Empirically, using a dataset of 891 new international entrants from 2004 to 2011, we found support for our hypotheses that home-country state power is positively associated with FDI economic distance, and that the influence of the home-country state is contingent on the state's governance quality and its ownership in firms. (C) 2017 Elsevier Inc. All rights reserved. Marketing research addressing the role of arousal in attitude formation and change mostly looks at arousal as a merely conscious emotion. However, a substantial body of research in cognitive psychology and neuroscience now offers insights into the implicit, subliminal reactions of individuals to external stimuli, suggesting that unconscious emotions may lead to different attitudinal responses. Following a conceptualization of conscious and unconscious arousal and their influence on product attitude formation, this study provides empirical evidence of the hypothesised relationships through a laboratory experiment with 160 subjects. By employing electrodermal activity, a physiological measure, to assess unconscious arousal, and self-reported scales to assess conscious arousal, the study reveals that conscious and unconscious arousal are two independent emotional responses and that they influence attitude toward the product differently. The study extends theory on emotions and provides an initial step toward using physiological measures to evaluate consumer emotional response to new products. 
(C) 2017 Elsevier Inc. All rights reserved. This article introduces the concept of mindfulness meditation as an on-the-spot intervention to be used in specific workplace situations. It presents a model of when, why, and how on-the-spot mindfulness meditation is likely to be helpful or harmful for aspects of job performance. The article begins with a brief review of the mindfulness literature and a rationale for why mindfulness could be used on-the-spot in the workplace. It then delineates consequences of on-the-spot mindfulness interventions on four aspects of job performance: escalation of commitment, counterproductive work behaviors, negotiation performance, and motivation to achieve goals. The article closes with three necessary conditions for an on-the-spot mindfulness intervention to be effectively used, as well as suggestions for how organizations, managers, and employees can facilitate the fulfillment of these necessary conditions. Possible negative consequences of mindfulness, and the question of which types of meditation to use, are also considered. Taken together, these arguments deepen our understanding of state mindfulness and introduce a new manner in which mindfulness can be used in the workplace. (C) 2017 Elsevier Inc. All rights reserved. Drawing on the conservation of resources (COR) theory, we theorize that organizational justice influences in-role performance by embedding employees into the organization. Using a sample of 236 employee-supervisor dyads from diverse industries in India, we found that organizational embeddedness mediated the relationship between distributive and procedural justice and in-role performance. We further found that the degree of association between the dimensions of organizational justice and the components of organizational embeddedness varied: procedural justice was a stronger predictor of the fit dimension than distributive justice was, and distributive justice was a stronger predictor of the sacrifice dimension than procedural justice was. 
We discuss the theoretical and practical implications of our findings. (C) 2017 Elsevier Inc. All rights reserved. We analyze what a second business degree reveals about the investment behavior of mutual fund managers. Specifically, we compare the investment risk and style of managers holding both a CFA designation and an MBA degree to managers holding only one of these qualifications. We document that managers with both degrees take fewer risks, follow less extreme investment styles, and achieve less extreme performance outcomes. Our results are consistent with the explanation that managers whose personal disposition leads them to take fewer risks and invest more conventionally choose to gain both qualifications. We rule out several alternative explanations: our results are not driven by the respective contents of the MBA and the CFA program, by the manager's skill, or by the fund family's investment policy. (C) 2017 Elsevier Inc. All rights reserved. This study contributes to the literature by offering a novel analytical approach to solving complex interactions of tourism expenditure antecedents, advancing the theoretical reasoning behind the way in which socioeconomic indicators of prosperity combine to explain tourism expenditure on an international scale. The study explored a variety of configurations sufficient for simulation of both high and low scores of outbound tourism expenditures that have policy implications in both destination countries and countries of origin. We used complexity theory and fuzzy-set qualitative comparative analysis (fsQCA) to analyze a composite score of 5-year data for 105 countries. The predictive validity results indicated the capacity of the proposed model to predict future outcomes using other samples. The results expand our knowledge of the asymmetrical relationships between tourism expenditure and its antecedents. (C) 2017 Elsevier Inc. All rights reserved. 
The sharing economy has shifted the way in which goods and services are consumed - from exclusive ownership toward collective usage with economic benefits. Current literature addresses consumer motives to participate in commercial sharing of goods and services with a physical manifestation. In contrast, this study shows the relevance of intangibility for sharing services and empirically examines consumers' motives, perceptions, and experiences in the context of a new insurance model. A qualitative investigation reveals three main characteristics of intangible service sharing: financial benefits as a main motivator for participation, emerging weak social and symbolic values in a controlled environment, and a network of strangers as a crucial precondition for sharing. The work contributes to research on the sharing economy as well as to managerial considerations for the design of sharing services. In particular, managers need to balance between community development and the preservation of anonymity when promoting sharing services based on intangible elements. (C) 2017 Elsevier Inc. All rights reserved. Concern has been expressed as to how obesity is framed as an individual responsibility easily solved with common sense. Such research has questioned the appropriateness of a size-based emphasis to public health. Moving away from the emphasis on the individual, this paper critically reviews consumer marketing techniques in the presentation of portion sizes, given what is known about human cognitive and physical limitations around food choice. Through a micro study of portion size in three products, cereals, cereal bars and yogurts, claims are made regarding marketing techniques of obfuscation in portion size presentation that at a macro level link to earlier critiques of marketing mystification. Findings suggest a number of specific obfuscators that could lead to passive overconsumption. 
The paper concludes that regulators should shift their emphasis away from the individual toward examining marketing mystification and techniques of obfuscation. Information presentation should be more appropriate and consistent across brands within a product category. (C) 2016 Elsevier Inc. All rights reserved. Health concerns about overconsumption of large portions apply to a wide range of highly calorific foods and drinks. Yet, amongst all products, sugar-sweetened soft drinks, and especially sugared soda, are the ones that seem to raise the most ire, because they contain little or no nutritional value beyond their sugar content and because of the way that vendors encourage excessive consumption by pricing jumbo-size portions to look like bargains while making smaller portions appear overpriced. This paper considers the logic of such extreme value size pricing and reveals why this marketing practice can harm economic welfare beyond public health concerns. The paper shows why policy interventions, including portion cap rules and soda taxes, seeking to reduce portion sizes and curb the consumption of large-size sugary drinks might fail when they do not fully take into account or appreciate the strategic responses that vendors might adopt to retain value size pricing. (C) 2016 Published by Elsevier Inc. Weight loss surgery that mechanically restricts consumers' bodies to limit their food intake is booming in a context of globesity (World Health Organization). Based on a Foucauldian analysis, this study contends that self-transformative experiences arise from normalizing practices that the advocates of a repressive medical biopower overlook. This article problematizes the ideas of resistance and normativity by emphasizing the existence of various forms of agency, not all of which are predicated on resistance. 
The author proposes 'embodied transformation', as distinct from both docile embodiment and embodied resistance (the negative view of subjectification as subjecting), to describe normalizing practices that become a locus of discovery of creative potentialities within restrictive contexts. The study detects two types of agency: consumers' discourses are driven either by a temporal or narrative structure or by a spatial or connective form of becoming. (C) 2016 Elsevier Inc. All rights reserved. Before food portions are determined at home, they are determined at the supermarket. Building on the notion of implied social norms, this research proposes that allocating or partitioning a section of a shopping cart for fruits and vegetables (produce) may increase their sales. First, a concept test for on-line shopping (Study 1) shows that a large produce partition led people to believe that purchasing larger amounts of produce was normal. Next, an in-store study in a supermarket (Study 2) shows that the amount of produce a shopper purchased was in proportion to the size of this partition: the larger the partition, the larger the purchases (especially in a nutrition-reinforced environment). Using partitioned or divided shopping carts (such as half-carts) could be useful to retailers who want to sell more high-margin produce, but it could also be useful to consumers, who can simply divide their own shopping cart in half with their jacket, purse, or briefcase. Divided shopping carts may lead to healthier shoppers and to healthier profits. (C) 2017 Elsevier Inc. All rights reserved. Larger portions of tempting food spur consumption, yet the question remains whether altering food granularity (i.e., dividing a fixed portion into more, smaller versus fewer, larger partitions) also drives consumption. 
As current insights on the impact of food granularity on consumption are contradictory, this paper offers clarification by zooming in on the different ways in which food granularity has been operationalized in extant research. One can achieve a finer (vs. coarser) food granularity either by partitioning food into more, smaller morsels (vs. fewer, larger morsels), or by grouping similarly sized food morsels together into more, smaller packages (vs. fewer, larger packages). Hence, this article introduces the operationalization mode of food granularity (i.e., partitioning vs. grouping) as a variable moderating the effect of food granularity (i.e., fine vs. coarse) on consumption. Consumers are expected to eat less when tempting foods are partitioned into more, smaller portions (as opposed to fewer, larger portions). However, consumers eat more when such tempting foods are grouped into more, smaller packages (as opposed to fewer, larger packages). Both a meta-analytic review (Study 1) and a lab experiment (Study 2) confirm this anticipated interaction effect. In addition, Study 2 shows that the extent of experienced self-control conflict underlies this combined effect of food granularity and operationalization mode on consumption. This study also shows that (un)restrained eating is an additional moderator of the interaction. These findings highlight that the manner used to divide tempting foods has important implications for consumption, which is relevant in light of the current obesity epidemic. The purpose of this research is to examine how perceived food healthfulness and package partitioning interact to impact intended and actual consumption. Across three studies, findings indicate that both intended consumption and actual consumption of the perceptually healthier food items increase when packaging is not partitioned. Further, partitioning does not change the intended or actual consumption of foods perceived as less healthy.
Accordingly, perceptually healthy foods tend to be consumed more when servings are not partitioned, suggesting a positive health halo leading to a "healthy = eat more" consumption pattern. The role of affect regulation theory and, more specifically, guilt, in this process is examined. These findings have implications for marketers, food manufacturers, and public policymakers interested in reducing obesity. As food portion sizes increase, so too does the amount of energy consumed. The purpose of this study was therefore to determine whether the portion size preferences of individuals could be reduced. Across two experiments, this paper shows that a personally threatening health message that has been endorsed by a digestive system featuring anthropomorphic cues can reduce portion size preferences for energy-dense foods and beverages, but only among those who feel powerless. This effect emerges because partially anthropomorphizing an internal body system transforms that system into an agent of social influence. The powerless, who are more sensitive to social influence than the powerful, will consequently be more attuned to threatening health information that has been endorsed by this partially anthropomorphized body system, shaping their behavioral preferences. Anthropomorphizing elements of the self may therefore represent a novel means for motivating behavior change. A portion of food is usually considered as the norm for consumption. Due to the portion size effect, people tend to eat more when they are served a larger, as opposed to a smaller, portion. Here, spontaneous simulations of the experience of eating a portion of food by consumers (i.e., simulated eating) helped to reduce this portion size effect. Those participants who reported more eating simulations selected a smaller percentage of food from the very large portion.
However, the quantity of food selected from this very large portion was nevertheless still larger than from the medium portion. Thus, simulated eating reduced but did not entirely eliminate the portion size effect. However, when the participants were encouraged to deliberatively imagine the sensory experiences associated with eating a portion of food (imagined eating), initial portion size no longer influenced the amount of food selected. Potential implications of these results for the consumer, for the food industry, and for public health are discussed. This research investigates the unexplored consequences of food presentation on consumers' portion size perceptions and consumption. The findings show that consumers perceive portions as smaller and eat more when foods are presented vertically (i.e., stacked on the plate) versus horizontally (i.e., spread across the plate). The effect of presentation on portion size perceptions occurs because consumers use the surface area of the portion as a heuristic for overall portion size and, for equal volumes of food, portions presented vertically have a smaller surface area. Surface area is used as a heuristic for overall portion size presumably because (1) when looking down at a plate of food on a dining table, the surface area of the portion is more salient than the height and (2) through experience consumers learn that the surface area of the portion is often positively correlated with overall portion size. The results of this research underscore the importance of food presentation and identify viewing angle as a factor to consider when evaluating portion size. This article engages the much-debated role of mathematics in Bacon's philosophy and inductive method at large.
The many references to mathematics in Bacon's works are considered in the context of the humanist reform of the curriculum studiorum and, in particular, through a comparison with the kinds of natural and intellectual subtlety as they are defined by many sixteenth-century authors, including Cardano, Scaliger and Montaigne. Additionally, this article gives a nuanced background to the 'subtlety' commonly thought to have been eschewed by Bacon and by Bacon's self-proclaimed followers in the Royal Society of London. The aim of this article is ultimately to demonstrate that Bacon did not reject the use of mathematics in natural philosophy altogether. Instead, he hoped that following the Great Instauration a kind of non-abstract mathematics could be founded: a kind of mathematics which was to serve natural philosophy by enabling men to grasp the intrinsic subtlety of nature. Rather than mathematizing nature, it was mathematics that needed to be 'naturalized'. The Catalogue of Scientific Papers, published by the Royal Society of London beginning in 1867, projected back to the beginning of the nineteenth century a novel vision of the history of science in which knowledge was built up out of discrete papers each connected to an author. Its construction was an act of canon formation that helped naturalize the idea that scientific publishing consisted of special kinds of texts and authors that were set apart from the wider landscape of publishing. By recovering the decisions and struggles through which the Catalogue was assembled, this essay aims to contribute to current efforts to denaturalize the scientific paper as the dominant genre of scientific life. By privileging a specific representation of the course of a scientific life as a list of papers, the Catalogue helped shape underlying assumptions about the most valuable fruits of a scientific career.
Its enumerated lists of authors' periodical publications were quickly put to use as a means of measuring scientific productivity and reputation, as well as by writers of biography and history. Although it was first conceived as a search technology, this essay locates the Catalogue's most consequential legacy in its uses as a technology of valuation. Following the conquest of Algiers and its surrounding territory by the French army in 1830, officers noted an abundance of standing stones in this region of North Africa. Although they attracted considerably less attention among their cohort than more familiar Roman monuments such as triumphal arches and bridges, these prehistoric remains were similar to formations found in Brittany and other parts of France. The first effort to document these remains occurred in 1863, when Laurent-Charles Feraud, a French army interpreter, recorded thousands of dolmens and stone formations south-west of Constantine. Alleging that these constructions were Gallic, Feraud hypothesized the close affinity of the French, who claimed descent from the ancient Gauls, with the early inhabitants of North Africa. After Feraud's claims met with scepticism among many prehistorians, French scholars argued that these remains were constructed by the ancestors of the Berbers (Kabyles in contemporary parlance), whom they hypothesized had been dominated by a blond race of European origin. Using craniometric statistics of human remains found in the vicinity of the standing stones to propose a genealogy of the Kabyles, French administrators in Algeria thereafter suggested that their mixed origins allowed them to adapt more easily than the Arab population to French colonial governance. This case study at the intersection of prehistoric archaeology, ancient history and craniology exposes how genealogical (and racial) classification made signal contributions to French colonial ideology and policy between the 1860s and 1880s. 
In 1946, the British biochemist Joseph Needham returned from a four-year stay in China. Needham scholars have considered this visit as a revelatory period that paved the way for his famous book series Science and Civilization in China (SCC). Surprisingly, however, Needham's actual time in China has remained largely unstudied over the last seventy years. As director of the Sino-British Scientific Cooperation Office, Needham travelled throughout Free China to promote cooperation between British and Chinese scientists to contain the Japanese invasion during the Second World War. By rediscovering Needham's peregrinations, this paper re-examines the origins of his fascination with China. First, it contests the widely held idea that this Chinese episode is quite separate and different from the first half of Needham's life as a leftist scientist. Second, it demonstrates how the political and philosophical commitments he inherited from the social relations of science movement, and his biochemical research, shaped his interest in China's past. Finally, this paper recounts these forgotten years to reveal their implications for his later pursuits as historian of science and as director of the natural-science division of UNESCO. It highlights how, while in China, Needham co-constituted the philosophical tenets of his scientific programme at UNESCO and the conceptual foundations of his SCC. This round table discussion takes the diversity of discourse and practice shaping modern knowledge about childhood as an opportunity to engage with recent historiographical approaches in the history of science. It draws attention to symmetries and references among scientific, material, literary and artistic cultures and their respective forms of knowledge. The five participating scholars come from various fields in the humanities and social sciences and allude to historiographical and methodological questions through a range of examples.
Topics include the emergence of children's rooms in US consumer magazines, research on the unborn in nineteenth-century sciences of development, the framing of autism in nascent child psychiatry, German literary discourses about the child's initiation into writing, and the sociopolitics of racial identity in the photographic depiction of African American infant corpses in the early twentieth century. Throughout the course of the paper, childhood emerges as a topic particularly amenable to interdisciplinary perspectives that take the history of science as part of a broader history of knowledge. In this conversation held at the 2016 Millstein Governance Forum at Columbia Law School, Ira Millstein, a leading authority on corporate governance and founding chair of the Millstein Center for Global Markets and Corporate Ownership, discusses his new book, The Activist Director, with Geoff Colvin of Fortune Magazine. In explaining why he wrote the book, Millstein said that it is important for boards of directors to understand the key role they must play to secure the future of our corporations, and for shareholders to recognize, encourage, and support this role. The role of directors has changed significantly over the years. Yet corporate performance, broadly speaking, has not lived up to expectations, and Millstein attributes this in part to the failure of directors to adapt and evolve quickly and decisively enough, which in turn has helped to fuel the rise of activist investors. Much of the problem stems from the tendency of boards to view themselves as oversight organizations that review and challenge management at arm's length, as opposed to truly engaging with management to make better decisions. To address this problem, Millstein makes the case for activist directors who will partner with management, think deliberately and critically about the company's strategy, and work for the long-term interest of the corporation.
And to provide financial incentives for directors to reinforce their commitment to the corporations they serve, Millstein favors an increase in compensation for directors that is tied to long-term performance. As an early example of what became an activist board, Millstein describes his own experience with the board of General Motors in the late 1980s and early 1990s when it confronted managerial failure and ended up replacing the CEO. In the current environment of activist investors, activist boards must give serious consideration to shareholder proposals for change, without succumbing to pressure for shortsighted cutbacks in value-adding investment and while ensuring that management is focused on long-term growth and innovation. Directors must have the courage and commitment to carry out the course of action they deem to be in the best long-term interests of the corporation. The former dean of the University of Virginia's Darden School explores how business schools must adapt to prepare future business leaders to assume the leadership responsibilities necessary to respond effectively to financial crises. The article begins with a statement by Milton Friedman and Anna Schwartz in their Monetary History of the United States about the failure of U.S. policy makers to prevent the collapse of the U.S. banking system during the Great Depression. Then turning to the crisis of 2008, the author draws on recent accounts of the leadership, both effective and ineffective, provided by policymakers to support Friedman and Schwartz's contention that the success of countries in responding to crises depends on the presence of one or more outstanding individuals willing to assume responsibility and leadership. After citing Nassim Taleb's characterization of the financial system as inherently fragile, the article offers a number of insights about the kind of leadership that is likely to prove effective in protecting such systems.
Using the responses of policymakers like Bernanke, Paulson, and Geithner as examples, the author observes that successful leaders rank priorities and set direction, mobilize collective action, choose whether and how to use the panoply of tools at their disposal, and attempt to respond in a comprehensive, coordinated way to all aspects of a crisis using a flexible set of approaches and methods that he identifies as Ad Hoc-racy. With such insights in mind, the author goes on to suggest that changes in current research and teaching about leadership are likely to take the form of the following six stretches. From local to global: Global in the context of crises refers to thinking systemically about connectedness and the tendency of trouble to travel within systems of finance. From a single-discipline to an interdisciplinary focus: As examples, the dynamics of mobilizing collective action can be illuminated by research on group decision-making and bargaining theory; and process management research holds insights for the conduct of Ad Hoc-racy. From passive learning to active learning: Because so much of crisis leadership is a response to contingencies, exercising students' skills in financial whack-a-mole seems like a valuable complement to more traditional pedagogical styles.
Better for students to explore the consequences of bad judgment in the classroom than in the markets. From scholarship about problems of interest to scholarship about problems of consequence: If only because the costs associated with risks to the financial systems are so large, financial leadership is consequential and warrants inquiry. From field mastery to growth of wisdom: Although mastery of the tools and concepts of finance is necessary for a successful career in the field, we need financial leaders who can harness such concepts and analytical insights to sound judgment in the pursuit of larger goals, including, when appropriate, the greatest social good. From preparation for followership to preparation for leadership: The popularity of finance on campuses and the high salaries of finance recruits attest to the value of what higher education delivers in this field. But what the students and recruiters want is preparation for the first job in the field, that is, training that prepares students to follow. We can do better than that. In this roundtable that took place at the 2016 Millstein Governance Forum at Columbia Law School, four directors of public companies discuss the changing role and responsibilities of corporate boards. In response to increasingly active investors who are looking to management and boards for more information and greater accountability, the four panelists describe the growing demands on boards for both competence and commitment to the job. Despite considerable improvements since the year 2000, and especially since the 2008 financial crisis, the clear consensus is that U.S. corporate directors must become more like owners of the corporation who truly represent the long-term interests of all of the shareholders.
But if activist investors appear to pose the most formidable new challenge for corporate directors, one that has the potential to lead to shortsighted managerial decision-making, there has been another, less visible development that should be welcomed by well-run companies that are investing in their future growth as well as meeting investors' expectations for current performance. According to Raj Gupta, who serves on the boards of Hewlett-Packard, Delphi Automotive, Arconic, and the Vanguard Group, "[O]ld-fashioned institutional shareholders, many of them long-term buyers and holders, now account for a remarkably large percentage of the ownership of U.S. companies. If you looked closely, I think you would find that, in every large U.S. public company today, as few as 20 large institutional shareholders control 50 to 75% of the stock. And although they're not activists who resort to the proxy process, they are also not the silent shareholders of the past." Although these investors are bringing enormous pressure to bear on management, the good news for managements and boards is that such concentration of ownership should make it far easier for them to build trust by making their case directly to the investment community, and so win the market support that can give companies the breathing room to carry out their long-term business plans. Another piece of encouraging news for boards is that, by drawing on a more diverse pool of directors than in the past, one that includes a rapidly growing number of women, people of color, and internationals, they are finding themselves with a breadth of perspective and experience that has better prepared them to respond to changes in an increasingly global and volatile business environment. This article documents the gradual movement of General Motors away from the partnership concept that dominated U.S. corporate pay policy in the first half of the 20th century and toward the competitive pay concepts that have prevailed since then.
The partnership concept was achieved by paying managers bonuses in the form of GM shares, with the amounts paid out of a single company-wide bonus pool and based on a fixed share of profit (after subtracting a charge for the cost of capital). Thanks to this EVA-like bonus scheme, GM's managers effectively became partners with the company's shareholders, sharing the wealth in good times but also the pain in troubled times. What's more, the authors also show that, from the establishment of the program in 1918 through the 1950s, the directors went to great lengths, including several bouts of innovative (and often complex) problem-solving, to achieve their compensation objectives while maintaining such fixed-share bonuses. But the sharing philosophy and associated compensation practices were gradually supplanted by competitive pay practices from the 1960s onward. The authors show that by the late 1970s, GM had a board of directors with modest shareholdings, in contrast to the board in the early post-war period, whose directors had large stakes. As a consequence, directors began acting less like stewards of capital and more like employees whose financial rewards came not from returns on GM's stock but from the fees they received for their services. This fundamental change in board compensation almost certainly contributed to the gradual abandonment of fixed-profit sharing for GM's managers. In its place, the board implemented competitive pay policies that, while coming to dominate executive pay policy in the U.S. and abroad, have largely divorced executive pay from changes in shareholder wealth. In the case of GM, this growing separation of pay from performance was accompanied by a significant decline in corporate returns on operating capital as well as stock returns over time. In this discussion that took place in Helsinki last June, three European financial economists and a leading authority on U.S.
corporate governance consider the relative strengths and weaknesses of the world's two main corporate financing and governance systems: the Anglo-American market-based system, with its dispersed share ownership, lots of takeovers, and an otherwise vigorous market for corporate control; and the relationship-based, or main bank, system associated with Japan, Germany, and continental Europe generally. The distinguishing features of the relationship-based system are large controlling shareholders, including the main banks themselves, and few takeovers or other signs of a well-functioning corporate control market. Given the steady increase in the globalization of business and international diversification by large institutional investors, the panelists were asked to address the question: can we expect one of these two systems to prevail over time, or will both systems continue to coexist, while seeking to adopt some of the most valuable aspects of the other? The consensus was that, in Germany as well as continental Europe, corporate financing and governance practices have already begun to look much like those in the U.S. and U.K., with much less reliance on bank loans and greater use of bonds and public equity. And these financing changes have resulted in major changes in ownership structures that have seen local main banks largely supplanted by foreign institutional investors, some of whom have demanded a greater voice in how companies are run. Moreover, Finnish economist Tom Berglund may well have provided a blueprint for the dominant European governance system of the future in describing the Nordic model as a compromise between the Anglo-American and German models, one that maintains a strict hierarchy of interests putting the shareholders at the top, with strong minority shareholder protection, and gives shareholders the right to convene meetings with the board and executive management.
But even with this movement toward the market-based system, the remarkable persistence of family-controlled publicly traded companies in Germany and much of continental Europe appears to reflect deep-seated cultural attitudes and practices that are expected to continue to resist the incursion of Anglo-American principles and methods. The widespread use of dual-class stock, the pervasiveness of state-owned enterprises, and even the German practice of labor representation on boards (known as codetermination) are all seen as likely to persist, even though one member of the panel views each of these three practices as having played a major role in the Volkswagen emissions scandal. And given the surprising resurgence of one of these features in the U.S., that is, the dual-class ownership structures created by the IPOs of companies like Google and Facebook, no one was confident in predicting their disappearance. Like many of today's listed German companies that have been owned and operated by the same family for centuries, some of America's most promising and successful companies also appear to feel the need for at least some protection against the vagaries of the market. Since Jensen and Meckling's formulation of the theory of agency costs in 1976, corporate finance and governance scholars have produced a large body of research that attempts to identify the most important features and practices of effective corporate governance systems. But for all the research that has been done in the past 40 years, many practitioners continue to see a disconnect between theory and practice, between the questions researched and the questions that need to be answered. In this roundtable, Martijn Cremers begins by challenging the conventional view that limiting agency costs is the main challenge confronted by boards of directors in representing shareholder interests and, hence, the proper focus of most governance scholarship.
Especially in today's economy, with the high values assigned to growth companies, the most important function of corporate governance may instead be to overcome the problem of American short-termism that he attributes to inadequate shareholder commitment to long-term cooperation. And he buttresses his argument with the findings of his own recent research suggesting that obstacles to the workings of the corporate control market like staggered boards and supermajority voting requirements may actually improve long-run corporate performance by lengthening the decision-making horizon of boards and the managements they supervise. Vik Khanna discusses Indian Corporate Social Responsibility (CSR) spending and its effects in light of a recent law requiring Indian companies of a certain size to devote at least 2% of their after-tax profit to CSR initiatives. One unintended effect of this mandate, which took effect in 2010, was that all Indian companies that were spending more than the prescribed 2% of profits cut their expenditure back to that minimum, suggesting that CSR and advertising are substitutes to some extent, and that such legal mandates can discourage CSR spending by early adopters or leaders. Nevertheless, Khanna also found evidence of social norms developing in support of CSR, including a spreading perception that such spending can help some companies achieve strategic goals. Jeff Gordon closes by arguing that, to the extent investors are short-sighted, their short-sightedness is likely to be justified by their recognition that public company directors have neither the information nor the incentives to do an effective job of monitoring corporate managements. The best solution to the problems with U.S. corporate governance is to replace today's thinly informed directors with activist directors who more closely resemble the directors of private-equity owned firms.
Such directors would spend far more time with, and be much more knowledgeable about, corporate management and operations, and they would have much more of their personal wealth at stake in the form of company stock. The Liquefied Natural Gas (LNG) industry has grown significantly since it began a half-century ago, and it will continue to diversify both its sources of supply and the contractual arrangements between suppliers and users. Economic theory says that contracting modes adapt to facilitate gains in efficiency, and that this process of adaptation responds to changes in technological, market, and regulatory factors. When an industry relies heavily on highly specialized assets with limited alternative uses, as is true of LNG, the use of long-term contracts (or vertical integration) will generally be more efficient than short-term dealings. But once conditions begin to encourage vigorous competition among buyers and sellers, it becomes increasingly economical to rely on shorter-term (and spot) markets for exchange. The history of the LNG industry supports these theoretical predictions, and illustrates the transition from one contracting mode to another. For most of its history, the specialization and scale of LNG assets dictated the predominant use of long-term contracting. In recent years, however, market and regulatory changes have raised the demand for short-term and spot contracting, which in turn has provided the impetus for a virtuous cycle of market liquidity. As buyers and sellers have become increasingly able to obtain or dispose of LNG in an active market, they have needed less protection against the opportunism of trading partners that long-term contracts have provided in the past. Given this self-reinforcing process, it is likely that the LNG market will soon look nothing like it did as recently as a decade ago.
Buyers and sellers will rely on shorter-term contracts, and the longer-term contracts that do exist will be linked to spot LNG prices rather than to crude oil. Consumers and producers will also benefit from more flexible pricing that more accurately reflects rapidly changing fundamentals of supply and demand. The emergence of novel genome engineering technologies such as clustered regularly interspaced short palindromic repeat (CRISPR) has refocused attention on unresolved ethical complications of synthetic biology. Biosecurity concerns, deontological issues and human rights aspects of genome editing have been the subject of in-depth debate; however, a lack of transparent regulatory guidelines, outdated governance codes, inefficient time-consuming clinical trial pathways and frequent misunderstanding of the scientific potential of cutting-edge technologies have created substantial obstacles to translational research in this area. While a precautionary principle should be applied at all stages of genome engineering research, the stigma of germline editing, synthesis of new life forms and unrealistic presentation of current technologies should not arrest the transition of new therapeutic, diagnostic or preventive tools from research to clinic. We provide a brief review of the present regulation of CRISPR and discuss the translational aspect of genome engineering research and patient autonomy with respect to the "right to try" potential novel non-germline gene therapies. In recent years, the publication of the studies on the transmissibility in mammals of the H5N1 influenza virus and synthetic genomes has triggered heated and concerned debate within the community of scientists on biological dual-use research; these papers have raised the awareness that, in some cases, fundamental research could be directed to harmful experiments, with the purpose of developing a weapon that could be used by a bioterrorist.
Here we present an overview of the dual-use concept and its related international agreements, which underlines the work of the Australia Group (AG) Export Control Regime. It is hoped that the principles and activities of the AG, which focuses on export control of chemical and biological dual-use materials, will spread and become well known to academic researchers in different countries, as they exchange biological materials (i.e. plasmids, strains, antibodies, nucleic acids) and scientific papers. To this end, and with the aim of drawing the attention of the scientific community that works with yeast to the so-called Dual-Use Research of Concern, this article reports case studies on biological dual-use research and discusses synthetic biology applied to the yeast Saccharomyces cerevisiae, namely the construction of the first eukaryotic synthetic chromosome of yeast and the use of yeast cells as a factory to produce opiates. Since this organism is considered harmless and is not included in any list of biological agents, yeast researchers should take simple actions in the future to avoid the sharing of strains and advanced technology with suspicious individuals. We analyzed stable patients' views regarding synthetic biology in general, the medical application of synthetic biology, and their potential participation in trials of synthetic biology in particular. The aim of the study was to find out whether patients' views and preferences change after receiving more detailed information about synthetic biology and its clinical applications. The qualitative study was carried out with a purposive sample of 36 stable patients, who suffered from diabetes or gout. Interviews were transcribed verbatim, translated and fully anonymized. Thematic analysis was applied in order to examine stable patients' attitudes towards synthetic biology, its medical application, and their participation in trials.
When patients were asked about synthetic biology in general, most of them were anxious that something uncontrollable could be created. After being given a concrete example of possible future treatment options, patients started to see synthetic biology in a more positive way. Our study constitutes an important first empirical insight into stable patients' views on synthetic biology and into the kind of fears triggered by the term "synthetic biology." Our results show that clear and concrete information can change patients' initial negative feelings towards synthetic biology. Information should thus be transmitted with great accuracy and transparency in order to reduce patients' irrational fears and to minimize the risk that researchers present facts too positively in order to persuade patients to participate in clinical trials. Potential participants need to be adequately informed in order to be able to decide autonomously whether to participate in human subject research involving synthetic biology. In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options, and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole.
We conclude that any suggestions that this simulated robot was making ethical decisions were misleading. The potential for artificial intelligences and robots to achieve consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, but may also offer a platform on which to examine the mechanisms of biological or artificially intelligent psychiatric disease. The possibility of mental illnesses occurring in artificially intelligent individuals necessitates the consideration that, at some level, they may have achieved a mental capability of consciousness, sentience and rationality such that they can subsequently become dysfunctional. A deeper philosophical understanding of these conditions in mankind and artificial intelligences might therefore offer reciprocal insights into mental health and mechanisms that may lead to the prevention of mental dysfunction. The objective of this study was to develop a method for exposing and elucidating ethical issues with human cognitive enhancement (HCE). The intended use of the method is to support and facilitate open and transparent deliberation and decision making with respect to this emerging technology, which has potentially great formative implications for individuals and society. A literature search was conducted to identify relevant approaches, and a conventional content analysis of the identified papers and methods was performed to assess their suitability for assessing HCE according to four selection criteria; the method was then developed and amended after pilot testing on smart-glasses.
Based on three existing approaches in health technology assessment, a method for exposing and elucidating ethical issues in the assessment of HCE technologies was developed. Based on a pilot test on smart-glasses, the method was amended. The method consists of six steps and a guiding list of 43 questions. A method for exposing and elucidating ethical issues in the assessment of HCE was developed. The method provides the groundwork for context-specific ethical assessment and analysis. Widespread use, amendments, and further developments of the method are encouraged. There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focusing on the way cognitive artifacts are used in moral practice. I specifically conceptualise (a) how such artifacts scaffold and extend moral reasoning and decision-making processes, (b) how they have a certain moral status which is contingent on their cognitive status, and (c) whether responsibility can be attributed to distributed systems. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and the ethics of (cognitive) technology. The debate on whether and how the Internet can protect and foster human rights has become a defining issue of our time. This debate often focuses on Internet governance from a regulatory perspective, underestimating the influence and power of the governance of the Internet's architecture.
The technical decisions made by the Internet Standard Developing Organisations (SDOs) that build and maintain the technical infrastructure of the Internet influence how information flows. They reshape the technically mediated public sphere, including which rights it protects and which practices it enables. In this article, we contribute to the debate on SDOs' ethical responsibility to bring their work in line with human rights. We defend three theses. First, SDOs' work is inherently political. Second, the Internet Engineering Task Force (IETF), one of the most influential SDOs, has a moral obligation to ensure its work is coherent with, and fosters, human rights. Third, the IETF should enable the actualisation of human rights through the protocols and standards it designs by implementing a responsibility-by-design approach to engineering. We conclude by presenting some initial recommendations on how to ensure that work carried out by the IETF may enable human rights. Virtue-based approaches to engineering ethics have recently received considerable attention within the field of engineering education. Proponents of virtue ethics in engineering argue that the approach is practically and pedagogically superior to traditional approaches to engineering ethics, including the study of professional codes of ethics and normative theories of behavior. This paper argues that a virtue-based approach, as interpreted in the current literature, is neither practically nor pedagogically effective for a significant subpopulation within engineering: engineers with high-functioning autism spectrum disorder (ASD). Because the main argument for adopting a character-based approach is that it can be more successfully applied to engineering than traditional rule-based or algorithmic ethical approaches, this oversight is problematic for the proponents of the virtue-based view.
Furthermore, without addressing these concerns, the wide adoption of a virtue-based approach to engineering ethics has the potential to isolate individuals with ASD and to devalue their contributions to moral practice. In the end, this paper gestures towards a way of incorporating important insights from virtue ethics in engineering that would be more inclusive of those with ASD. Environmental risk assessment is often affected by severe uncertainty. The frequently invoked precautionary principle helps to guide risk assessment and decision-making in the face of scientific uncertainty. In many contexts, however, uncertainties play a role not only in the application of scientific models but also in their development. Building on recent literature in the philosophy of science, this paper argues that precaution should be exercised at the stage when tools for risk assessment are developed as well as when they are used to inform decision-making. The relevance and consequences of this claim are discussed in the context of the threshold of toxicological concern approach in food toxicology. I conclude that the approach does not meet the standards of an epistemic version of the precautionary principle. A new trend in the production technology of solid biofuels has appeared. There is wide consensus that most solid biofuels will be produced according to the new production methods within a few years. Numerous samples were manufactured from agro-residues according to conventional methods as well as the new methods. Robust analyses reviewing the hygienic, environmental, financial and ethical aspects were performed. The hygienic and environmental aspects were assessed by robust chemical and technical analyses. The financial aspect was assessed by an energy cost breakdown. The ethical assessment was built on the above findings, a survey questionnaire and critical discussion with the literature.
It is concluded that the new production methods are clearly favourable from both the hygienic and environmental points of view. The financial indicators do not support a preference either way. Regarding the ethical aspect, it is concluded that the new methods are beneficial in terms of environmental responsibility. However, the survey showed that most of the customers who took part are price oriented and therefore tend to prefer the cheaper, conventional alternative. In the long term it can be assumed that expansion of the new technology and competition among manufacturers will reduce the costs. A retraction notice is an essential scientific historical document because it should outline the reason(s) why a scientific manuscript was retracted, culpability (if any) and any other factors that have given reason for the authors, editors, or publisher to remove a piece of the literature from science's history books. Unlike an expression of concern (EoC), erratum or corrigendum, a retraction will usually leave only a rudimentary vestige of the work. Thus, any retraction notice that does not fully indicate the reasons and background for the retraction serves as a poor historical document. Moreover, poorly or incompletely worded retraction notices do not serve their intended purpose, i.e., to hold all parties accountable and to inform the scientific and wider public of the problem and the reason for the paper's demise. This paper examines the definitions and policies for retractions, EoCs, errata and corrigenda in place at 15 leading science, technology and medicine (STM) publishers and four publishing-related bodies that we believe have the greatest influence on the current fields of science, technology and medicine. The primary purpose was to assess whether there is consistency among these entities and publishers.
Using an arbitrary five-level classification system, and evaluating the different categories of policies separately, we discovered that in almost all cases (88.9 %) the wording used to define these four categories of policies differs from that of the Committee on Publication Ethics (COPE), which is generally considered to be the guiding set of definitions in science publishing. In addition, as much as 61 % deviation in policies (wording and meaning), relative to the COPE guidelines, was discovered. When considering the average pooled deviation across all categories of policies, we found either no deviation or a small deviation in wording only, relative to the COPE guidelines, in 1 out of 3 ethical bodies and in 40 % (6 out of 15) of STM publishers. Moderate deviation from the COPE guidelines was detected in 26.7 % of STM publishers and one ethical body, while a large deviation was observed in one ethical body and 20 % of STM publishers. Two STM publishers (13.3 %) did not report any information about these policies. Even though in practice editors and publishers may deviate from these written definitions when dealing with case-by-case issues, we believe it is essential that the wording be standardized across these entities, so that it can serve as a consistent guide for authors and editors. COPE and these entities also have the responsibility of making it clear that these definitions are merely suggestions and that their application may be subject to subjective interpretation. Professionals in environmental fields engage with complex problems that involve stakeholders with different values, different forms of knowledge, and contentious decisions. There is increasing recognition of the need to train graduate students in interdisciplinary environmental science programs (IESPs) in these issues, which we refer to as "social ethics."
A literature review revealed topics and skills that should be included in such training, as well as potential challenges and barriers. From this review, we developed an online survey, which we administered to faculty from 81 United States colleges and universities offering IESPs (480 surveys were completed). Respondents overwhelmingly agreed that IESPs should address values in applying science to policy and management decisions. They also agreed that programs should engage students with issues related to norms of scientific practice. Agreement was slightly weaker that IESPs should train students in skills related to managing value conflicts among different stakeholders. The primary challenges to incorporating social ethics into the curriculum were related to the lack of materials and expertise for delivery, though challenges such as ethics being marginalized in relation to environmental science content were also prominent. Challenges related to students' interest in ethics were considered less problematic. Respondents believed that social ethics are most effectively delivered when incorporated into existing courses, and they preferred case studies or problem-based learning for delivery. Student competence is generally not assessed, and respondents recognized a need for both curricular materials and assessment tools. This paper provides an empirically informed perspective on the notion of responsibility using an ethical framework that has received little attention in the engineering-related literature to date: the ethics of care. In this work, we ground conceptual explorations of engineering responsibility in empirical findings from engineering students' writing on the human health and environmental impacts of "backyard" electronic waste recycling/disposal.
Our findings, from a purposefully diverse sample of engineering students in an introductory electrical engineering course, indicate that most of these engineers of tomorrow associated engineers with responsibility for the electronic waste (e-waste) problem in some way. However, a number of responses suggested attempts to deflect responsibility away from engineers towards, for example, the government or the companies for whom engineers work. Still other students associated both engineers and non-engineers with responsibility, demonstrating the distributed/collective nature of responsibility that will be required to achieve a solution to the global problem of excessive e-waste. Building upon one element of a framework for care ethics adopted from the wider literature, these empirical findings are used to facilitate a preliminary conceptual exploration of care-ethical responsibility within the context of engineering and e-waste recycling/disposal. The objective of this exploration is to provide a first step toward understanding how care-ethical responsibility applies to engineering. We also hope to seed dialogue within the engineering community about its ethical responsibilities on the issue. We conclude the paper with a discussion of its implications for engineering education and engineering ethics, suggesting changes to educational policy and the practice of engineering. The libration points in the Sun-Earth and Earth-Moon systems have potential applications in observing solar activity (EL1), space-based observatories and infrared astronomy (EL2), half-way space stations for lunar exploration (LL1) and communications for satellites on the Moon's far side (LL2). In contrast to the traditional design methodology for libration point missions, new attitude-pointing schemes are proposed in this paper to guide the design of spacecraft configurations, based on the idea of removing rotating components as much as possible.
The fixed installation of the transmission antenna, radiating surface and solar array qualifies SSO satellite platforms for the EL1- or EL2-point missions, given further advances in fixing the array and removing the battery. The fixed transmission antenna and zero-incidence-angle array qualify GEO platforms for the LL1- or LL2-point missions, given improvements that eliminate the worst-case incidence angle of ±23°26′ and the umbra/penumbra at the vernal equinox. The structure of the attitude determination and control system, the analytic algorithm used to derive the orbital frame, and the performance of this control system are also addressed. (C) 2017 Elsevier Masson SAS. All rights reserved. Control Moment Gyros (CMGs) have many advantages over other actuators for attitude control of a spacecraft. Compared with Single Gimbal Control Moment Gyros (SGCMGs), Variable Speed Control Moment Gyros (VSCMGs) can be easily implemented. VSCMGs have great advantages in performing complex attitude control missions and in easing power shortages for the entire spacecraft. In this paper, the singularity-avoidance characteristic of the Weighted Pseudo-inverse with Null Motion (WPINM) steering law is first analyzed using the inner-product method. A new steering law is then designed based on optimization theory, which can satisfy the requirements of attitude control and energy storage. Simulation results show that the new steering law avoids singular points. (C) 2017 Elsevier Masson SAS. All rights reserved. Vortex generators (VGs) are usually employed to improve aerodynamic performance in both space- and energy-related applications, such as aircraft and wind turbine blades. Blade sections close to the hub present poor aerodynamic performance, allowing lift to decay under critical conditions. One way to overcome this drawback is the use of VGs, which avoid or delay boundary layer separation.
The main goal of this work is to characterize the size of the primary vortex generated by a single VG on a flat plate through Computational Fluid Dynamics simulations using the OpenFOAM code. This is performed by assessing the half-life radius of the vortex and comparing it with experimental results. In addition, a prediction model based on two elementary parameters has been developed to describe in a simple way the evolution of the size of the primary vortex downstream of the vane for four different incidence angles. (C) 2017 Elsevier Masson SAS. All rights reserved. The aim of the paper is to establish and demonstrate the capability and performance of a predictive numerical environment developed for multidisciplinary design optimization (MDO) purposes, which offers a further contribution to fluid-structure interaction (FSI) numerical modeling. Numerical modeling of fluid-structure interaction was conducted through closely coupled aerodynamic and structural computational domains, with very good overall computational reliability and accuracy. Various available experimental results, obtained mostly for calibration purposes, were used for computational fluid dynamics (CFD) and computational structural mechanics (CSM) validation and verification, in order to ensure that numerical optimization could be carried out with acceptable accuracy. The numerical optimization procedure was applied to the fin configuration of a short-range ballistic missile, which was developed for scientific and internal experimental, testing and calibration purposes at the Military Technical Institute (VTI) in Belgrade. The proposed monolithic, multimodular numerical environment, based on commercial codes and with adopted multipoint regimes and multicriteria settings, was used for aerodynamic-structural optimization.
The multidisciplinary aerodynamic shape optimization, with respect to predefined objectives and constraints, was carried out in order to achieve a global improvement of the initial aerodynamic-structural responses of the mentioned configuration. The multidisciplinary feasible method proposed in this paper is a single-level method driven by an embedded surrogate-based evolutionary optimizer. The developed algorithm enabled an increased number of feasible optimal fin geometries, while its special feature was the overall improvement of the initial missile geometry, with decreased costs of experimental and numerical resources. A special challenge of this research was to overcome the scaling between the available wind-tunnel missile model geometry, used for aerodynamic experiments, and the real fin geometry model, used for static structural experiments. (C) 2017 Elsevier Masson SAS. All rights reserved. This study aims to improve the accuracy and efficiency of thermal and mechanical response analysis in functionally graded cylinders, which is very important in the modern aerospace industry. The cell-based smoothed radial point interpolation method (CS-RPIM) is formulated for such analysis. In CS-RPIM, triangular meshes, which can be easily generated, are utilized to discretize problem domains. Each triangular element is then partitioned into several smoothing cells. Field functions are constructed by RPIM shape functions and system equations are obtained based on these smoothing cells. Finally, the performance of CS-RPIM is fully investigated through several numerical examples. (C) 2017 Elsevier Masson SAS. All rights reserved. This paper aims to develop a high-altitude solar-powered hybrid airship, which combines aerodynamic lift with buoyancy force and cruises continuously. A multi-lobed configuration is employed, and solar irradiance and photovoltaic array models are proposed.
After introducing the component masses of the hybrid airship and the design procedure, an optimization problem with constraints on energy and on the equilibrium between buoyancy and weight is proposed. Through a hybrid optimization search method, the solution is obtained and the sensitivities to geographical location and season are discussed. The results show the combined influence of latitude and the wind-field environment on the design constraints. (C) 2017 Elsevier Masson SAS. All rights reserved. A method is presented to filter out errors in multidimensional databases. The method does not require any a priori information about the nature of the errors, which need not be small, random, or zero-mean. Instead, they are only required to be relatively uncorrelated with the clean information contained in the database. The method presented is based on an improved multidimensional extension by the authors (2016) [21] of a seminal gappy reconstruction method due to Everson and Sirovich (1995) [18], who developed a two-dimensional, SVD-based method able to reconstruct lost information at known database positions. The improved gappy reconstruction method is evolved in this paper into a two-step error filtering method, adapted to first (a) identify the error locations in the database and then (b) reconstruct the information at these locations by treating the associated data as gappy data. The resulting method filters out O(1) errors in an efficient fashion, for both random and systematic errors. The method also performs well both when errors are concentrated and when they are spread across the database. The method is illustrated and tested on several toy-model and aerodynamic databases, obtained by discretizing a transcendental function and by CFD-calculating the pressure on the surface of a wing for varying values of the angle of attack. (C) 2017 Elsevier Masson SAS. All rights reserved.
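The gappy reconstruction idea underlying this kind of error filtering can be illustrated with a minimal numpy sketch: once the error locations are known, treat them as gaps and iteratively impute them with a low-rank SVD approximation built from the clean entries. This is only an illustration of the general principle, not the authors' exact two-step algorithm; the function name and parameters are invented for the example.

```python
import numpy as np

def gappy_svd_fill(D, mask, rank=3, n_iter=500, tol=1e-12):
    """Repair entries of D at positions where `mask` is True (the gappy
    locations) by iterating a rank-`rank` SVD approximation. Entries where
    `mask` is False are treated as clean and are never modified."""
    X = D.astype(float).copy()
    # initial guess for the gappy entries: per-column mean of the clean data
    clean_counts = np.maximum((~mask).sum(axis=0), 1)
    col_means = np.where(mask, 0.0, X).sum(axis=0) / clean_counts
    X[mask] = np.broadcast_to(col_means, X.shape)[mask]
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        step = np.max(np.abs(low_rank[mask] - X[mask]))
        X[mask] = low_rank[mask]  # update only the gappy entries
        if step < tol:
            break
    return X

# demo: corrupt three known entries of an exactly rank-2 matrix, then repair
rng = np.random.default_rng(0)
clean = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 8))
mask = np.zeros(clean.shape, dtype=bool)
mask[3, 1] = mask[10, 5] = mask[20, 2] = True
corrupted = clean.copy()
corrupted[mask] += 5.0  # O(1) errors at known locations
repaired = gappy_svd_fill(corrupted, mask, rank=2)
max_err = np.max(np.abs(repaired - clean))
```

Because the demo matrix is exactly rank 2 and only a few entries are gappy, the iteration recovers the corrupted entries essentially exactly; in the paper's setting, a separate first step must locate the error positions before this reconstruction step can be applied.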
This paper addresses both active and passive flutter suppression for highly flexible wings using piezoelectric transduction. An active aeroelastic formulation is used in the studies, featuring a geometrically nonlinear beam formulation coupled with 2-D unsteady aerodynamic equations. The piezoelectric effect is included in the dynamic nonlinear beam equations, allowing for aeroelastic studies of multifunctional wings for both piezoelectric energy harvesting and active actuation. In this study, active piezoelectric actuation is applied as the primary approach for flutter suppression, with energy harvesting, as a secondary passive approach, working concurrently to provide an additional damping effect on the wing vibration. The multifunctional system may also convert wing vibration energy into electric energy as an additional function. Moreover, a Linear Quadratic Gaussian controller is developed for the active control of wing limit-cycle oscillations due to the flutter instability. In the numerical studies, both the active and passive flutter suppression approaches are enabled for a highly flexible wing. The impact of the placement of the piezoelectric actuator and energy harvester on the wing flutter characteristics is explored. This paper presents a comprehensive approach to effectively suppress the aeroelastic instability of highly flexible piezoelectric wings, while harvesting the residual vibration energy. The active multifunctional wing technology explored in the paper has the potential to improve aircraft performance in terms of both aeroelastic stability and energy consumption. (C) 2017 Elsevier Masson SAS. All rights reserved. This paper presents a wing aerostructural optimization framework based on the Individual Discipline Feasible (IDF) architecture. Using the IDF architecture, the aerodynamic and structural disciplines are decoupled at the analysis level and the optimizer is responsible for the consistency of the design.
The SU2 CFD code is used for the aerodynamic analysis and the FEMWET software is used for the structural analysis. The SU2 code is modified so as to receive the structural deformation as input and compute the sensitivity of the outputs, e.g. drag, with respect to the deformation. An Airbus A320-type aircraft is used as a test case for the optimization. A reduction of the aircraft fuel weight of 11% is achieved. This reduction was attained by increasing the wing span, reducing the wing sweep, improving the lift distribution and improving airfoil shapes. (C) 2017 Elsevier Masson SAS. All rights reserved. Quickstep is a relatively new technique for aerospace composite processing. Thermoset resins (prepregs) have frequently been designed for the autoclave method, which requires low ramp-rate curing of 2-3 K min(-1). However, ramp rates of up to 15 K min(-1) have been achieved via Quickstep processing. This technique allows alteration of the chemo-rheology of the resin system and so influences the reaction progress. In this work, Fourier transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), and dynamic mechanical thermal analysis (DMTA) were used to monitor the cure progress of 977-2A epoxy resin and its carbon fiber reinforced composite. The curing reaction progress of 977-2A epoxy/carbon fiber was considered for the first time by comparing Quickstep processing and the autoclave method. According to the DSC results, the reaction progress in the Quickstep technique was comparable to that of autoclave curing. Moreover, DMTA of Quickstep-cured samples showed an increase in glass transition temperature (Tg) due to increased cross-linking density at greater hold time (upper cure temperature). FTIR was used to monitor the conversion of representative functional groups versus the applied Quickstep and autoclave curing steps.
The structural analysis showed that the Quickstep curing path for 977-2A resin differs from autoclave curing; however, the final cross-linked structure was similar to that of autoclave-cured samples. (C) 2017 Elsevier Masson SAS. All rights reserved. Nowadays, composite materials find wide application in several engineering fields, spanning from the automotive to the aerospace sector. In the latter, especially in civil air transport, severe fireproof requirements must be met, taking into account that the second most frequent cause of fatal accidents involving airplanes was post-impact fire/smoke, as reported by the European Aviation Safety Agency (EASA) in 2014. In light of this, experimental research is of crucial importance for understanding the thermal behavior of composites for aircraft components when exposed to high temperature and fire conditions. In this context, a thermal degradation study is carried out for two carbon-reinforced resins: the well-known thermosetting phenolic and a thermoplastic polyether-ketone-ketone (PEKK), recently developed specifically for this kind of application. The aim is to evaluate the behavior of PEKK and to understand the impact of the composite's nature on structural strength under fire. To this end, thermogravimetric analyses were performed for three different non-isothermal heating programs between 30 and 1000 degrees C. Under inert atmosphere a single global reaction is observed for carbon-PEKK between 500 and 700 degrees C, while two are observed for carbon-phenolic, whose pyrolysis begins around 200 degrees C. This greater PEKK stability is attributed to the ether and ketone bonds between the three aromatic groups of the monomer. As expected, under oxidative atmosphere the kinetic process becomes more complex, making it more difficult to detect single-step reactions, especially for carbon-phenolic.
Nevertheless, the oxidative process of carbon-PEKK appears to be driven by three consecutive global reactions. The activation energy is estimated by means of both integral (Starink) and differential (Friedman) isoconversional methods, as a function of the extent of conversion, over the identified reaction intervals. For carbon-PEKK in inert conditions, a mean value of 207.71 +/- 6.57 kJ/mol was estimated with Starink, and 213.88 +/- 20.04 kJ/mol with Friedman. This expected slight difference stems from the nature of the underlying mathematical approaches. The difficulty of activation energy estimation for polymeric materials favours the use of at least two different methods, allowing for the identification of an activation energy range for a resin for which no data are available in the literature. For the decomposition model evaluation, the so-called compensation effect method was implemented, as well as the single-step-based approach proposed by Friedman. The evaluation of a possible decomposition expression was achieved only for carbon-PEKK under inert conditions, since the considered methods are valid and applicable only for well-defined single-step reactions. In fact, the three reactions of the oxidative case cannot be considered single-step processes. Moreover, the larger difference in the estimated activation energy between Starink and Friedman suggests checking the results by implementing further isoconversional methods, to determine the most reliable one for degradation analysis of polymer-based carbon composites under oxidative atmosphere. However, the observed higher thermal performance of the PEKK resin, attributed to its chemical structure, increases the interest in its use as a matrix for aerospace composite materials that may be exposed to hazardous environments. (C) 2017 Elsevier Masson SAS. All rights reserved.
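The differential (Friedman) isoconversional method used above rests on the relation ln(dα/dt) at fixed conversion α being linear in 1/T with slope −Ea/R across experiments at different heating rates. A minimal numpy sketch, using synthetic single-step first-order kinetics (the rate parameters A, Ea and heating rates below are illustrative assumptions, not the paper's data):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def simulate(beta, A=1e15, Ea=200e3, T0=300.0, T1=1100.0, n=20_001):
    """First-order kinetics dalpha/dt = A exp(-Ea/RT)(1-alpha) under a
    constant heating rate beta (K/s); returns T, alpha, dalpha/dt."""
    T = np.linspace(T0, T1, n)
    k = A * np.exp(-Ea / (R * T))
    # exact solution under linear heating: alpha = 1 - exp(-(1/beta) * int k dT)
    I = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(T))))
    alpha = 1.0 - np.exp(-I / beta)
    return T, alpha, k * (1.0 - alpha)

def friedman_Ea(betas, alpha_star=0.5):
    """Friedman method: fit ln(dalpha/dt) vs 1/T at fixed conversion
    alpha_star over several heating rates; the slope is -Ea/R."""
    inv_T, log_rate = [], []
    for beta in betas:
        T, alpha, dadt = simulate(beta)
        i = int(np.searchsorted(alpha, alpha_star))  # first grid point past alpha*
        inv_T.append(1.0 / T[i])
        log_rate.append(np.log(dadt[i]))
    slope = np.polyfit(inv_T, log_rate, 1)[0]
    return -slope * R  # J/mol

# heating rates of 2, 5 and 10 K/min, expressed in K/s
Ea_est = friedman_Ea([2 / 60, 5 / 60, 10 / 60])
```

Since the synthetic data are truly single-step, the fitted slope recovers the assumed Ea of 200 kJ/mol almost exactly; for real multi-step (e.g. oxidative) decomposition, the same fit yields an apparent Ea that varies with conversion, which is why the paper restricts model evaluation to well-defined single-step intervals.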
The aim of this paper is to report the results of a fluid-structure interaction study of a lightly cambered blade in a cascade under various inflow conditions and structural parameters, in a cost-effective manner. This work can be viewed as a preliminary design tool that provides the behavioral trends of the blade motion, which will be useful when a detailed analysis has to be performed. The methodology employed to formulate the aerodynamic model for lightly cambered airfoils follows Whitehead's aerodynamic theory for a cascade of flat plates. The aerodynamic loads acting on a blade in a cascade of airfoils are computed to provide the required conditions for the structural model. The aeroelastic model thus formulated is employed to predict the structural response of a blade in a cascade subjected to both steady and unsteady flows. The possibilities of a blade undergoing pure bending, pure torsion, or coupled bending-torsion flutter are investigated in the present study. The utility of the developed aerodynamic model is demonstrated by investigating the structural behavior of three different blade curvature profiles: Double Circular Arc (DCA), NACA 65, and NACA a = 1.0 mean lines. The effects of various blade structural parameters, such as mechanical damping and center of gravity and elastic axis offsets, and cascade geometric parameters, such as stagger angle and blade spacing, on the blade aeroelastic response are analyzed. The flutter boundary for a range of frequency ratios is then examined. It has been found that compressor blades with frequency ratios close to unity are vulnerable to coupled bending-torsion flutter under an incoming steady flow. Growing torsional vibrations are in general registered at higher air velocities than the range of velocities favoring bending flutter. 
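The bending-torsion coupling underlying such flutter analyses can be illustrated with a minimal two-degree-of-freedom typical-section model, whose coupled natural frequencies follow from a generalized eigenvalue problem. All parameter values below are hypothetical and purely illustrative; this is not the authors' aeroelastic model:

```python
import numpy as np
from scipy.linalg import eigh

# Typical-section blade: plunge h and twist theta, coupled through the
# static unbalance S = m * x_cg (elastic axis to center-of-gravity offset).
m, I_t = 1.0, 0.05        # mass and torsional inertia per unit span
S = 0.08                  # static unbalance -> bending-torsion coupling
k_h, k_t = 4.0e3, 3.0e2   # bending and torsional stiffnesses

M = np.array([[m, S], [S, I_t]])
K = np.diag([k_h, k_t])

# Generalized eigenproblem K v = w^2 M v gives the coupled frequencies.
w2 = eigh(K, M, eigvals_only=True)
freqs = np.sqrt(w2)
print([round(f, 1) for f in freqs])
```

The inertial coupling spreads the two frequencies apart relative to the uncoupled bending and torsion values, which is why the bending-to-torsion frequency ratio is the key parameter scanned when locating the flutter boundary.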
The influence of different compressor unsteady flows (rotating stall and surge) on the aeroelastic behavior of a lightly cambered blade is also examined in the present study. (C) 2017 Elsevier Masson SAS. All rights reserved. An analysis scheme and a mission system model were applied to evaluate the military utility of efforts to reduce infrared signature in the conceptual design of survivable aircraft. The purpose is twofold: first, to contribute to the development of a methodological framework for assessing the military utility of spectral design, and second, to assess the threat from advances in LWIR sensors and their use in surface-to-air missile systems. The modeling was specifically applied to the problem of linking the emissivity of aircraft coatings to mission accomplishment. The overall results indicate that the analysis scheme and mission system model applied are feasible for assessing the military utility of spectral design and for supporting decision-making in the concept phase. The analysis of different strike options suggests that LWIR sensors will enhance the military utility of low-emissivity paint, at least for missions executed in clear weather conditions. Furthermore, the results corroborate and further clarify the importance of including earthshine in the modeling. (C) 2017 Elsevier Masson SAS. All rights reserved. This paper presents the results of a static analysis of reinforced thin-walled tapered structures using refined one-dimensional models. The structural model is based on a one-dimensional formulation derived from the Carrera Unified Formulation. This formulation provides a quasi-three-dimensional solution, thanks to the use of polynomial expansions to describe the displacement field over the cross-section. Depending on which type of expansion is used, various classes of refined one-dimensional elements are obtained. Lagrange expansions were used in this work. 
The use of these models allows each structural component to be considered separately; this methodology is called the component-wise approach. After an initial assessment of the structural model, different kinds of aeronautical structures of gradually increasing complexity have been studied. The stress and displacement fields have been obtained, and the results have been compared with those obtained using commercial tools; three- and two-dimensional models have been used for comparison purposes. The results show the capability of the present advanced one-dimensional models to achieve accurate results while avoiding high computational costs. (C) 2017 Elsevier Masson SAS. All rights reserved. User Experience (UX) design has become an important factor in product success. One of the important issues involved in UX design is how to evaluate UX. In this research, UX evaluation is carried out quantitatively using cumulative prospect theory, in which UX is perceived from the perspective of the decision-making procedure over two alternative design profiles. Furthermore, we study the influence of affective states on UX prospect evaluation by shaping the affective parameters involved in UX design. To account for multiple sources of uncertainty, we develop a hierarchical Bayesian model, estimated via the Markov chain Monte Carlo technique, for parameter estimation under three affective states. Aircraft cabin interior design is studied as a case study to demonstrate the potential and feasibility of the proposed method. (C) 2017 Elsevier Ltd. All rights reserved. We propose a new differentially-private decision forest algorithm that minimizes both the number of queries required and the sensitivity of those queries. To do so, we build an ensemble of random decision trees that avoids querying the private data except to find the majority class label in the leaf nodes. 
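Releasing a leaf's majority class label privately can be done with the standard Exponential Mechanism, scoring each label by its count (a count has sensitivity 1, so selection probabilities are proportional to exp(eps * count / 2)). A minimal sketch with hypothetical leaf contents, not the authors' implementation:

```python
import numpy as np

def exponential_mechanism_label(counts, epsilon, rng):
    """Privately release the (approximate) majority class of a leaf node.
    Utility of each label is its count; count queries have sensitivity 1."""
    labels = list(counts)
    scores = np.array([counts[l] for l in labels], dtype=float)
    # Subtract the max score before exponentiating for numerical stability.
    weights = np.exp(epsilon * (scores - scores.max()) / 2.0)
    probs = weights / weights.sum()
    return rng.choice(labels, p=probs)

rng = np.random.default_rng(0)
leaf_counts = {"benign": 40, "malignant": 3}   # hypothetical leaf contents
picks = [exponential_mechanism_label(leaf_counts, epsilon=1.0, rng=rng)
         for _ in range(1000)]
print(picks.count("benign") / 1000)   # close to 1: majority label dominates
```

Because only a single label is released per leaf rather than a full noisy count vector, far less noise is needed to satisfy the same privacy budget.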
Rather than using a count query to return the class counts, as the current state-of-the-art does, we use the Exponential Mechanism to output only the class label itself. This drastically reduces the sensitivity of the query - often by several orders of magnitude - which in turn reduces the amount of noise that must be added to preserve privacy. Our improved sensitivity is achieved by using "smooth sensitivity", which takes into account the specific data used in the query rather than assuming the worst-case scenario. We also extend work done on the optimal depth of random decision trees to handle continuous features, not just discrete features. This, along with several other improvements, allows us to create a differentially private decision forest with substantially higher predictive power than the current state-of-the-art. (C) 2017 Elsevier Ltd. All rights reserved. Estimation of Distribution Algorithms (EDAs) are evolutionary algorithms with relevant performance in handling complex problems. Nevertheless, their efficiency and effectiveness directly depend on how accurate the deployed probabilistic models are, which in turn depends on the methods of model building. Although the best models found in the literature are often built by computationally complex methods, whose corresponding EDAs require long running times, these methods may evaluate fewer points in the search space. In order to find a better trade-off between running time (efficiency) and the number of evaluated points (effectiveness), this work uses probabilistic models built by algorithms of phylogenetic reconstruction, since some of them are able to efficiently produce accurate models. Then, an EDA, namely Optimization based on Phylogram Analysis, and a new search technique, namely Composed Exhaustive Search, are developed and proposed to find solutions for combinatorial optimization problems with different levels of difficulty. 
Experimental results show that the proposed new EDA features an interesting trade-off between running time and number of evaluated points, attaining solutions near the best results found in the literature for each of these performance measures. (C) 2017 Elsevier Ltd. All rights reserved. Supplier selection and inventory planning are critical and challenging tasks in Supply Chain Management. There are many studies on both topics, and many solution techniques have been proposed that deal with each problem separately. In this study, we present a two-stage integrated approach to supplier selection and inventory planning. In the first stage, suppliers are ranked based on various criteria, including cost, delivery, service, and product quality, using Interval Type-2 Fuzzy Sets (IT2FSs). In the following stage, an inventory model is created, and a Multi-objective Evolutionary Algorithm (MOEA) is utilised to simultaneously minimise the conflicting objectives of supply chain operation cost and supplier risk. We evaluated the performance of three MOEAs with tuned parameter settings, namely NSGA-II, SPEA2, and IBEA, on a total of twenty-four synthetic and real-world problem instances. The empirical results show that, overall, NSGA-II is the best-performing MOEA, producing high-quality trade-off solutions to the integrated problem of supplier selection and inventory planning. (C) 2017 Elsevier Ltd. All rights reserved. Nonnegative matrix factorization has been widely used in co-clustering tasks, which group data points and features simultaneously. In recent years, several proposed co-clustering algorithms have shown their superiority over traditional one-side clustering, especially in text clustering and gene expression. Due to the NP-completeness of co-clustering problems, most existing methods relax the orthogonality constraint to nonnegativity, which often deteriorates performance and robustness as a result. 
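The nonnegativity-only tri-factorization baseline that this line of work builds on, X ~ F S G^T, can be sketched with the standard multiplicative updates. A minimal illustration on toy block-structured data, not any specific paper's algorithm:

```python
import numpy as np

def tri_nmf(X, k_rows, k_cols, iters=1000, seed=0, eps=1e-9):
    """Co-cluster X >= 0 as X ~ F S G^T with multiplicative updates
    (plain nonnegativity only, i.e. the relaxed-orthogonality baseline)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.random((n, k_rows))
    S = rng.random((k_rows, k_cols))
    G = rng.random((m, k_cols))
    for _ in range(iters):
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
    return F, S, G

# Two obvious row/column blocks: the factorization should reconstruct X
# closely, with F and G acting as row- and column-cluster indicators.
X = np.array([[5, 5, 0, 0], [5, 5, 0, 0],
              [0, 0, 3, 3], [0, 0, 3, 3]], float)
F, S, G = tri_nmf(X, 2, 2)
err = np.linalg.norm(X - F @ S @ G.T) / np.linalg.norm(X)
print(round(err, 3))   # small: the rank-2 block structure is recovered
```

Under nonnegativity alone, F and G approximate soft cluster indicators; orthogonality penalties of the kind discussed next push them toward hard, non-overlapping assignments.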
In this paper, penalized nonnegative matrix tri-factorization is proposed for co-clustering problems, where three penalty terms are introduced to guarantee the near orthogonality of the clustering indicator matrices. An iterative updating algorithm is proposed and its convergence is proved. Furthermore, a high-order nonnegative matrix tri-factorization technique is provided for symmetric co-clustering tasks, and a corresponding algorithm with proved convergence is also developed. Finally, extensive experiments on six real-world datasets demonstrate that the proposed algorithms outperform the compared state-of-the-art co-clustering methods. (C) 2017 Elsevier Ltd. All rights reserved. In order to increase performance in palmprint recognition systems, various devices are normally used to restrict the movement of the hand. These can cause problems, especially for users with physical disabilities. They also cause significant hygiene problems in multi-user systems. Recently, studies on palmprint recognition systems have progressed towards the development of unconstrained, contactless techniques with unrestricted backgrounds. The most common problem encountered in these studies is the misalignment arising from the free movement of the hand. Although 3D hand-acquisition devices offer extra recognition features to overcome this problem, their applicability is low because of their increased cost. In this study, a stereo camera was proposed instead. Although matching problems make it difficult to achieve precise, distinct feature extraction in the unrestricted 3D environment used for palmprint recognition, the orientation of the hand in 3D space can be determined by obtaining depth information. In this study, the depth information was extracted using the binocular stereo approach. First, the orientation of the hand was estimated by fitting a surface model associated with the eigenvectors of the depth information. 
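Such eigenvector-based orientation estimation amounts to fitting a plane to the depth point cloud via PCA: the eigenvector of the covariance matrix with the smallest eigenvalue is the surface normal. A minimal sketch on a synthetic, hypothetical depth patch, not the authors' surface model:

```python
import numpy as np

def surface_orientation(points):
    """Fit a plane to a 3-D point cloud: the eigenvector of the covariance
    matrix with the smallest eigenvalue is the surface normal."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    return normal / np.linalg.norm(normal)

# Hypothetical depth map of a tilted planar patch (z = 0.5*x) plus noise.
rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, (2, 500))
z = 0.5 * x + rng.normal(0, 0.01, 500)
pts = np.column_stack([x, y, z])

n = surface_orientation(pts)
true_n = np.array([-0.5, 0.0, 1.0]) / np.sqrt(1.25)
print(round(abs(n @ true_n), 3))   # close to 1: the tilt is recovered
```

The recovered normal gives the pose of the palm relative to the camera, which is exactly the quantity needed for the pose correction step that follows.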
Pose correction was then accomplished by establishing a relationship between the orientation and the images. The pose correction greatly relieved the perspective distortion that usually occurs across the various poses of the hand. Next, the region of interest was determined by performing segmentation on the corrected images using the Active Appearance Model (AAM). The palmprint features were then extracted via Gabor-based Kernel Fisher Discriminant Analysis. In order to demonstrate the performance of the proposed approach, a new dataset was compiled from stereo images within various scenarios collected from 138 different individuals. In these experimental studies, the EER values, especially on images captured from different hand orientations in 3D, were reduced from around 14% to 0.75%. With the help of the suggested approach, the palmprint recognition system was transformed into a more portable form by removing the closed-box mechanisms and the equipment restricting movement of the hand. This system can automatically perform pose estimation, hand segmentation, and recognition without any special intervention. (C) 2017 Elsevier Ltd. All rights reserved. In Wireless Sensor Networks (WSNs), addressing the trade-off between cost-effective energy efficiency and reliable data delivery is a relevant issue. This paper presents a novel bio-inspired routing protocol, named CB-RACO, that combines the Ant Colony Optimization (ACO) meta-heuristic with Label Propagation (LP), a computationally cheap and distributed community detection technique. CB-RACO creates communities in the WSN and balances energy consumption by routing data inside communities through swarm intelligence. As a consequence, CB-RACO demands little memory and overhead for the construction and maintenance of routing paths. Additionally, CB-RACO achieves high data delivery reliability through a data retransmission strategy based on acknowledgments between communities. 
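The Label Propagation technique on which such community construction rests can be sketched in a few lines: every node repeatedly adopts the most frequent label among its neighbours until labels stop changing. This is a deterministic-order variant for reproducibility (real LPA visits nodes in random order), and the sensor graph is hypothetical:

```python
from collections import Counter

def label_propagation(adj, max_sweeps=50):
    """Each node adopts the most frequent label among its neighbours;
    communities emerge once a full sweep changes no labels. Ties keep the
    current label when possible, otherwise take the largest candidate."""
    labels = {v: v for v in adj}
    for _ in range(max_sweeps):
        changed = False
        for v in sorted(adj):
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            candidates = [l for l, c in counts.items() if c == best]
            if labels[v] not in candidates:
                labels[v] = max(candidates)
                changed = True
        if not changed:
            break
    return labels

# Two hypothetical 4-node sensor clusters joined by a single bridge (3 - 4).
clique = lambda nodes: {v: [u for u in nodes if u != v] for v in nodes}
adj = {**clique([0, 1, 2, 3]), **clique([4, 5, 6, 7])}
adj[3] = adj[3] + [4]
adj[4] = adj[4] + [3]

labels = label_propagation(adj)
communities = {frozenset(v for v in adj if labels[v] == l)
               for l in labels.values()}
print(sorted(sorted(c) for c in communities))   # → [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Because each node needs only its neighbours' labels, the procedure is cheap and naturally distributed, which is why it suits resource-constrained sensor nodes.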
We simulated CB-RACO in large-scale scenarios and evaluated it on goodput, delivery delay, and energy consumption metrics. The results show that the proposed approach can provide significant improvement over ant-based strategies that do not rely on community structures. (C) 2017 Elsevier Ltd. All rights reserved. The Automatic Identification System (AIS) is a ship reporting system based on messages broadcast by vessels carrying an AIS transponder. The recent growth of terrestrial networks and satellite constellations of receivers is making AIS one of the main sources of information for Maritime Situational Awareness activities. Nevertheless, AIS is subject to reliability and manipulation issues; indeed, the received reports can be unintentionally incorrect, jammed, or deliberately spoofed. Moreover, the system can be switched off to cover illicit operations, causing an interruption of AIS reception. This paper addresses the problem of detecting whether a shortage of AIS messages represents an alerting situation or not, by exploiting the Received Signal Strength Indicator available at the AIS Base Stations (BS). In designing such an anomaly detector, the electromagnetic propagation conditions that characterize the channel between ship AIS transponders and BS have to be taken into consideration. The first part of this work is thus focused on the experimental investigation and characterisation of coverage patterns extracted from real historical AIS data. In addition, the paper proposes an anomaly detection algorithm to identify intentional AIS on-off switching. The presented methodology is then illustrated and assessed on a real-world dataset. (C) 2017 The Authors. Published by Elsevier Ltd. In recent years, the opinion summarization task has gained much importance because of the large amount of online information and the increasing interest in learning users' evaluations of products, services, companies, and people. 
Although there are many works in this area, there is room for improvement, as the results are still far from ideal. In this paper, we present our investigations into generating extractive and abstractive summaries of opinions. We study some well-known methods in the area and compare them. Besides using these methods, we also develop new methods that combine the main advantages of the previous ones. We evaluate them according to three traditional summarization evaluation measures: informativeness, linguistic quality, and utility of the summary. We show that we produce interesting results and that our methods outperform some methods from the literature. (C) 2017 Elsevier Ltd. All rights reserved. Large graphs are ubiquitous and scale-free, with irregular relationships. Clustering is used to find similar patterns existing in graphs and thus helps in gaining useful insights. In the real world, nodes may belong to more than one cluster; thus, it is essential to analyze the fuzzy cluster membership of nodes. Traditional centralized fuzzy clustering algorithms incur high communication cost and produce poor-quality clusters when used for large graphs. Thus, scalable solutions are obligatory to handle huge amounts of data in less computational time with minimum disk access. In this paper, we propose a parallel fuzzy clustering algorithm named PGFC for handling scalable graph data. From the viewpoint of expert systems, it is advantageous to develop a clustering algorithm that can assure scalability along with better cluster quality for handling large graphs. The algorithm is parallelized using the bulk synchronous parallel (BSP) based Pregel model. The cluster centers are initialized using a degree centrality measure, resulting in fewer iterations. The performance of PGFC is compared with other state-of-the-art clustering algorithms using synthetic graphs and real-world networks. 
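The degree-centrality initialization and fuzzy membership ideas can be illustrated on a toy graph: pick the highest-degree nodes as centres and assign fuzzy c-means-style memberships from hop distances. This is a minimal serial sketch with a hypothetical graph; PGFC itself runs in parallel on the Pregel model, which is not reproduced here:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src to every node (unweighted BFS)."""
    dist, q = {src: 0}, deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def fuzzy_graph_memberships(adj, k=2, m=2.0):
    """Take the k highest-degree nodes as centres (degree-centrality
    initialisation) and compute fuzzy c-means-style memberships from hop
    distances: u[v][c] proportional to d(v,c)**(-2/(m-1)), normalised."""
    centers = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]
    dists = {c: bfs_dist(adj, c) for c in centers}
    member = {}
    for v in adj:
        d = [dists[c][v] for c in centers]
        if 0 in d:                      # v is itself a centre
            member[v] = [1.0 if x == 0 else 0.0 for x in d]
        else:
            inv = [x ** (-2.0 / (m - 1)) for x in d]
            member[v] = [w / sum(inv) for w in inv]
    return centers, member

# Hypothetical toy graph: two hubs (1 and 4) with leaves, joined by one edge.
adj = {0: [1], 1: [0, 2, 4], 2: [1], 3: [4], 4: [1, 3, 5], 5: [4]}
centers, member = fuzzy_graph_memberships(adj)
print(centers, {v: [round(u, 2) for u in member[v]] for v in adj})
# → hubs 1 and 4 become centres; each leaf leans 0.8/0.2 toward its own hub
```

Starting from high-degree centres places the initial prototypes inside dense regions, which is what cuts the number of refinement iterations.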
The experimental results reveal that the proposed PGFC scales up linearly to handle large graphs and produces better-quality clusters than its graph clustering counterparts. (C) 2017 Elsevier Ltd. All rights reserved. We introduce from first principles an analysis of the information content of multivariate distributions as information sources. Specifically, we generalize a balance equation and a visualization device, the Entropy Triangle, to multivariate distributions and find notable differences with similar analyses done on joint distributions as models of information channels. As an example application, we extend a framework for the analysis of classifiers to also encompass the analysis of data sets. With such tools we analyze a handful of UCI machine learning tasks to start addressing the question of how well datasets convey the information they are supposed to capture about the phenomena they stand for. (C) 2017 Elsevier Ltd. All rights reserved. Algorithms for retinal vessel segmentation are powerful tools in automatic tracking systems for early detection of ophthalmological and cardiovascular diseases, and for biometric identification. In order to create more robust and reliable systems, the algorithms need to be accurately evaluated to certify their ability to emulate specific human expertise. The main contribution of this paper is an unsupervised method to detect blood vessels in fundus images using a coarse-to-fine approach. Our methodology combines Gaussian smoothing, a morphological top-hat operator, and vessel contrast enhancement for background homogenization and noise reduction. Statistics of spatial dependency and probability are used to coarsely approximate the vessel map with an adaptive local thresholding scheme. The coarse segmentation is then refined through curvature analysis and morphological reconstruction to reduce pixel mislabeling and better estimate the retinal vessel tree. 
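The coarse stage (smoothing, top-hat enhancement, adaptive local thresholding) can be sketched on a synthetic image. This is an illustrative pipeline with hypothetical parameters and a toy "vessel", not the authors' exact operators:

```python
import numpy as np
from scipy import ndimage as ndi

def coarse_vessel_map(img, sigma=1.0, tophat_size=7, win=15, offset=0.02):
    """Coarse segmentation sketch: smooth, enhance thin dark structures with
    a black top-hat, then threshold against a local mean (adaptive)."""
    smoothed = ndi.gaussian_filter(img, sigma)
    # Black top-hat: closing minus image highlights dark, vessel-like detail.
    enhanced = ndi.grey_closing(smoothed, size=tophat_size) - smoothed
    local_mean = ndi.uniform_filter(enhanced, size=win)
    return enhanced > local_mean + offset

# Synthetic fundus-like patch: bright background, one dark "vessel" band.
img = np.full((64, 64), 0.8)
img[30:33, :] = 0.3
mask = coarse_vessel_map(img)
recall = mask[30:33, :].mean()      # fraction of vessel pixels detected
fallout = mask[:20, :].mean()       # false positives far from the vessel
print(round(recall, 2), round(fallout, 2))   # high recall, ~0 false positives
```

Thresholding against a local mean rather than a single global value is what lets the coarse map survive the uneven illumination typical of fundus images.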
The method was evaluated in terms of its sensitivity, specificity, and balanced accuracy. Extensive experiments were conducted on the DRIVE and STARE public retinal image databases. Comparisons with state-of-the-art methods revealed that our method outperformed most recent methods in terms of sensitivity and balanced accuracy, with averages of 0.7819 and 0.8702, respectively. The proposed method also outperformed state-of-the-art methods when evaluating only pathological images, which is a more challenging task; for this set of images, the method achieved averages of 0.7842 and 0.8662 for sensitivity and balanced accuracy, respectively. Visual inspection also revealed that the proposed approach effectively addressed the main image distortions by reducing mislabeling of central vessel reflex regions and false-positive detection of pathological patterns. These improvements indicate the ability of the method to accurately approximate the vessel tree with reduced visual interference from pathological patterns and vessel-like structures. Therefore, our method has the potential to support expert systems in the screening, diagnosis, and treatment of ophthalmological diseases, and furthermore personal recognition based on retinal profile matching. (C) 2017 Elsevier Ltd. All rights reserved. In the evidential reasoning approach of decision theory, different evidence weights can generate different combined results; consequently, evidence weights can significantly influence solutions. In terms of the "psychology of economic man," decision-makers may tend to seek similar pieces of evidence to support their own evidence and thereby form alliances. In this paper, we extend the concept of evidential reasoning (ER) to evidential reasoning based on alliances (ERBA) to obtain the weights of evidence. In the main concept of ERBA, pieces of evidence that are easy for decision-makers to negotiate are classified in the same group, or "alliance." 
On the other hand, if the pieces of evidence are not easy to negotiate, they are classified into different alliances. In this study, two negotiation optimization models were developed to provide relative importance weights based on intra- and inter-alliance evidence features. The proposed models enable weighted evidence to be combined using the ER rule. Experimental results showed that the proposed approach is rational and effective. (C) 2017 Elsevier Ltd. All rights reserved. Visual tracking methods are mostly based on single-stage state estimation, which offers only limited precision in localizing a target under dynamic conditions such as occlusion, object deformation, rotation, scaling, and cluttered backgrounds. In order to address these issues, we introduce a novel multi-stage coarse-to-fine tracking framework with quick adaptation to environment dynamics. The key idea of our work is to propose a two-stage estimation of the object state and to develop an adaptive fusion model. A coarse estimate of the object state is obtained using optical flow, and multiple fragments are generated around this approximation. Precise localization of the object is obtained by evaluating these fragments using three complementary cues. Adaptation of the proposed tracker to dynamic environment changes is quick due to the incorporation of context-sensitive cue reliability, which enables its direct application in the development of expert systems for video surveillance. In addition, the proposed framework caters to object rotation and scaling through a random walk state model and rotation-invariant features. The proposed tracker is evaluated over eight benchmark color video sequences, and competitive results are obtained: averaged over the outcomes, we achieve a mean center location error of 6.791 pixels and an F-measure of 0.78. The results demonstrate that the proposed tracker not only outperforms various state-of-the-art trackers but also copes effectively with various dynamic environments. 
(C) 2017 Elsevier Ltd. All rights reserved. Credit scoring is an effective tool for banks to make properly guided, profitable decisions on granting loans. Ensemble methods, which according to their structure can be divided into parallel and sequential ensembles, have recently been developed in the credit scoring domain. These methods have proven their superiority in discriminating borrowers accurately. However, among the ensemble models, little consideration has been given to the following: (1) the hyper-parameter tuning of the base learner, despite its being critical to well-performed ensemble models; (2) building sequential models (i.e., boosting, as most work has focused on developing the same or different algorithms in parallel); and (3) the comprehensibility of models. This paper proposes a sequential ensemble credit scoring model based on a variant of the gradient boosting machine, namely extreme gradient boosting (XGBoost). The model comprises three main steps. First, data pre-processing is employed to scale the data and handle missing values. Second, a model-based feature selection system based on relative feature importance scores is used to remove redundant variables. Third, the hyper-parameters of XGBoost are adaptively tuned with Bayesian hyper-parameter optimization, and the model is trained with the selected feature subset. Several hyper-parameter optimization methods and baseline classifiers are considered as reference points in the experiment. Results demonstrate that Bayesian hyper-parameter optimization performs better than random search, grid search, and manual search. Moreover, the proposed model outperforms baseline models on average over four evaluation measures: accuracy, error rate, the area under the curve H measure (AUC-H measure), and Brier score. The proposed model also provides feature importance scores and a decision chart, which enhance the interpretability of the credit scoring model. (C) 2017 Elsevier Ltd. 
All rights reserved. The main tasks in Example-based Machine Translation (EBMT) comprise source text decomposition, followed by translation example matching and selection, and finally adaptation and recombination of the target translation. As natural language is inherently ambiguous, preserving the source text's meaning throughout these processes is complex and challenging. A structural semantics is introduced as a step towards a meaning-based approach to improve the EBMT system. The structural semantics is used to support deeper semantic similarity measurement and to impose structural constraints on the selection of translation examples. A semantic compositional structure is derived from the structural semantics of the selected translation examples. This semantic compositional structure serves as a representation that preserves the consistency and integrity of the input sentence's meaning structure throughout the recombination process. In this paper, an English-to-Malay EBMT system is presented to demonstrate the practical application of this structural semantics. Evaluation of the translation test results shows that the new translation framework based on the structural semantics outperforms the previous EBMT framework. (C) 2017 Elsevier Ltd. All rights reserved. In this paper, a novel and efficient system is proposed to capture the evolution of human movement for complex action recognition. First, camera movement compensation is introduced to extract foreground object movement. Second, a mid-level feature representation called the trajectory sheaf is proposed to capture the temporal structural information among low-level trajectory features based on key frame selection. Third, the final video representation is obtained by training a sorting model with each key frame in the video clip. Finally, a hierarchical version of the video representation is proposed to describe the entire video at a higher level. 
Experimental results demonstrate that the proposed method achieves state-of-the-art performance on UCF Sports, and comparable results on several challenging benchmarks such as the Hollywood2 and HMDB51 datasets. (C) 2017 Elsevier Ltd. All rights reserved. The analysis of travel mode choice is an important task in transportation planning and policy making in order to understand and predict travel demand. While advances in machine learning have led to numerous powerful classifiers, their usefulness for modeling travel mode choice remains largely unexplored. Using extensive Dutch travel diary data from the years 2010 to 2012, enriched with variables on the built and natural environment as well as on weather conditions, this study compares the predictive performance of seven selected machine learning classifiers for travel mode choice analysis and makes recommendations for model selection. In addition, it addresses the importance of different variables and how they relate to different travel modes. The results show that random forest performs significantly better than any of the other investigated classifiers, including the commonly used multinomial logit model. While trip distance is found to be the most important variable, the importance of the other variables varies with classifiers and travel modes. The importance of the meteorological variables is highest for the support vector machine, while temperature is particularly important for predicting bicycle and public transport trips. The results suggest that analyzing variable importance with respect to the different classifiers and travel modes is essential for a better understanding and effective modeling of people's travel behavior. (C) 2017 Elsevier Ltd. All rights reserved. In medical information systems, the data that describe patient health records are often time-stamped. These data are liable to complexities such as missing data, observations at irregular time intervals, and large attribute sets. 
Due to these complexities, mining clinical time-series data remains a challenging area of research. This paper proposes a bio-statistical mining framework, named statistical tolerance rough set induced decision tree (STRiD), which handles these complexities and builds an effective classification model. The constructed model is used to develop a clinical decision support system (CDSS) to assist the physician in clinical diagnosis. The STRiD framework provides the following functionalities: temporal pre-processing, attribute selection, and classification. In temporal pre-processing, an enhanced fuzzy-inference-based double exponential smoothing method is presented to impute missing values and to derive the temporal patterns for each attribute. In attribute selection, relevant attributes are selected using the tolerance rough set. A classification model is then constructed with the selected attributes using a temporal-pattern-induced decision tree classifier. For experimentation, this work uses clinical time-series datasets of hepatitis and thrombosis patients. The constructed classification model has proven the effectiveness of the proposed framework, with a classification accuracy of 91.5% for hepatitis and 90.65% for thrombosis. (C) 2017 Elsevier Ltd. All rights reserved. This paper presents a decision support system (DSS) called DSScreening to rapidly detect inborn errors of metabolism (IEMs) in newborn screening (NS). The system has been created using the Aide-DS framework, which uses techniques imported from model-driven software engineering (MDSE) and soft computing, and it is available through eGuider, a web portal for the enactment of computerised clinical practice guidelines and protocols. MDSE provides the context and techniques to build new software artefacts based on models which conform to a specific metamodel. 
It also offers separation of concerns, dissociating medical from technological knowledge and thus allowing changes in one domain without affecting the other. Such changes might include, for instance, the addition of new disorders to the DSS or of new measures to the computation related to a disorder. Artificial intelligence and soft computing provide fuzzy logic to manage uncertainty and ambiguous situations. Fuzzy logic is embedded in an inference system to build a fuzzy inference system (FIS); specifically, a zero-order Takagi-Sugeno FIS connected via single-input rule modules. The automatic creation of FISs is performed by the Aide-DS framework, which is capable of embedding the generated FISs in computerized clinical guidelines. It can also create a desktop application to execute the FIS. Technologically, it supports the addition of new target languages for the desktop applications and the inclusion of new ways of acquiring data. DSScreening has been tested by comparing its predictions with the results of 152 real analyses from two groups: (1) NS samples, and (2) clinical samples belonging to individuals of all ages with symptoms that do not necessarily correspond to an IEM. The system reduced the time needed by 98.7% compared to the interpretation time spent by laboratory professionals. Besides, it correctly classified 100% of the NS samples and obtained an accuracy of 70% for samples belonging to individuals with clinical symptoms. (C) 2017 Elsevier Ltd. All rights reserved. Early detection of unusual events in urban areas is a priority for city management departments, which usually deploy specific, complex video-based infrastructures typically monitored by human staff. However, with the emergence and quick rise in popularity of Location-Based Social Networks (LBSNs), detecting an abnormally high or low number of citizens in a specific area at a specific time could be done by an expert system that automatically analyzes public geo-tagged posts. 
Our approach focuses exclusively on the location information linked to these posts. By applying a density-based clustering algorithm, we obtain the pulse of the city (24 hours a day, 7 days a week) in a first training phase, which enables the detection of outliers (unexpected behaviors) on-the-fly in a subsequent test or monitoring phase. This solution means that no specific infrastructure is needed, since the citizens are the ones who buy, maintain, and carry the mobile devices and freely disclose their location by proactively sharing posts. Moreover, location analysis is lighter than video analysis and can be automated. Our approach was validated using a dataset of geo-tagged posts obtained from Instagram in New York City over almost six months, with good results. Not only were all previously known events detected, but other, previously unknown events were also discovered during the experiment. (C) 2017 Elsevier Ltd. All rights reserved. Path planning is an essential tool for the robots that explore the surface of Mars or other celestial bodies such as dwarf planets, asteroids, or moons. These vehicles require expert and intelligent systems to adopt the best decisions in order to survive in a hostile environment. The planning module has to take into account multiple factors such as obstacles, the slope of the terrain, the surface roughness, the type of ground (presence of sand), and information uncertainty. This paper presents a path planning system for rovers based on an improved version of the Fast Marching (FM) method. Scalar and vectorial properties are considered when computing the potential field which is the basis of the proposed technique. Each position in the map of the environment has a cost value (potential) that is used to include different types of variables. The scalar properties can be introduced in a component of the cost function that can represent characteristics such as difficulty, slowness, viscosity, refraction index, or uncertainty.
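Fast Marching propagates a wavefront through such a cost field by solving the Eikonal equation. On a discrete grid, Dijkstra's algorithm over per-cell traversal costs is a close stand-in and shows how a scalar cost shapes the chosen path; a cell's cost could fold in slope and roughness, e.g. base * (1 + w_slope * slope + w_rough * roughness), with hypothetical weights:

```python
import heapq

def plan_path(cost, start, goal):
    """Shortest path over a grid of per-cell traversal costs: a
    discrete, Dijkstra-based stand-in for Fast Marching (both
    expand a front in order of accumulated potential)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]     # pay the cost of the entered cell
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                  # walk back from goal to start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

On a 3x3 grid whose centre cell is expensive, the returned path skirts the centre, just as a front under FM flows around high-potential regions.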
The cost value can be computed in different ways depending on the information extracted from the surface and the sensor data of the rover. In this paper, the surface roughness, the slope of the terrain, and the changes in height have been chosen according to the available information. When the robot is navigating sandy terrain with a certain slope, there is a landslide effect that has to be considered and corrected for in the path calculation. This landslide is similar to a lateral current or vector field in the direction of the negative gradient of the surface. Our technique is able to compensate for this vector field by introducing its influence into the cost function. Because of this modification, the new method is called Fast Marching subjected to a vector field (FMVF). Different experiments have been carried out on simulated and real maps to test the method's performance. The proposed approach has been validated for multiple combinations of the cost function parameters. (C) 2017 Elsevier Ltd. All rights reserved. Regression analysis is a machine learning approach that aims to accurately predict the value of continuous output variables from certain independent input variables, via automatic estimation of their latent relationship from data. Tree-based regression models are popular in the literature due to their flexibility in modeling higher-order non-linearity and their great interpretability. Conventionally, regression tree models are trained in a two-stage procedure, i.e., recursive binary partitioning is employed to produce a tree structure, followed by a pruning process of removing insignificant leaves, with the possibility of assigning multivariate functions to terminal leaves to improve generalisation.
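A joint alternative to that two-stage procedure can be illustrated in miniature: enumerate candidate break-points and, for each one, fit a least-squares line in either leaf, keeping the split that minimises the summed squared error. A single-predictor sketch (any data fed to it here is hypothetical):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    b = my - a * mx
    sse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def best_split(xs, ys):
    """Jointly choose the break-point and the per-leaf linear models
    by minimising the summed squared error of the two leaf fits."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs = [xs[i] for i in order]
    ys = [ys[i] for i in order]
    best = None
    for k in range(2, len(xs) - 1):            # keep >= 2 points per leaf
        _, _, e_left = fit_line(xs[:k], ys[:k])
        _, _, e_right = fit_line(xs[k:], ys[k:])
        point = (xs[k - 1] + xs[k]) / 2        # midpoint break-point
        if best is None or e_left + e_right < best[0]:
            best = (e_left + e_right, point)
    return best[1]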
This work introduces a novel methodology of node partitioning which, in a single optimisation model, simultaneously performs the two tasks of identifying the break-point of a binary split and assignment of multivariate functions to either leaf, thus leading to an efficient regression tree model. Using six real world benchmark problems, we demonstrate that the proposed method consistently outperforms a number of state-of-the-art regression tree models and methods based on other techniques, with an average improvement of 7-60% on the mean absolute errors (MAE) of the predictions. (C) 2017 The Authors. Published by Elsevier Ltd. Terrorism is a complex phenomenon with high uncertainties in user strategy. The uncertain nature of terrorism is a main challenge in the design of counter-terrorism policy. Government agencies (e.g., CIA, FBI, NSA, etc.) cannot always use social media and telecommunications to capture the intentions of terrorists because terrorists are very careful in the use of these environments to plan and prepare attacks. To address this issue, this research aims to propose a new framework by defining the useful patterns of suicide attacks to analyze the terrorist activity patterns and relations, to understand behaviors and their future moves, and finally to prevent potential terrorist attacks. In the framework, a new network model is formed, and the structure of the relations is analyzed to infer knowledge about terrorist attacks. More specifically, an Evolutionary Simulating Annealing Lasso Logistic Regression (ESALLOR) model is proposed to select key features for similarity function. Subsequently, a new weighted heterogeneous similarity function is proposed to estimate the relationships among attacks. Moreover, a graph-based outbreak detection is proposed to define hazardous places for the outbreak of violence. 
Experimental results demonstrate the effectiveness of our framework with high accuracy (more than 90% accuracy) for finding patterns when compared with that of actual terrorism events in 2014 and 2015. In conclusion, by using this intelligent framework, governments can understand automatically how terrorism will impact future events, and governments can control terrorists' behaviors and tactics to reduce the risk of future events. (C) 2017 Elsevier Ltd. All rights reserved. A model that accurately predicts, at the time of admission, the Length of Stay (LOS) for hospitalized patients could be an effective tool for healthcare providers. It could enable early interventions to prevent complications, enabling more efficient utilization of manpower and facilities in hospitals. In this study, we apply a regression tree (Cubist) model for predicting the LOS, based on static inputs, that is, values that are known at the time of admission and that do not change during patient's hospital stay. The model was trained and validated on de-identified administrative data from the Veterans Health Administration (VHA) hospitals in Pittsburgh, PA. We chose to use a Cubist model because it produced more accurate predictions than did alternative techniques. In addition, tree models enable us to examine the classification rules learned from the data, in order to better understand the factors that are most correlated with hospital LOS. Cubist recursively partitions the data set as it estimates linear regressions for each partition, and the error level differs for different partitions, so that it is possible to deduce what are the characteristics of patients whose LOS can be accurately predicted at admission, and what are the characteristics of patients for whom the LOS estimate at that point in time is more highly uncertain. 
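That per-partition behaviour is easy to mimic in miniature: split patients on a single admission-time feature, fit a predictor per partition, and compare the partitions' mean absolute errors. The sketch below uses a hypothetical prior-admissions feature and a per-partition mean predictor rather than Cubist's linear models:

```python
from statistics import mean

def partition_mae(records, threshold):
    """Split patients on one admission-time feature and report the
    mean absolute error of a per-partition mean predictor, showing
    how predictability can differ between partitions."""
    maes = {}
    for name, group in (
        ("few_prior_admissions", [r for r in records if r["prior"] <= threshold]),
        ("many_prior_admissions", [r for r in records if r["prior"] > threshold]),
    ):
        pred = mean(r["los"] for r in group)          # partition-level predictor
        maes[name] = mean(abs(r["los"] - pred) for r in group)
    return maes
```

With hypothetical data in which frequently readmitted patients have widely varying stays, the "many prior admissions" partition shows a much larger MAE, mirroring the abstract's observation.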
For example, our model indicates that the prediction error is greater for patients who had more admissions in the recent past, and for those who had longer previous hospital stays. Our approach suggests that mapping the cases into a higher-dimensional space, using a Radial Basis Function (RBF) kernel, helps to separate them by their level of Cubist error, using a Support Vector Machine (SVM). (C) 2017 Elsevier Ltd. All rights reserved. With increasing efforts to find latent semantics in video datasets, trajectories have become key components, since they intrinsically capture concise characteristics of object movements. One approach to analyzing a trajectory dataset concentrates on semantic region retrieval, which extracts regions that have their own patterns of object movement. Semantic region retrieval has become an important topic because semantic regions are useful for various applications, such as activity analysis. Previous studies, however, have revealed only semantically relevant points rather than actual regions, and have given little consideration to the temporal dependency of observations in a trajectory. In this paper, we propose a novel model for trajectory analysis and semantic region retrieval. We first extend the meaning of semantic regions so that it covers actual regions. We build a model for the extended semantic regions based on a hierarchically linked infinite hidden Markov model, which can capture the temporal dependency between adjacent observations, and retrieve the semantic regions from a trajectory dataset. In addition, we propose a sticky extension to diminish the redundant semantic regions that occur in a non-sticky model. The experimental results demonstrate that our models effectively extract semantic regions from a real trajectory dataset. (C) 2017 Elsevier Ltd. All rights reserved. Location-Based Social Networks (LBSNs) allow users to post ratings and reviews and to notify friends of these posts.
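The "sticky" extension mentioned in the trajectory abstract above biases a hidden Markov model toward self-transitions, which suppresses rapid state switching and hence redundant semantic regions. A minimal sketch of the transition-matrix side of that idea (the bonus kappa is a hypothetical hyperparameter):

```python
def sticky_transitions(counts, kappa=5.0):
    """Turn raw transition counts into probabilities with a sticky
    self-transition bonus kappa added on the diagonal before
    row-normalising."""
    probs = []
    for i, row in enumerate(counts):
        bonus = [c + (kappa if i == j else 0.0) for j, c in enumerate(row)]
        total = sum(bonus)
        probs.append([b / total for b in bonus])
    return probs
```

With uniform counts and kappa = 2, each state's self-transition probability rises from 0.5 to 0.75; larger kappa makes state runs, and hence retrieved regions, more persistent.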
Several models have been proposed for Point-of-Interest (POI) recommendation that use explicit (i.e., ratings and comments) or implicit (i.e., statistical scores, views, and user influence) information. However, the models proposed so far fail to sufficiently capture user preferences as they change spatially and temporally. We argue that time is a crucial factor because user check-in behavior may be periodic and time dependent, e.g., checking in near work in the morning and close to home in the evening. In this paper, we present two novel unified models that provide review and POI recommendations and simultaneously consider the spatial, textual, and temporal factors. In particular, the first model provides review recommendations by incorporating into the same unified framework the spatial influence of the users' reviews and the textual influence of the reviews. The second model provides POI recommendations by combining the spatial influence of the users' check-in history and the social influence of the users' reviews into another unified framework. Furthermore, for both models we consider the temporal dimension and measure the impact of time over various time intervals. We evaluate the performance of our models against 10 other methods in terms of precision and recall. The results indicate that our models outperform the other methods. (C) 2017 Elsevier Ltd. All rights reserved. The astronomer Manuel Johnson, a future President of the Royal Astronomical Society, recorded the ocean tides with his own instrument at St. Helena in 1826-1827, while waiting for an observatory to be built. It is an important record in the history of tidal science, as the only previous measurements at St. Helena had been those made by Nevil Maskelyne in 1761, and there were to be no other systematic measurements until the late 20th century.
Johnson's tide gauge, of a curious but unique design, efficiently recorded the height of every tidal high and low water for at least 13 months, in spite of requiring frequent re-setting. These heights compare very reasonably with a modern tidal synthesis based on present-day tide gauge measurements from the same site. Johnson's method of timing is unknown, but his calculations of lunar phases suggest that his tidal measurements were recorded in Local Apparent Time. Unfortunately, the recorded times are found to be seriously and variably lagged by many minutes. Johnson's data have never been fully published, but his manuscripts have been safely archived and are available for inspection at Cambridge University. His data have been converted to computer files as part of this study for the benefit of future researchers. A correlation between solar wind speed at Earth and the amount of magnetic field line expansion in the corona was verified in 1989 using 22 years of solar and interplanetary observations. We trace the evolution of this relationship from its birth 15 years earlier in the Skylab era to its current use as a space weather forecasting technique. This paper is the transcript of an invited talk at the joint session of the Historical Astronomy Division and the Solar Physics Division of the American Astronomical Society during its 224th meeting in Boston, MA, on 3 June 2014. With the aim of proposing vaults and domes in contemporary architecture, this paper explores an innovative method to help designers model the brick patterning on a curved surface automatically and interactively, especially during the early design phases, when the most important decisions are made. The focus is on the development of a modeling approach to patterning, capable of defining the brick courses, handling mortar thickness, and controlling all the related geometrical issues of free-form surfaces.
A computational environment has been developed and implemented to simulate patterning in 3-D space, allowing the user to develop the arrangement of bricks on a desired structure. With this digital tool it is possible to model different kinds of patterns, such as stretcher bond or herringbone, on any kind of free-form surface, increasing the accuracy and speed of construction and enabling the designer and builder to estimate the brick requirements before fabrication. Over 5000 years ago, together with the proto-hieroglyphics that helped in understanding the events recorded in bas-relief, there also appeared, as we will demonstrate, a technical "description" of the plan for a fortified city wall that, though schematic, provided sufficient elements to perform a mathematical calculation of form and structure. Properly illuminated and enlarged, the image, according to the manner and technology used, allows for measurement, justifying the representative and configurative choices of the methodologically deduced three-dimensional model. Comparison with the remnants of structures of the same age and the geographic location of the relic supports and confirms such decisions. This paper looks at the spatial development of the Mamluks' educational buildings (madrassas) throughout the Bahri and Burji periods (1260-1517 A.D.). The lines of inquiry aim to investigate diachronically the degree to which madrassas demonstrate the idea of a single configurationally dominant genotype. Madrassas are scrutinized according to their geometric and spatial attributes; their spatial structure is described according to their patterns of permeability, and interpreted using geometric-syntactic and statistical analysis. Despite the variability of the madrassas' footprints, this research highlights the conventions essential in stabilizing the madrassa as a building type and identifies the regional 'court' and the local 'Jerusalem' genotypes.
While the results for the first identify an integrated central zone with segregated outer environments, those of the second identify a centrifugal, extroverted plan that tries to expand its circle of presence and maximize its opportunities for encounter. In November 2015, the Faculty of Architecture at the University of Porto and the Institute for Systems and Computer Engineering, Technology and Science concluded a 2-year research project on the use of robotic fabrication technologies in architecture and building construction. Funded by the national Foundation of Science and Technology, this was a unique and vibrant experience in a new research field for the two institutions. This paper provides a brief description of the research project and its results. Catenary arches and vaults were used in Spain during two historical periods. First, the theoretical concept was used in the eighteenth century by military engineers for the construction of gunpowder magazines. Subsequently, Catalan modernist architects, such as Antoni Gaudí i Cornet (1852-1926) and Cèsar Martinell i Brunet (1888-1973), used this shape throughout their buildings. The paper assesses the geometric approximations to the catenary made by eighteenth-century military engineers and twentieth-century architects in Spain. The investigation is based on two documentary sources: the designs for gunpowder magazines found in the Colección de Mapas, Planos y Dibujos del Archivo General de Simancas, and the design by Cèsar Martinell i Brunet for the Cooperative wine cellar in Pinell de Brai (1918), preserved in the Arxiu Històric del Col·legi d'Arquitectes de Catalunya. The assessment confirms the use of the concept of the chain during these two historical periods. One of the large jali screens adorning the mausoleum of Muhammad Ghaus in Gwalior (northern India), built in 1565, contains panels composed of disordered composite octagons and Salomon stars.
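For orientation, the catenary that both the military engineers and the Catalan modernists approximated is the equilibrium curve of a hanging chain. With parameter a (the ratio of horizontal tension to weight per unit length):

```latex
y = a \cosh\frac{x}{a} = \frac{a}{2}\left(e^{x/a} + e^{-x/a}\right),
\qquad
y \approx a + \frac{x^{2}}{2a} \quad \text{near the vertex.}
```

The parabolic approximation on the right follows from the Taylor expansion of cosh and is one reason circular and parabolic templates could serve as workable stand-ins in the designs the paper assesses.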
These elements show a rotational disorder with some interdependence. Analysis of these partially disordered patterns with rotatable configurations of the above elements suggested that they may be approximants of a quasiperiodic octagonal tiling based on a new type of composite tiles. Comparisons with Ammann's quasiperiodic tiling were made. Instances of similar or related periodic ornamental patterns at other northern Indian localities are analyzed as well. Today, restoring ancient "camera obscura sundials" by drilling holes in building façades appears as an overly intrusive intervention in historical architecture. For this reason, our study proposes an innovative, low-cost gnomonic instrument, capable of adapting to any type of relationship between the façade (where the original gnomonic hole was located) and the sundial on the floor. The tool that we have designed allows incoming sunlight to be caught by a reflection system of flat mirrors, appropriately tilted, thus producing a solar ray that exits the instrument with a different inclination. We created new angular relationships between the gnomonic hole and the astronomic data engraved along the sundial in two case studies of historic sundials that are now inactive and abandoned. The research was conducted by weaving astronomy and gnomonics together with geometry and mathematics, to create a 3D model to verify, plan, and execute the restoration of historic sundials. An innovative mathematical analysis comparing sets of preferred ratios from authors from antiquity (Vitruvius), the Renaissance (Alberti, Serlio, and Palladio), and the modern age (Fechner and Lalo) with the eleven unique and universal proportionalities sheds new light on architects' use of certain ratios to endow their creations with commensurability and beauty. Some ratios may provide more ways of representing three magnitudes, and this might provide a clue to their enduring appearance in architectural works.
Common conceptions about Gothic and Renaissance architectural proportion systems contrast the mediaeval geometrical methods with the arithmetical, rational ones of the Renaissance. In this paper, the authors analyze the geometrical proportion systems in Chapter V of the Compendio de arquitectura y simetría de los templos, generally attributed to the sixteenth-century Spanish architect Rodrigo Gil de Hontañón. This shows that the geometrical proportioning system of the Compendio generally leads to rational proportions. However, on some occasions, irrational proportions arise from the geometrical properties of the figures used in the layout of the churches, or from deliberate choices of the author, with a remarkable disregard for the notions of commensurable or incommensurable dimensions. This is consistent with the notion, put forward by Shelby, of constructive geometry. Medieval design methods were based on a compass-and-ruler geometry, with no concern for rational and irrational proportions. In his "Marriages of Incommensurables: Phi Related Ratios Joined with the Square Roots of Two and Three", artist and geometer Mark A. Reynolds has found two ratios from the golden section family that generate relationships with the square roots of two and three. He includes the grids and procedures necessary for producing these ratios. For him, the significance of the constructions is that they join together ratios from two different groups of rectangles: the golden section family and the square root rectangle progression, two systems that are usually incompatible with each other. In these constructions and grids, the square root of the golden section and the golden section squared are related mathematically to the square roots of two and three, respectively, in ways that he believes have not been seen before. Reynolds calls this series of constructions "marriages of incommensurables", and the two he presents here are part of a larger group he has been working on for some time.
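The golden-section identities underlying such constructions are quick to verify numerically; φ is the positive root of x² = x + 1. The specific constructions joining √φ and φ² to √2 and √3 are Reynolds's own and are not reproduced here; the sketch below only checks the standard identities and prints the quantities involved:

```python
import math

phi = (1 + math.sqrt(5)) / 2   # golden section, ~1.618

# Defining identities of the golden section
assert abs(phi ** 2 - (phi + 1)) < 1e-12   # phi^2 = phi + 1
assert abs(1 / phi - (phi - 1)) < 1e-12    # 1/phi = phi - 1

# Quantities paired in the "marriages": sqrt(phi) and phi^2,
# alongside sqrt(2) and sqrt(3)
print(math.sqrt(phi), phi ** 2, math.sqrt(2), math.sqrt(3))
```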
We consider a non-isothermal Stokes equation used to calculate the pressure distribution in a thin layer of lubricant film between two surfaces. The problem is described in 2D and 3D settings by the Stokes and heat transfer equations. Under appropriate regularity assumptions on the data, existence results for the non-isothermal Stokes equation are recalled. Using a formal asymptotic expansion, we obtain a generalized Reynolds equation coupled with a limit energy equation, the so-called non-isothermal Reynolds system. Then, existence and uniqueness are proved for this system by using a fixed-point argument. Finally, a rigorous justification of the convergence is established. We study the 1D Boltzmann equation for a mixture of two gases on a torus with the initial condition of one gas near a vacuum and the other near a Maxwellian equilibrium state. An L_x^∞ L_{ξ,β}^∞ analysis is developed to study this mass diffusion problem, building on the author's earlier work on the Boltzmann equation for single-species hard-sphere collisions. The decay rate of the solution is algebraic for a small time region and exponential for a large time region. Moreover, the exponential rate depends on the size of the domain. In this paper we introduce a program to construct the Green's function for the linearized compressible Navier-Stokes equations in several space dimensions. This program contains three components: a procedure to isolate global singularities in the Green's function for a multi-spatial-dimensional problem, a long wave-short wave decomposition for the Green's function, and an energy method together with Sobolev inequalities. These three components together split the Green's function into singular and regular parts, with the singular part given explicitly and the regular part bounded by exponentially sharp pointwise estimates.
The exponentially sharp singular-regular description of the Green's function, together with Duhamel's principle and results of Matsumura-Nishida on L^∞ decay, yields through a bootstrap procedure an exponentially sharp space-time pointwise description of solutions of the full compressible Navier-Stokes equations in R^n (n = 2, 3). Estimates and representations of solutions of div-curl systems for planar vector fields are described. Potentials are used to represent solutions as the sum of fields that depend on the source terms and harmonic fields dependent on the boundary data. Sharp 2-norm (energy) bounds for the least energy solutions on bounded regions with Lipschitz boundary are found. Prescribed flux, tangential, or mixed flux and tangential boundary conditions require different potentials. The harmonic fields are represented and estimated using Steklov eigenfunctions. Some regularity results are obtained. The aim of this paper is to reconstruct an obstacle immersed in a fluid governed by the Brinkman equation in a three-dimensional bounded domain Ω from internal data. We reformulate the inverse problem as an optimization problem by using a least-squares functional. We prove the existence of an optimal solution for the optimization problem. We perform the asymptotic expansion of the cost function using a straightforward approach based on a penalization technique. An important advantage of this method is that it avoids the truncation method used in the literature. Finally, we present numerical results exploring the efficiency of the method. In this article, we study the interaction of delta shock waves for a one-dimensional strictly hyperbolic system of conservation laws with split delta functions. We prove that Riemann solutions are stable under local small perturbations of the Riemann initial data. The global structure and large-time asymptotic behaviour of the perturbed Riemann solutions are constructed and analyzed case by case.
We study wave function synchronization in the Schrödinger-Lohe model, which describes the dynamics of an ensemble of coupled quantum Lohe oscillators with infinitely many states. To do this, we first derive a coupled system of ordinary differential equations for the L_x^2 inner products between distinct wave functions. For the same one-body potentials, we show that the inner products of two wave functions converge to unity for some restricted class of initial data, so complete wave function synchronization emerges asymptotically when the dynamical systems approach is used. Moreover, for the family of one-body potentials consisting of real-valued translations of the same base potential, we show that the inner products for a two-oscillator system follow the motion of harmonic oscillators in a small coupling regime, and then, as the coupling strength increases, the inner products converge to constant values; this behavior yields convergence toward constant values for the L_x^2 differences between distinct wave functions. Soft regulation has grown in importance in science and technology governance. Despite such indisputable significance, the literature on technology policy and regulation so far seems to have dedicated only limited attention to a systematic understanding of the factors affecting compliance with these soft rules. This article addresses this limitation. By way of a literature scoping exercise, we propose a taxonomy of the mechanisms affecting compliance with soft regulation. We subsequently apply the taxonomy as a guide to examine the opinions of a small group of scientists and company managers in the Italian nanotechnology sector. The case study does not assess compliance in a direct way, i.e., by observing how organizations comply with regulation, but it explores opinions on what the factors affecting compliance are (and why they work).
Challenges like finite fossil fuels, the impacts of climate change, and the risks of nuclear energy require a transformation of energy systems which itself implies risks, e.g., technical or socio-economic risks or still unknown and unexpected surprises. Nevertheless, in order to steer the transformation in the desired direction, the question arises of how the direction of the transformation processes of socio-technical energy systems (system innovations) can be influenced. Guiding orientation processes (GOPs) could represent such a possibility for giving direction: desired directions are taken up with guiding concept ideas (understood as socially shared views of the future deemed simultaneously desirable and feasible) that are specified together with selected addressees, spread, and implemented. Next to the main question as to whether such GOPs can give direction to system innovations, we focused on factors supporting the effectiveness of these processes and on the possibilities and limitations connected with their use. In order to answer these questions theoretically, we developed (1) a three-level model differentiating guiding orientations into three levels and representing their relationship as an important content-related effectiveness factor, (2) a definition of giving direction in the form of guiding orientation ideas and true guiding orientations, and (3) a phase model of an ideal GOP with its phases of triggering, specifying, spreading, and implementing. Within our empirical studies, we analysed two GOPs with the guiding concept ideas of a sustainable energy system and a resilient energy system, respectively. Thereby, we could confirm that GOPs can give direction to system innovations if certain effectiveness factors, which we abstracted within our phase model, are considered. With the Internet's integration into mainstream society, online technologies have become a significant economic factor and a central aspect of everyday life.
Thus, it is not surprising that news providers and social scientists regularly offer media-induced visions of a nearby future and that these horizons of expectation are continually expanding. This is true not only for the Web as a traditional media technology but also for 3D printing, which has freed modern media utopianism from its stigma of immateriality. Our article explores the fundamental semantic structures and simplification patterns of popular media utopias and unfolds the thesis that their resounding success is based on their instantaneous connectivity and compatibility with societal discourses in a broad variety of cultural, political, or economic contexts. Further, it addresses the social functions of utopian concepts in the digital realm. Visions of and narratives about the future energy system influence the actual creation of innovations and thus accompany the current energy transition. Particularly in times of change and uncertainty, visions gain crucial relevance: imagining possible futures impacts current social reality by both creating certain spaces of action and shaping technical artifacts. However, different actors may express divergent visions of the future energy system and its implementation. Looking at a particular innovation site involving multiple stakeholders over an 8-year period, we empirically analyze the collective negotiation process of vision making, how it shifts over time, and how visions eventually unfold performativity. Adopting a process perspective, we identify four different phases and the respective functions of visions and visioneering related to the site's development by exploring the question: Why do certain visions gain importance and eventually lead to substantial changes in the project in process?
Qualitative data from documents and interviews, analyzed with reference to science and technology studies, show the interweaving conditions that influence the visioneering and its linkage to the actual development of material artifacts. Against the backdrop of innovation projects, this paper explores visioneering as an ongoing, transformative, and collective process and reveals its moments of (de)stabilization. The production, manipulation, and exploitation of future visions are increasingly important elements in practices of visioneering socio-technical processes of innovation and transformation. This becomes obvious in new and emerging sciences and technologies and in large-scale transformations of established socio-technical systems (e.g., the energy system). A variety of science and technology studies (STS) provide evidence of correlations between expectations and anticipatory practices and the dynamics of such processes of change. Technology assessment (TA) has responded to the challenges posed by the influence of visions on these processes by elaborating methodologies for a "vision assessment" as a contribution to what is now increasingly known as "hermeneutical TA". Until now, however, the practical functions of visions in these processes have not been explained in a way that satisfies the empirical needs of TA's vision assessment, that is, to provide future-oriented knowledge based on the analysis of ongoing changes in the present without knowing the future outcomes. Our leading hypothesis is that we can only understand the practical roles of visions in current processes if we analyse them as socio-epistemic practices which simultaneously produce new knowledge and enable new social arrangements. We elaborate this by means of two cases: the visions of in vitro meat and of the smart grid. Here, we interpret visioneering more in its collective dimension, as a contingent and open-ended process emerging from heterogeneous socio-epistemic practices.
This paper aims at improving TA's vision assessments and related STS research on visionary practices for real-time analysis and assessments. Expectations play a distinctive role in shaping emerging technologies and producing hype cycles when a technology is adopted or fails on the market. To harness expectations, facilitate and provoke forward-looking discussions, and identify policy alternatives, futures studies are required. Here, expert anticipation of possible or probable future developments becomes extremely arbitrary beyond short-term prediction, and the results of futures studies are often controversial, divergent, or even contradictory; thus they are contested. Nevertheless, such socio-technical imaginaries may prescribe a future that seems attainable to those involved in the visioneering process, and other futures may thus become less likely and shaping them could become more difficult. This implies a need to broaden the debate on socio-technological development, creating spaces where policy, science, and society can become mutually responsive to each other. Laypeople's experiential and value-based knowledge is highly relevant for complementing expertise to inform socially robust decision-making in science and technology. This paper presents the evolution of a transdisciplinary, forward-looking co-creation process-a demand-side approach developed to strengthen needs-driven research and innovation governance by cross-linking knowledge of laypeople, experts, and stakeholders. Three case studies serve as examples. We argue that this approach can be considered a method for adding social robustness to visioneering and to responsible socio-technical change. Since industrial trade fair Hannover Messe 2011, the term "Industrie 4.0" has ignited a vision of a new Industrial Revolution and has been inspiring a lively, ongoing debate among the German public about the future of work, and hence society, ever since. 
The discourse around this vision of the future eventually spread to other countries, with public awareness reaching a temporary peak in 2016 when the World Economic Forum's meeting in Davos was held with the motto "Mastering the Fourth Industrial Revolution." How is it possible for a vision originally established by three German engineers to unfold and bear fruit at a global level in such a short period of time? This article begins with a summary of the key ideas that are discussed under the label Industrie 4.0. The main purpose, based on an in-depth discourse analysis, is to debunk the myth about the origin of this powerful vision and to trace the narrative back to the global economic crisis in 2009 and thus to the real actors, central discourse patterns, and hidden intentions of this vision of a new Industrial Revolution. In conclusion, the discourse analysis reveals that this is not a case of visioneering but one of a future told, tamed, and traded. This commentary reflects on the 1930 general theory of Léon Rosenfeld dealing with phase-space constraints. We start with a short biography of Rosenfeld and his motivation for this article in the context of ideas pursued by W. Pauli, F. Klein, and E. Noether. We then comment on Rosenfeld's General Theory dealing with symmetries and constraints, symmetry generators, conservation laws, and the construction of a Hamiltonian in the case of phase-space constraints. It is remarkable that he was able to derive expressions for all phase-space symmetry generators without making explicit reference to the generator of time evolution. In his Applications, Rosenfeld treated the general relativistic example of Einstein-Maxwell-Dirac theory. We show that, although Rosenfeld refrained from fully applying his general findings to this example, he could have obtained the Hamiltonian. 
Many of Rosenfeld's discoveries were re-developed or re-discovered by others two decades later, yet as we show there remain additional firsts that are still not recognized in the community. Since the publication of Premack and Woodruff's classic paper introducing the notion of a 'theory of mind' (Premack and Woodruff in Behav Brain Sci 1(4):515-526, 1978), interdisciplinary research in social cognition has witnessed the development of theory-theory, simulation theory, hybrid approaches, and most recently interactionist and perceptual accounts of other minds. The challenges that these various approaches present for each other and for research in social cognition range from adequately defining central concepts to designing experimental paradigms for testing empirical hypotheses. But is there any approach that promises to dominate future interdisciplinary research in social cognition? Is social cognition witnessing a gradual paradigm shift where hitherto grounding notions such as theory of mind are no longer viewed as explanatorily necessary? Or have we simply lost our way in attempting to devise adequate experimental setups that could sway the debate in favour of one of the contending accounts? This special issue addresses these questions in an attempt to discover what the future holds for interdisciplinary research in social cognition. A number of convergent recent findings with adults have been interpreted as evidence of the existence of two distinct systems for mindreading that draw on separate conceptual resources: one that is fast, automatic, and inflexible; and one that is slower, controlled, and flexible. The present article argues that these findings admit of a more parsimonious explanation. 
This is that there is a single set of concepts made available by a mindreading system that operates automatically where it can, but which frequently needs to function together with domain-specific executive procedures (such as visually rotating an image to figure out what someone else can see) as well as domain-general resources (including both long-term and working memory). This view, too, can be described as a two-systems account. But in this case one of the systems encompasses the other, and the conceptual resources available to each are the same. When do children acquire a propositional attitude folk psychology or theory of mind? The orthodox answer to this central question of developmental ToM research had long been that around age 4 children begin to apply "belief" and other propositional attitude concepts. This orthodoxy has recently come under serious attack, though, from two sides: Scoffers complain that it over-estimates children's early competence and claim that a proper understanding of propositional attitudes emerges only much later. Boosters criticize the orthodoxy for underestimating early competence and claim that even infants ascribe beliefs. In this paper, the orthodoxy is defended on empirical grounds against these two kinds of attacks. On the basis of new evidence, not only can the two attacks safely be countered, but the orthodox claim can actually be strengthened, corroborated and refined: what emerges around age 4 is an explicit, unified, flexibly conceptual capacity to ascribe propositional attitudes. This unified conceptual capacity contrasts with the less sophisticated, less unified implicit forms of tracking simpler mental states present in ontogeny long before. This refined version of the orthodoxy can thus most plausibly be spelled out in some form of 2-systems-account of theory of mind. The concept of empathy has received much attention from philosophers and also from both cognitive and social psychologists. 
It has, however, been given widely conflicting definitions, with some taking it primarily as an epistemological notion and others as a social one. Recently, empathy has been closely associated with the simulationist approach to social cognition and, as such, it might be thought that the concept's utility stands or falls with that of simulation itself. I suggest that this is a mistake. Approaching the question of what empathy is via the question of what it is for, I claim that empathy plays a distinctive epistemological role: it alone allows us to know how others feel. This is independent of the plausibility of simulationism more generally. With this in view I propose an inclusive definition of empathy, one likely consequence of which is that empathy is not a natural kind. It follows that, pace a number of empathy researchers, certain experimental paradigms tell us not about the nature of empathy but about certain ways in which empathy can be achieved. I end by briefly speculating that empathy, so conceived, may also play a distinctive social role, enabling what I term 'transparent fellow-feeling'. There is widely assumed to be a fundamental epistemological asymmetry between self-knowledge and knowledge of others. They are said to be 'categorically different in kind and manner' (Moran), and the existence of such an asymmetry is taken to be a primitive datum in accounts of the two kinds of knowledge. I argue that standard accounts of the differences between self-knowledge and knowledge of others exaggerate and misstate the asymmetry. The inferentialist challenge to the asymmetry focuses on the extent to which both self-knowledge and knowledge of others are matters of inference and interpretation. In the case of self-knowledge I focus on the so-called 'transparency method' and on the extent to which use of this method delivers inferential self-knowledge. 
In the case of knowledge of others' thoughts, I discuss the role of perception as a source of such knowledge and argue that even so-called 'perceptual' knowledge of other minds is inferential. I contend that the difference between self-knowledge and knowledge of others is a difference in the kinds of evidence on which they are typically based. The unobservability thesis (UT) states that the mental states of other people are unobservable. Both defenders and critics of UT seem to assume that UT has important implications for the mindreading debate. Roughly, the former argue that because UT is true, mindreaders need to infer the mental states of others, while the latter maintain that the falsity of UT makes mindreading inferences redundant. I argue, however, that it is unclear what 'unobservability' means in this context. I outline two possible lines of interpretation of UT, and argue that on one of these, UT has no obvious implications for the mindreading debate. On the other line of interpretation, UT may matter to the mindreading debate, in particular if we think of it as a thesis about the possible contents of perceptual experience. The upshot is that those who believe UT has implications for the mindreading debate need to be more specific about how they understand the thesis. The debate about direct perception encompasses different topics, one of which concerns the richness of the contents of perceptual experiences. Can we directly perceive only low-level properties, like edges, colors etc. (the sparse-content view), or can we perceive high-level properties and entities as well (the liberal-content view)? The aim of the paper is to defend the claim that the content of our perceptual experience can include emotions and also person impressions. Using these examples, an argument is developed to defend a liberal-content view for core examples of social cognition. 
This view is developed and contrasted with accounts which claim that in the case of registering another person's emotion while seeing them, we have to describe the relevant content not as the content of a perceptual experience, but as that of a perceptual belief. The paper defends the view that perceptual experiences can have a rich content yet remain separable from beliefs formed on the basis of the experience. How liberal and enriched the content of a perceptual experience is will depend upon the expertise a person has developed in the field. This is supported by the argument that perceptual experiences can be systematically enriched by perceiving affordances of objects, by pattern recognition, or by top-down processes, as analyzed by processes of cognitive penetration or predictive coding. The question of how we actually arrive at our knowledge of others' mental lives is the subject of lively debate, and some philosophers defend the idea that mentality is sometimes accessible to perception. In this paper, a distinction is introduced between "mind awareness" and "mental state awareness," and it is argued that the former at least sometimes belongs to perceptual, rather than cognitive, processing. Over the last several decades, there has been a wealth of illuminating work on processes implicated in social cognition. Much less has been done in articulating how we learn the contours of particular concepts deployed in social cognition, like the concept MENTALISTIC AGENT. Recent developments in learning theory afford new tools for approaching these questions. In this article, I describe some rudimentary ways in which learning-theoretic considerations can illuminate philosophically important aspects of the MENTALISTIC AGENT concept. I maintain that MENTALISTIC AGENT is an essentialized concept (cf. 
Gelman, in The essential child, 2003; Keil, in Concepts, kinds, and cognitive development, 1992) and that learning-theoretic considerations help explain why the concept is not tied to particular traits. This paper argues that mind-reading hypotheses (MRHs), of any kind, are not needed to best describe or best explain basic acts of social cognition. It considers the two most popular MRHs: one-ToM and two-ToM theories. These MRHs face competition in the form of complementary behaviour reading hypotheses (CBRHs). Following Buckner (Mind Lang 29: 566-589, 2014), it is argued that the best strategy for putting CBRHs out of play is to appeal to theoretical considerations about the psychosemantics of basic acts of social cognition. In particular, need-based accounts that satisfy a teleological criterion have the ability to put CBRHs out of play. Yet, against this backdrop, a new competitor for MRHs is revealed: mind minding hypotheses (MMHs). MMHs are capable of explaining all the known facts about basic forms of social cognition and they also satisfy the teleological criterion. In conclusion, some objections concerning the theoretical tenability of MMHs are addressed and prospects for further research are canvassed. The category-theoretic representation of quantum event structures provides a canonical setting for confronting the fundamental problem of truth valuation in quantum mechanics as exemplified, in particular, by the Kochen-Specker theorem. In the present study, this is realized on the basis of the existence of a categorical adjunction between the category of sheaves of variable local Boolean frames, constituting a topos, and the category of quantum event algebras. We show explicitly that the latter category is equipped with an object of truth values, or classifying object, which constitutes the appropriate tool for assigning truth values to propositions describing the behavior of quantum systems. 
Effectively, this category-theoretic representation scheme consistently circumvents the semantic ambiguity with respect to truth valuation that is inherent in conventional quantum mechanics by inducing an objective contextual account of truth in the quantum domain of discourse. The philosophical implications of the resulting account are analyzed. We argue that it subscribes neither to a pragmatic instrumental nor to a relative notion of truth. Such an account essentially denies that there can be a universal context of reference or an Archimedean standpoint from which to evaluate logically the totality of facts of nature. It is widely believed that neural elements interact by communicating messages. Neurons, or groups of neurons, are supposed to send packages of data with informational content to other neurons or to the body. Thus, behavior is traditionally taken to consist in the execution of commands or instructions sent by the nervous system. As a consequence, neural elements and their organization are conceived as literally embodying and transmitting representations that other elements must in some way read and conform to. In opposition to this conception, growing approaches such as enactivism and ecological psychology hold that neurons are not in the business of representing. However, by insisting that neural causation is not of a representational kind, these anti-representationalist approaches seem to be left with only one rather implausible alternative, viz. that behavior is the result of nothing but basic physical causation such as push-pull forces. In this paper it is argued that a third form of causation, termed "modulation", exists and is at work in the coordination of animal behavior. Modulation is the quasi-direct guidance of dynamical systems through specific yet emerging trajectories. By setting the constraints that coordinate the free interaction of multi-element systems, modulation influences without forcing or representing goal states. 
The basic properties of modulatory causation are analyzed and shown to be present in some fundamental aspects of neural and bodily interaction. The main goal of this paper is to investigate what explanatory resources Robert Brandom's distinction between acknowledged and consequential commitments affords in relation to the problem of logical omniscience. With this distinction the importance of the doxastic perspective under consideration for the relationship between logic and norms of reasoning is emphasized, and it becomes possible to handle a number of problematic cases discussed in the literature without thereby incurring a commitment to revisionism about logic. One such case in particular is the preface paradox, which will receive an extensive treatment. As we shall see, the problem of logical omniscience not only arises within theories based on deductive logic; but also within the recent paradigm shift in psychology of reasoning. So dealing with this problem is important not only for philosophical purposes but also from a psychological perspective. Suppose we take a picture containing a full image of a duck and slice it right through, leaving some of the duck image (including its head) on one slice and some of it on the other. How many duck images will we be left with? Received theories of pictorial representation presuppose that a surface cannot come to contain new images just by changing its physical relations with other surfaces, such as physical continuity. But as it turns out, this is in tension with received theories' approach to incomplete images. I address three views with respect to the circumstances in which incomplete images of X represent X. 1. A liberal, non-restrictive view: 'Iff they meet relevant requirements posed by received theory of pictorial representation.' 2. Moderate restrictions of this view ('iff they meet requirements posed by theory of pictorial representation and.') and 3. A fully restrictive view ('never'). 
After investigating challenges for the liberal view, I end up supporting it. The main challenges rest on the fact that only the fully restrictive view can plausibly accommodate some principles that seem inherent to our theory of representation. For instance: only this view accommodates received theories' presupposition that the representational properties of a surface depend on its configurational properties such that new images may appear on a surface only if its configurational properties have changed. Since the liberal view is overall more plausible than the restrictive view, I reject this presupposition and bear the consequences. Extant contextualist theories have relied on the mechanism of pragmatically driven modulation to explain the way non-indexical expressions take on different interpretations in different contexts. In this paper I argue that a modulation-based contextualist semantics is untenable with respect to non-ambiguous expressions whose invariant meaning fails to determine a unique literal interpretation, such as 'lawyer', 'musician', 'book', and 'game'. The invariant meaning of such an expression corresponds to a range of closely related and equally basic interpretations, none of which can be distinguished as the literal interpretation. Moreover, what counts as a literal interpretation as opposed to a non-literal one is arguably vague. The nonuniqueness of the literal interpretation and the vagueness in the literal/non-literal divide doubly challenge a modulation-based semantics, for modulation is supposed to operate on a unique literal interpretation to generate a clearly non-literal interpretation. Lastly, I contend that non-ambiguous expressions which lack a determinate literal interpretation are amenable to a Radical Contextualist semantics, according to which the invariant meaning of such an expression directs its interpretation to congruent background information in context. 
Thereby, these expressions exhibit semantically driven context sensitivity without displaying indexicality. Recently, a number of epistemologists (notably Feldman in Philosophers without gods: meditations on atheism and the secular life. Oxford University Press, Oxford, 2007, in Episteme 6(3): 294-312, 2009; White in Philos Perspect 19: 445-449, 2005, White in Contemporary debates in epistemology. Blackwell, Oxford, 2013) have argued for the rational uniqueness thesis, the principle that any set of evidence permits only one rationally acceptable attitude toward a given proposition. In contrast, this paper argues for extreme rational permissivism, the view that two agents with the same evidence (evidential peers) may sometimes arrive at contradictory beliefs rationally. This paper (1) identifies different versions of uniqueness and permissivism that vary in strength and range, (2) argues that evidential peers with different interests need not rationally endorse all the same hypotheses, (3) argues that evidential peers who weigh the theoretic virtues differently (that is, who have different standards) can sometimes rationally endorse contradictory conclusions, and finally (4) defends the permissivist appeal to standards against objections in the works of Feldman and White. Conventional immunosensors require many binding events to give a single transducer output which represents the concentration of the analyte in the sample. Because of the requirements to selectively detect species in complex samples, immunosensing interfaces must allow immobilisation of antibodies while repelling nonspecific adsorption of other species. These requirements lead to quite sophisticated interfacial design, often with molecular level control, but we have no tools to characterise how well these interfaces work at the molecular level. 
The work reported herein is an initial feasibility study to show that antibody-antigen binding events can be monitored at the single-molecule level using single molecule localisation microscopy (SMLM). Achieving this first requires showing that indium tin oxide surfaces can be used for SMLM; then that these surfaces can be modified with self-assembled monolayers of organophosphonic acid derivatives; that the amount of antigen and antibody on the surface can be controlled and monitored at the single-molecule level; and finally that antibody binding to antigen-modified surfaces can be monitored. The results show the amount of antibody that binds to an antigen-modified surface is dependent on both the concentration of antigen on the surface and the concentration of antibody in solution. This study demonstrates the potential of SMLM for characterising biosensing interfaces and as the transducer in a massively parallel, wide-field, single-molecule detection scheme for quantitative analysis. A skin-covered oxygen electrode (SCOE) was constructed with the aim of studying the enzyme catalase, which is part of the biological antioxidative system present in skin. The electrode was exposed to different concentrations of H2O2 and the amperometric current response was recorded. The observed current is due to H2O2 penetration through the outermost skin barrier (referred to as the stratum corneum, SC) and subsequent catalytic generation of O2 by catalase present in the underlying viable epidermis and dermis. By tape-stripping the outermost skin layers we demonstrate that the SC is a considerable diffusion barrier for H2O2 penetration. Our experiments also indicate that skin contains a substantial amount of catalase, which is sufficient to detoxify H2O2 that reaches the viable epidermis after exposure of skin to high concentrations of peroxide (0.5-1 mM H2O2). Further, we demonstrate that the catalase activity is reduced at acidic pH, as compared with the activity at pH 7.4. 
Finally, experiments with the often-used penetration enhancer thymol show that this compound interferes with the catalase reaction. The health aspects of this are briefly discussed. In summary, the results of this work show that the SCOE can be utilized to study a broad spectrum of issues involving the function of skin catalase in particular, and the native biological antioxidative system in skin in general. In this research, an electrochemical biosensor composed of myoglobin (Mb) on molybdenum disulfide nanoparticles (MoS2 NP) encapsulated with graphene oxide (GO) was fabricated for the detection of hydrogen peroxide (H2O2). A hybrid structure composed of MoS2 NP and GO (GO@MoS2) was fabricated for the first time to enhance the electrochemical signal of the biosensor. As a sensing material, Mb was introduced to fabricate the biosensor for H2O2 detection. Formation and immobilization of GO@MoS2 were confirmed by transmission electron microscopy, ultraviolet-visible spectroscopy, scanning electron microscopy, and scanning tunneling microscopy. Immobilization of Mb and the electrochemical properties of the biosensor were investigated by cyclic voltammetry and amperometric i-t measurements. The fabricated biosensor showed redox currents of -1.86 μA at the oxidation potential and 1.95 μA at the reduction potential, both enhanced relative to those of an electrode prepared without GO@MoS2. The biosensor also showed reproducible electrochemical signals and retained this property for 9 days after fabrication. Upon addition of H2O2, the biosensor showed an enhanced amperometric response current, with selectivity, relative to that of the biosensor prepared without GO@MoS2. This novel hybrid material-based biosensor suggests a route to highly sensitive detection platforms for target molecules other than H2O2. 
A new lateral flow strip assay (LFSA) has been designed and successfully developed using a pair of aptamers functionalized with gold nanoparticles (AuNPs). This new LFSA biosensor system utilizes a cognate aptamer duo binding to vaspin, a target protein, at two different binding sites, and exhibited a sensitive and highly selective response to vaspin. A sandwich-type LFSA format was developed based on a biotin-labeled primary V1 aptamer immobilized on a streptavidin-coated membrane as a capturing probe and a secondary V49 aptamer conjugated with AuNPs as a signaling probe. Using this LFSA, vaspin could be visibly observed at detectable concentrations of up to 5 nM in both buffer and serum conditions. The sensitivity of the LFSA developed in this study ranged from 0.137 to 25 nM in buffer and from 0.105 to 25 nM in spiked human serum. The limit of detection (LOD) of the LFSA was found to be 0.137 nM in buffer and 0.105 nM in spiked human serum. Therefore, this study has shown the successful development of a simple yet effective LFSA for vaspin detection, and this development is not limited to this target but can be extended to other targets for which a pair of aptamers is available, without any special or laborious instrumentation. This system will be particularly useful as a screening tool for rapid on-site detection of any target with a pair of aptamers generated. Genome editing with site-specific nucleases (SSNs) can modify only the target gene and may be effective for gene therapy. The main limitation of genome editing for clinical use is off-target effects; excess SSNs in the cells and their longevity can contribute to off-target effects. Therefore, a controlled delivery system for SSNs is necessary. The FokI nuclease domain is a common DNA cleavage domain in zinc finger nucleases (ZFNs) and transcription activator-like effector nucleases. 
Previously, we reported a zinc finger protein delivery system that combined aptamer-fused double-strand oligonucleotides and nanoneedles. Here, we report the development of DNA aptamers that bind to FokI with high affinity and specificity. DNA aptamers were selected in six rounds of systematic evolution of ligands by exponential enrichment (SELEX). Aptamers F6#8 and #71, which showed high binding affinity to FokI (Kd = 82 nM and 74 nM, respectively), were themselves resistant to nuclease activity and did not inhibit nuclease activity. We immobilized ZFN-fused GFP on nanoneedles through these aptamers and inserted the nanoneedles into HEK293 cells. We observed the release of ZFN-fused GFP from the nanoneedles in the presence of cells. Therefore, these aptamers are useful for genome editing applications such as the controlled delivery of SSNs. An on-chip gene expression analysis compartmentalized in droplets was developed for the detection of cancer cells at the single-cell level. The chip consists of a keyhole-shaped reaction chamber with hydrophobic modification, employing a magnetic bead-droplet-handling system with a gate for bead separation. Using three kinds of water-based droplets in oil, a droplet with sample cells, a lysis buffer with magnetic beads, and RT-PCR buffer, parallel magnetic manipulation and fusion of droplets were performed using a magnet-handling device containing small external magnet patterns in an array. Actuation with the magnet offers a simple system for droplet manipulation that allows separation and fusion of droplets containing magnetic beads. After reverse transcription and amplification by thermal cycling, fluorescence was obtained for detection of overexpressed genes. For clinical detection of gastric cancer cells in peritoneal washings, Her2-overexpressing gastric cancer cells spiked into normal cells were detected by gene expression analysis of droplets containing an average of 2.5 cells. 
Our developed droplet-based cancer detection system, manipulated by external magnetic force without pumps or valves, offers a simple and flexible set-up for transcriptional detection of cancer cells, and will be greatly advantageous for less-invasive clinical diagnosis and prognostic prediction. In this work we have developed an amperometric enzymatic biosensor in a paper-based platform with a mixed electrode configuration: carbon ink for the working electrode (WE) and metal wires (from a low-cost standard electronic connection) for the reference (RE) and auxiliary (AE) electrodes. A hydrophobic wax-defined paper area was impregnated with diluted carbon ink. Three gold-plated pins of the standard connection are employed, one for connecting the WE and the other two acting as RE and AE. The standard connection works as a clip in order to support the paper in between. As a proof of concept, glucose sensing was evaluated. The enzyme cocktail (glucose oxidase, horseradish peroxidase, and potassium ferrocyanide as mediator of the electron transfer) was adsorbed on the surface. After drying, glucose solution was added to the paper on the opposite side of the carbon ink. It wets the RE and AE, and flows by capillarity through the paper, contacting the carbon WE surface. The reduction current of ferricyanide, the product of the enzymatic reaction, is measured chronoamperometrically and correlates with the concentration of glucose. Different parameters related to the bioassay were optimized by adhering the piece of paper onto a conventional screen-printed carbon electrode (SPCE). In this way, the RE and the AE of the commercial card were employed for optimizing the paper-WE. After evaluating the assay system in the hybrid paper-SPCE cell, the three-electrode system consisting of paper-WE, wire-RE, and wire-AE was employed for glucose determination, achieving a linear range between 0.3 and 15 mM with good analytical features and the ability to quantify glucose in real food samples. 
Paper-based microfluidic devices are gaining popularity because of their uncontested advantages of simplicity, cost-effectiveness, and little need for laboratory infrastructure and skilled personnel. Moreover, these devices require only small volumes of reagents and samples, provide rapid analysis, and are portable and disposable. Their combination with electrochemical detection offers the additional benefits of high sensitivity, selectivity, simple instrumentation, portability, and low total-system cost. Herein, we present the first example of an integrated paper-based screen-printed electrochemical biosensor device able to quantify nerve agents. The approach is based on dual, parallel electrochemical measurements of butyrylcholinesterase (BChE) enzyme activity towards butyrylthiocholine with and without exposure to contaminated samples. The sensitivity of the device is greatly improved by using a carbon black/Prussian Blue nanocomposite as working electrode modifier. The proposed device allows an entirely reagent-free analysis: a strip of nitrocellulose membrane, which contains the substrate, is integrated with a paper-based test area that holds a screen-printed electrode and BChE. Paraoxon, chosen as nerve agent simulant, is detected linearly down to 3 µg/L. The use of extremely affordable manufacturing techniques provides a rapid, sensitive, reproducible, and inexpensive tool for in situ assessment of nerve agent contamination. This represents a powerful approach for use by non-specialists that can easily be broadened to other (bio)systems. In this work, we developed an impedimetric label-free immunosensor for the detection of the herbicide 2,4-dichlorophenoxyacetic acid (2,4-D) both in standard solutions and in spiked real samples. 
For this purpose, we prepared the conductive copolymer poly(aniline-co-3-aminobenzoic acid) (PANABA) by electropolymerization and then immobilized anti-2,4-D antibody onto an AuNPs-PANABA-MWCNTs nanocomposite, employing the carboxylic moieties as anchor sites. The nanocomposite was synthesized by electrochemical polymerization of aniline and 3-aminobenzoic acid, in the presence of a dispersion of gold nanoparticles, onto a multi-walled carbon nanotube-based screen-printed electrode. The aniline-based copolymer, modified with the nanomaterials, enhanced the electrode conductivity, enabling more sensitive antigen detection. Impedimetric measurements were carried out by electrochemical impedance spectroscopy (EIS) under faradaic conditions using Fe(CN)6^3-/4- as redox probe. The developed impedimetric immunosensor displayed a wide linear range towards 2,4-D (1-100 ppb), good repeatability (RSD 6%), good stability, and a LOD (0.3 ppb) lower than herbicide emission limits. Fish display wide variation in their physiological responses to stress, clearly evident in plasma corticosteroid changes, chiefly cortisol levels. As cortisol is a well-known indicator of fish stress, a simple and rapid method for detecting cortisol changes, especially sudden increases, is desired. In this study, we describe an enzyme-functionalized label-free immunosensor system for detecting fish cortisol levels. Amperometric detection of cortisol was achieved by immobilizing both anti-cortisol antibody (for selective detection of cortisol) and glucose oxidase (for signal amplification and non-toxic measurement) on an Au electrode surface with a self-assembled monolayer. The system is based on the change in maximum glucose oxidation output current induced by the formation of a non-conductive antigen-antibody complex, which depends on the level of cortisol in the sample. 
The immunosensor responded to cortisol levels with a linear decrease in current in the range of 1.25-200 ng/ml (R² = 0.964). Since the dynamic range of the sensor covers the normal range of plasma cortisol in fish, samples obtained from fish did not need to be diluted. Further, electrochemical measurement of one sample required only ~30 min. The sensor system was applied to determine cortisol levels in plasma sampled from Nile tilapia (Oreochromis niloticus), which were then compared with levels of the same samples determined using the conventional method (ELISA). Values determined using both methods were well correlated. These findings suggest that the proposed label-free immunosensor could be useful for rapid and convenient analysis of cortisol levels in fish without sample dilution. We also believe that the proposed system could be integrated into a miniaturized potentiostat for point-of-care cortisol detection and be useful as a portable diagnostic in fish farms in the future. This work addresses the design of an Ebola diagnostic test involving a simple, rapid, specific, and highly sensitive procedure based on isothermal amplification on magnetic particles with electrochemical readout. Ebola padlock probes were designed to detect a specific L-gene sequence present in the five most common Ebola species. Ebola cDNA was amplified by rolling circle amplification (RCA) on magnetic particles. Further re-amplification was performed by circle-to-circle amplification (C2CA), and the products were detected in a double-tagging approach using a biotinylated capture probe for immobilization on magnetic particles and a readout probe for electrochemical detection by square-wave voltammetry on commercial screen-printed electrodes. The electrochemical genosensor was able to detect as little as 200 ymol, corresponding to 120 cDNA molecules of L-gene Ebola virus, with a limit of detection of 33 cDNA molecules. 
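The yoctomole-to-molecule equivalence quoted for the Ebola genosensor can be verified directly from the Avogadro constant; a quick sanity check:

```python
# Sanity check of the quantities in the Ebola genosensor abstract:
# 200 ymol (yoctomoles, 1e-24 mol) should correspond to ~120 molecules.

AVOGADRO = 6.02214076e23  # 1/mol (exact SI value since 2019)

def yoctomol_to_molecules(ymol):
    """Convert an amount in yoctomoles to a molecule count."""
    return ymol * 1e-24 * AVOGADRO

molecules = yoctomol_to_molecules(200)  # ~120
```

The result (~120.4) matches the 120 cDNA molecules stated in the abstract; the quoted 33-molecule detection limit likewise corresponds to about 55 ymol.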
The isothermal double-amplification procedure by C2CA, combined with the electrochemical readout and magnetic actuation, enables high sensitivity, resulting in a rapid, inexpensive, robust, and user-friendly sensing strategy that offers a promising approach for primary care in low-resource settings, especially in less developed countries. Infectious plant diseases are caused by pathogenic microorganisms such as fungi, bacteria, viruses, viroids, phytoplasmas, and nematodes. Worldwide, plant pathogen infections are among the main factors limiting crop productivity and increasing economic losses. Plant pathogen detection is important as a first step in managing a plant disease in greenhouses, field conditions, and at country borders. Current immunological techniques used to detect pathogens in plants include enzyme-linked immunosorbent assays (ELISA) and direct tissue blot immunoassays (DTBIA). DNA-based techniques such as polymerase chain reaction (PCR), real-time PCR (RT-PCR), and dot blot hybridization have also been proposed for pathogen identification and detection. However, these methodologies are time-consuming, require complex instruments, and are not suitable for in-situ analysis. Consequently, there is strong interest in developing new biosensing systems for early detection of plant diseases with high sensitivity and specificity at the point of care. In this context, we review here recent advances in the development of biosensing systems for plant pathogen detection based on both antibody and DNA receptors. The use of different nanomaterials, such as nanochannels and metallic nanoparticles, for the development of innovative and sensitive biosensing systems for the detection of pathogens (i.e., bacteria and viruses) at the point of care is also shown. Plastic and paper-based platforms have been used for this purpose, offering cheap, easy-to-use, and truly integrated sensing systems for rapid on-site detection. 
Besides devices developed at the research and development level, a brief review of commercially available kits is also included. Biosensors can deliver the rapid bacterial detection that is needed in many fields, including food safety, clinical diagnostics, biosafety, and biosecurity. Whole-cell imprinted polymers have the potential to be applied as recognition elements in biosensors for selective bacterial detection. In this paper, we report on the use of 3-aminophenylboronic acid (3-APBA) for the electrochemical fabrication of a cell-imprinted polymer (CIP). The use of a monomer bearing a boronic acid group, with its ability to interact specifically with cis-diols, allowed the formation of a polymeric network presenting both morphological and chemical recognition abilities. A particularly beneficial feature of the proposed approach is the reversibility of the cis-diol-boronic group complex, which facilitates easy release of the captured bacterial cells and subsequent regeneration of the CIP. Staphylococcus epidermidis was used as the model target bacterium for the CIP, and electrochemical impedance spectroscopy (EIS) was explored for label-free detection of the target bacteria. The modified electrodes showed a linear response over the range of 10^3-10^7 cfu/mL. A selectivity study also showed that the CIP could discriminate its target from non-target bacteria of similar shape. The CIPs had high affinity and specificity for bacterial detection and provided a switchable interface for easy removal of bacterial cells. Volatile organic compound (VOC) detection is in high demand for clinical treatment, environmental monitoring, and food quality control. In particular, VOCs in human exhaled breath can serve as significant biomarkers of diseases such as lung cancer and diabetes. In this study, a smartphone-based sensing system was developed for real-time VOC monitoring using alternating current (AC) impedance measurement. 
The interdigital electrodes, modified with zinc oxide (ZnO), graphene, and nitrocellulose, were used as sensors to produce impedance responses to VOCs. The responses could be detected by a hand-held device, sent to a smartphone via Bluetooth, and reported with concentrations by an Android application on the smartphone. The smartphone-based system was demonstrated to detect acetone at concentrations as low as 1.56 ppm, while AC impedance spectroscopy was used to distinguish acetone from other VOCs. Finally, measurements of human exhalations were carried out to obtain the concentration of acetone in exhaled breath before and after exercise. The results proved that the smartphone-based system can be applied to the detection of VOCs in real settings for healthcare diagnosis. Thus, the smartphone-based system for VOC detection provides a convenient, portable, and efficient approach to monitoring VOCs in exhaled breath and could allow early diagnosis of some diseases. This work discusses the application of titanium oxide (TiOx) thin films, deposited using physical (reactive magnetron sputtering, RMS) and chemical (atomic layer deposition, ALD) vapour deposition methods, as a functional coating for label-free optical biosensors. The films were applied as a coating for two types of sensors: one based on the localised surface plasmon resonance (LSPR) of gold nanoparticles deposited on a glass plate, and one based on a long-period grating (LPG) induced in an optical fibre. Optical and structural properties of the TiOx thin films were investigated and discussed. It was found that the deposition method has a significant influence on the optical properties and composition of the films, but negligible impact on the effectiveness of TiOx surface silanization. A higher oxygen content with lower Ti content in the ALD films leads to the formation of layers with a higher refractive index and a slightly higher extinction coefficient than for the RMS TiOx. 
Moreover, application of the TiOx film, independently of deposition method, enables not only tuning of the spectral response of the investigated biosensors but also, in the case of LSPR, enhanced biofunctionalization capability: the TiOx film mechanically protects the nanoparticles and changes the biofunctionalization procedure to one typical for oxides. TiOx-coated LSPR and LPG sensors with refractive index sensitivities of close to 30 and 3400 nm/RIU, respectively, were investigated. The ability for molecular recognition was evaluated with the well-known complex formation between avidin and biotin as a model system. The shift in resonance wavelength reached 3 and 13.2 nm for the LSPR and LPG sensors, respectively. Any modification in TiOx properties resulting from the biofunctionalization process can also be clearly detected. The process of agglutination is commonly used for the detection of biomarkers such as proteins or viruses. The multiple bindings between micrometer-sized particles, either latex beads or red blood cells (RBCs), create aggregates that are easily detectable and give qualitative information about the presence of the biomarkers. In most cases, detection is made by simple naked-eye observation of agglutinates, without any access to the kinetics of agglutination. In this study, we address the development of real-time observation of RBC agglutination. Using ABO blood typing as a proof of concept, we developed (i) an integrated biological protocol suitable for further use in point-of-care (POC) analysis and (ii) two dedicated image processing algorithms for real-time, quantitative measurement of agglutination. Anti-A or anti-B typing reagents were dried inside the microchannel of a passive microfluidic chip designed to enhance capillary flow. A blood drop deposited at the tip of the biochip completes this simple biological protocol. 
In situ agglutination of autologous RBCs was achieved by means of the embedded reagents, and the agglutination process was monitored in real time by video recording. Using a training set of 24 experiments, two real-time indicators based on the correlation and variance of gray levels were optimized and then confirmed on a validation set. 100% correct discrimination between positive and negative agglutinations was achieved within less than 2 min by measuring the real-time evolution of both the correlation and variance indicators. The DNA methylation level at a given gene region is considered a new type of biomarker for diagnosis, and a miniaturized, rapid detection system is required. Here we have developed a simple electrochemical detection system for DNA methylation using a methyl-CpG-binding domain (MBD) and a glucose dehydrogenase (GDH)-fused zinc finger protein. The analytical system consists of three steps: (1) methylated DNA collection by MBD, (2) PCR amplification of a target genomic region among the collected methylated DNA, and (3) electrochemical detection of the PCR products using the GDH-fused zinc finger protein. With this system, we have successfully measured the methylation levels at the promoter region of the androgen receptor gene in 10^6 copies of genomic DNA extracted from PC3 and TSU-PR1 cancer cell lines. Since no sequence analysis or enzymatic digestion is required, DNA methylation levels can be measured within 3 h with a simple procedure. In this work, a novel nanocomposite film consisting of Au nanoparticles/graphene-chitosan has been designed to construct an impedimetric immunosensor for rapid and sensitive immunoassay of botulinum neurotoxin A (BoNT/A). BoNT/A antibody was immobilized on a glassy carbon electrode modified with Au nanoparticles/graphene-chitosan for signal amplification. 
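The two gray-level agglutination indicators described earlier (frame correlation and gray-level variance) can be sketched as follows; the tiny "frames" here are invented lists of pixel intensities, not data from the study, and a real implementation would operate on camera images of the microchannel:

```python
# Sketch of the two real-time RBC agglutination indicators:
# variance of gray levels (aggregates create dark clusters -> high variance)
# and Pearson correlation between frames.

def mean(v):
    return sum(v) / len(v)

def variance(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def correlation(a, b):
    """Pearson correlation between two flattened grayscale frames."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (variance(a) * len(a) * variance(b) * len(b)) ** 0.5

# A uniform RBC suspension (negative) versus an agglutinated frame in
# which aggregates create strong dark/bright contrast (positive).
uniform = [100, 101, 99, 100, 100, 101, 99, 100]
clumped = [30, 180, 25, 175, 35, 170, 28, 182]
```

A positive/negative decision would then be made by tracking how these two indicators evolve over the first two minutes of video.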
The fabrication of the immunosensor was extensively characterized using transmission electron microscopy (TEM), scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), cyclic voltammetry (CV), and electrochemical impedance spectroscopy (EIS). The impedance changes due to specific immuno-interactions at the immunosensor surface, which efficiently restrict electron transfer of the redox probe Fe(CN)6^4-/3-, were utilized to detect BoNT/A. The measurements were highly target-specific and linear with logarithmic BoNT/A concentration in PBS, milk, and human serum across a 0.27-268 pg/mL range, with a detection limit of 0.11 pg/mL. Electrophysiological biosensors embedded in planar devices represent a state-of-the-art approach to measuring and evaluating the electrical activity of biological systems. This measurement method allows the testing of drugs and their influence on cells or tissues, cytotoxicity testing, as well as direct implementation into biological systems in vivo for signal transduction. Multi-electrode arrays (MEAs) with metal or metal-like electrodes on glass substrates are one of the most common, well-established platforms for this purpose. In recent years, organic electrochemical transistors (OECTs) made of poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) have also shown their value in transducing and amplifying ionic signals in biological systems. We developed OECT devices in a wafer-scale process and used them as electrophysiological biosensors measuring the electrophysiological activity of the cardiac cell line HL-1. Our optimized devices show very promising properties, such as a good signal-to-noise ratio and the ability to record fast components of extracellular signals. 
Combined with easy, cost-effective fabrication and the transparency of the polymer, this platform offers a valuable alternative to traditional MEA systems for future cell sensing applications. In sport, exercise, and healthcare settings, there is a need for continuous, non-invasive monitoring of biomarkers to assess human performance, health, and wellbeing. Here we report the development of a flexible microfluidic platform with fully integrated sensing for on-body testing of human sweat. The system can simultaneously and selectively measure metabolites (e.g., lactate) and electrolytes (e.g., pH, sodium), together with temperature sensing for internal calibration. The platform is constructed such that a continuous flow of sweat passes through an array of flexible microneedle-type sensors (50 µm diameter) incorporated in a microfluidic channel. Potentiometric sodium ion sensors were developed using a polyvinyl chloride (PVC) functional membrane deposited on an electrochemically deposited internal layer of poly(3,4-ethylenedioxythiophene) (PEDOT) polymer. The pH sensing layer is based on a highly sensitive membrane of iridium oxide (IrOx). The amperometric lactate sensor consists of doped enzymes deposited on top of a semipermeable copolymer membrane and outer polyurethane layers. Real-time data were collected from human subjects during cycle ergometry and treadmill running. A detailed comparison of sodium, lactate, and cortisol from saliva is reported, demonstrating the potential of the multi-sensing platform for tracking these outcomes. In summary, a fully integrated sensor for continuous, simultaneous, and selective measurement of sweat metabolites, electrolytes, and temperature was achieved using a flexible microfluidic platform. The system can also transmit information wirelessly for ease of collection and storage, with the potential for real-time data analytics. 
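Potentiometric ion sensors like the sodium sensor above are usually interpreted against the Nernst equation; a sketch of the idealized textbook response (the abstract does not report the platform's measured slope, so this is an assumption of Nernstian behavior):

```python
# Idealized Nernstian response for a potentiometric Na+ sensor: for a
# monovalent ion at 25 C the slope is (RT/F)*ln(10) ~ 59.2 mV per decade
# of activity. Textbook values, not figures from the abstract.
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol
T = 298.15        # 25 C in kelvin

def nernst_slope_mV(z=1, temp_K=T):
    """Potential change per tenfold activity change, in mV."""
    return 1000 * (R * temp_K) / (z * F) * math.log(10)

def emf_shift_mV(a1, a2, z=1):
    """EMF shift when ion activity changes from a1 to a2 (same units)."""
    return nernst_slope_mV(z, T) * math.log10(a2 / a1)
```

A tenfold rise in sweat Na+ activity would thus shift the electrode potential by about 59 mV under these ideal assumptions; real membrane electrodes are calibrated to their actual, typically sub-Nernstian, slope.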
Surface acoustic wave mediated transduction has been widely used in sensor and actuator applications. In this study, a shear horizontal surface acoustic wave (SH-SAW) sensor was used for the detection of the foodborne pathogen Escherichia coli O157:H7, a dangerous strain among the 225 unique E. coli serotypes. Even a few cells of this bacterium can cause serious complications, to which young children are most vulnerable. The presence of more than 1 cfu of E. coli O157:H7 in 25 g of food is considered dangerous. The SH-SAW biosensor was fabricated on a 64° YX LiNbO3 substrate. Its sensitivity was enhanced by depositing a 130.5 nm thin layer of SiO2 nanostructures with particle sizes smaller than 70 nm. The nanostructures act both as a waveguide and as a physical surface modification of the sensor prior to biomolecular immobilization. A specific 22-mer amine-terminated probe ssDNA sequence from E. coli O157:H7 was immobilized on the thin-film sensing area through chemical functionalization (glutaraldehyde, CHO-(CH2)3-CHO, and APTES, NH2-(CH2)3-Si(OC2H5)3). The high performance of the sensor was shown with the specific oligonucleotide target, attaining a sensitivity of 0.6439 nM/0.1 kHz and a detection limit down to 1.8 femtomolar (1.8 × 10^-15 M). Further evidence was provided by specificity analysis using single-mismatched and complementary oligonucleotide sequences. Microarrays and other surface-based nucleic acid detection schemes rely on hybridization of the target to surface-bound detection probes. We present the first comparison of two strategies to detect DNA using a giant magnetoresistive (GMR) biosensor platform starting from an initially double-stranded DNA target. The target strand of interest is biotinylated and detected by the GMR sensor by linking streptavidin magnetic nanoparticles (MNPs) to the sensor surface. 
The sensor platform has a dynamic detection range from 40 pM to 40 nM with highly reproducible results and is used to monitor real-time binding signals. The first strategy, using off-chip heat denaturation followed by sequential on-chip incubation of the nucleic acids and MNPs, produces a signal that stabilizes quickly, but the signal magnitude is reduced by competitive rehybridization of the target in solution. The second strategy, using magnetic capture of the double-stranded product followed by denaturation, produces a higher signal, but the signal increase is limited by diffusion of the MNPs. Our results show that both strategies give highly reproducible results, but the signal obtained using magnetic capture is higher and insensitive to rehybridization. A semi-combinatorial virtual approach was used to prepare peptide-based gas sensors with binding properties towards five different chemical classes (alcohols, aldehydes, esters, hydrocarbons, and ketones). Molecular docking simulations were conducted for a complete tripeptide library (8000 elements) versus 58 volatile compounds belonging to those five chemical classes. By maximizing the differences between chemical classes, a subset of 120 tripeptides was extracted and used as scaffolds for generating a combinatorial library of 7912 tetrapeptides. This library was processed in the same way as the former. Five tetrapeptides (IHRI, KSDS, LGFD, TGKF, and WHVS) were chosen for the experimental step based on their virtual affinity and cross-reactivity. The five peptides were covalently bound to gold nanoparticles by adding a terminal cysteine to each tetrapeptide and deposited onto 20 MHz quartz crystal microbalances to construct the gas sensors. The behavior of the peptides after this chemical modification was simulated over the pH range used in the immobilization step. ΔF signals analyzed by principal component analysis matched the virtually screened data. 
The array was able to clearly discriminate the 13 volatile compounds tested based on their hydrophobicity/hydrophilicity as well as their molecular weight. The interleukin-1b (IL-1b) and interleukin-10 (IL-10) biomarkers are among the antigens secreted in acute stages of inflammation after left ventricular assist device (LVAD) implantation in patients suffering from heart failure (HF). In the present study, we have developed a fully integrated electrochemical biosensor platform for cytokine detection at minute concentrations. Using eight gold working microelectrodes (WEs), the design increases the sensitivity of detection, decreases the measurement time, and allows simultaneous detection of different cytokine biomarkers. The biosensor platform was fabricated on silicon substrates using silicon technology. Monoclonal antibodies (mAb) against human IL-1b and human IL-10 were electroaddressed onto the gold WEs through functionalization with 4-carboxymethyl aryl diazonium (CMA). Cyclic voltammetry (CV) was applied during the WE functionalization process to characterize the gold WE surface properties. Finally, electrochemical impedance spectroscopy (EIS) was used to characterize the modified gold WEs. The biosensor platform was highly sensitive to the corresponding cytokines, and no interference with other cytokines was observed. Both cytokines, IL-10 and IL-1b, were detected within the range of 1 pg/mL to 15 pg/mL. The present electrochemical biosensor platform is very promising for multiplexed detection of biomolecules and can dramatically decrease the time of analysis. This can provide clinicians and doctors with data concerning cytokine secretion at minute concentrations and help predict the first signs of inflammation after LVAD implantation. This work presents the development of highly sensitive, selective, fast, and reusable C-reactive protein (CRP) aptasensors. 
This novel approach takes advantage of highly sensitive refractometers based on lossy mode resonances generated by thin indium tin oxide (ITO) films fabricated on the planar region of D-shaped optical fibers. CRP selectivity is obtained by adhesion of a CRP-specific aptamer chain onto the ITO film using the layer-by-layer (LbL) nano-assembly fabrication process. The sensing mechanism relies on resonance wavelength shifts originating from refractive index variations of the aptamer chain in the presence of the target molecule. The fabricated devices show high selectivity to CRP when compared with other target molecules, such as urea or creatinine, while maintaining a low detection limit (0.0625 mg/L) and fast response time (61 s). Additionally, these sensors show a repeatable response over several days and are reusable after cleaning in ultrapure water. (C) 2017 Elsevier B.V. All rights reserved. This paper presents a "turn-on" fluorescence biosensor based on graphene quantum dots (GQDs) and molybdenum disulfide (MoS2) nanosheets for rapid and sensitive detection of the epithelial cell adhesion molecule (EpCAM). PEGylated GQDs were used as donor molecules, which not only largely increase emission intensity but also prevent non-specific adsorption of the PEGylated GQDs on the MoS2 surface. The sensing platform was realized by adsorption of PEGylated GQD-labelled EpCAM aptamer onto the MoS2 surface via van der Waals forces. The fluorescence signal of the GQDs was then quenched by the MoS2 nanosheets via a fluorescence resonance energy transfer (FRET) mechanism. In the presence of EpCAM protein, the stronger specific affinity between aptamer and EpCAM protein detaches the GQD-labelled EpCAM aptamer from the MoS2 nanosheets, leading to restoration of the fluorescence intensity. 
By monitoring the change in fluorescence signal, the target EpCAM protein could be detected sensitively and selectively, with a linear detection range from 3 nM to 54 nM and a limit of detection (LOD) of around 450 pM. In addition, this nanobiosensor has been successfully used to detect EpCAM-expressing breast cancer MCF-7 cells. Antimicrobial resistance (AMR) is becoming a major global health concern, prompting an urgent need for highly sensitive and rapid diagnostic technology. Traditional assays available for monitoring bacterial cultures are time-consuming and labor-intensive. We present a magnesium zinc oxide (MZO) nanostructure-modified quartz crystal microbalance (MZO(nano)-QCM) biosensor to dynamically monitor antimicrobial effects on E. coli and S. cerevisiae. MZO nanostructures were grown on the top electrode of a standard QCM using metal-organic chemical vapor deposition (MOCVD). The MZO nanostructures are chosen for their multifunctionality, biocompatibility, and large effective sensing surface. The MZO surface wettability and morphology are controlled, offering high sensitivity to various biological and biochemical species. MZO nanostructures showed over 4-times greater cell viability than ZnO, because MZO releases a 4-times lower Zn2+ concentration into the cell medium. The MZO(nano)-QCM was applied to detect the effects of ampicillin and tetracycline on sensitive and resistant strains of E. coli, as well as the effects of amphotericin B and miconazole on S. cerevisiae, through the device's time-dependent frequency shift and motional resistance. The MZO(nano)-QCM showed 4-times higher sensitivity than a ZnO(nano)-QCM and over 10-times higher than a regular QCM. For comparison, the optical density at 600 nm (OD600) method and a cell viability assay were employed as standard references to verify the detection results from the MZO(nano)-QCM. In the case of S. 
cerevisiae, the OD600 method failed to distinguish between cytotoxic and cytostatic drug effects, whereas the MZO(nano)-QCM was able to accurately detect the drug effects. The MZO(nano)-QCM biosensor provides a promising technology enabling dynamic and rapid diagnostics for antimicrobial drug development and AMR detection. A sensitive and rapid sandwich immunoassay (IA) was developed for human lipocalin-2 (LCN2) by functionalizing a KOH-treated polystyrene microtiter plate (MTP) with multiwalled carbon nanotubes (MWCNTs) dispersed in 3-aminopropyltriethoxysilane (APTES). The significantly increased surface area due to the presence of MWCNTs led to a high immobilization density of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC)-activated anti-LCN2 capture antibodies (Ab). The anti-LCN2 Ab-bound MTPs were stable for 6 weeks when stored in 0.1 M PBS, pH 7.4, at 4 °C. The IA detects LCN2 from 0.6 to 5120 pg/mL with a limit of detection (LOD) and limit of quantification (LOQ) of 0.9 pg/mL and 6 pg/mL, respectively. The assay offered an ~50-fold lower LOD and was ~3-fold faster compared with a commercial sandwich enzyme-linked immunosorbent assay kit. We present a hand-held optical biosensing system utilizing a smartphone-embedded illumination sensor integrated with an immunoblotting assay method. The smartphone-embedded illumination sensor is regarded as an alternative optical receiver that can replace conventional optical analysis apparatus, because the illumination sensor responds to ambient light across a wide range of wavelengths, including visible and infrared. To demonstrate the biosensing applicability of our system, which employs enzyme-mediated immunoblotting and the accompanying light interference, various ambient light conditions, including outdoor sunlight and indoor fluorescent light, were tested. 
For the immunoblotting assay, a biosensing channel generating insoluble precipitates as the end product of the enzymatic reaction is fabricated and mounted on the illumination sensor of the smartphone. The intensity of light arriving at the illumination sensor is inversely proportional to the amount of precipitate produced in the channel, and these changes are immediately analyzed and quantified via smartphone software. In this study, the urinary C-terminal telopeptide fragment of type II collagen (uCTX-II), a biomarker for osteoarthritis diagnosis, was tested as a model analyte. The developed smartphone-based sensing system efficiently measured uCTX-II in the 0-5 ng/mL concentration range with high sensitivity and accuracy under various light conditions. These assay results show that the illumination sensor-based optical biosensor is suitable for point-of-care testing (POCT). Sepsis caused by bacterial infection leads to high mortality among patients in the intensive care unit (ICU). Rapid identification of bacterial infection is essential to ensure early, appropriate administration of antibiotics and save patients' lives, yet present benchtop molecular diagnosis is time-consuming and labor-intensive, which limits treatment efficiency, especially when the number of samples to be tested is large. Therefore, we report a microfluidic lab-on-a-disc (LOAD) platform to provide a sample-to-answer solution. The customized microfluidic channel design of our LOAD automates the sequential analytical steps of a benchtop workflow. It relies on simple but controllable centrifugal force for the actuation of samples and reagents. Our LOAD system performs three major functions, namely DNA extraction, isothermal DNA amplification, and real-time signal detection, in a predefined sequence. 
The self-contained disc performs sample heating with chemical lysis buffer, and silica microbeads are employed for DNA extraction from clinical specimens. Molecular diagnosis of specific target bacterial DNA sequences is then performed using real-time loop-mediated isothermal amplification (RT-LAMP) with SYTO-9 as the signal reporter. Our LOAD system is capable of identifying Mycobacterium tuberculosis (TB) and Acinetobacter baumannii (Ab), with detection limits of 10^3 cfu/mL for TB in sputum and 10^2 cfu/mL for Ab in blood, within 2 h after sample loading. The reported LOAD, based on an integrated approach, should address the growing need for rapid point-of-care medical diagnosis in the ICU. Rapid and reliable molecular analysis of DNA for disease diagnosis is highly sought after. FET-based sensors fulfill the demands of future point-of-care devices owing to their sensitive charge sensing and possibility of integration with electronic instruments. However, most FETs are unstable under aqueous conditions, less sensitive, and require a conventional Ag/AgCl electrode for gating. In this work, we propose a solution-gated graphene FET (SG-FET) for real-time monitoring of microscale loop-mediated isothermal amplification of DNA. The SG-FET was readily fabricated with graphene as the active layer, on-chip co-planar electrodes, and a polydimethylsiloxane-based microfluidic reservoir. A linear response of about 0.23 V/pH was observed when buffers from pH 5 to 9 were analyzed on the SG-FET. To evaluate the performance of the SG-FET, we monitored the amplification of a Lambda phage gene as a proof of concept. During amplification, protons are released, which gradually shifts the Dirac point voltage (V-Dirac) of the SG-FET. The resulting device was highly sensitive, with a femto-level limit of detection. The SG-FET could produce a positive signal within 16.5 min of amplification. Amplification of 10 ng/µL DNA for 1 h produced a ΔV-Dirac of 0.27 V.
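As a quick sanity check on the SG-FET figures above: the reported 0.23 V/pH sensitivity and the 0.27 V Dirac-point shift after 1 h of amplification together imply roughly a one-unit change in local pH, consistent with proton release during LAMP. The arithmetic:

```python
# Both input numbers are taken directly from the abstract.
sensitivity = 0.23    # V per pH unit (measured on pH 5-9 buffers)
delta_v_dirac = 0.27  # V shift after 1 h of amplification

delta_ph = delta_v_dirac / sensitivity  # implied change in local pH
print(f"Implied local pH change: about {delta_ph:.2f} units")
```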
The sensor was tested over a range of 2 × 10^2 copies/µL (10 fg/µL) to 2 × 10^8 copies/µL (10 ng/µL) of target DNA. Development of this sensing technology could significantly lower the time, cost, and complications of DNA detection. (C) 2016 Elsevier B.V. All rights reserved. A composite consisting of cerium oxide nanoparticles (nanoceria) and an oxidative enzyme co-entrapped in an agarose gel has been developed for the reagent-free colorimetric detection of biologically important target molecules. The oxidase, immobilized in the agarose matrix, promotes the oxidation of target molecules to generate H2O2, which subsequently induces changes in the physicochemical properties of the nanoceria, producing a color change from white/light yellow to intense yellow/light orange without any requirement for additional colorimetric substrates. By utilizing this unique color-changing property of nanoceria entrapped within the agarose gel, glucose was detected very specifically over a wide linear range from 0.05 to 2 mM, which is suitable for measuring serum glucose levels, with excellent operational stability over two weeks at room temperature. The biosensor also exhibited a high degree of precision and reproducibility when employed to detect glucose in real human serum samples. We expect that this novel nanoceria-based biosensing format could be readily extended to other oxidative enzymes for the convenient detection of various clinically important target molecules. (C) 2016 Elsevier B.V. All rights reserved. The development of quick and reliable methods to investigate the antibiotic susceptibility of bacteria is vital to prevent inappropriate and untargeted use of antibiotics and to control the antibiotic resistance crisis. The authors have developed an innovative, low-cost, and rapid approach to evaluating the antibiotic susceptibility of bacteria by employing the photoluminescence (PL) emission of photocorroding GaAs/AlGaAs quantum well (QW) biochips.
The biochips were functionalized with self-assembled monolayers of biotinylated polyethylene glycol thiols, neutravidin, and biotinylated antibodies to immobilize bacteria. Illumination of a QW biochip with above-bandgap radiation leads to the formation of surface oxides and dissolution of a limited thickness of the GaAs cap material (≤ 10 nm), which results in the appearance of a characteristic maximum in the PL plot collected over time. The position of the PL maximum depends on the photocorrosion rate, which in turn depends on the electric charge immobilized on the surface of the GaAs/AlGaAs biochips. Bacteria captured on the surface of the biochips retard the PL maximum, while growth of these bacteria delays it further. For bacteria affected by antibiotics, the PL maximum occurs earlier than for growing bacteria. By exposing bacteria to nutrient broth and penicillin or ciprofloxacin, the authors were able to distinguish antibiotic-sensitive and resistant Escherichia coli in situ within less than 3 h, considerably more rapidly than with culture-based methods. The PL emission of the heterostructures was monitored with an inexpensive reader. This rapid determination of bacterial sensitivity to different antibiotics could have clinical and research applications. (C) 2016 Elsevier B.V. All rights reserved. In this study, a novel spectroelectrochemical method was proposed for neurotransmitter detection. The central sensing device was a hybrid structure of a nanohole array and gold nanoparticles, which demonstrated good conductivity and high localized surface plasmon resonance (LSPR) sensitivity. By utilizing this specially designed nanoplasmonic sensor as the working electrode, both electrical and spectral responses on the surface of the sensor could be detected simultaneously during the electrochemical process.
Cyclic voltammetry was implemented to activate the oxidation and recovery of dopamine and serotonin, while transmission spectrum measurements were carried out to synchronously record the LSPR responses of the nanoplasmonic sensor. Coupled with electrochemistry, the LSPR results showed good integrity and linearity, along with promising accuracy in qualitative and quantitative detection, even in mixed solutions and brain tissue homogenates. Detection results for other, negatively charged neurotransmitters such as acetylcholine demonstrated the selectivity of our detection method for positively charged transmitters. Compared with traditional electrochemical signals, LSPR signals provided a better signal-to-noise ratio and lower detection limits, along with immunity against interference factors such as ascorbic acid. Given this robustness, the coupled detection method proved to be a promising platform for point-of-care neurotransmitter testing. (C) 2016 Elsevier B.V. All rights reserved. Blood microparticles (MPs) are small membrane vesicles (50-1000 nm) derived from different cell types. They are known to play important roles in various biological processes and are also recognized as potential biomarkers of various health disorders. Different methods are currently used for the detection and characterization of MPs, but none can simultaneously quantify and characterize total MPs; hence, there is a need for a new approach for simultaneous detection, characterization, and quantification of microparticles. Here we show the potential of the surface plasmon resonance (SPR) method coupled to atomic force microscopy (AFM) to quantify and characterize platelet-derived microparticles (PMPs) across the whole nano- to micrometer scale. The different subpopulations of microparticles could be determined via their capture onto the surface using specific ligands.
In order to verify the correlation between the capture level and the microparticle concentration in solution, two calibration standards were used: virus-like particles (VLPs) and synthetic beads, with mean diameters of 53 nm and 920 nm, respectively. AFM analysis of the biochip surface allowed metrological analysis of the captured PMPs and revealed that more than 95% of PMPs were smaller than 300 nm. Our results suggest that our NanoBioAnalytical platform, combining SPR and AFM, is a suitable method for sensitive, reproducible, label-free characterization and quantification of MPs over a wide concentration range (10^7 to 10^12 particles/mL, with a limit of detection (LOD) in the lowest ng/µL range), which matches their typical concentrations in blood. (C) 2016 Elsevier B.V. All rights reserved. A highly sensitive biosensor to detect norovirus in the environment is desired to prevent the spread of infection. In this study, we investigated the design of a surface plasmon resonance (SPR)-assisted fluoroimmunosensor to increase its sensitivity and performed detection of norovirus virus-like particles (VLPs). A quantum dot fluorescent dye was employed because of its large Stokes shift. The sensor design was optimized for CdSe-ZnS-based quantum dots. The optimal design was applied to a simple SPR-assisted fluoroimmunosensor that uses a sensor chip equipped with a V-shaped trench. The excitation efficiency of the quantum dots, the degree of electric field enhancement by SPR, and the intensity of the autofluorescence of the sensor chip substrate were theoretically and experimentally evaluated to maximize the signal-to-noise ratio. As a result, an excitation wavelength of 390 nm was selected to excite SPR on an Al film of the sensor chip. A sandwich assay of norovirus VLPs was performed using the designed sensor. A minimum detectable concentration of 0.01 ng/mL, corresponding to 100 virus-like particles in the detection region of the V-trench, was demonstrated.
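The claim above, that 0.01 ng/mL corresponds to about 100 VLPs in the V-trench detection region, implies a sub-microliter detection volume. A hedged plausibility check, assuming a norovirus VLP mass of roughly 10 MDa (about 180 copies of ~58 kDa VP1; this mass is an outside estimate, not a figure from the abstract):

```python
DALTON_G = 1.6605e-24           # grams per dalton (CODATA value)
vlp_mass_g = 1.0e7 * DALTON_G   # assumed ~10 MDa per VLP (estimate)

conc_g_per_ml = 0.01e-9         # 0.01 ng/mL, as reported
particles_per_ml = conc_g_per_ml / vlp_mass_g

# Volume that would contain the reported 100 VLPs, converted mL -> µL
detection_volume_ul = 100 / particles_per_ml * 1e3
print(f"{particles_per_ml:.1e} particles/mL -> "
      f"~{detection_volume_ul:.2f} µL for 100 VLPs")
```

Under these assumptions the implied detection volume is on the order of 0.2 µL, a plausible size for a V-shaped trench on a sensor chip.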
(C) 2016 The Authors. Published by Elsevier B.V. Multifunctional nanocomposites have huge potential for cell imaging, drug delivery, and improving therapeutic effect with fewer side effects. To date, diverse approaches have been demonstrated to endow a single nanostructure with multifunctionality. Herein, we report the synthesis and application of core-shell nanoparticles composed of an upconversion nanoparticle (UCNP) core and a graphene oxide quantum dot (GOQD) shell. The UCNP was prepared and applied for imaging-guided analyses of upconversion luminescence. The GOQD was prepared and employed as a promising drug delivery vehicle to improve the anti-tumor therapeutic effect in this study. The unique properties of UCNPs and GOQDs were incorporated into a single nanostructure to provide desirable functions for cell imaging and drug delivery. In addition, hypocrellin A (HA) was loaded on the GOQDs for photodynamic therapy (PDT). HA, a commonly used chemotherapy drug and photosensitizer, was conjugated with the GOQD by pi-pi interaction and loaded on the PEGylated UCNP without a complicated synthetic process, which could break the structure of HA. Applying these core-shell nanoparticles in an MTT assay, we demonstrated that UCNPs with a GOQD shell loaded with HA could be excellent candidates as multifunctional agents for cell imaging, drug delivery, and cell therapy. (C) 2016 Elsevier B.V. All rights reserved. Malignant melanoma is one of the most dangerous skin cancers, originating from melanocytes. Thus, an early and proper melanoma diagnosis significantly influences therapy efficiency. Melanoma recognition is still difficult and generally relies on subjective assessment. In particular, there is a lack of quantitative methods for melanoma diagnosis and for monitoring tumour progression.
One such method is atomic force microscopy (AFM) operating in force spectroscopy mode, combined with quartz crystal microbalance (QCM), both applied to quantify molecular interactions. In our study, we compared the recognition of mannose-type glycans in melanocytes (HEMa-LP) and in melanoma cells originating from the radial growth phase (WM35) and from lung metastasis (A375-P). The glycosylation level on their surfaces was probed using the lectin concanavalin A (Con A) from Canavalia ensiformis. The interactions of Con A with surface glycans were quantified with both AFM and QCM techniques, which revealed the presence of various glycan structural groups in a cell-dependent manner. The Con A-binding mannose (or glucose) type glycans present on the WM35 cell surface are rather short and less ramified, while in A375-P cells Con A binds to long, branched mannose and glucose types of oligosaccharides. (C) 2016 Elsevier B.V. All rights reserved. Sulfapyridine (SPy) is a sulfonamide antibiotic widely employed as a veterinary drug for prophylactic and therapeutic purposes. Its spread in food products therefore has to be restricted. Herein, we report the synthesis and characterization of a novel electrochemical biosensor, based on gold microelectrodes modified with a new structure of magnetic nanoparticles (MNPs) coated with poly(pyrrole-co-pyrrole-2-carboxylic acid) (Py/Py-COOH), for highly efficient detection of SPy. This analyte was quantified through a competitive detection procedure with 5-[4-(amino)phenylsulfonamide]-5-oxopentanoic acid-BSA (SA2-BSA) antigens toward a polyclonal antibody (Ab-155). Initially, the gold working electrodes (WEs) of an integrated bio-micro-electro-mechanical system (BioMEMS) were functionalized with Ppy-COOH/MNPs using chronoamperometric (CA) electrodeposition. Afterward, SA2-BSA was covalently bonded to the Py/Py-COOH/MNP-modified gold WEs through amide bonding.
Competitive detection of the analyte was performed with a mixture of a fixed concentration of Ab-155 and decreasing concentrations of SPy from 50 µg/L to 2 ng/L. Atomic force microscopy characterization was performed to confirm Ppy-COOH/MNP electrodeposition on the microelectrode surfaces. Electrochemical measurements for SPy detection were carried out using electrochemical impedance spectroscopy (EIS). This biosensor was found to be highly sensitive and specific for SPy, with a limit of detection of 0.4 ng/L. The technique was used to detect SPy in honey samples by the standard addition method. The measurements were highly reproducible, both for SPy detection and in the presence of interferents, namely sulfadiazine (SDz), sulfathiazole (STz), and sulfamerazine (SMz). Given these advantages of sensitivity, specificity, and low cost, our system provides a new horizon for the development of advanced immunoassays in industrial food control. (C) 2016 Published by Elsevier B.V. Interest in tau protein is increasing rapidly in Alzheimer's disease (AD) diagnosis. There is an urgent need for highly sensitive and specific diagnostic platforms for its quantification, also in combination with the other AD hallmarks. Up to now, SPR has been little exploited for tau detection by immunosensing, owing to sensitivity limits at the nanomolar level, whereas the clinical requirement is in the picomolar range. Molecular architectures built in a layer-by-layer fashion from biomolecules and nanostructures (metallic or not) may amplify the SPR signal and improve the limit of detection to the desired sensitivity. Gold nanostructures are most widely employed for this purpose, but great interest is also emerging in multi-walled carbon nanotubes (MWCNTs). Here, MWCNTs are modified and then decorated with the secondary antibody for tau protein.
We then exploited the MWCNT-antibody conjugate to obtain a sandwich-based bioassay that increases the SPR signal about 10^2-fold compared with direct detection and a conventional unconjugated sandwich. With these results, we hope to give a strong impulse to further investigation of the possible roles of carbon nanotubes in optical biosensing. A novel label-free system for the sensitive fluorescent detection of deoxyribonuclease I (DNase I) activity has been developed utilizing a DNA-templated silver nanocluster/graphene oxide (DNA-AgNC/GO) nanocomposite. The AgNC is first synthesized around a C-rich template DNA, and the resulting DNA-AgNC binds to GO through the interaction between the extension DNA and GO. The resulting DNA-AgNC/GO shows a strongly reduced fluorescence signal because the fluorescence from the DNA-AgNCs is quenched by GO. In the presence of DNase I, however, the enzyme degrades the DNA strand within the DNA/RNA hybrid duplex probe employed in this study, consequently releasing RNA that is complementary to the extension DNA. The released free RNA then extracts the DNA-AgNC from GO by hybridizing with the extension DNA bound to GO. This process restores the quenched fluorescence, producing a highly enhanced fluorescence signal. By employing this assay principle, DNase I activity was reliably identified with a detection limit of 0.10 U/mL, which is lower than those of previous fluorescence-based methods. Finally, the practical capability of this assay system was successfully demonstrated by using it to determine DNase I activity in bovine urine. (C) 2016 Elsevier B.V. All rights reserved. A new screen-printed electrode (SPE) integrated into a one-channel flow cell was developed. The one-channel flow cell attaches directly to the electrode and is readily exchangeable.
In the new flow cell, injection is done through an "inline luer injection port", which can be less aggressive than a wall-jet flow cell for a biological recognition element immobilized on the surface of the electrode. The sample volume can be easily controlled by the operator using a syringe. In this novel thin-layer flow-cell screen-printed electrode, the working electrode was modified with graphene materials, enhancing the electroactive area to 388% of that of a standard electrode. This new configuration was applied to study cellobiose dehydrogenase from the ascomycete Corynascus thermophilus (CtCDH) entrapped in a photocrosslinkable PVA-based polymer. The calibration curve for lactose using optimized parameters shows a wide linear measurement range between 0.25 and 5 mM. The CtCDH-PVA-modified graphene electrode exhibits good operational stability, retaining its initial activity for 8 h, and good storage stability, with a decrease of only 9% in analytical response after 3 months of storage at 4 °C. The actin-myosin system, responsible for muscle contraction, is also the force-generating element in dynamic nanodevices operating with surface-immobilized motor proteins. These devices require materials that are amenable to micro- and nano-fabrication but also preserve the bioactivity of molecular motors. The complexity of protein-surface systems is greatly amplified by that of the polymer-fluid interface and by the structure and function of molecular motors, making the study of these interactions critical to the success of molecular-motor-based nanodevices. We measured the density of the adsorbed motor protein (heavy meromyosin, HMM) using quartz crystal microbalance, and motor bioactivity with an ATPase assay, on a set of model surfaces: nitrocellulose, polystyrene, poly(methyl methacrylate), poly(butyl methacrylate), and poly(tert-butyl methacrylate).
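A calibration curve like the 0.25-5 mM lactose range reported above is conventionally summarized by a linear least-squares fit, whose slope gives the sensor's sensitivity. A minimal sketch with hypothetical current readings (the abstract reports only the linear range, not the raw data):

```python
import numpy as np

# Concentrations span the reported linear range; the currents are
# hypothetical values used only to illustrate the fitting step.
conc = np.array([0.25, 0.5, 1.0, 2.0, 5.0])         # lactose, mM
current = np.array([0.13, 0.26, 0.51, 1.01, 2.52])  # response, µA (assumed)

slope, intercept = np.polyfit(conc, current, 1)     # first-degree fit
print(f"sensitivity = {slope:.3f} µA/mM, intercept = {intercept:.3f} µA")
```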
A higher hydrophobicity of the adsorbing material translates into a higher total number of HMM molecules per unit area, but also into lower water uptake and a lower ratio of active to total HMM molecules per unit area. We also measured the motility characteristics of actin filaments on the model surfaces, i.e., velocity, smoothness, and deflection of movement, determined via in vitro motility assays. The filament velocities were found to be controlled by the ratio of active to total motors rather than by their absolute surface density. The study allowed the formulation of general engineering principles for the selection of polymeric materials for the manufacture of dynamic nanodevices using protein molecular motors. (C) 2016 Elsevier B.V. All rights reserved. Lab-on-chip devices are miniaturized systems able to perform biomolecular analyses in shorter time and with lower reagent consumption than a standard laboratory. Their miniaturization interferes with the multiple functions that biochemical procedures require. To address this issue, our paper presents, for the first time, the integration of different thin-film technologies on a single glass substrate to develop a multifunctional platform suitable for on-chip thermal treatments and on-chip detection of biomolecules. The proposed System-on-Glass hosts thin metal films acting as heating sources; hydrogenated amorphous silicon diodes acting both as temperature sensors, to monitor the temperature distribution, and as photosensors for on-chip detection; and a ground plane ensuring that heater operation does not affect the photodiode currents. The sequence of technological steps, the deposition temperatures of the thin films, and the parameters of the photolithographic processes were optimized to overcome all the issues of this technological integration.
The device has been designed, fabricated, and tested for the implementation of DNA amplification through the polymerase chain reaction (PCR), with thermal cycling among three different temperatures on a single site. The glass has been connected to an electronic system that drives the heaters and controls the temperature and light sensors. It has been optically and thermally coupled with another glass hosting a microfluidic network made of polydimethylsiloxane that includes thermally actuated microvalves and a PCR process chamber. Successful DNA amplification was verified off-chip using a standard fluorometer. (C) 2016 Elsevier B.V. All rights reserved. Enzymes are the most effective catalysts for a broad range of difficult chemical reactions, e.g., hydroxylation of non-activated C-H bonds and stereoselective synthesis. Nevertheless, many enzymes are not accessible for biotechnological applications or industrial use. One reason is their requirement for expensive cofactors. In this context, we developed a bioelectrocatalytic analysis platform for the electrochemical and photonic quantification of direct electron transfer from the electrode to redox enzymes, thereby bypassing the need for soluble cofactors that would have to be continuously exchanged or regenerated. As the reference enzyme, we chose cytochrome P450 BM3, which is restricted by its NADPH dependence. We optimized the substrate spectrum for aromatic compounds by introducing the triple mutation A74G/F87V/L188Q and established a sensitive fluorimetric product formation assay to monitor the enzymatic conversion of 7-ethoxycoumarin to 7-hydroxycoumarin. Gold and indium tin oxide electrodes were characterized with respect to surface morphology, charge-transfer resistance, and P450 BM3 immobilization as well as activity. Using gold electrodes, no significant product formation by electrode-mediated direct electron transfer could be detected.
In contrast, P450 BM3 adsorbed on unmodified indium tin oxide electrodes showed 36% activity via electrode-mediated direct electron transfer compared with enzyme regeneration by NADPH. Since the reaction volumes are in the microliter range and upscaling of the measurement system is easily possible, our analysis platform is a useful tool for bioelectrocatalytic enzyme characterization and library-screening-based optimization, with applications in enzyme-catalyzed chemical synthesis as well as enzyme-based fuel cells. (C) 2016 Elsevier B.V. All rights reserved. We herein describe a novel fluorescent method for the rapid and selective detection of adenosine, utilizing DNA-templated Cu/Ag nanoclusters (NCs) and employing S-adenosylhomocysteine hydrolase (SAHH). SAHH promotes the hydrolysis of S-adenosylhomocysteine (SAH) and consequently produces homocysteine, which quenches the fluorescence signal from the DNA-templated Cu/Ag nanoclusters employed as the signaling probe in this study. Adenosine, on the other hand, significantly inhibits the hydrolysis reaction and prevents the formation of homocysteine. Consequently, the highly enhanced fluorescence signal from the DNA-Cu/Ag NCs is retained, which can be used to identify the presence of adenosine. Employing this design principle, adenosine was sensitively detected down to 19 nM with high specificity over other adenosine analogs such as AMP, ADP, ATP, cAMP, guanosine, cytidine, and uridine. Finally, the diagnostic capability of this method was successfully verified by reliably detecting adenosine in a real human serum sample. (C) 2016 Elsevier B.V. All rights reserved. An ultimate goal of research on implantable medical devices is to develop mechatronic implantable artificial organs, such as an artificial pancreas. Such devices would comprise at least a sensor module, an actuator module, and a controller module.
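The BioCapacitor abstract that follows reports a 100 µF capacitor boosted from the fuel cell's 330 mV to 3.1 V, driving an LED pulse every 60 s for 17 h. A back-of-envelope check of the energy budget implied by those reported figures:

```python
# Both values are taken from the abstract's reported figures.
C = 100e-6  # capacitance, farads
V = 3.1     # voltage after the charge-pump boost, volts

energy_j = 0.5 * C * V**2          # energy stored at full charge (E = CV^2/2)
pulses = 17 * 3600 // 60           # LED pulses over the 17 h run

print(f"stored energy = {energy_j*1e3:.2f} mJ, {pulses} LED pulses in 17 h")
```

The capacitor stores roughly half a millijoule per charge cycle, which is consistent with driving only brief, duty-cycled loads such as a periodic LED flash.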
For the development of optimal mechatronic implantable artificial organs, these modules should be self-powered and autonomously operated. In this study, we aimed to develop a microcontroller operated on the BioCapacitor principle. A direct electron transfer type glucose dehydrogenase was immobilized onto mesoporous carbon and then deposited on the surface of a miniaturized Au electrode (7 mm^2) to prepare a miniaturized enzyme anode. The enzyme fuel cell was connected to a 100 µF capacitor and a power-boost converter as a charge pump. The voltage of the enzyme fuel cell was increased stepwise by the charge pump from 330 mV to 3.1 V, and the generated electricity was stored in the 100 µF capacitor. The charge pump circuit was connected to an ultra-low-power microcontroller. The BioCapacitor-based circuit thus prepared was able to operate an ultra-low-power microcontroller continuously, running a program for 17 h that turned on an LED every 60 s. Our success in operating a microcontroller using glucose as the sole energy source indicates the feasibility of realizing implantable, self-powered, autonomously operated artificial organs, such as an artificial pancreas. Foot (2001), Hursthouse (1999), and Thompson (2008), along with other philosophers, have argued for a metaethical position, the natural goodness approach, which claims that moral judgments are, or are on a par with, teleological claims made in the biological sciences. Specifically, an organism's flourishing is characterized by how well it functions, as specified by the species to which it belongs. In this essay, I first sketch the Neo-Aristotelian natural goodness approach. Second, I argue that critics who claim that this sort of approach is inconsistent with evolutionary biology due to its species essentialism are incorrect.
Third, I contend that combining the natural goodness account of natural-historical judgments with our best account of natural normativity, the selected effects theory of function, leads to implausible moral judgments. This is so if selected effects functions are understood in terms of evolution by natural selection, but also if they are characterized in terms of cultural evolution. Thus, I conclude that proponents of the natural goodness approach must embrace either non-naturalistic vitalism or troubling moral revisionism. This paper addresses the foundations of Teleological Individualism, the view that organisms, even non-sentient organisms, are goal-oriented systems, while biological collectives, such as ecosystems or conspecific groups, are mere assemblages of organisms. Typical defenses of Teleological Individualism ground the teleological organization of organisms in the workings of natural selection. This paper shows that grounding teleological organization in natural selection is antithetical to Teleological Individualism, because such views assume a view about the units of selection on which only individual organisms are units of selection. However, none of the Conventionalist, Reductionist, or Multi-Level Realist theories serves to justify such an assumption. Thus, Teleological Individualism cannot be grounded in natural selection. In this paper I examine the connection between accounts of biological teleology and the biocentrist claim that all living beings have a good of their own. I first present the background for biocentrists' appeal to biological teleology. Then I raise a problem of scope for teleology-based biocentrism and, drawing in part on recent work by Basl and Sandler, I discuss Taylor's and Varner's responses to this problem. I then challenge Basl and Sandler's own response to the scope problem for its reliance on a selectionist account of organismic teleology.
Finally, I examine the prospects for a biocentrist response to the problem of scope based on an alternative, organisational account of internal teleology. I conclude by assessing the prospects for teleology-based biocentrism. This paper argues that biological organisation can legitimately be conceived of as an intrinsically teleological causal regime. The core of the argument consists in establishing a connection between organisation and teleology through the concept of self-determination: biological organisation determines itself in the sense that the effects of its activity contribute to determining its own conditions of existence. We suggest that not every kind of circular regime realises self-determination, which should be specifically understood as self-constraint: in biological systems, in particular, self-constraint takes the form of closure, i.e. a network of mutually dependent constitutive constraints. We then explore the occurrence of intrinsic teleology in the biological domain and beyond. On the one hand, the organisational account might concede that supra-organismal biological systems (such as symbioses or ecosystems) could realise closure, and hence be teleological. On the other hand, the realisation of closure beyond the biological realm appears highly unlikely. In turn, the occurrence of simpler forms of self-determination remains a controversial issue, in particular with respect to self-organising dissipative systems. We argue that ecology in general, and biodiversity and ecosystem function (BEF) research in particular, need an understanding of functions which is both ahistorical and evolutionarily grounded.
A natural candidate in this context is Bigelow and Pargetter's (1987) evolutionary forward-looking account which, like the causal role account, assigns functions to parts of integrated systems regardless of their past history, but supplements this with an evolutionary dimension that relates functions to their bearers' ability to thrive and perpetuate themselves. While Bigelow and Pargetter's account focused on functional organization at the level of organisms, we argue that such an account can be extended to functional organization at the community and ecosystem levels in a way that broadens the scope of the reconciliation between ecosystem ecology and evolutionary biology envisioned by many BEF researchers (e.g. Holt 1995; Loreau 2010a). By linking an evolutionary forward-looking account of functions to the persistence-based understanding of evolution defended by Bouchard (2008, 2011) and others (e.g. Bourrat 2014; Doolittle 2014), and to the theoretical research on complex adaptive systems (Levin 1999, 2005; Norberg 2004), we argue that ecosystems, by forming more or less resilient assemblages, can evolve even while they do not reproduce and form lineages. We thus propose a Persistence Enhancing Propensity (PEP) account of role functions in ecology to account for this overlap of evolutionary and ecological processes. This paper argues that a minimal notion of function and a notion of normal-proper function are used in explaining how bodies and brains operate. Neither is Cummins' (1975) notion, as originally defined, and yet his is often taken to be the clearly relevant notion for such an explanatory context. This paper also explains how adverting to normal-proper functions, even if these are selected functions, can play a significant scientific role in the operational explanations of complex systems that physiologists and neurophysiologists provide, despite a lack of relevant causal efficacy on the part of such functions. 
This paper contains a positive development and a negative argument. It develops a theory of function loss and shows how this undermines an objection raised against the etiological theory of function in support of the modal theory of function. Then it raises two internal problems for the modal theory of function. One widely endorsed argument in the experimental philosophy literature maintains that intuitive judgments are unreliable because they are influenced by the order in which thought experiments prompting those judgments are presented. Here, we explicitly state this argument from ordering effects, which pairs an empirical observation (intuitive judgments shift with presentation order) with a normative principle (order-sensitive judgments are unreliable), and show that any plausible understanding of the argument leads to an untenable conclusion. First, we show that the normative principle is ambiguous. On one reading of the principle, the empirical observation is well-supported, but the normative principle is false. On the other reading, the empirical observation has only weak support, and the normative principle, if correct, would impugn the reliability of deliberative reasoning, testimony, memory, and perception, since judgments in all these areas are sensitive to ordering in the relevant sense. We then reflect on what goes wrong with the argument. Leslie Tharp proves three theorems concerning epistemic and metaphysical modality for conventional modal predicate logic: every truth is a priori equivalent to a necessary truth, every truth is necessarily equivalent to an a priori truth, and every truth is a priori equivalent to a contingent truth. Lloyd Humberstone has shown that these theorems also hold in the modal system Actuality Modal Logic (AML), the logic that results from the addition of the actuality operator to conventional modal logic. We show that Tharp's theorems fail for the expressively equivalent Subjunctive Modal Logic (SML), the logic that was developed by Kai Wehmeier as an alternative to AML. 
We then argue that the existence of Tharp's theorems for AML is due to a faulty interpretation of the notion of necessary truth, a feature that is not shared by SML. The paper concludes with an argument for the thesis that the distinction between truth at all worlds and truth at all worlds from the point of view of the actual world is an artifact owing to the interaction of the necessity and actuality operators. The lottery paradox shows that the following three individually highly plausible theses are jointly incompatible: (i) highly probable propositions are justifiably believable, (ii) justified believability is closed under conjunction introduction, (iii) known contradictions are not justifiably believable. This paper argues that a satisfactory solution to the lottery paradox must reject (i), as versions of the paradox can be generated without appeal to either (ii) or (iii), and proposes a new solution to the paradox in terms of a novel account of justified believability. In this paper, I consider the relationship between Inference to the Best Explanation (IBE) and Bayesianism, both of which are well-known accounts of the nature of scientific inference. In Sect. 2, I give a brief overview of Bayesianism and IBE. In Sect. 3, I argue that IBE in its most prominently defended forms is difficult to reconcile with Bayesianism because not all of the items that feature on popular lists of "explanatory virtues"-by means of which IBE ranks competing explanations-have confirmational import. Rather, some of the items that feature on these lists are "informational virtues"-properties that do not make a hypothesis more probable than some competitor given evidence E, but that, roughly speaking, give that hypothesis greater informative content. In Sect. 4, I consider as a response to my argument a recent version of compatibilism which argues that IBE can provide further normative constraints on the objectively correct probability function. 
I argue that this response does not succeed, owing to the difficulty of defending with any generality such further normative constraints. Lastly, in Sect. 5, I propose that IBE should be regarded, not as a theory of scientific inference, but rather as a theory of when we ought to "accept" H, where the acceptability of H is fixed by the goals of science and concerns whether H is worthy of commitment as a research program. In this way, IBE and Bayesianism, as I will show, can be made compatible, and thus the Bayesian and the proponent of IBE can be friends. This paper focuses on two questions: (1) Is understanding intimately bound up with accurately representing the world? (2) Is understanding intimately bound up with downstream abilities? We will argue that the answer to both these questions is "yes", and for the same reason-both accuracy and ability are important elements of orthogonal evaluative criteria along which understanding can be assessed. More precisely, we will argue that representational-accuracy (of which we assume truth is one kind) and intelligibility (which we will define so as to entail abilities) are good-making features of a state of understanding. Interestingly, each of these evaluative claims has been defended by philosophers in the literature as the criterion along which understanding should be evaluated. We argue that proponents of both approaches have important insights and that, drawing on both their own observations and a few novel arguments, we can construct a more complete picture of understanding evaluation. We thus posit a theory of Multiple Understanding Dimensions. The main thing to note about our dualism regarding the evaluative criteria of understanding is that it accounts for the intuitions about cases underlying both previously held positions. The no miracles argument is one of the main arguments for scientific realism. 
Recently it has been alleged that the no miracles argument is fundamentally flawed because it commits the base rate fallacy. The allegation is based on the idea that the appeal of the no miracles argument arises from inappropriate neglect of the base rate of approximate truth among the relevant population of theories. However, the base rate fallacy allegation relies on an assumption of random sampling of individuals from the population, an assumption which cannot be made in the case of the no miracles argument. Therefore the base rate fallacy objection to the no miracles argument fails. I distinguish between a "local" and a "global" form of the no miracles argument. The base rate fallacy objection has been leveled at the local version. I argue that the global argument plays a key role in supporting a base-rate-fallacy-free formulation of the local version of the argument. Over the years several non-equivalent probabilistic measures of coherence have been discussed in the philosophical literature. In this paper we examine these measures with respect to their empirical adequacy. Using test cases from the coherence literature as vignettes for psychological experiments, we investigate whether the measures can predict the subjective coherence assessments of the participants. It turns out that the participants' coherence assessments are best described by Roche's (Insights from philosophy, jurisprudence and artificial intelligence, 2013) coherence measure, which is based on Douven and Meijs' (Synthese 156:405-425, 2007) average mutual support approach and on conditional probability. The Received View on quantum non-individuality (RV) is, roughly speaking, the view according to which quantum objects are not individuals. It seems clear that the RV finds its standard expression nowadays through the use of the formal apparatuses of non-reflexive logics, mainly quasi-set theory. 
In such logics, the relation of identity is restricted, so that it does not apply to terms denoting quantum particles; this "lack of identity" formally characterizes their non-individuality. We then face a dilemma: on the one hand, identity seems too important to be given up; on the other hand, the RV seems to require that identity be given up. In this paper we shall discuss how the specific characterization of the RV through non-reflexive logics came to be framed. We examine some of the main objections to this version of the RV and argue that they are misguided under this specific "non-reflexive" understanding of the RV. Finally, we shall also argue that this non-reflexive view is not the only option for a metaphysical articulation of the RV: less radical approaches to identity and logic are open. In particular, some of the alternative approaches to the RV that we present may be immune to most of the criticisms levelled against the non-reflexive approach. Modal fictionalists face a problem that arises due to their possible-world story being incomplete in the sense that certain relevant claims are neither true nor false according to it. It has recently been suggested that this incompleteness problem generalises to other brands of fictionalism, such as fictionalism about composite or mathematical objects. In this paper, I argue that these fictionalist positions are particularly threatened by a generalised incompleteness problem since they cannot emulate the modal fictionalists' most attractive response. I then defend mathematical and compositional fictionalism by showing that the reasons for which the incompleteness problem has been thought to affect them are mistaken. This leads to the question of whether there are other fictionalist positions to which the problem does in fact generalise. 
I give a general account of the features of a fictionalist position that generate the incompleteness problem and argue that whenever a fictionalist position exemplifies these features, the problem can be addressed in analogy to the modal fictionalists' preferred response. Debates rage about the ethics and effects of placebos and about whether 'placebos' in clinical trials of complex treatments such as acupuncture are adequate (and hence whether acupuncture is 'truly' effective or a 'mere placebo'). Yet there is currently no widely accepted definition of the 'placebo'. A definition of the placebo is likely to inform these controversies. Grunbaum's (1981, 1986) characterization of placebos and placebo effects has been touted by some authors as the best attempt thus far, but has not won widespread acceptance, largely because Grunbaum failed to specify what he means by a therapeutic theory and because he does not stipulate a special role for expectation effects. Grunbaum claims that placebos are treatments whose 'characteristic features' do not have therapeutic effects on the target disorder. I show that with four modifications, Grunbaum's definition provides a defensible account of placebos for the purpose of constructing placebo controls within clinical trials. The modifications I introduce are: adding a special role for expectations, insisting that placebo controls control for all and only the effects of the incidental treatment features, relativizing the definition of placebos to patients, and introducing harmful interventions and nocebos to the definitional scheme. I also provide guidance for classifying treatment features as characteristic or incidental. I consider the 'inferentialist' thesis that whenever a mental state rationally justifies a belief it is in virtue of inferential relations holding between the contents of the two states. I suggest that no good argument has yet been given for the thesis. 
I focus in particular on Williamson (Knowledge and its limits, 2000) and Ginsborg (Reasons for belief, 2011) and show that neither provides us with a reason to deny the plausible idea that experience can provide non-inferential justification for belief. I finish by pointing out some theoretical costs and tensions associated with endorsing inferentialism. Purpose: This study discusses the process of co-constructing a prototype pedagogical model for working with youth from socially vulnerable backgrounds. Participants and settings: This six-month activist research project was conducted in a soccer program in a socially vulnerable area of Brazil in 2013. The study included 17 youths, 4 coaches, a pedagogic coordinator and a social worker. An expert in student-centered pedagogy and inquiry-based activism assisted as a debriefer, helping with the progressive data analysis and the planning of the work sessions. Data collection/analysis: Multiple sources of data were collected, including 38 field journal/observation entries and audio records of: 18 youth work sessions, 16 coaches' work sessions, 3 combined coaches and youth work sessions, and 37 meetings between the researcher and the expert. Findings: The process of co-construction of this prototype pedagogical model was divided into three phases. The first phase involved the youth and coaches identifying barriers to sport opportunities in their community. In the second phase, the youth, coaches and researchers imagined alternative possibilities to the barriers identified. In the final phase, we worked collaboratively to create realistic opportunities for the youth to begin to negotiate some of the barriers they identified. In this phase, the coaches and youth designed an action plan (involving a Leadership Program) to implement, aimed at addressing the youths' needs in the sport program. 
Five critical elements of a prototype pedagogical model were co-created through the first two processes, and four learning aspirations emerged in the last phase of the project. Implications: We suggest that an activist approach of co-creating a pedagogical model of sport for working with youth from socially vulnerable backgrounds is beneficial. That is, creating opportunities for youth to learn to name, critique and negotiate barriers to their engagement in sport in order to create empowering possibilities. Background: Fundamental motor skill proficiency is essential for engagement in sports and physical play and in the development of a healthy lifestyle. Children with motor delays (with and without disabilities) lack the motor skills necessary to participate in games and physical activity, and tend to spend more time as onlookers than do their peers. As such, intervention programs are crucial in promoting motor skill development of children with motor delays. While mastery climate (MC) interventions have been shown to positively impact children's motor performance, what is unknown is the impact of cognitive strategies used by children within these climates. Furthermore, although vigorous play seems to be related to the development of gross motor skills, it is still unknown whether children with and without disabilities would benefit from exercise play (EP) interventions. Purpose: This study examined the effects of MC and EP interventions on the motor skill development and verbal recall (VR) of children with motor delays. The sample included children with and without disabilities. Research design: One hundred and thirty-eight children from 27 urban public schools were referred to the present study. Children were assessed using the Test of Gross Motor Development second edition (TGMD-2) and a VR checklist. Sixty-four children (18 with disabilities and 46 without) met the inclusion criterion, which was a score below the fifth percentile on the TGMD-2. 
Participants were randomly assigned to the MC or EP 14-week interventions emphasizing gross motor skill practice. Data collection and analysis: Children were assessed at pre- and post-intervention. A 2 (group) × 2 (disability) × 2 (time) analysis of variance with repeated measures on the last factor was conducted. Change scores, t-test comparisons on the delta scores and Cohen's d were also calculated. Results: The MC group demonstrated significant and positive changes over the intervention period. Further, the MC group showed superior locomotor and object control performance and higher recall of verbal cues (p < .05) at post-intervention compared to the EP group. Children with and without disabilities within the MC showed similar patterns of improvement. The EP intervention did not demonstrate significant improvements. Conclusion: Children with and without disabilities showed improvements in motor skills and VR when exposed to an MC, incorporating the six TARGET structures. These structures included (a) providing feedback and encouragement, providing opportunities for decision-making and establishing personal goals, (b) including parents in the recognition of children's achievements, (c) creating opportunities to experience leadership and self-pacing, (d) guiding children to use verbal cues and modeling when practicing gross motor skills, and (e) providing demonstrations and teaching children to self-monitor their performance. Instruction is therefore seen as critical to learning gross motor skills, as demonstrated by the findings. Although there were opportunities for vigorous play within the EP intervention, the children did not show improvements in motor performance or VR. These findings suggest that new trends in physical education teacher education that prioritize physical activity over good motor skill instruction may not be advantageous for children in the early years, and should be reconsidered. 
Primary objective: Teacher evaluation is being revamped by policy-makers. The marginalized status of physical education has protected this subject area from reform for many decades, but in our current era of system-wide, data-based decision-making, physical education is no longer immune. Standardized and local testing, together with structured observation measures, are swiftly being mandated in the USA as required elements of teacher evaluation systems in an effort to improve school programs and student achievement. The purpose of this investigation was to document how this reform was initiated and to capture the experiences of teachers, students and administrators from three high school physical education programs during its initiation. Documenting how physical education programs respond to such reforms develops our understanding of top-down reform efforts and helps to identify conditions under which such reforms have the intended effect on physical education teachers and student learning in physical education. Theoretical framework: Fullan's three phases of school change have been used to analyze and guide school change efforts in several subject areas including physical education. The phases are initiation, implementation and institutionalization. This study is situated primarily within the first phase of school change, the initiation phase. Methods and procedures: This study took place over a 21-month period in 3 suburban school districts in a northeast metropolitan area of the USA. Interviews with district physical education administrators, high school physical education teachers and students were conducted. Field notes of physical education classes, informal interviews and related artifacts including pre- and post-physical education assessments were collected. To ensure trustworthiness, several steps were taken including member checks, triangulation and peer review. 
The data were analyzed to find common themes and patterns using the constant comparative method. Results: Several themes emerged: (1) changes in curriculum and assessment; (2) effect on administrators; (3) stakeholder apathy; and (4) department collaboration. Conclusion: Changes, although minor, did take place in the wake of this top-down teacher reform; however, additional research needs to be completed to determine whether these changes are meaningful or long-lasting. Background: The majority of reviews related to Physical Education Pedagogy (PEP) refer to the English-speaking world. Some of these assert the need to obtain more data and to provide reviews of what has been investigated in languages other than English in order to assess the current state of the field internationally. Purpose: The aim of this study was to identify, categorise, and analyse articles on PEP published in Spanish sport science journals during the last decade (2005-2014). Participants and setting: A total of 13 journals were selected: 8 were indexed in the Scopus databases, and 5 were added based on expert judgement due to their importance in the field of Spanish PEP. Research design: The study uses a quantitative approach that is exploratory and descriptive in nature and includes document research techniques. Data collection: The identification of all articles published between 2005 and 2014 (both inclusive) was performed. The search yielded 3258 published articles, from which articles whose content was not related to PEP were eliminated. The final sample was 534 articles. Data analysis: The articles that comprise the final sample were analysed and classified according to their content and article type. Findings: Of the 3258 articles published in the last decade in these journals, only 534 (16.39%) address content in the PEP field of study. With regard to sub-areas, half of the research conducted pertains to teaching (50.00%), followed by curriculum (25.66%). 
The combination of both sub-areas comprises the third largest percentage (9.74%). Teacher education is the least addressed sub-area, with 8.80%, and its intersections with teaching and curriculum in no case exceed 3%. In terms of article type, 38.39% are theoretical studies, historical studies, or essays. One-fourth (25.09%) are quantitative empirical research, and one-fifth (22.47%) refer to experiences in education or innovation. These three article types are predominant, comprising 85.95% of the total. The remaining articles are divided into studies related to qualitative empirical research (7.68%), those conducting empirical research using a mixed methodology (quantitative and qualitative) (5.06%), and a token presence of review articles (1.31%). Conclusions: In the decade studied, a widening gap is observed between the number of articles published in Spanish journals specialising in physical education and sport and the percentage of those articles related to PEP. Of these journals, those publishing the most articles on PEP were included by expert judgement and are not indexed in Scopus. The implication is an academically worrying state for the field in the Spanish context, differing significantly from the English context. Background: School teachers who become teacher educators (TEs) are rarely prepared for the different pedagogies that teacher education requires. One pedagogical difference is the need for TEs to make their thinking and decisions explicit to pre-service teachers (PSTs) so PSTs can see teaching as an adaptive process rather than a set of routines to be memorised. Purpose: This research set out to analyse my learning about teaching teachers through making my decisions and thinking explicit to my PSTs. Participants and data collection: Using a self-study of teacher education practice (S-STEP) methodology, I collected data during an outdoor education course in a physical education (PE) degree. 
Participants included a convenience sample of six PSTs (from a cohort of 24), who took part in four interviews and two group interviews. Three critical friends observed five lessons and participated in interviews. In addition, self-generated data consisted of 104 written reflective journal entries (both private and open). Lessons were video-recorded to assist with reflection. Data analysis: Utilizing Schon's concepts of reflection for, in and on action, I sought out contrary perspectives in order to frame and reframe my understanding of TE practice. I then presented these new understandings to other participants for further development. Findings: My learning about teaching teachers can be represented as swinging between opposite extremes of infatuation and disillusionment. After observing my teaching, a critical friend identified that my physical position (or how I placed myself in the group) affected PST engagement in discussions. As I explored this aspect of my teaching further, I became very focused on the influence of my physical position, to the point of infatuation. My infatuation stage culminated in a reflection-in-action moment when I changed my position in the act of teaching, which appeared to significantly increase PST engagement. But the PSTs challenged my interpretation and stated that inequalities of power cannot be resolved by rearranging where a teacher stands. In this second stage, I experienced a strong sense of disillusionment, even cynicism. As a TE, I felt any actions I took were pointless against the power structures of society. Later, with insights from participants, I developed a more nuanced understanding of power and position; while not a panacea, how I arranged myself and the class physically did have some influence on the flow of discussions. Conclusions: S-STEP requires that researching practitioners challenge their assumptions. 
In making my own learning about my teaching explicit to my PSTs and critical friends, I was able to frame and reframe my understandings about teaching teachers. Through this research, I discovered that I learnt about my teaching by swinging between extremes. I argue that thinking about teaching informed by extreme positions provides a fuller purview of the complexity of teaching teachers. S-STEP in conjunction with explicit teaching practices offers TEs a tangible means to understand our practices more deeply and, furthermore, to advance our understanding of teacher education more broadly. Background: Within the context of sports coaching and coach education, formalised mentoring relationships are often depicted as a mentor-mentee dyad. Thus, mentoring within sports coaching is typically conceptualised as a one-dimensional relationship, where the mentor is seen as the powerful member of the dyad, with greater age and/or experience [Colley, H. (2003). Mentoring for Social Inclusion. London: Routledge]. Aim: The aim of this study was to explore the concept of a multiple mentor system in an attempt to advance our theoretical and empirical understanding of sports coach mentoring. In doing so, this paper builds upon the suggestion of Jones, Harris, and Miles [(2009). Mentoring in Sports Coaching: A Review of the Literature. Physical Education and Sport Pedagogy 14 (3): 267-284], who highlight the importance of generating empirical research to explore current mentoring approaches in sport, which in turn can inform meaningful formal coach education enhancement. The significance of this work therefore lies in opening up both a practical and a theoretical space for dialogue within sports coach education in order to challenge the traditional dyadic conceptualisation of mentoring and move towards an understanding of 'mentoring in practice'. Method: Drawing upon Kram's [(1985). Mentoring at Work: Developmental Relationships in Organisational Life. 
Glenview, IL: Scott Foresman] foundational mentoring theory to underpin a multiple mentoring support system, 15 elite coach mentors across a range of sports were interviewed in an attempt to explore their mentoring experiences. Subsequently, an inductive thematic analysis endeavoured to further investigate the realities and practicalities of employing a multiple mentoring system in the context of elite coach development. Results: The participants advocated support for the utilisation of a multiple mentor system to address some of the inherent problems and complexities within elite sports coaching mentoring. Specifically, the results suggested that mentees sourced different mentors for specific knowledge acquisition, skills and attributes. For example, within a multiple mentor approach, mentors recommended that mentees use a variety of mentors, including cross-sports and non-sport mentors. Conclusion: Tentative recommendations for the future employment of a multiple mentoring framework were considered, with particular reference to cross-sports or non-sport mentoring experiences. Background: Laws and legislation have prompted movement from special education towards inclusive education, whereby students with disabilities are included in mainstream physical education (PE) classes. It is widely acknowledged that including students with disabilities in PE presents significant challenges in relation to meeting the diverse needs of all students. Significantly, little is known about how teachers include junior primary students with a disability in PE. Aims: This paper aims to explore pedagogical practices for the inclusion of junior primary students with disabilities in PE as well as environmental accommodations teachers make. 
In order to address these aims, the research undertaking was guided by the question: 'What pedagogies do teachers draw upon to include junior primary students with disabilities in PE?' Methods: This qualitative research undertaking incorporated a critical case study approach, which utilised semi-structured interviews and field observations as data collection tools. Three teachers of PE in primary schools located in Adelaide, South Australia, participated in the research. Given this small sample group, we make no claims for generalisability, but seek to provide connections for others teaching in PE. Results: Findings are presented in three general themes: Relationships for Inclusion, Practices of Inclusion, and Complexity and Inclusion. Participants' statements are used to illuminate discussions about discourses drawn on and to make links between previous research and theoretical perspectives. In general terms, findings revealed that despite barriers, such as catering for multiple forms of disabilities with minimal assistance from support staff and negotiating school environments, participants embraced inclusion and made pedagogical modifications to ensure meaningful involvement in PE lessons for all students. This research also identified the important role teachers play in terms of relationships, adaptations and safe learning environments, which collectively enable the inclusion of junior primary students with disabilities. Conclusion: Students with disabilities warrant specific recognition and access to educational resources, including within the field of PE. Background: A new national physical education (PE) curriculum has been developed in South Korea and PE teachers have been challenged to deliver new transferable educational outcomes in character development through PE. In one geographical area, in order to support teachers to make required changes, a Communities of Practice (CoP) approach to continuing professional development (CPD) was adopted. 
Rather than being based in a single school, this CoP brought PE teachers together from a number of schools with the aim of sharing learning and impacting on pedagogies, practices and pupils' learning in character development through PE. Aims: To map and analyse the ways in which teachers (i) learnt about character education in a CoP, (ii) used this learning to inform their pedagogies and practices, and (iii) impacted on pupil learning in and beyond PE. Method: The participants were a university professor, 8 secondary school PE teachers from 8 different schools and 41 pupils. Data collection was undertaken in two phases in Autumn 2014 and Spring 2015. In-depth qualitative data were collected in the CoP and the teachers' schools using individual interviews, focus groups with pupils, observations of lessons, open-ended questionnaires and document analysis. Data were analysed using a constructivist revision of grounded theory. Findings: There was clear evidence of teacher learning in the CoP and changes to their pedagogies and indirect teaching behaviours (ITBs). Pupils were also able to identify the new intended learning about character development at both cognitive and behavioural levels, although there was little evidence of understanding about or intention to transfer this learning beyond PE (which was the original aim of the Government's character education initiative). Barriers to teacher and pupil learning are also discussed. Conclusion: Teachers' professional learning in the CoP impacted on the development of both teachers' pedagogies and ITBs, which then influenced pupils' learning; however, linking teachers' professional learning to pupils' learning remains challenging. This study has added further insights into the complexity of the processes linking policy, teachers' learning and pupils' learning outcomes. 
While it was possible to trace clear pathways from the CoP to teachers' learning, and in some cases to pupils' learning, it was also apparent that a wide range of factors intervened to influence the learning outcomes. In this paper, we reflect on the connection between the notions of organism and organisation, with a specific interest in how this bears upon the issue of the reality of the organism (or, in contrast, the status of these notions as constructs, whether heuristic or otherwise scientifically useful). We do this by presenting the case of Buffon, who developed complex views about the relation between the notions of "organised" and "organic" matter. We argue that, contrary to what some interpreters have suggested, these notions are not orthogonal in his thought. We also argue that Buffon holds a view on which organisation is not just ubiquitous, but basic and fundamental in nature, and hence also fully natural. We suggest that he can hold this view because of his anti-mathematicism. Buffon's case is interesting, in our view, because he can regard organisation, and organisms, as perfectly natural, and can admit their reality without invoking problematic supernaturalist views, and because he allows organisation and the organismal to come in kinds and degrees. Thus, his view tries to do justice to two cautionary notes for the debate on the reality of the organism: the need for a commitment to a broadly naturalist perspective, and the need to acknowledge the interesting features of organisms through which we make sense of them. In this paper, I argue that Kant adopted, throughout his career, a position that is much more akin to classical accounts of epigenesis, although he does reject the more radical forms of epigenesis proposed in his own time, and does make use of preformationist-sounding terms. 
I argue that this is because Kant (1) thinks of what is pre-formed as a species, not an individual or a part of an individual; and (2) has no qualms with the idea of a specific, teleological principle or force underlying generation, and conceives of germs and predispositions as specific constraints on such a principle or force. Neither of these conceptions of what is 'preformed', I argue, is in strict opposition to classical epigenesis. I further suggest that Kant's lingering use of preformationist terminology is due to (1) his belief that this is required to account for the specificity of the specific generative force; (2) his resistance towards the unrestricted plasticity of the generative force in radical epigenesis, which violates species-fixism; and (3) his insistence on the internal, organic basis of developmental plasticity and variation within species. I conclude by suggesting that this terminological and interpretative peculiarity is partly due to a larger shift in the natural-philosophical concerns surrounding the debate on epigenesis and preformation. Specifically, it is a sign that the original reasons for resisting epigenesis, namely its use of specific, teleological principles and its commitment to the natural production of biological structure, became less of a concern, whereas unrestricted plasticity and its undermining of fixism became a real issue, thereby also becoming the focal point of the debate. In recent years a certain emphasis has been put by some scholars on Leibniz's concern with the empirical sciences and the relations between this concern and the development of his mature metaphysical system. In this paper I focus on Leibniz's interest in the microscope and the astonishing discoveries that this instrument made possible in the field of the life sciences during the last part of the seventeenth century. 
The observation of physical bodies carried out with these "magnifying glasses" revealed a matter swarming everywhere with life and activity, contrary to the Cartesian and atomistic view of matter as something sterile and passive. Moreover, the discovery of uncountable complete "animalcula" living in the smallest drop of water provided evidence for the idea of the preformation of every organism. During his lifetime, Leibniz was extremely watchful of the new microscopical discoveries and came into contact with some of the major "observers" of his time, such as Hooke, Leeuwenhoek, Swammerdam and Malpighi. Relying both on some passages in Leibniz's texts and on recent critical studies, I will argue that important aspects of his metaphysics were strongly affected by the empirical observation of the "invisible world" which the microscope made possible. In the last part of the paper I show how the concept of "preformation", originally drawn from the context of the life sciences, comes to play in Leibniz's philosophy a very general role, going far beyond the scope of biology and shaping important aspects of his overall philosophical system. Argument: In the Almagest, Ptolemy finds that the apogee of Mercury moves progressively at a speed equal to his value for the rate of precession, namely one degree per century, in the tropical reference system of the ecliptic coordinates. He generalizes this to the other planets, so that the motions of the apogees of all five planets are assumed to be equal, while the solar apsidal line is taken to be fixed. In medieval Islamic astronomy, one change in this general proposition took place because of the discovery of the motion of the solar apogee in the ninth century, which gave rise to lengthy discussions on the speed of its motion. Initially Bīrūnī and later Ibn al-Zarqālluh assigned a proper motion to it, although at different rates. 
Nevertheless, appealing to the Ptolemaic generalization and interpreting it as a methodological axiom, the dominant idea became to extend it in order to include the motion of the solar apogee as well. Another change occurred after the motion of the apogees was correctly distinguished from the rate of precession. Some Western Islamic astronomers generalized Ibn al-Zarqālluh's proper motion of the solar apogee to the apogees of the planets. Analogously, Ibn al-Shāṭir maintained that the motion of the apogees is faster than precession. Nevertheless, the Ptolemaic generalization in the case of the equality of the motions of the apogees remained untouched, despite the notable development of planetary astronomy, in both theoretical and observational aspects, in the late Islamic period. Argument: Revising the diffusionist view of current scholarship on the Pasteur Institutes in China, this paper demonstrates the ways in which local networks and circumstances informed the circulation and construction of knowledge and practices relating to smallpox prophylaxis in the Southwest of China during the early twentieth century. I argue that the Pasteur Institute of Chengdu did not operate in natural continuity with the preceding local French medical institutions, but rather represented an intentional break from them. This Institute, the first established by the French in China, strove for political and administrative independence both from the Chinese authorities and from the Catholic Church. Yet its operation realized political independence only partially. The founding of this Institute was also an attempt to satisfy the medical demand for local vaccine production. However, even though the Institute succeeded at producing the Jennerian vaccine locally, its production needed to accommodate local conditions pertaining to the climate, vaccine strains, and animals. 
Furthermore, vaccination had to conform to Chinese variolation, including its social and medical practices, in order to secure the collaboration of local Chinese traditional practitioners with the French colonial physicians, who were Pastorian-trained and worked at the Pasteur Institute of Chengdu. Thus the nature of the Pastorian work in Chengdu was not an imposition of foreign standards and practices, but rather a mutual compromise and collaboration between the French and the Chinese. Argument: While science and economy are undoubtedly interwoven, the nature of their relationship is often reduced to a positive correlation between economic and scientific prosperity. Modern scholarship, focusing on success stories, tends to neglect counterintuitive examples such as the impact of economic crises on research. We argue that economic difficulties, under certain circumstances, may also lead to the prosperous development of scientific institutions. This paper focuses on a particular institution, the Pine Institute in Bordeaux, France. Not only was it a key actor in the process of defining the discipline of resin chemistry, but it also remained for years at the heart of the local resin-producing industry. Interestingly, there is an actual inverse correlation between the Institute's budgets and the prices and production of resinous products. The Pine Institute's existence seems to have been driven by the crisis of the resin industry. Argument: This paper analyzes the research strategies of three different cases in the study of human genetics in Mexico - the work of Ruben Lisker in the 1960s, INMEGEN's mapping of Mexican genomic diversity between 2004 and 2009, and the analysis of Native American variation by Andres Moreno and his colleagues in contemporary research. 
We distinguish an approach that incorporates multiple disciplinary resources into sampling design and interpretation (unpacking) from one that privileges pragmatic considerations over more robust multidisciplinary analysis (flattening). These choices have consequences for social, demographic, and biomedical practices, and also for accounts of genetic variation in human populations. While the former strategy unpacks fine-grained genetic variation, favoring precision and realism, the latter tends to flatten individual differences and historical depth in favor of generalization. Agricultural societies have precursor societies that can be misrepresented in the process of writing agricultural history. Also, the interest in environmental history in labor and nature is rarely applied to Indigenous workers. This article addresses these two issues in the context of the Aboriginal people of the Murray River in the region of the Victorian Mallee in southeastern Australia, now premier wheat country. It argues, through a close examination of work within a "geography of labor" along the river, that Indigenous people at European contact in the 1840s, and long before, labored in a far more successful and sustainable manner than humans did for most of the farm history of this region. This paper examines self-sown crops as agents in the agricultural development of Australia's southern mallee lands from the 1890s to the 1940s. Self-sown crops suggested ways to farm and provided the enticement of an occasional windfall. They assisted with expansion and consolidation of holdings and provided moral lessons in the value of persistence. In the context of the rise of modern, scientific farming characterized by strict regimes of crop rotation and fallowing, self-sown crops encouraged farmers to maintain more adaptive, less regimented approaches. Ultimately, modernist systems triumphed, and by the mid-twentieth century self-sown crops were all but excluded from mallee agriculture. 
For a time, however, these plants played a significant role in shaping approaches to farming in the mallee lands and sustaining agricultural enterprise there. In the 1890s agricultural settlers moved into the Victorian Mallee, an area characterized by low rainfall and a deep-rooted eucalyptus mallee scrub. By rolling, cutting, and burning this scrub, large areas could be rapidly brought under cereal crops. The key to success on this agricultural frontier was cheap land and the cropping of broad acres using labor-saving cultivation and harvesting machinery. From the 1890s to the early 1920s, settlers successfully farmed the southern regions of the Mallee. In the 1920s settlement pushed north into drier regions, but settlers were allocated blocks too small to be viable, and in the 1930s world commodity prices collapsed. From 1938 to 1944 settlers across the Mallee experienced a run of very dry years, and dust storms became a feature of Mallee life. Government intervention resulted in the consolidation of blocks, which enabled settlers to cultivate their land less intensively and to combine cropping with sheep farming. Government research encouraged new methods of cultivation in the 1940s to arrest sand drift. Between 1926 and 1935 the Better Farming Train made seven trips to the Victorian Mallee region. Modeled on North American examples, the mission of the Train was to spread the "doctrine of better farming" to this wheat-growing region. The Train carried to the Mallee ideas about the promise of science and the hopes of modernity. It championed particular ideas about agricultural development, settlement, and the role of female labor in carrying out the yeoman ideal of the small farmholding. Although the product of a specific time and place, it also tapped into a long-standing belief that the mallee lands could be developed through correct settlement, the advances of technology, and the application of science. 
The Train was more than a moving collection of exhibits; it also freighted a way of imagining the Mallee that saw in the prospect of golden fields of wheat a way of redeeming the land and forging a modern nation. Philosophy of history and history of philosophy of science make for an interesting case of mutual containment: the former is an object of inquiry for the latter, and the latter is subject to the demands of the former. This article discusses a seminal turn in past philosophy of history with an eye to the practice of historians of philosophy of science. The narrative turn by Danto and Mink represents both a liberation for historians and a new challenge to the objectivity of their findings. I will claim that good sense can be made of working historical veins of possibility (contrary to how the phrase was originally intended) and that already Danto and Mink provided materials (although they did not quite advertise them as such) to assuage fears of a constructivist free-for-all. Karl Popper argued in 1974 that evolutionary theory contains no testable laws and is therefore a metaphysical research program. Four years later, he said that he had changed his mind. Here we seek to understand Popper's initial position and his subsequent retraction. We argue, contrary to Popper's own assessment, that he did not change his mind at all about the substance of his original claim. We also explore how Popper's views have ramifications for contemporary discussion of the nature of laws and the structure of evolutionary theory. The aim of this article is to discuss the Austro-American logical empiricism proposed by physicist and philosopher Philipp Frank, particularly his interpretation of Carnap's Aufbau, which he considered the charter of logical empiricism as a scientific world conception. 
According to Frank, the Aufbau was to be read as an integration of the ideas of Mach and Poincaré, leading eventually to a pragmatism quite similar to that of the American pragmatist William James. Relying on this peculiar interpretation, Frank intended to bring about a rapprochement between the logical empiricism of the Vienna Circle in exile and American pragmatism. In the course of this project, in the last years of his career, Frank outlined a comprehensive, socially engaged philosophy of science that could serve as a link between science and philosophy. Part 1 of this article exposed a tension between Poincaré's views of arithmetic and geometry and argued that it could not be resolved by taking geometry to depend on arithmetic. Part 2 aims to resolve the tension by supposing not merely that intuition's role is to justify induction on the natural numbers but rather that it also functions to acquaint us with the unity of orders and structures and to show that practices fit or harmonize with experience. I argue that in this manner, intuition serves the epistemological function of warranting generalizations and justifying practices. In particular, it justifies the application of group-theoretic notions in geometry but not the use of set-theoretic notions in arithmetic. Recent discussions of structuralist approaches to scientific theories have stemmed primarily from John Worrall's "Structural Realism," in which he defends a position (since characterized as epistemic structural realism) whose historical roots he attributes to Poincaré. In the renewed debate inspired by Worrall, it is thus not uncommon to find Poincaré's name associated with various structuralist positions. However, Poincaré's structuralism is deeply entwined with neo-Kantianism and with the roles of convention and objectivity within science. In this article we explore the nature of these dependencies. 
What emerges is not only a clearer picture of Poincaré's position regarding structuralism but also two arguments for versions of epistemic structuralism different in kind from Worrall's. Francis Bacon's method of induction is often understood as a form of eliminative induction. The idea, on this interpretation, is to list the possible formal causes of a phenomenon and, by reference to a copious and reliable natural history, to falsify all of them but one. Whatever remains must be the formal cause. Bacon's crucial instances are often seen as the crowning example of this method. In this article, I argue that this interpretation of crucial instances is mistaken, and that it has caused us to lose sight of why Bacon assigns crucial instances a special role in his quest for epistemic certainty about formal causes. If crucial instances are interpreted eliminatively, then they are subject to the two problems related to underdetermination raised by Duhem: (1) that it is impossible to be certain one has specified all of the possible alternatives, and (2) that an experiment falsifies a whole theory, not just a single hypothesis in isolation. I show that Bacon anticipates and aims to dodge both of these problems by conceiving of crucial instances as working, in the ideal case, through direct affirmations that are supported by links to more foundational knowledge. This paper describes the life and scientific development of Arthur E. Haas, from his early career as a young, ambitious Jewish-Austrian scientist at the University of Vienna to his later career in exile at the University of Notre Dame. Haas is known for his early contributions to quantum physics and as the author of several textbooks on topics of modern physics. During the last decade of his life, he turned his attention to cosmology. In 1935 he emigrated from Austria to the United States. There he assumed, on the recommendation of Albert Einstein, a faculty position at the University of Notre Dame. 
He continued his work on cosmology and tried to establish relationships between the mass of the universe and the fundamental cosmological constants in order to develop concepts for the early universe. Together with Georges Lemaître he organized in 1938 the first international conference on cosmology, which drew more than one hundred attendees to Notre Dame. Haas died in February 1941 after suffering a stroke during a visit to Chicago. We assess the scientific value of Oppenheimer's research on black holes in order to explain its neglect by the scientific community, and even by Oppenheimer himself. Looking closely at the scientific culture and conceptual belief system of the 1930s, the present article seeks to supplement the existing literature by enriching the explanations and complicating the guiding questions. We suggest a rereading of Oppenheimer as a figure both more intriguing for the history of astrophysics and further ahead of his time than is commonly supposed. The policy network approach has become a broadly accepted and frequently adopted practice in modern state governance, especially in the public sector. This study utilises a broadly defined policy network conceptual frame and categories of reference to trace the evolution of education policy-making in China, using The Outline of China's National Plan for Medium and Long-term Education Reform and Development (2010-2020) as an illustrative case study. The study argues that China's education policy-making has changed, and that the three most prominent changes are the transition from a Party-dominant practice to one primarily driven by the central government, the enhanced role of higher education institutions and scholars as a 'professional interest group' in the Chinese context, and the increasing participation of non-governmental actors in the policy-making process. 
Essentially exploratory in nature, this study hopes to contribute to the understanding of China's education policy-making and broader education governance, while also contributing to the mapping of an important sector of the global education network. The article builds on prior arguments that research on issues of social justice in education has often lacked constructive engagement with education policy-making, and that this can be partly attributed to a lack of clarity about what a socially just education system might look like. Extending this analysis, this article argues that this lack of clarity is perpetuated by a series of contradictions and dilemmas underpinning 'progressive' debate in education, which have not been adequately confronted. At the heart are dilemmas about what constitutes a socially just negotiation of the binarised hierarchy of knowledge that characterises education in the UK, Australia and elsewhere. Three exemplar cases from contemporary education curriculum policy in England and Australia are used to illustrate these dilemmas. We then extend this argument to a series of other philosophical dilemmas which haunt education and create tensions or contradictions for those concerned with social justice. We maintain that these dilemmas need to be confronted in order to achieve conceptual clarity about what it is we are seeking to achieve, which in turn can better equip us to provide the empirical and conceptual information necessary to engage policy-making effectively and remediate inequalities in education. This paper empirically documents media portrayals of Australia's performance on the Programme for International Student Assessment (PISA), 2000-2014. We analyse newspaper articles from two national and eight metropolitan newspapers. This analysis demonstrates increased media coverage of PISA over the period in question. 
Our research data were analysed using 'framing theory', documenting how the media frames stories about Australia's performance on PISA. Three frames were identified: counts and comparisons; criticisms; and contexts. Most of the media coverage (41%) was concerned with the first frame, counts and comparisons, which analysed PISA data to provide 'evidence' that was then used to position Australia comparatively against better-performing countries, or 'reference societies', with particular emphasis on Finland and, after the 2009 PISA, Shanghai. The other two frames dealt with criticisms and contextual issues. This paper focuses only on the first frame. The analysis demonstrates the ways in which media coverage of Australia's PISA performance has had policy impact. Increased attention to 'what works' in education has led to an emphasis on developing policy from evidence based on comparing and combining a particular statistical summary of intervention studies: the standardised effect size. It is assumed that this statistical summary provides an estimate of the educational impact of interventions and that combining these through meta-analyses and meta-meta-analyses results in more precise estimates of this impact, which can then be ranked. From these, it is claimed, educational policy decisions can be driven. This paper will demonstrate that these assumptions are false: the standardised effect size is open to researcher manipulations which violate the assumptions required for legitimately comparing and combining studies in all but the most restricted circumstances. League tables of types of intervention, which governments point to as an evidence base for effective practice, may instead be hierarchies of openness to research-design manipulations. The paper concludes that public policy and resources are in danger of being misdirected. This paper explores the changing terrain of disability support policy in Australia. 
Drawing on a critical disability framework of policy sociology, the paper considers the policy problem of access to education for people with disabilities under recent reform by means of the National Disability Insurance Scheme (NDIS), which commenced full roll-out across the country from July 2016. The paper reviews NDIS reports, legislation and associated literature to consider how eligibility for scheme participation and education services is shaped, and how education is positioned in the development and implementation of the NDIS. The analysis highlights tensions that exist for people with disabilities and their families who both access the scheme and who might draw on its provision to support their education, because of the way the policy is oriented towards pathological categorisation, standardised outcomes and service delineation rather than integrated support and informed involvement. The paper concludes by arguing that despite the policy priority across Organisation for Economic Co-operation and Development countries of increasing lifelong learning opportunities, fragmented NDIS policy in Australia prevents people with disabilities from achieving this ideal. This paper explores teachers' resistance to pedagogic reform in South Korea, which was instituted in the form of an in-service teacher certification. Ideas for the reform, Teaching English in English (TEE), were borrowed from 'native-English-speaking countries' and implemented without systematic localization; it was therefore not surprising that teachers resisted it, although their resistance was hidden from the reform managers to avoid disciplinary action. The paper starts with a description of the educational context in South Korea, which has fashioned teachers' practices of resistance. The conceptualization of resistance follows, drawing on studies from varied disciplines, including Foucault's work on 'resistance of conduct' (counter-conducts) and Scott's 'invisible' resistance. 
Findings from a case study of the TEE certification are then discussed. Teachers engaged in various forms of low-profile resistance, which cumulatively came to affect the fate of the certification. The paper highlights the potential impact of resistance on the course of a reform, resistance which has often been disregarded as non-constituent or unimportant, or even misunderstood as compliance, by reform managers and researchers. Thus, it contributes to a more comprehensive understanding of teachers' resistance in the context of educational reforms, which has wider implications, as borrowed educational reforms are becoming all too frequent around the world. School funding is a principal site of policy reform and contestation in the context of broad global shifts towards private- and market-based funding models. These shifts are transforming not only how schools are funded but also the meanings and practices of public education: that is, what is 'public' about schooling. In this paper, we examine the ways in which different articulations of 'the public' are brought to bear in contemporary debates surrounding school funding. Taking the Australian Review of Funding for Schooling (the Gonski Report) as our case, we analyse the policy report and its subsequent media coverage to consider what meanings are made concerning the 'publicness' of schooling. Our analysis reveals three broad themes of debate in the report and related media coverage: (1) the primacy of 'procedural politics' (i.e. the political imperatives and processes associated with public policy negotiations in the Australian federation); (2) changing relations between what is considered public and private; and (3) a connection of government schooling to concerns surrounding equity and a 'public in need'. We suggest these three themes contour the debates and understandings that surround the 'publicness' of education generally, and school funding more specifically. 
The present work aimed to prolong the contact time of flurbiprofen (FBP) in the ocular tissue to improve the drug's anti-inflammatory activity. Different niosome systems were fabricated by the thin-film hydration technique using the nonionic surfactant Span 60. The morphology of the prepared niosomes was characterized by scanning electron microscopy (SEM). Physical characterization by differential scanning calorimetry, X-ray powder diffraction and Fourier transform infrared spectroscopy was conducted for the optimized formula (F5), which was selected on the basis of percent entrapment efficiency, vesicular size and total lipid content. F5 was formulated as a 1% w/w Carbopol 934 gel. Pharmacokinetic parameters of FBP were investigated following ocular administration of the F5-loaded gel system, the F5 niosome dispersion or the corresponding FBP ocular drops to albino rabbits. The anti-inflammatory effect of the F5-loaded Carbopol gel was investigated by histopathological examination of the corneal tissue before and after treatment of the inflamed rabbit eye with the system. Results showed that cholesterol content, surfactant type, and total lipid content had an apparent impact on the vesicle size of the formulated niosomes. Physical characterization revealed reduced drug crystallinity and the incidence of interaction with other niosome contents. The F5-loaded gel showed a higher C-max and area under the curve (AUC(0-12)), and thus higher ocular bioavailability, than the corresponding FBP ocular solution. The F5-loaded gel showed a promising, rapid anti-inflammatory effect in the inflamed rabbit eye. These findings could eliminate the need for frequent ocular drug instillation and thus improve patient compliance. Objective: The aim of this article is to compare the gravitational powder blend loading method to the tablet press with manual loading in terms of their influence on tablets' critical quality attributes (CQAs). 
Significance: The results of the study can be of practical relevance to the pharmaceutical industry in the area of direct compression of low-dose formulations, which can be prone to content uniformity (CU) issues. Methods: In the preliminary study, the particle size distribution (PSD) and surface energy of the raw materials were determined using laser diffraction and inverse gas chromatography, respectively. For the trials, a formulation containing two active pharmaceutical ingredients (APIs) was used. Tablet samples were collected as compression progressed to analyze their CQAs, namely assay and CU. Results: The results obtained during the trials indicate that the tested direct compression powder blend is sensitive to the applied powder handling method. A mild increase in the content of both APIs was observed during manual scooping. The gravitational approach (based on discharge into the drum) resulted in a decrease in CU, connected to a more pronounced assay increase at the end of tableting than in the case of manual loading. Conclusions: The correct design of blend transfer across single unit processes is an important issue and should be investigated during the development phase, since it may influence the final product's CQAs. The manual scooping method, although simplistic, can be a temporary solution to improve API content and uniformity when compared to industrial gravitational transfer. Objective: The aim of this work was the development of mucoadhesive sublingual films, prepared using a casting method, for the administration of oxycodone. Materials and methods: A solvent casting method was employed to prepare the mucoadhesive films. A calibrated pipette was used to deposit single aliquots of different polymeric solutions on a polystyrene plate lid. 
Among the various tested polymers, hydroxypropylcellulose at low and medium molecular weight (HPC) and pectin at two different degrees of esterification (PC) were chosen for preparing solutions with good casting properties, capable of producing films suitable for mucosal application. Results and discussion: The obtained films showed excellent drug content uniformity and stability and rapid drug release, which, at 8 min, ranged from 60% to 80%. All films presented satisfactory mucoadhesive and mechanical properties, also confirmed by a test on healthy volunteers, who did not experience irritation or mucosal damage. Films based on pectin with the lower degree of esterification were further evaluated to study the influence of two different amounts of drug on the physicochemical properties of the formulation. A slight reduction in elasticity was observed in films containing the higher drug dose; nevertheless, the formulation maintained satisfactory flexibility and resistance to elongation. Conclusions: HPC and PC sublingual films, obtained by a simple casting method, could be proposed to realize personalized hospital pharmacy preparations on a small scale. Nanocapsules (NCs) are submicron-sized core-shell systems which present important advantages such as improvement of drug efficacy and bioavailability, prevention of drug degradation, and provision of controlled-release delivery. The available methods for NC production require expensive recovery and purification steps which can compromise the morphology of the NCs. These issues have limited the industrial application of NCs. In this study, we developed a new method based on a modified self-microemulsifying drug delivery system (SMEDDS) for in situ NC production within the gastrointestinal tract. This new methodology does not require purification and recovery steps and can preserve the morphology and the functionality of NCs. 
The in situ formed NCs of Eudragit (R) RL PO were compared with nanospheres (NSs) in order to obtain evidence of their core-shell structure. NCs presented a spherical morphology with a size of 126.2 +/- 13.1 nm, an ibuprofen encapsulation efficiency of 31.3% and a zeta potential of 37.4 mV. Additionally, NC density and release profile (zero order) provided physical evidence of the feasibility of in situ NC creation. Background and objective: Capsaicin, the main pungent principle in chili peppers, has been found to possess P-glycoprotein (P-gp) inhibition activity in vitro, which may have the potential to modulate the bioavailability of P-gp substrates. Therefore, the purpose of this study was to evaluate the effect of capsaicin on the intestinal absorption and bioavailability of fexofenadine, a P-gp substrate, in rats. Methods: Mechanistic evaluation was carried out by non-everted sac and intestinal perfusion studies to explore the intestinal absorption of fexofenadine. These results were confirmed by an in vivo pharmacokinetic study of orally administered fexofenadine in rats. Results: The intestinal transport and apparent permeability (P-app) of fexofenadine were increased significantly, by 2.8- and 2.6-fold, respectively, in the ileum of capsaicin-treated rats when compared to the control group. Similarly, the absorption rate constant (K-a), fraction absorbed (F-ab) and effective permeability (P-eff) of fexofenadine were increased significantly, by 2.8-, 2.9- and 3.4-fold, respectively, in the ileum of rats pretreated with capsaicin when compared to the control group. In addition, the maximum plasma concentration (C-max) and area under the concentration-time curve (AUC) were increased significantly, by 2.3- and 2.4-fold, respectively, in rats pretreated with capsaicin as compared to the control group. Furthermore, the results obtained in rats pretreated with capsaicin were comparable to those in verapamil-treated (positive control) rats. 
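The exposure fold-changes reported above (C-max, AUC) come from non-compartmental analysis of concentration-time profiles. A minimal sketch of that arithmetic follows; the profiles below are illustrative made-up numbers, not the study's data:

```python
# Hypothetical illustration of non-compartmental fold-change computation.
# The concentration-time profiles are invented for demonstration only.

def auc_trapezoid(times, conc):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times, conc),
                                             zip(times[1:], conc[1:])))

t = [0, 0.5, 1, 2, 4, 8, 12]              # sampling times, h
control = [0, 40, 80, 60, 30, 10, 2]      # ng/mL (illustrative)
pretreated = [0, 95, 190, 145, 70, 24, 5] # ng/mL (illustrative)

cmax_fold = max(pretreated) / max(control)
auc_fold = auc_trapezoid(t, pretreated) / auc_trapezoid(t, control)
print(round(cmax_fold, 2), round(auc_fold, 2))  # → 2.38 2.38
```

The same trapezoidal rule underlies reported AUC(0-12)-type metrics; only the sampling grid and profiles differ.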
Conclusions: Capsaicin pretreatment significantly enhanced the intestinal absorption and bioavailability of fexofenadine in rats, likely by inhibition of P-gp mediated cellular efflux, suggesting that the combined use of capsaicin with P-gp substrates may require close monitoring for potential drug interactions. The objective of this study was to prepare and evaluate metoprolol tartrate sustained-release pellets. Cores were prepared by hot melt extrusion and coated pellets were prepared by hot melt coating. Cores were found to exist in a single-phase state, with the drug in amorphous form. Plasticizers had a significant effect on torque and drug content, while release modifiers and coating level significantly affected the drug-release behavior. The mechanisms of drug release from cores and coated pellets were Fickian diffusion and diffusion-erosion. The coated pellets exhibited sustained-release properties in vitro and in vivo. The purpose of this study was to evaluate the performance of Neusilin (R) (NEU), a synthetic magnesium aluminometasilicate, as an inorganic drug carrier co-processed with the hydrophilic surfactants Labrasol and Labrafil to develop Tranilast (TLT)-based solid dispersions using continuous hot-melt extrusion (HME) processing. Twin-screw extrusion was optimized to develop various TLT/excipient/surfactant formulations, followed by continuous capsule filling in the absence of any downstream equipment. Physicochemical characterization showed the existence of TLT in a partially crystalline state in the porous network of inorganic NEU for all extruded formulations. Furthermore, in-line NIR studies revealed possible intermolecular H-bonding between the drug and the carrier, resulting in increased TLT dissolution rates. The capsules containing TLT-extruded solid dispersions showed enhanced dissolution rates compared with the marketed Rizaben (R) product. 
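Release mechanisms such as the Fickian diffusion and diffusion-erosion noted above are conventionally assigned by fitting the Korsmeyer-Peppas power law, Mt/Minf = k*t^n, to the early portion of the release profile and reading off the exponent n. A sketch with synthetic data (the function name and profile are illustrative, not taken from the study):

```python
import math

def peppas_exponent(times, fraction_released):
    """Least-squares slope of log(Mt/Minf) vs log(t), using points with
    fractional release <= 0.6 as is customary for the Korsmeyer-Peppas fit."""
    pts = [(math.log(t), math.log(f))
           for t, f in zip(times, fraction_released) if 0 < f <= 0.6]
    m = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)

# Synthetic square-root-of-time profile (n = 0.5, Fickian-type release)
t = [0.25, 0.5, 1, 2, 4]                 # h
f = [0.2 * math.sqrt(ti) for ti in t]    # fraction released
n_est = peppas_exponent(t, f)
print(round(n_est, 2))  # → 0.5
```

Exponents near 0.5 indicate Fickian diffusion for thin films, while intermediate values suggest anomalous (diffusion-erosion) transport; the exact thresholds depend on geometry.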
Objective: The aim of the present work is to optimize and model the effect of phospholipid type, either egg phosphatidylcholine (EPC) or soybean phosphatidylcholine (SPC), together with other formulation variables, on the development of nano-ethosomal systems for transdermal delivery of a water-soluble antiemetic drug, tropisetron HCl (TRO). TRO is available as hard gelatin capsules and IV injections. Transdermal delivery of TRO is considered a novel alternative route expected to improve bioavailability as well as patient convenience. Methods: TRO-loaded ethanolic vesicular systems were prepared by the hot technique. The effects of the formulation variables were optimized through a response surface methodology using a 3x2(2) full factorial design. The concentrations of PC (A) and ethanol (B) and the PC type (C) were the factors, while entrapment efficiency (Y-1), vesicle size (Y-2), polydispersity index (Y-3), and zeta potential (Y-4) were the responses. The drug permeation across rat skin from selected formulae was studied. Particle morphology, drug-excipient interactions, and vesicle stability were also investigated. Results: The results proved the critical influence of all formulation variables on the ethosomal characteristics. The suggested models for all responses showed good predictability. Only the concentration of phospholipid, irrespective of PC type, had a significant effect on the transdermal flux (p<0.01). The ethosomal vesicles were unilamellar with a nearly spherical shape. EPC-based ethosomes showed good stability. Conclusion: The study suggests the applicability of statistical modeling as a promising tool for prediction of ethosomal characteristics. The ethanolic vesicles are considered novel potential nanocarriers for enhanced transdermal TRO delivery. 
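A full factorial design like the 3x2(2) layout above simply enumerates every combination of factor levels (here one three-level and two two-level factors, giving 12 runs). A sketch; the level values below are hypothetical placeholders, not the study's actual settings:

```python
from itertools import product

# Hypothetical factor levels for a 3x2^2 full factorial design.
pc_conc = [1, 2, 3]        # factor A: phospholipid concentration, 3 levels (assumed units)
ethanol = [20, 40]         # factor B: ethanol concentration, 2 levels (assumed units)
pc_type = ["EPC", "SPC"]   # factor C: phospholipid type, 2 levels

# Each tuple is one experimental run; responses (EE, size, PDI, zeta
# potential) would be measured for each run and fitted to a response surface.
design = list(product(pc_conc, ethanol, pc_type))
print(len(design))  # → 12 runs (3 x 2 x 2)
```

Enumerating all runs rather than a fractional subset lets the response-surface model estimate every main effect and interaction without confounding.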
Combination delivery systems composed of injectable hydrogels and drug-incorporated nanoparticles are urgently needed in regional cancer chemotherapy to facilitate efficient delivery of chemotherapeutic agents, enhance antitumor efficiency, and decrease side effects. Here, we developed a novel thermosensitive amphiphilic triblock copolymer consisting of methoxy poly(ethylene glycol), poly(octadecanedioic anhydride), and d,l-lactic acid oligomer (PEOALA), and built a combination system, the thermosensitive injectable hydrogel PTX/PEOALA(Gel), based on paclitaxel (PTX)-loaded PEOALA nanoparticles (NPs). PTX/PEOALA(Gel) could be stored as freeze-dried powders of paclitaxel-loaded PEOALA NPs, which could be easily redispersed in water at ambient temperature and form a hydrogel at the injection site in vivo. In vitro, PTX/PEOALA(Gel) showed no obvious cytotoxicity in comparison with Taxol (R) against MCF-7 and HeLa cells. However, in vivo, a single intratumoral injection of the PTX/PEOALA(Gel) formulation was more effective than four intravenous (i.v.) injections of Taxol (R) at a total dosage of 20 mg/kg in inhibiting tumor growth in MCF-7 tumor-bearing Balb/c mice, and the inhibition could be sustained for more than 17 d. The pharmacokinetic study demonstrated that the intratumoral injection of PTX/PEOALA(Gel) could greatly decrease the systemic exposure of PTX, as confirmed by the rather low plasma concentration, while prolonging circulation time and enhancing tumor PTX accumulation, implying fewer off-target side effects. In summary, the PTX/PEOALA(Gel) combination local delivery system could enhance the tumor-inhibition effect and tumor accumulation of PTX and lower the systemic exposure. Thus, the reconstituted PTX/PEOALA(Gel) system could potentially be a useful vehicle for regional cancer chemotherapy. Context: Particle micronization for inhalation can impart surface disorder (amorphism) to crystalline structures. 
This can lead to stability issues upon storage at elevated humidity from recrystallization of the amorphous state, which can subsequently affect the aerosol performance of the dry powder formulation. Objective: The aim of this study was to investigate the impact of an additive, magnesium stearate (MGST), on the stability and aerosol performance of co-milled active pharmaceutical ingredient (API) with lactose. Methods: Blends of API-lactose with/without MGST were prepared and co-milled in a jet-mill apparatus. Samples were stored at 50% relative humidity (RH) and 75% RH for 1, 5, and 15 d. Changes in particle size, agglomerate structure/strength, moisture sorption, and aerosol performance were analyzed by laser diffraction, scanning electron microscopy (SEM), dynamic vapor sorption (DVS), and in-vitro aerodynamic size assessment by impaction. Results: The co-milled formulation with MGST (5% w/w) showed a reduction in agglomerate size and strength after storage at elevated humidity compared with the co-milled formulation without MGST, as observed from SEM and laser diffraction. Hysteresis in the sorption/desorption isotherm was observed in the co-milled sample without MGST, which was likely due to the recrystallization of the amorphous regions of micronized lactose. Deterioration in aerosol performance after storage at elevated humidity was greater for the co-milled samples without MGST than for those co-milled with MGST. Conclusion: MGST has been shown to have a significant impact on co-milled dry powder stability after storage at elevated humidity in terms of physico-chemical properties and aerosol performance. Context: Bosentan is a poorly soluble drug and poses challenges in the design of drug delivery systems. Objective: The objective of this study is to enhance the solubility, dissolution and shelf-life of bosentan by formulating it as solid self-microemulsifying drug delivery system (S-SMEDDS) capsules. 
Materials and methods: Solubility of bosentan was tested in various liquid vehicles such as oils (rice bran and sunflower), surfactants (Span 20 and Tween 80) and co-surfactants (PEG 400 and propylene glycol), and microemulsions were developed. Bosentan was incorporated into appropriate microemulsion systems which were previously identified from pseudo-ternary phase diagrams. Bosentan-loaded SMEDDS were evaluated for drug content, drug release, zeta potential, and droplet size. The selected liquid SMEDDS were converted into solid SMEDDS by employing adsorption and melt granulation. Solid SMEDDS were characterized for micromeritics and evaluated for drug content, drug release, and shelf-life. Results: Isotropic systems R5, R13, S5, and S13 with submicron droplet size exhibited 85.45, 94.12, 81.67, and 96.64% drug release, respectively. Solid SMEDDS of the MR13 and AS13 formulations, with rapid reconstitution ability, exhibited on-par drug release of 84.85 and 86.74%, respectively. The formulations remained physicochemically intact for 1.02 and 1.56 years. Discussion: Liquid SMEDDS formulated with PEG 400 displayed optimal characteristics. Solid SMEDDS had higher dissolution profiles than bosentan due to modification of the crystalline structure of the drug upon microemulsification. Conclusion: Thus, solid SMEDDS addressed the solubility, dissolution, and stability issues of bosentan and offer a convenient clinical alternative. Objective: The objective of this study is to develop a new solubility enhancement strategy for the antipsychotic drug perphenazine (PPZ) in the form of its amorphous nanoparticle complex (or nanoplex) with the polyelectrolyte dextran sulfate (DXT). Significance: Poor bioavailability of PPZ necessitated the development of fast-dissolving PPZ formulations regardless of delivery routes. Existing fast-dissolving formulations, however, exhibited low PPZ payload. 
The high-payload PPZ-DXT nanoplex represents an attractive fast-dissolving formulation, as dissolution rate is known to be proportional to payload. Methods: The nanoplex was prepared by electrostatically driven complexation between PPZ and DXT in a simple process that involved only ambient mixing of PPZ and DXT solutions. We investigated the effects of key variables in drug-polyelectrolyte complexation (i.e. pH and charge ratio R-DXT/PPZ) on the physical characteristics and preparation efficiency of the nanoplex produced. Subsequently, we characterized the colloidal and amorphous-state stabilities, dissolution enhancement, and supersaturation generation of the nanoplex prepared at the optimal condition. Results: The physical characteristics of the nanoplex were governed by R-DXT/PPZ, while the preparation efficiency was governed by the preparation pH. Nanoplexes with a size of approximately 80 nm, a zeta potential of approximately -60 mV, and a payload of approximately 70% (w/w) were prepared at a nearly 90% PPZ utilization rate and approximately 60% yield. The nanoplex exhibited superior dissolution relative to native PPZ in simulated intestinal juice, resulting in high and prolonged apparent solubility with good storage stabilities. Conclusions: The simple yet efficient preparation, excellent physical characteristics, fast dissolution, and high apparent solubility exhibited by the PPZ-DXT nanoplex establish its potential as a new bioavailability enhancement strategy for PPZ. The purpose of this work was to formulate a piperine solid lipid nanoparticle (SLN) dispersion to exploit its efficacy orally and topically. Piperine SLN were prepared by the melt emulsification method and the formula was optimized by the application of a 3(2) factorial design. The nanoparticulate dispersion was evaluated for particle size, entrapment efficiency and zeta potential (ZP). 
The optimized batch (average size 128.80 nm, entrapment efficiency 78.71%, zeta potential -23.34 mV) was characterized by differential scanning calorimetry (DSC) and X-ray diffraction, which revealed the amorphous nature of piperine in the SLN. The prepared SLN were administered orally and topically to CFA-induced arthritic rats. An ex vivo study using a Franz diffusion cell indicated that piperine from the SLN gel formulation accumulates in the skin. Pharmacodynamic results indicate that both topical and oral piperine evoked a significant response compared to orally administered chloroquine suspension. The ELISA results show a significant reduction in TNF in treated rats, which might underlie the DMARD action of the piperine SLN. Context: Novel, safe, efficient and cost-effective nano-carriers from renewable resources have gained increasing interest for enhancing the solubility and bioavailability of hydrophobic drugs. Objectives: This study reports the synthesis of a novel biocompatible niosomal delivery system, based on the non-phospholipid human metabolite creatinine, for improved oral bioavailability of Azithromycin. Methods: The synthesized surfactant was characterized through spectroscopic and spectrometric techniques, and its potential for niosomal vesicle formation was then evaluated using Azithromycin as a model drug. Drug-loaded vesicles were characterized for size, polydispersity index (PDI), shape, drug encapsulation efficiency (EE), in vitro release and drug-excipient interaction using a zetasizer, atomic force microscopy (AFM), LC-MS/MS and FTIR. The biocompatibility of the surfactant was investigated through cell cytotoxicity, blood hemolysis and acute toxicity studies. Azithromycin encapsulated in niosomes was investigated for in vivo bioavailability in rabbits. Results: The vesicles were spherical, with a diameter of 2474.67 nm, hosting 73.29 +/- 3.51% of the drug. The surfactant was nontoxic against cell cultures and caused 5.80 +/- 0.51% hemolysis at 1000 mu g/mL. 
It was also found safe in mice up to 2.5 g/kg body weight. The synthesized surfactant-based niosomal vesicles showed enhanced oral bioavailability of Azithromycin in rabbits. Conclusions: The results of the present study confirm that the novel surfactant is highly biocompatible and that the niosomal vesicles can be efficiently used for improving the oral bioavailability of poorly water-soluble drugs. The current research work was executed with the aim to explore and promote the potential of self-microemulsifying drug delivery systems (SMEDDS) in the form of tablets, in order to enhance the solubility and oral bioavailability of the poorly aqueous-soluble drug repaglinide (RPG). RPG-loaded liquid SMEDDS were developed, consisting of Labrafil M 1944CS, Kolliphor EL and propylene glycol, and were then characterized on various parameters. After characterization and optimization, the liquid SMEDDS were converted into solid form by adsorption onto Aeroperl (R) 300 pharma and polyplasdone(TM) XL. Further, suitable excipients were selected and mixed with the solidified SMEDDS powder, followed by the preparation of self-microemulsifying tablets (SMETs) by a wet granulation-compression method. SMETs were subjected to differential scanning calorimetry (DSC) and powder X-ray diffraction (PXRD) studies, the results of which indicated transformation of the crystalline structure of RPG because of molecular-level dispersion of RPG in the liquid SMEDDS. This was further confirmed by scanning electron micrographs. SMETs showed more than 85% in vitro drug release (30 min), in contrast to conventional marketed tablets (13.2%) and the pure RPG drug (3.2%). In vivo studies showed that SMETs produced a marked decrease in blood glucose level and a prolonged duration of action (up to 8 h) in comparison with conventional marketed tablets and the pure RPG drug. In conclusion, SMETs serve as a promising tool for successful oral delivery of poorly aqueous-soluble drugs such as RPG. 
Objective: The objective of this study is to investigate the fate of an albumin-coupled nanoparticulate system versus a non-targeted drug carrier in the treatment of hemisectioned spinal cord injury (SCI). Significance: Targeted delivery of methylprednisolone (MP) and minocycline (MC) showed improved therapeutic efficacy as compared with non-targeted nanoparticles (NPS). Methods: Albumin-coupled, chitosan-stabilized, cationic NPS (albumin-MP+MC-NPS) of poly(lactide-co-glycolic acid) were prepared using the emulsion solvent evaporation method. The prepared NPS were characterized for drug entrapment efficiency, particle size, polydispersity index (PDI), zeta potential, and morphological characteristics. They were evaluated on pharmaceutical, toxicological, and pharmacological parameters. Results and discussion: In vitro release of MP+MC from albumin-MP+MC-NPS and MP+MC-NPS was well controlled over a period of eight days. A cell viability study showed the non-toxic nature of the developed NPS. Albumin-MP+MC-NPS showed prominent anti-inflammatory potential as compared with non-targeted NPS (MP+MC-NPS) when studied in LPS-induced inflamed astrocytes. Albumin-MP+MC-NPS reduced lesional volume and improved behavioral outcomes significantly in rats with SCI (hemisectioned injury model) when compared with MP+MC-NPS. Conclusions: The albumin-coupled NPS carrier offered an effective method of SCI treatment through safe co-administration of MP and MC. The in vitro and in vivo effectiveness of MP+MC was improved tremendously when compared with that shown by MP+MC-NPS, which could be attributed to the site-specific, controlled release of MP+MC at the inflammatory site. Generalized polynomial chaos (gPC) is a spectral technique in random space to represent random variables and stochastic processes in terms of orthogonal polynomials of the Askey scheme. 
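The representation just described, expanding a random quantity in Askey-scheme orthogonal polynomials matched to the input distribution, can be sketched for the Gaussian/Hermite pair. The test function exp is an arbitrary illustration, not taken from the paper:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite He_k

# Gauss-Hermite nodes/weights for the standard-normal weight exp(-x^2/2);
# weights sum to sqrt(2*pi), so dividing by it turns sums into expectations.
x, w = He.hermegauss(20)
norm = math.sqrt(2 * math.pi)

def gpc_coeffs(f, order):
    """gPC coefficients of f(X), X ~ N(0,1):  c_k = E[f(X) He_k(X)] / k!
    (He_k are orthogonal with E[He_j He_k] = k! * delta_jk)."""
    return [np.sum(w * f(x) * He.hermeval(x, [0] * k + [1]))
            / (norm * math.factorial(k))
            for k in range(order + 1)]

c = gpc_coeffs(np.exp, 6)
# c[0] is the mean of exp(X); the exact value is e^(1/2) ≈ 1.6487
print(round(c[0], 4))
```

For f = exp every coefficient comes out as e^(1/2)/k!, so the truncated series converges rapidly; this is the "better convergence from matching polynomial family to distribution" the abstract refers to.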
One of its most fruitful applications consists of solving random differential equations. With gPC, stochastic solutions are expressed as orthogonal polynomials of the input random parameters. Different types of orthogonal polynomials can be chosen to achieve better convergence. This choice is dictated by the key correspondence between the weight function associated with the orthogonal polynomials in the Askey scheme and the probability density functions of standard random variables. Alternatively, adaptive gPC constitutes a complementary spectral method to deal with arbitrary random variables in random differential equations. In its original formulation, adaptive gPC requires that both the unknowns and the input random parameters enter polynomially in the random differential equations. Regarding the inputs, if they appear as non-polynomial mappings of themselves, polynomial approximations are required and, as a consequence, accuracy is lost in the computations. In this paper an extended version of adaptive gPC is developed to circumvent these limitations by taking advantage of the random variable transformation method. A number of illustrative examples show the superiority of the extended adaptive gPC for solving nonlinear random differential equations. In addition, for the sake of completeness, in all examples randomness is tackled by nonlinear expressions. (C) 2017 Elsevier B.V. All rights reserved. In this work, a size-dependent model of a Sheremetev-Pelekh-Reddy-Levinson micro-beam is proposed and validated using the couple stress theory, taking into account large deformations. The applied Hamilton's principle yields the governing PDEs and boundary conditions. A comparison of the statics and dynamics of beams with and without size-dependent components is carried out. 
It is shown that the proposed model results in significant qualitative and quantitative changes in the nature of beam deformations, in comparison to the standard models employed so far. A novel scenario of transition from regular to chaotic vibrations of the size-dependent Sheremetev-Pelekh model, following the Pomeau-Manneville route to chaos, is also detected and illustrated, among others. (C) 2017 Elsevier B.V. All rights reserved. We show for the first time that a supratransmission threshold can be found in the discrete nonlinear Schrodinger equation modelling optical waveguide arrays with Kerr nonlinearity, using a two-dimensional map approach. Called the homoclinic nonlinear band gap threshold, this amplitude agrees with the numerical one even for strongly discrete waveguides and large frequencies. (C) 2017 Elsevier B.V. All rights reserved. In this paper, we propose two methods to compute non-monotonic Lyapunov functions for continuous-time systems which are asymptotically stable. The first method is to solve a linear optimization problem on a compact and bounded set. The proposed linear programming based algorithm delivers a continuous piecewise affine (CPA) non-monotonic Lyapunov function on a suitable triangulation covering the given compact and bounded set, excluding a small neighbourhood of the equilibrium. It is shown that for every asymptotically stable system there exists a suitable triangulation such that the proposed algorithm terminates successfully. The second method is to verify that a CPA function, constructed from the values of the norm of the state at all vertices of a suitable triangulation covering the given compact and bounded set, is a non-monotonic Lyapunov function on the given set excluding a small neighbourhood of the equilibrium. It is further proved that if the system is asymptotically stable, then there exists a suitable triangulation such that the second method succeeds. 
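The "non-monotonic" idea above can be illustrated numerically: a candidate built from the norm of the state may grow transiently along trajectories, yet still certify stability because it decreases over a finite horizon at every grid vertex. The concrete planar system, Euler integration, and grid below are illustrative assumptions, not the paper's LP construction:

```python
import math

def flow(x, y, T=4.0, h=1e-3):
    """Forward-Euler flow of the stable but non-normal system
       x' = -x + 5y,  y' = -y  (both eigenvalues equal -1)."""
    for _ in range(int(round(T / h))):
        x, y = x + h * (-x + 5 * y), y + h * (-y)
    return x, y

V = lambda x, y: math.hypot(x, y)  # norm-based candidate V(x) = ||x||

# Grid vertices on a compact set, excluding a neighbourhood of the origin.
vertices = [(i * 0.5, j * 0.5) for i in range(-2, 3) for j in range(-2, 3)
            if (i, j) != (0, 0)]

# V decreases over the finite horizon T = 4 at every vertex...
decrease = all(V(*flow(x, y)) < V(x, y) for x, y in vertices)
# ...but is NOT monotonically decreasing: over a short horizon it can grow.
monotone = all(V(*flow(x, y, T=0.05)) < V(x, y) for x, y in vertices)
print(decrease, monotone)  # → True False
```

The transient growth comes from the non-normality of the system matrix; a classical (monotone) Lyapunov analysis would have to search for a different V, whereas the non-monotonic framework accepts the simple norm candidate.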
The two proposed methods are compared via three examples. (C) 2017 Elsevier B.V. All rights reserved. The paper is devoted to the study of the existence and uniqueness of periodic solutions for a particular class of nonlinear fractional differential equations whose right-hand sides admit certain singularities. Our approach is based on the Krasnosel'skii and Schauder fixed point theorems and the monotone iterative technique, which enable us to extend some previously known results. The discussed problems are characterized by a Green's function with integrable singularities that disallow a direct use of classical techniques known from the theory of ordinary differential equations; therefore, proper modifications are proposed. Further, the paper presents simple numerical algorithms built directly on the iterative technique used in the theoretical proofs. Illustrative examples conclude the paper. (C) 2017 Elsevier B.V. All rights reserved. We investigate the dynamics of spin-orbit (SO) coupled BECs in a time-dependent harmonic trap and show the dynamical system to be completely integrable by constructing the Lax pair. We then employ the gauge transformation approach to observe the rapid oscillations of the condensates for a relatively smaller value of SO coupling in a time-independent harmonic trap compared to their counterparts in a transient trap. Keeping track of the evolution of the condensates in a transient trap during its transition from a confining to an expulsive trap, we notice that they collapse in the expulsive trap. We further show that one can manipulate the scattering length through Feshbach resonance to stretch the lifetime of the confining trap and revive the condensate. 
Considering a SO coupled state as the initial state, numerical simulation indicates that the reinforcement of Rabi coupling on SO coupled BECs generates the striped phase of the bright solitons and does not impact the stability of the condensates despite destroying the integrability of the dynamical system. (C) 2017 Elsevier B.V. All rights reserved. This work investigates the unsteady electroosmotic slip flow of a viscoelastic fluid through a parallel-plate micro-channel under the combined influence of electroosmotic and pressure-gradient forcings with asymmetric zeta potentials at the walls. The generalized second-grade fluid with a fractional derivative was used for the constitutive equation. The Navier slip model with different slip coefficients at the two walls was also considered. By employing the Debye-Huckel linearization and the Laplace and sin-cos-Fourier transforms, the analytical solutions for the velocity distribution are derived, and a finite difference method for this problem is also given. Finally, the influence of pertinent parameters on the generation of flow is presented graphically. (C) 2017 Elsevier B.V. All rights reserved. In this paper, an impulsive state feedback control model of white-headed langurs with sparse effect and continuous delay is investigated. We obtain a sufficient condition under which the system has a unique order-1 periodic solution through the method of successor functions, and prove the stability of the order-1 periodic solution by the limit method of successor-point sequences. Furthermore, we perform numerical analysis on the theoretical results. Our results show that artificial breeding and releasing captive langurs into the wild can effectively protect wild white-headed langurs with sparse effect and continuous delay. (C) 2017 Elsevier B.V. All rights reserved. A periodically forced series LCR circuit with Chua's diode as the nonlinear element exhibits slow passage through a Hopf bifurcation. 
This slow passage leads to a delay in the Hopf bifurcation. The delay in this bifurcation is a unique quantity and can be predicted using various numerical analyses. We find that when an additional periodic force is added to the system, the delay in bifurcation becomes chaotic, which leads to unpredictability in the bifurcation delay. Further, we study the bifurcation of the periodic delay to chaotic delay in the slow passage effect through strange nonchaotic delay. We also report the occurrence of strange nonchaotic dynamics while varying the parameter of the additional force included in the system. We observe that the system exhibits a hitherto unknown dynamical transition to a strange nonchaotic attractor. With the help of Lyapunov exponents, we explain the new transition to the strange nonchaotic attractor, and its mechanism is studied by making use of rational approximation theory. The birth of the SNA has also been confirmed numerically, using Poincare maps, the phase sensitivity exponent, the distribution of finite-time Lyapunov exponents and singular continuous spectrum analysis. (C) 2017 Elsevier B.V. All rights reserved. We consider extended starlike networks where the hub node is coupled with several chains of nodes representing star rays. Assuming that the nodes of the network are occupied by nonidentical self-oscillators, we study various forms of their cluster synchronization. A radial cluster emerges when the nodes are synchronized along a ray, while a circular cluster is formed by nodes without immediate connections but located at identical distances from the hub. By its nature, circular synchronization is a new manifestation of so-called remote synchronization [33]. We report its long-range form, in which the synchronized nodes interact through at least three intermediate nodes. Forms of long-range remote synchronization are elements of the scenario of transition to total synchronization of the network. We observe that the far ends of the rays synchronize first. 
Then more circular clusters appear, involving nodes closer to the hub. Subsequently the clusters merge and, finally, the whole network becomes synchronous. The behavior of the extended starlike networks is found to be strongly determined by the ray length, while varying the number of rays basically affects fine details of the dynamical picture. The symmetry of the star also extensively influences the dynamics: in an asymmetric star, circular clusters mainly vanish in favor of radial ones; however, long-range remote synchronization survives. (C) 2017 Elsevier B.V. All rights reserved. In this paper, under investigation is a sixth-order variable-coefficient nonlinear Schrodinger equation, which could describe the attosecond pulses in an optical fiber. Based on the self-similarity transformation and the Hirota method, one- and two-soliton solutions are obtained under certain constraints. Investigation shows that the velocities and shapes of the solitons and bound solitons are both affected by the sixth-order dispersion term, and the maximum intensities of the solitons and bound solitons increase when the gain function is positive and decrease when the gain function is negative, while the periodicity of the bound solitons is destroyed when the gain function is nonzero. (C) 2017 Elsevier B.V. All rights reserved. Piecewise smooth dynamical systems make use of discontinuities to model switching between regions of smooth evolution. This introduces an ambiguity in prescribing dynamics at the discontinuity: should the dynamics be given by a limiting value on one side or the other of the discontinuity, or by a member of some set containing those values? One way to remove the ambiguity is to regularize the discontinuity, most commonly either by smoothing it out, or by introducing a hysteresis between switching in one direction or the other across it. Here we show that the two can in general lead to qualitatively different dynamical outcomes. 
We then define a higher-dimensional model with both smoothing and hysteresis, and study the competing limits in which hysteretic or smoothing effects dominate the behaviour, only the former of which corresponds to Filippov's standard 'sliding modes'. (C) 2017 Elsevier B.V. All rights reserved. In this paper, a three-dimensional drug model is constructed to investigate the impact of media coverage on the spread and control of drug addiction. The dynamical behavior of the model is studied by using the basic reproduction number R0. The drug-free equilibrium is globally asymptotically stable if R0 < 1, and the drug addiction equilibrium is locally stable if R0 > 1. The results demonstrate that the media effect in the human population cannot change the stability of the equilibria but can affect the number of drug addicts. Sensitivity analyses are performed to seek effective control measures for drug treatment. Numerical simulations are given to support the theoretical results. (C) 2017 Elsevier B.V. All rights reserved. The nonlinear dynamical features of a gyroscopic system manifested in a rotation rate sensor are presented. A computational shooting method and Floquet multipliers are used to characterize the response. Response characteristics are demonstrated and studied by generating various frequency-response plots, force-response curves, time-history plots, and phase portraits. The effects of varying the DC bias voltages, the AC drive voltage and drive frequency, and the quality factors on the system response are studied in detail. The advantages of operating in the nonlinear regime are shown to include larger bandwidth and higher sensitivity. (C) 2017 Elsevier B.V. All rights reserved. We focus on power-law coherency as an alternative approach towards studying power-law cross-correlations between simultaneously recorded time series.
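The threshold role of the basic reproduction number R0 in the drug model above can be illustrated on a deliberately minimal SIS-type caricature (not the authors' three-dimensional model): dI/dt = beta*I*(1 - I) - gamma*I with R0 = beta/gamma, so the addiction-free state I = 0 attracts when R0 < 1 and the endemic level 1 - 1/R0 attracts when R0 > 1.

```python
# Forward-Euler integration of the one-compartment caricature.
# Parameter values are illustrative.

def simulate(beta, gamma, i0=0.2, dt=0.01, steps=200000):
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1.0 - i) - gamma * i)
    return i

low = simulate(beta=0.5, gamma=1.0)   # R0 = 0.5 < 1: dies out
high = simulate(beta=2.0, gamma=1.0)  # R0 = 2 > 1: endemic level 0.5
```

The same dichotomy, proved rigorously for the full model in the abstract, shows up numerically here: `low` collapses to zero while `high` settles at 1 - 1/R0.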
To be able to study empirical data, we introduce three estimators of the power-law coherency parameter Hp based on popular techniques usually utilized for studying power-law cross-correlations: detrended cross-correlation analysis (DCCA), detrending moving-average cross-correlation analysis (DMCA), and height cross-correlation analysis (HXA). In the finite-sample properties study, we focus on the bias, variance, and mean squared error of the estimators. We find that the DMCA-based method is the safest choice among the three. The HXA method is reasonable for long time series with at least 10^4 observations, which can be easily attainable in some disciplines but problematic in others. The DCCA-based method does not provide favorable properties, which even deteriorate with increasing time series length. The paper opens a new avenue towards studying cross-correlations between time series. (C) 2017 Elsevier B.V. All rights reserved. Under investigation in this paper is a discrete (2+1)-dimensional Ablowitz-Ladik equation, which has certain applications in nonlinear optics and Bose-Einstein condensation. Employing the Hirota method and symbolic computation, we obtain the bright/dark one-, two-, three- and N-soliton solutions. Asymptotic analysis indicates that the interactions between the bright/dark two solitons are elastic. Amplitudes and velocities of the bright and dark solitons increase as the coupling strength increases. Head-on and overtaking interactions between the bright two solitons, as well as the bound-state two solitons, are depicted. The overtaking interaction between the dark two solitons is also plotted. An increasing coupling strength likewise leads to increasing amplitudes and velocities of the bright/dark two solitons. (C) 2017 Published by Elsevier B.V. We present a numerical method to solve a time-space fractional Fokker-Planck equation with a space-time-dependent force field F(x, t) and diffusion coefficient d(x, t).
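Of the three estimation bases listed above, DCCA is the simplest to sketch: both series are integrated to profiles, split into boxes of size s, linearly detrended in each box, and the detrended (co)variances are accumulated. The DCCA correlation coefficient rho = F_xy / sqrt(F_xx * F_yy) shown here is a common single-scale byproduct; the power-law coherency estimators themselves fit such fluctuations across many scales, which is omitted. This is a textbook sketch, not the authors' estimator code.

```python
def _profile(x):
    mean = sum(x) / len(x)
    out, run = [], 0.0
    for v in x:
        run += v - mean
        out.append(run)
    return out

def _detrend(seg):
    # remove the least-squares line through (t, seg[t])
    n = len(seg)
    t_mean = (n - 1) / 2.0
    y_mean = sum(seg) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(seg))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return [y - (y_mean + slope * (t - t_mean)) for t, y in enumerate(seg)]

def dcca_coefficient(x, y, s):
    px, py = _profile(x), _profile(y)
    fxx = fyy = fxy = 0.0
    for start in range(0, len(x) - s + 1, s):
        dx = _detrend(px[start:start + s])
        dy = _detrend(py[start:start + s])
        fxx += sum(a * a for a in dx)
        fyy += sum(b * b for b in dy)
        fxy += sum(a * b for a, b in zip(dx, dy))
    return fxy / (fxx * fyy) ** 0.5

# deterministic toy series (simple LCG) and a linearly related partner
seed = 12345
x = []
for _ in range(100):
    seed = (1103515245 * seed + 12345) % (2 ** 31)
    x.append(seed / (2 ** 31))
y = [2.0 * v + 1.0 for v in x]
rho = dcca_coefficient(x, y, 10)
```

A perfectly linear relation yields rho = 1, anticorrelation rho = -1, mirroring an ordinary correlation coefficient on detrended profiles.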
When the problem being modelled includes time-dependent coefficients, the time fractional operator, which typically appears on the right-hand side of the fractional equation, should not act on those coefficients, and consequently the differential equation cannot be simplified using the standard technique of transferring the time fractional operator to the left-hand side of the equation. We take this into account when deriving the numerical method. Discussions of the unconditional stability and accuracy of the method are presented, including results that show the order of convergence is affected by the regularity of the solutions. The numerical experiments confirm that the convergence of the method is second order in time and space for sufficiently regular solutions, and they also illustrate how the order of convergence can depend on the regularity of the solutions. In this case, the rate of convergence can be improved by considering a non-uniform mesh. (C) 2017 Elsevier B.V. All rights reserved. This paper extends the study of noise-perturbed planar Julia sets to the spatial case by studying the Julia set of a complex Lorenz system (CLS). We give the definition of the Julia set of a complex Lorenz system and visualize its spatial fractal structure. Then, the symmetry property of the 3-D slice of the CLS Julia set is proved. Finally, the influence of additive and multiplicative noises on the CLS Julia set is analyzed in terms of structural damage and changes in symmetry, separately. (C) 2017 Elsevier B.V. All rights reserved. A diffusive autocatalytic bimolecular model with delayed feedback subject to Neumann boundary conditions is considered. We mainly study the stability of the unique positive equilibrium and the existence of periodic solutions. Our study shows that diffusion can give rise to Turing instability, and that the time delay can affect the stability of the positive equilibrium and result in the occurrence of Hopf bifurcations.
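The noise-perturbed planar Julia sets that the CLS study above generalizes are usually computed with an escape-time test. The sketch below uses the classic quadratic map z -> z^2 + c as a simple planar stand-in (not the complex Lorenz system), with an optional additive perturbation mimicking the noise studied in the text.

```python
def escapes(z, c, noise=0j, max_iter=100, radius=2.0):
    """Return the iteration at which |z| exceeds `radius`, or None if
    the orbit stays bounded for `max_iter` steps (i.e. z is taken to
    belong to the filled Julia set of z -> z**2 + c + noise)."""
    for k in range(max_iter):
        if abs(z) > radius:
            return k
        z = z * z + c + noise
    return None

in_set = escapes(0j, -1 + 0j)   # orbit 0, -1, 0, -1, ... stays bounded
outside = escapes(0j, 1 + 0j)   # orbit 0, 1, 2, 5, ... escapes
```

Sweeping `z` over a spatial grid and recording `escapes(z, c)` produces the familiar fractal picture; adding a nonzero `noise` term shows the structural damage the abstract analyzes.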
By applying normal form theory and center manifold reduction for partial functional differential equations, we investigate the stability and direction of the bifurcations. Finally, we give some simulations to illustrate our theoretical results. (C) 2017 Elsevier B.V. All rights reserved. A multiscale model of hospital infections, coupling a micro model of the growth of bacteria with a macro model describing the transmission of the bacteria among patients and health care workers (HCWs), was established to investigate the effects of antibiotic treatment on the transmission of the bacteria among patients and HCWs. The model was formulated by viewing the transmission rate from infected patients to HCWs and the shedding rate of bacteria from infected patients to the environment as saturated functions of the within-host bacterial load. The equilibria and the basic reproduction number of the coupled system were studied, and the global dynamics of the disease-free equilibrium and the endemic equilibrium were analyzed in detail by constructing two Lyapunov functions. Furthermore, the effects of drug treatment in the within-host model on the basic reproduction number and the dynamics of the coupled model were studied by coupling a pharmacokinetics model with the within-host model. Sensitivity analysis indicated that the growth rate of the bacteria, the maximum drug effect, and the dosing interval are the three most sensitive parameters contributing to the basic reproduction number. Thus, adopting "wonder" drugs to decrease the growth rate of the bacteria or to increase the drug's effect is the most effective measure, but changing the dosage regimen is also effective. A quantitative criterion for how to choose the best dosage regimen can also be obtained from numerical results. (C) 2017 Elsevier B.V. All rights reserved. This article investigates the combined effects of nonlinearities and substrate deformability on modulational instability.
For that, we consider a lattice model based on the nonlinear Klein-Gordon equation with an on-site potential of deformable shape. Such a consideration enables us to broaden the description of energy-localization mechanisms in various physical systems. We consider the strong-coupling limit and employ a semi-discrete approximation to show that nonlinear wave modulations can be described by an extended nonlinear Schrödinger equation containing a fourth-order dispersion component. The stability of the modulation of carrier waves is scrutinized, and the following findings are obtained analytically. The various domains of gain and instability are provided for various combinations of the parameters of the system. The instability gains depend strongly on the nonlinear terms and on the shape of the substrate. Depending on the system's parameters, our model can reduce to different sets of known equations, such as those for a negative-index material embedded in a Kerr medium, glass fibers, resonant optical fibers, and others. Consequently, some of the results obtained here are in agreement with those obtained in previous works. A suitable combination of nonlinear terms with the deformability of the substrate can be utilized to specifically control the amplitude of waves and consequently to stabilize their propagation. The results of the analytical investigations are validated and complemented by numerical simulations. (C) 2017 Elsevier B.V. All rights reserved. This paper compares several fractional operational matrices for solving a system of linear fractional differential equations (FDEs) of commensurate or incommensurate order. For this purpose, three fractional collocation differentiation matrices (FCDMs) based on finite differences are first proposed and compared with Podlubny's matrix previously used in the literature, after which two new efficient FCDMs based on Chebyshev collocation are proposed.
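The fractional collocation matrices compared above generalize the classical integer-order Chebyshev collocation differentiation matrix, which the sketch below builds on the Chebyshev-Gauss-Lobatto points x_j = cos(j*pi/N) using the standard construction with the "negative sum" trick for the diagonal. This is the textbook integer-order matrix, not the authors' fractional FCDMs.

```python
import math

def cheb_diff_matrix(n):
    """First-order Chebyshev collocation differentiation matrix on the
    n+1 Chebyshev-Gauss-Lobatto points; exact for polynomials of
    degree <= n."""
    x = [math.cos(math.pi * j / n) for j in range(n + 1)]
    c = [2.0 if j in (0, n) else 1.0 for j in range(n + 1)]
    d = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                d[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(n + 1):  # diagonal via the "negative sum" trick
        d[i][i] = -sum(d[i][j] for j in range(n + 1) if j != i)
    return x, d

# Differentiating f(x) = x**3 should give 3x**2 exactly for n >= 3.
x, d = cheb_diff_matrix(5)
f = [xi ** 3 for xi in x]
df = [sum(d[i][j] * f[j] for j in range(len(x))) for i in range(len(x))]
```

Fractional-order variants replace the entries with values derived from fractional differentiation of the polynomial basis, which is exactly where the size limitation discussed next arises.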
It is shown via an error analysis that the use of the well-known property of fractional differentiation of polynomial bases applied to these methods results in a limitation on the size of the obtained Chebyshev-based FCDMs. To compensate for this limitation, a new fast, spectrally accurate FCDM for fractional differentiation which does not require the use of the gamma function is proposed. Then, the Schur-Padé and Schur decomposition methods are implemented to enhance numerical stability; this method therefore overcomes the size limitation mentioned above. In several illustrative examples, the convergence and computation time of the proposed FCDMs are compared and their advantages and disadvantages are outlined. (C) 2017 Elsevier B.V. All rights reserved. We present a three-dimensional model of rain-induced landslides, based on cohesive spherical particles. The rainwater infiltration into the soil follows either the fractional or the fractal diffusion equation. We analytically solve the fractal partial differential equation (PDE) for diffusion with particular boundary conditions to simulate a rainfall event. We also develop a numerical integration scheme for the PDE and compare it with the analytical solution. We adapt the fractal diffusion equation to obtain the gravimetric water content, which we use as input to a triggering scheme based on the Mohr-Coulomb limit-equilibrium criterion. This triggering is then complemented by a standard molecular dynamics algorithm, with an interaction force inspired by the Lennard-Jones potential, to update the positions and velocities of the particles. We present our results for homogeneous and heterogeneous systems, i.e., systems composed of particles with the same or different radii, respectively. Interestingly, in the heterogeneous case, we observe segregation effects due to the different volumes of the particles. Finally, we analyze the parameter sensitivity for both the triggering and the propagation phases.
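The Mohr-Coulomb limit-equilibrium triggering used above rests on a simple criterion: failure occurs when the driving shear stress exceeds the shear strength c + sigma_n * tan(phi). The factor-of-safety form below is a standard infinite-slope-style sketch with illustrative symbols, not the paper's particle-level scheme.

```python
import math

def factor_of_safety(cohesion, normal_stress, shear_stress, phi_deg):
    """FS = (c + sigma_n * tan(phi)) / tau; failure when FS < 1.
    phi_deg is the internal friction angle in degrees."""
    strength = cohesion + normal_stress * math.tan(math.radians(phi_deg))
    return strength / shear_stress

def triggers(cohesion, normal_stress, shear_stress, phi_deg):
    """True when the Mohr-Coulomb criterion predicts failure."""
    return factor_of_safety(cohesion, normal_stress,
                            shear_stress, phi_deg) < 1.0
```

Rain infiltration enters such a criterion by raising pore pressure, which lowers the effective normal stress and thus the factor of safety until `triggers` flips to True.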
Our simulations confirm the results of a previous two-dimensional model and hence the method's feasible applicability to real cases. (C) 2017 Elsevier B.V. All rights reserved. The global structure of the nonlinear response of a mechanical centrifugal governor, forming in a two-dimensional parameter space, is studied in this paper. By using three kinds of phases, we describe how responses of periodicity, quasi-periodicity, and chaos organize self-similar structures as the parameters vary. For several parameter combinations, the regular vibration shows fractal characteristics; that is, a comb-shaped self-similar structure is generated by periodic responses alternating with intermittent chaos, and Arnold tongues embedded in the quasi-periodic response are organized according to the Stern-Brocot tree. In particular, a new type of mixed-mode oscillations (MMOs) is found in the periodic response. These unique structures reveal the natural connections of the various responses between parts, and between parts and the whole, in parameter space, based on fractal self-similarity. These remarkable and unexpected results provide a valid dynamical reference for practical applications of the mechanical centrifugal governor. (C) 2017 Elsevier B.V. All rights reserved. Spoken term detection (STD), the process of finding all occurrences of a specified search term in a large amount of speech segments, has many applications in multimedia search and information retrieval. It is known that the use of video information in the form of lip movements can improve the performance of STD in the presence of audio noise. However, research in this direction has been hampered by the unavailability of large annotated audio-visual databases for development. We propose a novel approach to audio-visual spoken term detection when only a small (low-resource) audio-visual database is available for development.
First, cross-database training is proposed as a novel framework using the fused hidden Markov modeling (HMM) technique, in which an audio model is trained using extensive, publicly available audio databases and then adapted to the visual data of the given audio-visual database. This approach is shown to perform better than the standard HMM joint-training method and also improves the performance of spoken term detection when used in the indexing stage. In another attempt, the external audio models are first adapted to the audio data of the given audio-visual database and then adapted to the visual data. This approach also improves both phone recognition and spoken term detection accuracy. Finally, the cross-database training technique is used as HMM initialization, and an extra parameter re-estimation step is applied to the initialized models using the Baum-Welch technique. The proposed approaches for audio-visual model training allow one to benefit both from the large out-of-domain audio databases that are available and from the small audio-visual database given for development, creating more accurate audio-visual models. (C) 2017 Elsevier Ltd. All rights reserved. Sparse coding, a successful representation method for many signals, has recently been employed in speech enhancement. This paper presents a new learning-based speech enhancement algorithm via sparse representation in the wavelet packet transform domain. We propose sparse dictionary learning procedures for training data of speech and noise signals based on a coherence criterion, for each subband of each decomposition level. Using these learning algorithms, the self-coherence between atoms of each dictionary and the mutual coherence between speech and noise dictionary atoms are minimized along with the approximation error. The speech enhancement algorithm is introduced in two scenarios, supervised and semi-supervised.
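Sparse coding, the representation underlying the enhancement method above, approximates a signal as a combination of a few dictionary atoms. The sketch below is a minimal matching-pursuit coder over a toy dictionary of unit-norm atoms; it is illustrative only, since the paper learns its dictionaries per wavelet-packet subband with coherence penalties.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(signal, atoms, n_iter=10):
    """Greedy sparse coding: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its contribution."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        scores = [dot(residual, a) for a in atoms]
        best = max(range(len(atoms)), key=lambda k: abs(scores[k]))
        coeffs[best] += scores[best]
        residual = [r - scores[best] * a
                    for r, a in zip(residual, atoms[best])]
    return coeffs, residual

# With an orthonormal toy dictionary the coder recovers the signal
# exactly in as many steps as there are active atoms.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coeffs, residual = matching_pursuit([3.0, 0.0, 4.0], atoms, n_iter=4)
```

The coefficient vector is sparse by construction, which is what the enhancement stages above exploit when separating speech atoms from noise atoms.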
In each scenario, a voice activity detector scheme is employed based on the energy of the sparse coefficient matrices when the observation data is coded over the corresponding dictionaries. In the proposed supervised scenario, we take advantage of domain adaptation techniques to transform a learned noise dictionary into a dictionary adapted to the noise conditions captured in the test environment. With this step, the observation data is sparsely coded, based on the current state of the noisy space, with low sparse approximation error. This technique plays a prominent role in obtaining better enhancement results, particularly when the noise is non-stationary. In the proposed semi-supervised scenario, adaptive thresholding of wavelet coefficients is carried out based on the variance of the estimated noise in each frame of the different subbands. The proposed approaches lead to significantly better speech enhancement results in comparison with earlier methods in this context and with traditional procedures, based on different objective and subjective measures as well as a statistical test. (C) 2017 Elsevier Ltd. All rights reserved. The i-vector representation and modeling technique has been successfully applied in spoken language identification (SLI). The advantage of using the i-vector representation is that any speech utterance of variable duration can be represented as a fixed-length vector. In modeling, a discriminative transform or classifier must be applied to emphasize the variations correlated with language identity, since the i-vector representation encodes several types of acoustic variation (e.g., speaker variation, transmission channel variation, etc.). Owing to its strong nonlinear discriminative power, the neural network model has been directly used to learn the mapping function between the i-vector representation and the language identity labels.
In most studies, only the point-wise feature-label information is fed to the model for parameter learning, which may result in model overfitting, particularly with limited training data. In this study, we propose to integrate pair-wise distance metric learning as a regularization of the model parameter optimization. In the representation space of the nonlinear transforms in the hidden layers, a distance metric learning is explicitly designed to minimize the pair-wise intra-class variation and maximize the inter-class variation. Using the pair-wise distance metric learning, the i-vectors are transformed into a new feature space, wherein they are much more discriminative for samples belonging to different languages while being much more similar for samples belonging to the same language. We tested the algorithm on an SLI task and obtained promising results, which outperformed conventional regularization methods. (C) 2017 Elsevier Ltd. All rights reserved. Named Entity Recognition (NER) is a key NLP task, which is all the more challenging on Web and user-generated content with their diverse and continuously changing language. This paper aims to quantify how this diversity impacts state-of-the-art NER methods, by measuring named entity (NE) and context variability, feature sparsity, and their effects on precision and recall. In particular, our findings indicate that NER approaches struggle to generalise in diverse genres with limited training data. Unseen NEs in particular play an important role; they have a higher incidence in diverse genres such as social media than in more regular genres such as newswire. Coupled with a generally higher incidence of unseen features and the lack of large training corpora, this leads to significantly lower F1 scores for diverse genres compared to more regular ones.
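The pair-wise distance metric regularizer described above for i-vectors pulls same-language embeddings together and pushes different-language embeddings apart. Below is a minimal contrastive-style penalty on fixed vectors; the paper applies such a term to hidden-layer representations during training, and the function names and margin value here are illustrative assumptions.

```python
def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pairwise_metric_loss(vectors, labels, margin=1.0):
    """Sum of intra-class squared distances (pull together) plus a
    hinge on inter-class pairs closer than the margin (push apart)."""
    loss = 0.0
    n = len(vectors)
    for i in range(n):
        for j in range(i + 1, n):
            d = sqdist(vectors[i], vectors[j])
            if labels[i] == labels[j]:
                loss += d                      # intra-class variation
            else:
                loss += max(0.0, margin - d)   # inter-class hinge
    return loss

# well-separated classes incur zero penalty
tight = pairwise_metric_loss([[0.0, 0.0], [0.0, 0.0], [5.0, 0.0]],
                             [0, 0, 1])
```

Added to the point-wise classification loss, such a term regularizes the parameters exactly in the sense described: the penalty vanishes only when classes are compact and separated.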
We also find that leading systems rely heavily on surface forms found in the training data, having problems generalising beyond these, and we offer explanations for this observation. (C) 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license. Fixed placements of inertial sensors have been utilized by previous human activity recognition algorithms to train the classifier. However, the distribution of sensor data is seriously affected by the sensor placement, and performance is degraded when a model trained on one placement is used on others. In order to tackle this problem, a fast and robust human activity recognition model called TransM-RKELM (Transfer-learning Mixed and Reduced Kernel Extreme Learning Machine) is proposed in this paper. It uses a kernel fusion method to reduce the influence of the choice of kernel function, and a reduced kernel is utilized to reduce the computational cost. After building an initial activity recognition model with the mixed and reduced kernel extreme learning machine (M-RKELM), in the online phase M-RKELM is utilized to classify the activity and adapt the model to new locations based on high-confidence recognition results in real time. Experimental results show that the proposed model can adapt the classifier to new sensor locations quickly and obtain good recognition performance. (C) 2017 Elsevier B.V. All rights reserved. Attributed graphs describe nodes via attribute vectors and relationships between different nodes via edges. To partition nodes into clusters with tighter correlations, an effective way is to apply clustering techniques to attributed graphs based on various criteria such as node connectivity and/or attribute similarity. Even though clusters typically form around nodes with tight edges and similar attributes, existing methods have only focused on one of these two data modalities.
In this paper, we view each node as an autonomous agent and develop an accurate and scalable multiagent system for extracting overlapping clusters in attributed graphs. First, a kernel function with a tunable bandwidth factor delta is introduced to measure the influence of each agent, and the agents with the highest local influence can be viewed as "leader" agents. Then, a novel local expansion strategy is proposed, which can be applied by each leader agent to absorb the most relevant followers in the graph. Finally, we design the cluster-aware multiagent system (CAMAS), in which agents communicate with each other freely under an efficient communication mechanism. Using the proposed multiagent system, we are able to uncover the optimal overlapping cluster configuration, i.e., nodes within one cluster are not only connected closely with each other but also have similar attributes. Our method is highly efficient: the computational time is shown to be nearly linearly dependent on the number of edges when delta lies in [0.5, 1). Finally, applications of the proposed method to a variety of synthetic benchmark graphs and real-life attributed graphs are demonstrated to verify its performance. (C) 2017 Elsevier B.V. All rights reserved. Localization for a disconnected sensor network is highly unlikely to be achieved by its own sensor nodes, since accessibility of information between any pair of sensor nodes cannot be guaranteed. In this paper, a mobile robot (or a mobile sensor node) is introduced to establish correlations among sparsely distributed sensor nodes which are disconnected, even isolated. The robot and the sensor network operate in a friendly manner, in which they cooperate to perceive each other for achieving more accurate localization, rather than trying to avoid being detected by each other.
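The kernel-based local influence used to select leader agents in the attributed-graph clustering above can be sketched as follows: each node sums a Gaussian-style kernel of its distance to every other node, with bandwidth factor delta, and a node whose influence is not exceeded by any neighbour acts as a leader. The distance matrix and kernel form here are illustrative assumptions, not the paper's exact definitions.

```python
import math

def influence(dists, delta):
    """dists[i] holds node i's distances to all nodes; the influence of
    node i is the kernel-weighted count of nodes around it."""
    return [sum(math.exp(-(d / delta) ** 2) for d in row)
            for row in dists]

def leaders(dists, adj, delta):
    """Nodes whose influence is a local maximum over their neighbours."""
    inf = influence(dists, delta)
    return [i for i in range(len(inf))
            if all(inf[i] >= inf[j] for j in adj[i])]

# three nodes on a line 0 - 1 - 2: the middle node dominates locally
dists = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
adj = [[1], [0, 2], [1]]
```

Shrinking delta sharpens the kernel so influence becomes more local, which is consistent with the bandwidth-dependent runtime behaviour reported above.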
The mobility of the robot allows the stationary and internally disconnected sensor nodes to be dynamically connected and correlated. On one hand, the robot performs simultaneous localization and mapping (SLAM) based on the constrained local submap filter (CLSF): it creates a local submap composed of the sensor nodes present in its immediate vicinity, and the locations of these nodes and the pose (position and orientation angle) of the robot are estimated within the local submap. On the other hand, the sensor nodes in the submap estimate the pose of the robot. A parallax-based robot pose estimation and tracking (PROPET) algorithm, which uses the relationship between two successive measurements of the robot's range and bearing, is proposed to continuously track the robot's pose with each sensor node. Then, the tracking results for the robot's pose from different sensor nodes are fused by a Kalman filter (KF). The multi-node fusion results are further integrated with the robot's SLAM result within the local submap to achieve more accurate localization of the robot and the sensor nodes. Finally, the submap is projected and fused into the global map by the CLSF to generate localization results represented in the global frame of reference. Simulation and experimental results are presented to show the performance of the proposed method for robot-sensor network cooperative localization. In particular, if the robot (or mobile sensor node) has the same sensing ability as the stationary sensor nodes, the localization accuracy can be significantly enhanced using the proposed method. (C) 2017 Elsevier B.V. All rights reserved. This paper considers the design of minimum mean square error (MMSE) transceivers in a wireless sensor network. The problem is nonconvex and challenging, and previous results (with partial solutions and/or unproved convergence) left much to be desired.
Here we propose several approaches to solve this problem: 2-block coordinate descent (2-BCD), an essentially cyclic multi-block method and its variants, and a distributive method. The proposed 2-BCD approach formulates the subproblem of joint beamformer optimization as a general second-order cone programming problem, which lends itself to standard numerical solvers and, unlike previous works, requires no extra assumptions. The proposed essentially cyclic multi-block approach further decomposes the joint beamformer design subproblem into multiple blocks and rigorously solves each with a semi-closed-form solution. The distributive algorithm optimizes the transmitters in a decentralized manner and has never been considered in the existing literature; its time complexity is independent of the number of sensors, making it especially suitable for large-scale networks. All previous BCD-based approaches left singularity issues unattended and convergence properties unaddressed; our proposals are the first to provide a complete and provably convergent analytical solution. Extensive analysis and simulations demonstrate the merits of the novel approaches relative to existing alternatives. (C) 2017 Elsevier B.V. All rights reserved. An ever-increasing part of communication between persons involves the use of pictures, due to the cheap availability of powerful cameras on smartphones and of storage space. The rising popularity of social networking applications such as Facebook, Twitter, and Instagram, and of instant messaging applications such as WhatsApp and WeChat, is clear evidence of this phenomenon, offering the opportunity to share in real time a pictorial representation of the context each individual is living in. The media rapidly exploited this phenomenon, using the same channel either to publish their reports or to gather additional information on an event through the community of users.
While the real-time use of images is managed through the metadata associated with an image (i.e., the timestamp, geolocation, tags, etc.), their retrieval from an archive may be far from trivial, as an image bears a rich semantic content that goes beyond the description provided by its metadata. It turns out that after more than 20 years of research on Content-Based Image Retrieval (CBIR), the enormous increase in the number and variety of images available in digital format is challenging the research community. It is easy to see that any approach aiming to face such challenges must rely on different image representations that need to be conveniently fused in order to adapt to the subjectivity of image semantics. This paper offers a journey through the main information fusion ingredients that a recipe for the design of a CBIR system should include to meet the demanding needs of users. (C) 2017 Elsevier B.V. All rights reserved. This paper presents a multi-sensor fusion strategy able to detect spurious sensor data that must be eliminated from the fusion procedure. The estimator used is the informational form of the Kalman Filter (KF), namely the Information Filter (IF). In order to detect erroneous sensor measurements, the Kullback-Leibler Divergence (KLD) between the a priori and a posteriori distributions of the IF is computed. The detection is based on two tests: one acts on the means and the other on the covariance matrices. An optimal thresholding method based on a Kullback-Leibler Criterion (KLC) is developed and discussed in order to replace classical approaches that heuristically fix the false-alarm probability. Multi-robot systems have become one of the major fields of study for indoor environments where environmental monitoring and crisis response must be ensured. Consequently, the robots are required to know their positions and orientations precisely in order to perform their missions successfully.
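The fault test above compares the a priori and a posteriori filter distributions through their Kullback-Leibler divergence and thresholds the result. For univariate Gaussians the divergence has the closed form used in this sketch; the paper works with the multivariate information-filter form, and the hand-picked threshold below stands in for the KLC-optimized one.

```python
import math

def kl_gauss(m0, s0, m1, s1):
    """KL(N(m0, s0^2) || N(m1, s1^2)) for univariate Gaussians:
    ln(s1/s0) + (s0^2 + (m0 - m1)^2) / (2 * s1^2) - 1/2."""
    return (math.log(s1 / s0)
            + (s0 ** 2 + (m0 - m1) ** 2) / (2.0 * s1 ** 2)
            - 0.5)

def is_faulty(m0, s0, m1, s1, threshold):
    """Flag a measurement when the divergence between the prior and
    posterior exceeds the threshold (fixed here by hand)."""
    return kl_gauss(m0, s0, m1, s1) > threshold
```

Identical distributions give zero divergence, while a large shift in the mean, as a spurious measurement would cause, pushes the divergence past the threshold.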
Fault detection and exclusion (FDE) play a crucial role in enhancing the integrity of localization of the multi-robot team. The main contributions of this paper are: developing a new method of sensor data fusion that tackles erroneous data; developing a Kullback-Leibler-based criterion for threshold optimization; and validating the approach with real experimental data from a group of robots. (C) 2017 Elsevier B.V. All rights reserved. With the rising popularity of social media in the context of environments based on the Internet of Things (IoT), semantic information has emerged as an important bridge to connect human intelligence with heterogeneous media big data. As a critical tool to improve media big data retrieval, semantic fusion encounters a number of challenges: the manual method is inefficient, and the automatic approach is inaccurate. To address these challenges, this paper proposes a solution called CSF (Crowdsourcing Semantic Fusion) that makes full use of the collective wisdom of social users and introduces crowdsourcing computing to semantic fusion. First, the correlation of cross-modal semantics is mined and the semantic objects are normalized for fusion. Second, we employ dimension reduction and relevance feedback approaches to reduce non-principal components and noise. Finally, we study the storage and distribution mechanism. Experimental results highlight the efficiency and accuracy of the proposed approach. The proposed method is an effective and practical cross-modal semantic fusion and distribution mechanism for heterogeneous social media; it provides a novel idea for social media semantic processing and uses an interactive visualization framework for social media knowledge mining and retrieval to improve semantic knowledge and the effectiveness of representation. (C) 2017 Elsevier B.V. All rights reserved.
With location-based social networks (LBSNs) flourishing, location check-in records offer a rich information resource for mining. Among the locations visited by a user, those attracting relatively more visits from that user can serve as a support for further mining and for the improvement of location-based services. Therefore, partitioning visited locations based on a user's visiting frequency is of great significance. The aim of our paper is to partition locations for individual users by utilizing classification in machine learning, categorizing a location for a user once he or she makes an initial check-in there. After feature extraction for each initial check-in record, we evaluate the contribution of three feature categories. The results show that the contribution of different feature categories varies in classification, with social features appearing to offer the least contribution. Finally, we run a test on the whole sample, comparing the results with two baselines based on majority voting. The results largely outperform the baselines, demonstrating the effectiveness of classification. (C) 2017 Elsevier B.V. All rights reserved. In this paper we propose an algorithm to solve group decision making problems using n-dimensional fuzzy sets, namely, sets in which the membership degree of each element to the set is given by an increasing tuple of n elements. The use of these sets has naturally led us to define admissible orders for n-dimensional fuzzy sets, to present a construction method for those orders, and to study OWA operators for aggregating the tuples used to represent the membership degrees of the elements. Under these conditions, we present an algorithm and apply it to a case study, in which we show that the exploitation phase which appears in many decision making methods can be omitted by just considering linear orders between tuples. (C) 2017 Elsevier B.V. All rights reserved.
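The OWA (ordered weighted averaging) operators used above to aggregate membership-degree tuples work by sorting the inputs in decreasing order and combining them with a fixed weight vector, so the weights attach to ranks rather than to particular arguments. This is the standard scalar OWA definition, sketched on plain numbers rather than the paper's n-dimensional tuples.

```python
def owa(values, weights):
    """Ordered weighted average: weights must be nonnegative and sum
    to 1; weight i multiplies the i-th largest input."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

Weight vectors (1, 0, ..., 0) and (0, ..., 0, 1) recover the maximum and minimum, and uniform weights recover the arithmetic mean, which is why a single OWA family spans the whole and-to-or aggregation range used in such decision-making methods.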
In many applications of information systems, learning algorithms have to act in dynamic environments where data are collected in the form of transient data streams. Compared to static data mining, processing streams imposes new computational requirements: algorithms must incrementally process incoming examples while using limited memory and time. Furthermore, due to the non-stationary characteristics of streaming data, prediction models are often also required to adapt to concept drifts. Among the many newly proposed stream algorithms, ensembles play an important role, in particular for non-stationary environments. This paper surveys research on ensembles for data stream classification as well as regression tasks. Besides presenting a comprehensive spectrum of ensemble approaches for data streams, we also discuss advanced learning concepts such as imbalanced data streams, novelty detection, active and semi-supervised learning, complex data representations, and structured outputs. The paper concludes with a discussion of open research problems and lines of future research. Published by Elsevier B.V. A new edge detection technique using transformation-group-based G-lets filters is proposed in this paper. Discretizing gradients seems to produce discontinuities in classic edge detectors. No single filter can identify meaningful edges at all scales, and a multiscale approach increases computation. It is a challenge to obtain localized edges without spurious ones due to noise and to integrate the obtained edges into meaningful object boundaries. Preserving edge continuity while strictly localizing edges requires filters that do not blur the image during preprocessing. G-lets filters are found to perform well on most types of images, including natural, noisy, low-resolution, and synthetic ones. 
In this paper, an edge detection algorithm using G-lets filters, which are built by direct factorization of linear transformation matrices using irreducible representations, is proposed. A multiresolution approach is shown to enhance the possibility of detecting faint edges. An edge tracing algorithm is presented to produce the edge image. The computational cost involved is lower than that of existing filters. The geometries in the original image are found to be preserved in the edge image. The edge tracing algorithm can construct object boundaries without the inner textures, in a way that is not completely dependent on intensity thresholding. The G-lets filters and edge operator are found to be a promising algorithm for drastically reducing the computation needed in real-time applications. The results are compared on the BSDS500 boundary detection dataset using the Pb and global Pb detectors. (C) 2017 Elsevier Ltd. All rights reserved. In the recent work "Fast computation of Jacobi-Fourier moments for invariant image recognition, Pattern Recognition 48 (2015) 1836-1843", the authors propose a new method for the recursive computation of Jacobi-Fourier moments. This method reduces the computational complexity of the radial and angular kernel functions of the moments, improving the numerical stability of the computation procedure. However, they use a rectangular domain for the computation of the Jacobi-Fourier moments. In this work, we demonstrate that the use of this domain involves the loss of kernel orthogonality. Also, errata and inaccuracies that could lead to erroneous results have been corrected and clarified. Furthermore, we propose a more precise moment computation procedure using a circular pixel tiling scheme, which is based on image interpolation and an adaptive Simpson quadrature method for the numerical integration. (C) 2017 Elsevier Ltd. All rights reserved. 
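The adaptive Simpson quadrature mentioned for the circular pixel tiling scheme is a standard recursive method; a minimal generic sketch (not the authors' exact integration code) looks like this:

```python
def _simpson(f, a, b):
    """One Simpson panel over [a, b]."""
    c = (a + b) / 2.0
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def adaptive_simpson(f, a, b, eps=1e-8):
    """Recursively subdivide until the two half-panel estimates agree
    with the whole-panel estimate to within the tolerance."""
    c = (a + b) / 2.0
    whole = _simpson(f, a, b)
    left, right = _simpson(f, a, c), _simpson(f, c, b)
    if abs(left + right - whole) <= 15.0 * eps:
        # Richardson-style correction term sharpens the accepted estimate.
        return left + right + (left + right - whole) / 15.0
    return (adaptive_simpson(f, a, c, eps / 2.0)
            + adaptive_simpson(f, c, b, eps / 2.0))
```

Because the subdivision concentrates effort where the integrand varies rapidly, such a rule suits kernel functions evaluated over interpolated pixel tiles near the circular boundary.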
Eye detection and eye state (closed/open) estimation are important for a wide range of applications, including iris recognition, visual interaction, and driver fatigue detection. Current work typically performs eye detection first, followed by eye state estimation with a separate classifier. Such an approach fails to capture the interactions between eye location and eye state. In this paper, we propose a method for simultaneous eye detection and eye state estimation. Based on a cascade regression framework, our method iteratively estimates the location of the eye and the probability of the eye being occluded by the eyelid. At each iteration of the cascaded regression, image features from the eye center as well as contextual image features from the eyelid and eye corners are jointly used to estimate the eye position and openness probability. Using the eye openness probability, the most likely eye state can be estimated. Since training requires a large number of facial images with labeled eye-related landmarks, we propose combining real and synthetic images for training. This learning-by-synthesis method further improves performance. Evaluations of our method on benchmark databases such as the BioID and Gi4E databases, as well as on real-world driving videos, demonstrate its superior performance compared to state-of-the-art methods for both eye detection and eye state estimation. (C) 2017 Elsevier Ltd. All rights reserved. The recently proposed weighted linear loss twin support vector machine (WLTSVM) is an efficient algorithm for binary classification. However, the performance of the multiple-class WLTSVM classifier needs improvement, since it uses the computationally expensive 'one-versus-rest' strategy. This paper presents a weighted linear loss multiple birth support vector machine based on information granulation (WLMSVM) to enhance the performance of multiple WLTSVM. 
Inspired by granular computing, WLMSVM divides the data into several granules and builds a set of sub-classifiers on the mixed granules. By introducing the weighted linear loss, the proposed approach only needs to solve simple linear equations. Moreover, since WLMSVM uses the 'all-versus-one' strategy, which is the key idea of the multiple birth support vector machine, the overall computational complexity of WLMSVM is lower than that of multiple WLTSVM. The effectiveness of the proposed approach is demonstrated by experimental results on artificial and benchmark datasets. (C) 2017 Elsevier Ltd. All rights reserved. A substantial number of the datasets stored for various applications are high-dimensional, with redundant and irrelevant features. Processing and analysing data under such circumstances is time-consuming and makes it difficult to obtain efficient predictive models. There is a strong need to carry out analyses of high-dimensional data in some lower-dimensional space, and one approach to achieve this is feature selection. This paper presents a new relevancy-redundancy approach, called the maximum relevance minimum multicollinearity (MRmMC) method, for feature selection and ranking, which can overcome some shortcomings of existing criteria. In the proposed method, relevant features are measured by correlation characteristics based on conditional variance, while redundancy elimination is achieved through multiple correlation assessment using an orthogonal projection scheme. A series of experiments was conducted on eight datasets from the UCI Machine Learning Repository, and the results show that the proposed method performs reasonably well for feature subset selection. (C) 2017 Elsevier Ltd. All rights reserved. The Minkowski weighted K-means (MWK-means) is a recently developed clustering algorithm capable of computing feature weights. 
The cluster-specific weights in MWK-means follow the intuitive idea that a feature with low variance should have a greater weight than a feature with high variance. The final clustering found by this algorithm depends on the selection of the Minkowski distance exponent. This paper explores the possibility of using the central Minkowski partition in the ensemble of all Minkowski partitions for selecting an optimal value of the Minkowski exponent. The central Minkowski partition also appears to be a good consensus partition. Furthermore, we discovered some striking correlations between the Minkowski profile, defined as a mapping of the Minkowski exponent values into the average similarity values of the optimal Minkowski partitions, and the Adjusted Rand Index vectors resulting from comparing the obtained partitions to the ground truth. Our findings were confirmed by a series of computational experiments involving synthetic Gaussian clusters and real-world data. (C) 2017 Elsevier Ltd. All rights reserved. Object detection is a significant step in intelligent surveillance. Existing methods achieve this goal by designing or learning specialized features and detection models. In contrast, we propose an effective method for accurate object detection inspired by the mechanism of memory and prediction in the brain. First, a fixed-size window is slid over a static image to generate an image sequence. Then, a convolutional neural network extracts a feature sequence from the image sequence. Finally, a long short-term memory network receives these sequential features in proper order to memorize and recognize the sequential patterns. Our contributions are 1) a memory-based classification model in which feature learning and sequence learning are subtly integrated, and 2) a memory-based prediction model specially designed to predict potential object locations in surveillance scenes. 
Compared with some state-of-the-art methods, our method obtains the best performance in terms of accuracy on three surveillance datasets. Our method may offer new insights for object detection research. (C) 2017 Elsevier Ltd. All rights reserved. This paper focuses on the problem of script identification in scene text images. Facing this problem with state-of-the-art CNN classifiers is not straightforward, as they fail to address a key characteristic of scene text instances: their extremely variable aspect ratio. Instead of resizing input images to a fixed aspect ratio, as in the typical use of holistic CNN classifiers, we propose a patch-based classification framework in order to preserve the discriminative parts of the image that are characteristic of its class. We describe a novel method based on the use of ensembles of conjoined networks to jointly learn discriminative stroke-part representations and their relative importance in a patch-based classification scheme. Our experiments with this learning procedure demonstrate state-of-the-art results on two public script identification datasets. In addition, we propose a new public benchmark dataset for the evaluation of multi-lingual scene text end-to-end reading systems. Experiments on this dataset demonstrate the key role of script identification in a complete end-to-end system that combines our script identification method with a previously published text detector and an off-the-shelf OCR engine. (C) 2017 Elsevier Ltd. All rights reserved. Acute bacterial skin and skin structure infections (ABSSSI) are some of the most commonly encountered infections worldwide. Hospitalizations as a result of ABSSSI are associated with high mortality. This article discusses the role of oritavancin and dalbavancin, two new lipoglycopeptides, in the context of the other available I.V. standard therapy options. Hepatitis C virus (HCV) is a major cause of chronic liver disease worldwide. 
Due to the asymptomatic nature of the infection, many acute cases of HCV infection are left undiagnosed, so screening individuals at risk is an important public health priority. New medications offer sustained virologic response rates of over 95%, fewer adverse reactions, and shorter durations of therapy. This article reviews the new treatment guidelines for the evaluation and management of patients with HCV infection. Military families are often faced with unique stressors that civilian families do not have to deal with, such as deployment, geographic separation, and frequent relocation. When an NP is providing care for a military family, it is important that these unique stressors are discussed and understood. NPs can employ the Causal Uncertainty Model to encourage effortful cognition and support family attributes to ameliorate the negative effects of the stressors these families may face. Fewer than half of the individuals in the United States with HIV are receiving the full benefit of treatment. Primary care providers play a pivotal role in creating a pipeline into HIV care following diagnosis. Using interview data and current literature, this article provides recommendations for promoting care initiation for individuals newly diagnosed with HIV. Clostridium difficile infection (CDI) is increasing in the outpatient setting, and older adults are at a higher risk for contracting CDI and experiencing poor outcomes. NPs may see this infection in the primary care setting. This article focuses on the presentation, treatment, and clinical practice implications for CDI in community-dwelling older adults. Vincenc Alexandr Bohdalek (Vincenz Alexander Bochdalek) was a well-known anatomist and pathologist in the nineteenth century. Today, however, his name is all but forgotten. Bohdalek described a number of anatomical structures; some of them became eponyms. 
Unfortunately, his findings concerning the innervation of the eye, upper jaw, hard palate, auditory system, and meninges are little known today. This current overview is based on available archival sources and provides insight into his results in the field of nervous system research, which account for almost half of his work. Bohdalek can clearly be considered a pioneer in the field we now call functional anatomy, as he tried to find a physiological explanation for the anatomical and pathological findings he observed. The work and results of this truly outstanding neuroscientist of his time are thus again available to current and future generations of neuroscientists and neuroanatomists. The persistent vegetative state (PVS) is one of the most iconic and misunderstood phrases in clinical neuroscience. Coined as a diagnostic category by Scottish neurosurgeon Bryan Jennett and American neurologist Fred Plum in 1972, the word vegetative first appeared in Aristotle's treatise On the Soul (circa mid-fourth century BCE). Aristotle influenced neuroscientists of the nineteenth and early-twentieth centuries, Xavier Bichat and Walter Timme, and informed their conceptions of the vegetative nervous system. Plum credits Bichat and Timme in his use of the phrase, thus putting the ancient and modern in dialogue. In addition to exploring Aristotle's definition of the vegetative in the original Greek, we put Aristotle in conversation with his contemporaries, Plato and the Hippocratics, to better apprehend theories of mind and consciousness in antiquity. Utilizing the discipline of reception studies in classics scholarship, we demonstrate the importance of etymology and historical origin when considering modern medical nosology. This article contrasts two American Physiological Societies, one founded near the beginning of the nineteenth century in 1837 and the other founded near its end in 1887. 
The contrast allows a perspective on how much budding neuroscience had developed during the nineteenth century in America. The contrast also emphasizes the complicated structure needed in both medicine and physiology to allow neurophysiology to flourish. The objectives of the American Physiological Society of 1887 were (and are) to promote physiological research and to codify physiology as a discipline. These would be accomplished by making physiology much more inclusive than traditionally accepted by raising research standards, by giving prestige to its members, by providing members a source of professional interchange, by protecting its members from antivivisectionists, and by promoting physiology as fundamental to medicine. The quantity of neuroscientific experiments by its members was striking. The main organizers of the society were Silas Weir Mitchell, John Call Dalton, Henry Pickering Bowditch, and Henry Newell Martin. The objective of the American Physiological Society of 1837 was to disperse knowledge of the laws of life and to promote human health and longevity. The primary organizers were William Andrus Alcott and Sylvester Graham with the encouragement of John Benson. Its technique was to use physiological information, not create it as was the case in 1887. Its object was to disseminate the word that healthy eating will improve the quality of life. The eponymous legacy of Sir William Richard Gowers (1845-1915) was the subject of a comprehensive appraisal first written for this journal late last year. Since the completion of that work, a revealing February 1903 letter has come to light recording, amongst other things, Gowers' firsthand and somewhat private opinions concerning some of his own eponymous contributions to medicine. This addendum to the primary author's original article will review and contextualize this very interesting find as it relates to Gowers' eponymous legacy. 
Gowers' ataxic paraplegia (referred to as Gowers' disease in the letter) and syringal hemorrhage are specially considered, and his broader neological contributions are also briefly addressed. For completion, a number of other previously unnoticed eponyms are added to the already impressive list of medical entities named in Gowers' honor, and a more complete collection of eponyms found in Gowers' Manual are tabulated for consideration. An acerbic footnote in Volume 3 (1818) of the five-volume great work of Franz Joseph Gall and Johann Gaspar Spurzheim, Anatomy and Physiology of the Nervous System in General and of the Brain in Particular with Observations on the Possibility of Understanding the Many Moral and Intellectual Dispositions of Man and Animals by the Configuration of Their Heads, marked the end of the collaboration between Gall, the founder of organologie, and Spurzheim, promoter of phrenology. We discuss the background of this note and the nature of the rift that marked the end of Gall and Spurzheim's collaboration. This essay begins from the intensified entanglements of technoscientific innovation with miscellaneous societal and public fields of interest and action over recent years. This has been accompanied by an apparent decline in the work of purification of discourses of natural and human agency, which Latour observed in 1993. Replacing such previous discursive purifications, we increasingly find technoscientific visions of the imagined-possible as key providers of public meanings and policies. This poses the question of what forms of legitimation are constituted by these sciences, including the ways in which they enter into articulations of public matters. Revisiting historical and contemporary theories of imagination and science, this essay proposes a joint focus on imagination, publics and technoscience and their mutual co-production over time. 
This focus is then directed towards recent reconfigurations of technosciences with their imagined publics and towards how public issues may become constituted by social actors as active imagination-exercising agents. In this article, we analyse a 2013 press conference hosting the world's first tasting of a laboratory-grown hamburger. We explore this as a media event: an exceptional performative moment in which common meanings are mobilised and a connection to a shared centre of reality is offered. We develop our own theoretical contribution, the promotional public, to characterise the affirmative and partial patchwork of carefully selected actors invoked during the burger tasting. Our account draws on three areas of analysis: interview data with the scientists who developed the burger, media analysis of the streamed press conference itself, and media analysis of social media during and following the event. We argue that the call to witness an experiment is a form of promotion and that such promotional material also offers an address that invokes a public with its attendant tensions. Cochlear implantation is now regarded as one of the most successful medical technologies. It carries promises to provide deaf and hearing-impaired individuals with a technological sense of hearing and access to participate on a more equal level in social life. In this article, we explore the adoption of cochlear implantation among Danish users in order to shed more light on its social and political implications. We situate cochlear implantation in a framework of new life science advances, politics, and user experiences. Analytically, we draw upon the notion of the social imaginary and explore the social dimension of life science through a notion of public politics adopted from the political theory of John Dewey. 
We show how cochlear implantation engages different social imaginaries at the collective and individual levels, and we suggest that users share an imaginary of being "wired to freedom" that involves new access to social life, continuous communicative challenges, common practices, and experiences. In looking at their lives as "wired to freedom," we hope to promote a wider spectrum of civic participation for the benefit of future life science developments within and beyond the field of cochlear implantation. As our empirical observations are largely based in the Scandinavian countries (notably Denmark), we also provide some reflections on the character of the technology-friendly Scandinavian welfare states and the unintended consequences that may follow in the wake of rapid implementation of life science technology in society. In recent years, there has been an explosion of do-it-yourself, maker, and hacker spaces in Europe. Through maker and do-it-yourself initiatives, 'hacking' is moving into the everyday life of citizens. This article explores the collective and political nature of those hacks by reporting on empirical work on electronic-waste and do-it-yourself-biology hacking. Using Dewey's experimental approach to politics, we analyse hacks as 'inquiry' to see how they serve to articulate public and political action. We argue that do-it-yourself and makers' hacks are technical and political demonstrations. What do-it-yourself and makers' hacks ultimately demonstrate is that things can be done otherwise and that 'you' can also do it. In this sense, they have a potential viral effect. The final part of the article explores some potential shortcomings of such a politics of demonstration. The decision in Europe to implement biometric passports, visas, and residence permits was made at the highest levels without much consultation or checks and balances. 
Council regulation came into force relatively unnoticed in January 2005, as part of wider securitization policies urging systems interoperability and data sharing across borders. This article examines the biometric imaginary that characterizes this European Union decision, dictated by executive powers in the policy vacuum after 9/11 - a depiction of mobility governance, technological necessity and whom/what to trust or distrust, calling upon phantom publics to justify decisions rather than test their grounding. We consult an online blog we operated in 2010 to unravel this imaginary years on. Drawing on Dewey's problem of the public, we discuss this temporary opening of a public space in which the imaginary could be reframed and contested, and how such activities may shape, if at all, relations between politics, publics, policy intervention and societal development. A common thread in the contributions to this special issue can be found in the Foucauldian notion of intensification. Technoscientific imaginaries and publics have long been embroiled; yet, elements of novelty in their relationship can be detected in the ambivalence of the overcoming of traditional purification work; the expanding production of 'prototypical' truths; the uncertain threshold between publics of enquirers, witnesses and lookouts; and the growing indistinction of the everyday and the sublime, of trivial and non-trivial futures. The intensification of old patterns and recent trends determines a critical moment in a literal sense of the word: novel potentialities for a democratic governance of technosciences are opening up, but novel dominative opportunities are disclosing as well. In an attempt to qualify changes to science news reporting due to the impact of the Internet, we studied all science news articles published in Danish national newspapers in a November week in 1999 and 2012, respectively. 
We find the same amount of science coverage, about 4% of the total news production, in both years, although the tabloids produce more science news in 2012. Online science news also received high priority. Journalists in 2012 more often than in 1999 make reference to scientific journals and cite a wider range of journals. Science news in 2012 is more international and politically oriented than in 1999. Based on these findings, we suggest that science news, due partly to the emergence of online resources, is becoming more diverse and available to a wider audience. Science news is no longer for the elite but has spread to virtually everywhere in the national news system. Synthetic biology will probably have a high impact on a variety of fields, such as healthcare, the environment, biofuels, and agriculture. A driving theme in European research policy is the importance of maintaining public legitimacy and support. Media can influence public attitudes and are therefore an important object of study. Through qualitative content analysis, this study investigates the press coverage of synthetic biology in the major Nordic countries between 2009 and 2014. The press coverage was found to be event-driven, and there were striking similarities between the countries when it comes to framing, language use, and treated themes. Reporters showed a marked dependence on their sources, mainly scientists and stakeholders, who thus drive the media agenda. The media portrayal was very positive, with an optimistic look at future benefits and very little discussion of possible risks. Both academic and legal communities have cautioned that laypersons may be unduly persuaded by images of the brain and may fail to interpret them appropriately. While early studies confirmed this concern, a second wave of research was repeatedly unable to find evidence of such a bias. The newest wave of studies paints a more nuanced picture in which, under certain circumstances, a neuroimage bias reemerges. 
To help make sense of this discordant body of research, we highlight the contextual significance of understanding how laypersons' decision making is or is not impacted by neuroimages, provide an overview of findings from all sides of the neuroimage bias question, and discuss what these findings mean for public use and understanding of neuroimages. When the Fukushima accident occurred in March 2011, Finland was at the height of a nuclear renaissance, with the Government's decision-in-principle in 2010 to allow construction of two new nuclear reactors. This article examines the nuclear power debate in Finland after Fukushima. We deploy the concepts of (de)politicisation and hyperpoliticisation in the analysis of articles in the country's main newspaper. Our analysis indicates that Finnish nuclear exceptionalism manifested in the safety-related depoliticising arguments and the nation's prosperity-related hyperpoliticisation arguments of the pro-nuclear camp. The anti-nuclear camp used politicisation strategies, such as economic arguments, to show the unprofitability of nuclear power. The Fukushima accident had a clear effect on Finnish nuclear policy: the government programme of 2011 excluded nuclear new build. However, in 2014 the majority of Parliament again supported nuclear power. Hence, the period after Fukushima until 2014 could be described as one of continued but undermined loyalty to nuclear power. The Belarusian government's decision of the last decade to build a nuclear power plant near the city of Ostrovets, in northern Belarus, has proven to be controversial, resulting in a great deal of debate about nuclear energy in the country. The debate was inevitably shaped by the traumatic event that affected Belarus: the Chernobyl nuclear accident of 1986. The Belarusian authorities have consistently promoted a positive view of nuclear energy to the population in order to overcome the so-called 'Chernobyl syndrome' and have deliberately shaped nuclear risk communication. 
As a result, the issue of trust remains crucial in all nuclear debates in Belarus. This article explores the evolution of the nuclear energy debate and its associated controversies in the Portuguese parliament. The analysis focuses on the dictatorial regime of the New State (from the beginning of the nuclear program in 1951 until the 1974 revolution) and on the democratic period (post-1974). Portugal, as an exporter of uranium minerals, invested significantly in the development of a national capacity in nuclear research but never developed an endogenous nuclear power infrastructure. Through the analysis of parliamentary debates, this article characterizes the dynamic evolution of the Portuguese sociotechnical imaginary of nuclear energy and technology, interlinked with ambivalent representations, including the promise of nuclear energy as key to the constitution of a technological Nation or as prompting new sociotechnical risks. This research examines the evolution of nuclear technology in Spain from the early years of the Franco dictatorship to the global financial crisis, and the technology's influence on Spanish culture. To this end, we take a sociological perspective, with science culture and social perceptions of risk in knowledge societies serving as the two elements of focus in this work. In this sense, this article analyses the transformation of social relationships in light of technological changes. We propose technology as a strategic place from which to observe the institutional and organisational dynamics of technoscientific risks, the expert role, and Spain's science culture. In addition, more specifically, within the language of co-production, we 'follow the actor' and favour new forms of citizen participation that promote ethics in the discussion of technological issues. Expert disputes can present laypeople with several challenges, including trying to understand why such disputes occur. 
In an online survey of the US public, we used a psychometric approach to elicit perceptions of expert disputes for 56 forecasts sampled from seven domains. People with low education, or with low self-reported topic knowledge, were most likely to attribute disputes to expert incompetence. People with higher self-reported knowledge tended to attribute disputes to expert bias due to financial or ideological reasons. The more highly educated and cognitively able were most likely to attribute disputes to natural factors, such as the irreducible complexity and randomness of the phenomenon. Our results show that laypeople tend to use coherent, albeit potentially overly narrow, attributions to make sense of expert disputes and that these explanations vary across different segments of the population. We highlight several important implications for scientists, risk managers, and decision makers. Whether people blindly trust experts on all occasions or whether they evaluate experts' views and question them if necessary is a vital question. This study investigates associations of human values with the readiness to question experts' views and with one's reasons for not disagreeing with experts among randomly sampled Finns. Readiness to question experts' views and one's reasons for not disagreeing were inferred from self-reported written accounts. Value priorities were measured with Schwartz et al.'s Portrait Values Questionnaire and Wach and Hammer's items concerning rational and non-rational truth. The results showed that, after adjusting for the effects of age, sex, and education, the values of power and rational truth were positively associated, whereas the values of security, conformity, and tradition were negatively associated, with readiness to question experts' views. Furthermore, the analysis indicated that the reasons for not disagreeing with experts were related to individual factors, situational factors, social risks, and views about experts. 
Several studies conducted in Western democracies have indicated that men continue to be overrepresented and women underrepresented as experts in the media. This article explores the situation in Finland, a progressive and 'female-friendly' Nordic country with highly educated women who are widely present in the job market. The analysis is based on three sets of research data: a wide set of media data, a survey and interviews. This study reveals that public expertise continues to be male dominated in Finland: less than 30% of the experts interviewed in the news media are women. While the distribution of work and power in the labour market may explain some of the observed gender gap, journalistic practices and a masculine tradition of public expertise are likely to play a role as well. This article highlights how African men and women in South Africa account for the plausibility of alternative beliefs about the origins of HIV and the existence of a cure. This study draws on the notion of a street-level epistemology of trust (knowledge generated by individuals through their everyday observations and experiences) to account for individuals' trust or mistrust of official claims versus alternative explanations about HIV and AIDS. Focus group respondents describe how past experiences, combined with observations about the power of scientific developments and perceptions of disjunctures in information, fuel their uncertainty and skepticism about official claims. HIV prevention campaigns may be strengthened by drawing on experiential aspects of HIV and AIDS to lend credibility to scientific claims, while recognizing that some doubts about the trustworthiness of scientific evidence are a form of skeptical engagement rather than of outright rejection. Materials designed for self-guided experiences such as worksheets and digital applications are widely used as tools to enable interactive science exhibitions to support students' progress towards conceptual understanding. 
However, there is a need to find expedient ways to evaluate the quality of educational experiences resulting from the use of such tools. Towards this end, the approach of this study is to focus on students' behaviours and relate identified behaviour categories to learning theory. An intervention developed as a case for this study is presented. The intervention design aimed at setting up a learning environment where bodily, text-based, verbal and social experiences are embedded to facilitate progress towards conceptual understanding via the use of group assignments where students experience phenomena that correspond to focal concepts. To this end, six tasks were designed, five customised for energy-related exhibits and one which gave teachers a role to support students in pulling together the concepts they had encountered. Video recordings were transcribed, and the analysis investigated the quality of the intervention based on both verbal and non-verbal behaviours during the six tasks. Two overarching learning-related behavioural categories are identified: one reflecting general overall engagement in the learning environment, and a second, designated multi-modal discussions, which is indicative of deeper engagement and, in turn, of the possibility of conceptual learning outcomes. More broadly, this research implies that frameworks based on students' behaviours can contribute to the design of valuable learning experiences and evaluation focusing on relevance for concept learning. Science opportunities in out-of-school time (OST) programs hold potential for expanding access to science, engineering, and technology (SET) pathways for populations that have not participated in these fields at equitable rates (Coalition for Science After School, 2014). This mixed-methods study examines the relationship between the diversity of youth participants and the organizational and program design features of a broad sample of SET-focused OST programs in the USA. 
Overall, many programs in our study appeared to deliver high-quality programming by providing immersive experiences for youth that included inquiry-based learning and positive youth development. Encouragingly, many programs served large numbers of underrepresented minority youth and girls, and these programs often showed the most numerous indicators of high-quality learning experiences. While location and a diversity-oriented organizational mission were related to youth diversity, highly diverse programs enacted their mission by developing partnerships, engaging communities, local leaders, and families, and delivering long-term, supportive programs for youth. Thus, SET-focused OST programs hold great promise in promoting broad access to rich science experiences, yet specific programmatic and organizational features are highly related to the diversity of youth participants. There is an ongoing tension for scientists when deciding to engage with the public about their research as many scientists view direct participation as peripheral to their role. Pressures of time, lack of support by management and a lack of communicative skills are identified by scientists as reasons for not committing to communicative initiatives. We aimed to explore and explain the organizational culture of a research community that actively communicates with the public and has an international research culture. The Centre for Brain Research (CBR) was identified as a model and was analyzed using the concept of Complex Adaptive Systems (CAS). Twelve participants (scientists (8), clinicians (1), community liaison people (2)) and an identified director of the organization were interviewed. 
Direct quotes from interviews were used to illustrate the characteristics of CAS, for example: a variety of agents interacting, adaptive learning within the organization, non-linear dynamic behavior arising from aggregates of groups whose actions emerge from self-organizing behavior, and the development of an emergent culture. This analysis showed that complexity theory was a suitable framework for analyzing the sustainable communicative organization within CBR. Science hobbyists engage in self-directed, free-choice science learning and many have considerable expertise in their hobby area. This study focused on astronomy and birding hobbyists and examined how they used organizations to support their hobby engagement. Interviews were conducted with 58 amateur astronomers and 49 birders from the midwestern and southeastern United States. A learning ecology framework was used to map the community contexts with which the hobbyists interacted. Results indicated seven contexts that supported the participants' hobby involvement over time: home, K-12 schools, universities, informal learning institutions, hobby clubs, conferences, and community organizations. Three themes emerged that described how hobbyists interacted with organizations in their communities: (1) organizations provided multiple points of entrance into the science-learning ecosystem, (2) organizations acted as catalysts to facilitate a hobbyist's development in their hobby, and (3) the relationship between hobbyists and organizations they used for learning eventually became bidirectional. Results showed that both astronomy and birding hobbyists used science-learning organizations to meet their hobby-related learning goals. Most hobbyists in the sample (90% astronomers, 78% birders) also engaged in outreach and shared their hobby with members of their community. Patterns of interaction of the astronomy and birding hobbyists within the seven contexts are discussed. 
Science communication is a diverse and transdisciplinary field and is taught most effectively when the skills involved are tailored to specific educational contexts. Few academic resources exist to guide the teaching of communication with non-scientific audiences for an undergraduate science context. This mixed methods study aimed to explore what skills for the effective communication of science with non-scientific audiences should be taught within the Australian Bachelor of Science. This was done to provide a basis from which to establish a teaching resource for undergraduate curriculum development. First, an extensive critique of academic literature was completed to distil the communication 'skills' or 'elements' commonly cited as being central to the effective communication of science from across the fields of science, communication, education, and science communication. A list of 'key elements' or 'core skills' was hence produced and systematically critiqued, edited, and validated by experts in the above four fields using a version of the Delphi method. Each of the skills identified was considered by experts to be mostly, highly, or absolutely essential, and the resource as a whole was validated as 'Extremely applicable', within the context of teaching undergraduate science students to communicate with non-scientific audiences. The result of this study is an evidence-based teaching resource: '12 Core skills for effective science communication', which is reflective of current theory and practice. This resource may be used in teaching or as a guide to the development of communication skills for undergraduate science students in Australia and elsewhere. Dirac showed that in a (k - 1)-connected graph there is a path through any k vertices. The path k-connectivity π_k(G) of a graph G, which is a generalization of Dirac's notion, was introduced by Hager in 1986. It is natural to introduce the analogous concept of path k-edge-connectivity ω_k(G) of a graph G. 
Denote by G ∘ H the lexicographic product of two graphs G and H. In this paper, we prove that ω_3(G ∘ H) ≥ ω_3(G)⌊3|V(H)|/4⌋ for any two graphs G and H. Moreover, the bound is sharp. We also derive an upper bound on ω_3(G ∘ H), namely ω_3(G ∘ H) ≤ min{2ω_3(G)|V(H)|^2, δ(H) + δ(G)|V(H)|}. We demonstrate the usefulness of the proposed constructions by applying them to some instances of lexicographic product networks. (C) 2017 Elsevier Inc. All rights reserved. Let d_{L(G)}(e) be the degree of an edge e in the line graph L(G) of a graph G. The edge versions of the atom-bond connectivity (ABC_e) and geometric-arithmetic (GA_e) indices of G are defined as ABC_e(G) = Σ_{ef ∈ E(L(G))} √((d_{L(G)}(e) + d_{L(G)}(f) - 2)/(d_{L(G)}(e) · d_{L(G)}(f))) and GA_e(G) = Σ_{ef ∈ E(L(G))} 2√(d_{L(G)}(e) · d_{L(G)}(f))/(d_{L(G)}(e) + d_{L(G)}(f)). In this paper, we study the ABC_e and GA_e indices for join graphs and certain graph operations. (C) 2017 Elsevier Inc. All rights reserved. We study the structural stability for a thermal convection model with temperature-dependent solubility. When the spatial domain Ω is bounded in R^3, we show that the solution depends continuously on the Boussinesq coefficient λ by using the method of a second-order differential inequality. In the procedure of deriving the result, we also obtain a priori bounds for the temperature T and the salt concentration C. (C) 2017 Elsevier Inc. All rights reserved. Since the last financial crisis, a relevant effort in quantitative finance research concerns the consideration of counterparty risk in financial contracts, especially in the pricing of derivatives. As a consequence of this new ingredient, new models, mathematical tools and numerical methods are required. 
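The edge-index definitions above can be computed directly from an edge list, using the fact that the degree of an edge e = uv in L(G) is deg(u) + deg(v) - 2 and that two edges are adjacent in L(G) exactly when they share an endpoint. A minimal sketch in Python (the function name and graph representation are illustrative, not from the paper):

```python
from itertools import combinations
from math import sqrt

def edge_abc_ga(edges):
    """Edge versions of the ABC and GA indices of a graph given as an edge list.

    The degree of an edge e = uv in the line graph L(G) equals
    deg(u) + deg(v) - 2; edges are adjacent in L(G) when they share an endpoint.
    """
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    d = {e: deg[e[0]] + deg[e[1]] - 2 for e in edges}
    abc = ga = 0.0
    for e, f in combinations(edges, 2):
        if set(e) & set(f):  # e and f share an endpoint, so ef is in E(L(G))
            de, df = d[e], d[f]
            abc += sqrt((de + df - 2) / (de * df))
            ga += 2 * sqrt(de * df) / (de + df)
    return abc, ga
```

For the path on four vertices, with edges (1,2), (2,3), (3,4), the line graph is a path on three vertices and each sum has two equal terms.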
In the present paper, we mainly consider the problem formulation in terms of partial differential equations (PDEs) models to price the total credit value adjustment (XVA) to be added to the price of the derivative without counterparty risk. Thus, in the case of European options and forward contracts different linear and non-linear PDEs arise. In the present paper we propose suitable boundary conditions and original numerical methods to solve these PDEs problems. Moreover, for the first time in the literature, we consider the XVA associated with American options by introducing complementarity problems associated with PDEs, as well as numerical methods to solve them. Finally, numerical examples are presented to illustrate the behavior of the models and numerical methods and to recover the expected qualitative and quantitative properties of the XVA adjustments in different cases. Also, the first-order convergence of the numerical method is illustrated when applied to particular cases in which the analytical expression for the XVA is available. (C) 2017 Elsevier Inc. All rights reserved. We give fully explicit upper and lower bounds for the constants in two known inequalities related to the quadratic nonlinearity of the incompressible (Euler or) Navier-Stokes equations on the torus T^d. These inequalities are "tame" generalizations (in the sense of Nash-Moser) of the ones analyzed in the previous works (Morosi and Pizzocchero (2013) [6]). (C) 2017 Elsevier Inc. All rights reserved. A widely observed scenario in ecological systems is that populations interact not only with those living in the same spatial location but also with those in spatially adjacent locations, a phenomenon called nonlocal interaction. In this paper, we explore the role of nonlocal interaction in the emergence of spatial patterns in a prey-predator model under the reaction-diffusion framework, which is described by two coupled integro-differential equations. 
We first prove the existence and uniqueness of the global solution by means of contraction mapping theory and then conduct a stability analysis of the positive equilibrium. We find that nonlocal interaction can induce Turing bifurcation and drive the formation of stationary spatial patterns. Finally, we carry out numerical simulations to demonstrate our analytical findings. (C) 2017 Elsevier Inc. All rights reserved. When a set of players cooperate, they need to decide how the collective cost should be allocated among them. Cooperative game theory provides several methods, or solution concepts, that can be used as tools for cost allocation. In this note, we consider a specific solution concept called the Equal Profit Method (EPM). In some cases, a solution according to the EPM is any one of infinitely many solutions. That is, it is not always unique. This leads to a lack of clarity in the characterization of the solutions obtained by the EPM. We present a modified version of the EPM, which unlike its precursor ensures a unique solution. In order to illustrate the differences, we present some numerical examples and comparisons between the two concepts. (C) 2017 Elsevier Inc. All rights reserved. Coupon coloring is a new kind of coloring with many applications. A k-coupon coloring of a graph G is a k-coloring of G with colors [k] = {1, 2, ..., k} such that the neighborhood of every vertex of G contains vertices of all colors from [k]. The maximum integer k for which a k-coupon coloring exists is called the coupon coloring number of G, denoted by χ_c(G). In this paper, we study the coupon coloring of cographs, which are graphs that can be generated from the single-vertex graph K_1 by complementation and disjoint union, and which have applications in many interesting problems. 
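The neighborhood condition in the definition above is straightforward to verify for a given coloring; a minimal Python sketch (the adjacency-dict representation and function name are illustrative, not from the paper):

```python
def is_coupon_coloring(adj, color, k):
    """Return True if every vertex's neighborhood contains all colors 1..k.

    adj maps each vertex to a list of its neighbors; color maps each vertex
    to its assigned color in {1, ..., k}.
    """
    return all({color[u] for u in nbrs} == set(range(1, k + 1))
               for nbrs in adj.values())
```

For example, the complete bipartite graph K_{2,2} with one vertex of each color on each side admits a 2-coupon coloring, while a constant coloring does not.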
We use the cotree representation of a cograph to give a polynomial-time algorithm to color the vertices of a cograph, and then prove that this coloring is a coupon coloring with the maximum number of colors, hence obtaining the coupon coloring number of the cograph. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we present an improved numerical steepest descent method for the approximation of Fourier-type highly oscillatory integrals. Building on the previous numerical steepest descent method, the new method uses the integrand information at endpoints and stationary points. The asymptotic order of the method is derived, and it improves on previous results both for integrals with stationary points and for those without. Several numerical examples are presented which show the high efficiency of the proposed method. Numerical results support our theoretical analyses. (C) 2017 Elsevier Inc. All rights reserved. In this study, a military decision problem is handled by an integrated approach based on game theory and geographical information systems (GIS). The problem can be defined as finding a layout plan for troops who want to maximize the probability of identifying enemies using particular routes to penetrate the border line. The problem was transformed into a two-person zero-sum game under some assumptions and solved in four interconnected stages. First, suitable spots in the terrain for monitoring the enemies were identified. Then, the visibility percentage of each spot was calculated using GIS for the routes used by enemies to cross the border line. Next, by taking the calculated visibility ratios as the probability of identifying the enemy, a two-person zero-sum payoff matrix was formed. Finally, a linear mathematical model was established to obtain the optimal strategies and their probabilities. 
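The final stage, a linear program for the optimal mixed strategy of a two-person zero-sum game, can be sketched as follows. This is the standard LP formulation for the row player, not code from the study; the names are illustrative and SciPy is assumed to be available:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Optimal mixed strategy and game value for the row (maximizing) player.

    Maximize v subject to: for every opponent column j,
    sum_i payoff[i, j] * x[i] >= v, with x a probability vector.
    """
    A = np.asarray(payoff, dtype=float)
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                               # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - (A^T x)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                          # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]
```

For matching pennies, with payoff matrix [[1, -1], [-1, 1]], this returns the uniform strategy with game value 0.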
There are many techniques in the literature to solve military decision problems, but we believe that this study, as the first in which game theory and GIS are used together, will make a significant contribution to the literature and to future studies. (C) 2017 Elsevier Inc. All rights reserved. A new relaxed PSS-like iteration scheme for the nonsymmetric saddle point problem is proposed. As a stationary iterative method, the new variant is proved to converge unconditionally. When used for preconditioning, the preconditioner differs from the coefficient matrix only in the upper-right components. The theoretical analysis shows that the preconditioned matrix has well-clustered eigenvalues around (1, 0) for a reasonable choice of the relaxation parameter. This sound property is desirable in that the related Krylov subspace method can converge much faster, which is validated by numerical examples. (C) 2017 Elsevier Inc. All rights reserved. In this paper, a new local meshless approach based on radial basis functions (RBFs) is presented to price options under the Black-Scholes model. The global RBF approximations derived from the conventional global collocation method usually lead to ill-conditioned matrices. Employing the idea of local approximants from the finite difference (FD) method and combining it with the radial basis function (RBF) method results in a local meshless approach such as RBF-FD, which removes the ill-conditioning of the original method. The new proposed approach is unconditionally stable, as shown by von Neumann stability analysis. It is fast and produces highly accurate results, as shown in numerical experiments. Moreover, we took into account the variation of the shape parameter and analyzed numerically the behavior of the RBF-FD method. (C) 2017 Elsevier Inc. All rights reserved. 
We prove the conjecture, formulated by Bejancu, Johnson, and Said (2014), on the gradient superconvergence of semi-cardinal interpolation with quintic B-splines, for a hierarchy of finite difference end conditions. We also establish a similar result for cubic B-splines. (C) 2017 Elsevier Inc. All rights reserved. In this note we explore the extremal graphs with given matching number with respect to topological indices. We present a generalization of previous studies on such graphs and apply our findings to various indices that have not yet been considered for graphs with given matching number. (C) 2017 Elsevier Inc. All rights reserved. We consider greed and fear in social dilemmas, represented by multiplayer games with two strategies, cooperation and defection. The dilemmas are defined by relevant axioms. The N-person Prisoner's Dilemma, Public Goods, Tragedy of the Commons, Volunteer's Dilemma, and Assurance Game are included in the considered axiomatization. For two-player interactions the scheme leads to three types of social dilemmas: the PD (and the Weak PD), the Chicken and the Stag Hunt games. We define greed and fear for multiplayer social dilemmas, and observe that all the dilemmas have greed or fear built in. (C) 2017 Elsevier Inc. All rights reserved. This paper investigates two classes of synchronization problems of multiple chaotic systems with unknown uncertainties and disturbances by employing sliding mode control. Modified projective synchronization and transmission synchronization are discussed here. For the modified projective synchronization problem, sliding mode controllers are designed to ensure that multiple response systems synchronize with one drive system under the effects of external disturbances. For the transmission synchronization problem, based on adaptive sliding mode control, an integral sliding surface is selected and the adaptive laws are derived to tackle unknown uncertainties and disturbances for such systems. 
A class of nonlinear adaptive sliding mode controllers is developed to guarantee asymptotic stability of the error systems so that all chaotic systems can synchronize with each other. Simulation results are given to illustrate the effectiveness of the proposed schemes by comparison with existing methods. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we deal mainly with a class of column upper-plus-lower (CUPL) Toeplitz matrices without Toeplitz structure, which are "close" to the Toeplitz matrices in the sense that their (-1, 1)-cyclic displacements coincide with the cyclic displacements of some Toeplitz matrices. By constructing the corresponding displacement of the matrices, we derive representation formulas for the inverses of the CUPL Toeplitz matrices in the form of sums of products of factor (1, 1)-circulants and (-1, -1)-circulants. Furthermore, through the relation between the CUPL Toeplitz matrices and the CUPL Hankel matrices, the inverses of the CUPL Hankel matrices can be obtained as well. (C) 2017 Elsevier Inc. All rights reserved. Internships are now widely promoted as a valuable means of enhancing graduate employability. However, little is known about student perceptions of internships. Drawing on data from a pre-1992 university, two types of graduate are identified: engagers and disengagers. The engagers valued internship opportunities while the disengagers perceived these roles as exploitative and worthless. Few were able to distinguish paid, structured internship opportunities from unpaid, exploitative roles. We conclude that higher education institutions need to be more proactive in extolling the value of paid internships to all students and not just those most likely to engage with their services. This article first offers a survey of what has become an area of increasing interest in higher education: the rise of the so-called 'student-consumer'. 
This has been linked in part to the marketisation of higher education and the increased personal financial contributions individual students make towards their higher education. Drawing upon a qualitative study with students across seven different UK higher education institutions, the article shows that while there is evidence of growing identification with a consumer-orientated approach, this does not fundamentally capture their perspectives and relationships to higher education. The article shows the degree of variability in attitude and approaches towards consumerism of higher education and how students still perceive higher education in ways that do not conform to the ideal student-consumer approach. The implications for university relations and how policy-makers and institutions themselves approach the issue are discussed. While research on monoracial college students' experiences with racial microaggressions increases, minimal, if any, research focuses on multiracial college students' experiences with racial microaggressions. This manuscript addresses the gap in the literature by focusing on multiracial college students' experiences with multiracial microaggressions, a type of racial microaggression. Utilizing qualitative data, this study explored 3 different multiracial microaggressions that 10 multiracial women experienced at a historically white institution, including 'Denial of a Multiracial Reality', 'Assumption of a Monoracial Identity', and 'Not (Monoracial) Enough to Fit In'. Indigenous Australian underrepresentation in higher education remains a topical issue for social scientists, educationalists and policymakers alike, with the concept of indigenous academic success highly contested. This article is based on findings of a doctoral study investigating the drivers of indigenous Australian academic success in a large, public, research-intensive and metropolitan Australian university. 
It draws on the concept of transformational resistance to illuminate the forms that indigenous resistance takes and how identities of resistance performed by indigenous students complicate and speak to the students' notions of academic success. By drawing on ethnographic data, this article demonstrates how indigenous academic success is fuelled by the idea of resistance to Western dominance, where resistance becomes the very cornerstone of indigenous achievement. This article explores the continued importance of teaching a diverse curriculum at a time when issues of racial and ethnic equality and diversity have been increasingly sidelined in the political discussion around 'British' values and identities, and how these should be taught in schools. The 2014 History National curriculum, in particular, provoked widespread controversy around what British history is, who gets included in this story and how best to engage young people in increasingly diverse classrooms with the subject. The new curriculum provides both opportunities for, and constraints on, addressing issues of racial and ethnic equality and diversity, but how these are put into practice in an increasingly fragmented school system remains less clear. Drawing on the findings of two research projects in schools across England and Wales, this article examines the challenges and opportunities facing teachers and young people in the classroom in the teaching and learning of diverse British histories. We argue that it is not only the content of what children and young people are taught in schools that is at issue, but how teachers are supported to teach diverse curricula effectively and confidently. Drawing on discursive psychology this article examines the understandings teachers and principals in Danish Public Schools have regarding Somali diaspora parenting practices. 
Furthermore, the article investigates what these understandings mean in interaction with children in the classrooms and with parents in home-school communication. It is argued that in a society with increased focus on parental responsibility, the teachers and principals draw on a deficit logic when dealing with Somali diaspora parents and children, which consequently leads to teachers either transmitting their expertise by educating parents or compensating for perceived deficiencies in parental practices. Both these strategies result in significant marginalizing consequences where 'difference' is understood as 'wrong' or 'inadequate'. Educational equity dominates discussions of US schooling. However, what 'educational equity' means is much contested in the scholarly literature and in public discourses. We follow the lead of scholars of color who have problematized the definition of educational equity. They have shown that the dominant, taken-for-granted definitions of equity disguise the accumulation of societal and educational exclusions of, and prejudices toward, historically marginalized students, their families, and their communities. In response to this critique, we offer a new definitional framework for 'educational equity' that is community-based and, in our specific case, urban community-based. And, then, we will apply this new equity framework to three examples or 'exemplars' of education reform to explicate how they do and do not illustrate our framework. We will finish with a brief discussion, recommendations for future scholarship, and some concluding remarks. The stories of students and teacher candidates of Color (Just as singular racial/ethnic identities are capitalized (i.e. African-American, Asian, Latina, Native American etc.), I capitalize Color to honor the various identities that many 'non-white' people hold near and dear. 
I recognize the nuances in doing so, such as the reality that the term 'people of Color' actually erases identity while the term also highlights a shared experience (though also nuanced) of being 'non-white' in a white supremacist society.) hold powerful lessons and insights for teacher education programs and educational reform efforts. Yet, rarely do educators and policy-makers solicit or critically engage the educational narratives of these stakeholders. In particular, research confirms that we know little about how the educational experiences of students of Color are impacted by race(ism) and culture and how those experiences subsequently inform their ideas about teaching. This study, framed by critical race theory (CRT), examines an African-American (African-American is used intentionally here as this is how Ariel identifies racially.) teacher candidate's racialized K-12 and postsecondary school experiences to more fully understand the connection between lived experience and developing teacher identity. Ariel's story reflects her own school experiences; her focus on her peers' school experiences when asked about her own; and how those experiences, informed by race and culture, contribute to her development of pedagogy. Analytical considerations illustrate that memory and remembrance, witnessing and bearing witness, and testimony are deliberate and powerful acts in the development of pedagogy and should be central to teacher education curriculum. This study provides insights into the school experiences of Latino male students through an exploration of how they describe their beliefs about education and how they engage in school for academic success. Data are drawn from interviews and surveys conducted with Latino males that participated in New York University's Black and Latino Male School Intervention Study (BLMSIS) between 2006 and 2011. 
The findings revealed how a dynamic interplay among the ways students 'do school' (behavioral engagement), their intellectual involvement (cognitive engagement), and their strong beliefs in education for social mobility shaped schooling for them. This focus on the experiences of young Latino males seeks to assist researchers, policymakers, and practitioners alike in designing and implementing programs and policies to promote their educational progress and success. This article explores how and why a group of Latino/a high school students identify and explain racism differently over the course of an 18-month participatory action research (PAR) project. To do this we examine what recent scholarship has termed racial microaggressions in what is thought of as the 'Post-Racial America' public school system. Pulling examples from student and teacher interview, focus group, and class discussion data, we first examine how these students' teachers conceptualize and talk about racism, cross-racial relationships, and racial misunderstandings, and then we juxtapose that with students' discursive work to make sense of the ways their teachers make their conceptualizations known and/or seen in school. Focusing on the K-12 context, this study finds racial battle fatigue may be why students switch between how they label these aggressions. We examine the degree to which assessment practices in the City of Detroit have created substantial inequities in property tax payments across residential properties. Two key contributions of this article include: (1) inequities created by assessment practices are examined in a collapsed real estate market, and (2) quantile regression techniques are used to determine how assessment practices have altered assessment distributions within and across property value groups. 
Results show that current practices have created a wide range of property tax payments across properties with similar value (horizontal inequity) and similar tax payments for properties of differing values (vertical inequity). This paper studies U.S. house prices across 45 metropolitan areas from 1980 to 2012. It applies a version of the Gordon dividend discount model for long-run fundamentals and uses Mean Group and Pooled Mean Group estimation to estimate long-run and short-run determinants of house prices. We find great similarity across cities in that long-run house prices are largely explained by the same fundamentals; the long-run rent-to-price ratio is approximately 5% plus 0.75 times the real interest rate (which is on the order of 2%). However, adjustments to deviations from the fundamentals are slow, closing the gap at a rate of around 10% per year in the long run. We find sharp differences across cities in short-run adjustments (momentum) away from the fundamentals, and the differences are correlated with local supply elasticities (more momentum with lower elasticity). Analysis of residuals suggests strong cyclical deviations, which are mean-reverting. In this article, I synthesize an emerging literature that explores the conditions under which public and private investments and intergovernmental transfers are capitalized into local house prices, and the broader economic implications of such capitalization. The main insights are: (1) house price capitalization is more pronounced in locations with strict regulatory and geographical supply constraints; (2) capitalization can induce the provision of durable local public goods and club goods; and (3) capitalization effects, which are habitually ignored by policy-makers, have important adverse consequences for a wide range of policies such as intergovernmental aid and the mortgage interest deduction. This paper uses about 26 million home sales to measure house price idiosyncratic risk for 7,580 U.S. 
zip codes during three periods: (1) when the U.S. housing market was stable (1996-2000), (2) booming (2001-2007), and (3) busting (2007-2012), and investigates the determinants of house price risk. We find very strong relationships between risk and some basic housing market characteristics. There is a U-shaped relationship between risk and zip-code-level median household income; risk is higher in zip codes with more appreciation volatility; and risk is not compensated with higher appreciation. This article examines the relationship between broker-borrower interaction in the origination process and subsequent mortgage performance. I show that face-to-face interaction between a mortgage broker and borrower before the loan funds is associated with lower levels of ex post default. The relation between face-to-face broker-borrower interaction and mortgage performance holds only for borrowers who have characteristics associated with low levels of financial literacy. Specifically, face-to-face interaction is negatively related to default for minorities, borrowers located in areas with low levels of education, low-income borrowers, and borrowers with low FICO scores. My results suggest that face-to-face interaction between the mortgage broker and borrower may reduce problems associated with financial illiteracy. We introduce a hedonic price model that enables us to disentangle the value of a property into the value of land and the value of structure. For given reconstruction costs, we are able to simultaneously estimate the impact of physical deterioration, functional obsolescence, and vintage effects on the structure and the impact of time of sale (and external obsolescence) on the land value. Our findings show that maintenance has a substantial impact on the rate of physical deterioration. After 50 years of little or no maintenance, a typical structure has lost around 43% of its value. 
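As a back-of-the-envelope check (our arithmetic, not the paper's estimation), a 43% loss over 50 years corresponds to a constant geometric depreciation rate of roughly 1.1% per year:

```python
# Illustrative arithmetic only: solve (1 - r)**50 = 1 - 0.43 for r.
value_retained = 1.0 - 0.43
annual_rate = 1.0 - value_retained ** (1.0 / 50.0)
print(f"implied annual depreciation rate: {annual_rate:.2%}")
```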
In contrast, maintaining a home very well results in virtually no physical deterioration in the long run. This article sheds light on several puzzling empirical observations. We examine the volatility implications of equity Real Estate Investment Trust (REIT) stock returns over the sample period from January 1985 through October 2012. We find a negative leverage effect in the pre- and post-Greenspan eras, but not during the Greenspan era (circa 1994-2006). We argue that the positive elasticity of variance with respect to the value of equity during the Greenspan era can be explained by a decline in the spread between the yield on commercial mortgages and 10-year Treasuries, which triggered a wealth transfer from REIT equity holders to REIT debt holders. We also argue that the declining commercial-mortgage-10-year-Treasury yield spread during the Greenspan era allowed REITs to take on far more risk than most people realized. We then document that average REIT stock return volatility increased significantly in the 2007-2010 period in the midst of a historic decline in REIT stock prices. The results speak to the considerable interest and debate in the media over the status of REITs and whether equity REITs have become excessively risky relative to the returns they generate. Using a unique combination of regulatory and survey microdata, we examine the importance of the life cycle theory of consumption in estimating housing wealth effects for the Irish mortgage market. Since the recent financial crisis, this market has experienced substantial house price declines and negative equity. Thus, house price expectations are likely to be important in influencing housing wealth effects. We find a positive correlation between consumption and changes in housing wealth among our sample of mortgaged Irish households. Furthermore, we find that this positive association only exists when housing wealth changes are perceived to be of a permanent nature. 
This paper introduces a new approach to representing rocket exhaust effluents in an atmospheric dispersion model, considering the trajectory and variable burning rates of a satellite launch vehicle and taking into account the buoyancy of the exhausted gases. It presents a simulation of a satellite launch vehicle flight at 12:00Z on a typical day of the dry season (Sept 17, 2008) at the Centro de Lançamento de Alcântara, using the Weather Research and Forecasting Model coupled with a modified chemistry module to account for the gases HCl, CO, and CO2 and the particulate matter emitted from the rocket engine. The results show that the HCl levels are dangerous in the first hour after launch in the Launch Preparation Area and in the Technical Meteorological Center region; the CO levels are critical for the first 10 min after launch, representing a high risk for human activities in the vicinity of the launch pad. As inexpensive satellite platforms, CubeSats have given universities and even developing countries low-cost access to space technology. This paper presents the ITASAT design, particularly the Attitude Determination and Control Subsystem, the onboard software, and the Assembly, Integration and Testing program. The ITASAT is a 6U CubeSat nano-satellite in development at the Instituto Tecnológico de Aeronáutica in São José dos Campos, Brazil. The platform and its subsystems will be provided by industry, while the payloads are being designed and developed by the principal investigators. The ITASAT Attitude Determination and Control Subsystem will rely on a 3-axis magnetometer, 6 analog cosine sun sensors, 3-axis MEMS gyroscopes, 3 magnetic torque coils, and 3 reaction wheels. The Attitude Determination and Control Subsystem operating modes, control laws, and embedded software are under the responsibility of the Instituto Tecnológico de Aeronáutica. 
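The MEMS gyroscopes in such a sensor suite drive the attitude-propagation step of the estimator. A minimal sketch of quaternion kinematics under a measured body rate (conventions and names are illustrative, not ITASAT flight software):

```python
import numpy as np

def quat_propagate(q, omega, dt):
    """One step of discrete quaternion kinematics: q <- q (x) dq, where dq is
    the rotation by |omega|*dt about the omega axis. q is scalar-first [w,x,y,z],
    omega is the body angular rate in rad/s."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * axis))
    # Hamilton product q (x) dq.
    w1, v1 = q[0], q[1:]
    w2, v2 = dq[0], dq[1:]
    out = np.concatenate(([w1 * w2 - v1 @ v2], w1 * v2 + w2 * v1 + np.cross(v1, v2)))
    return out / np.linalg.norm(out)  # renormalize against drift

# One second of rotation at 90 deg/s about the body z-axis, in 100 steps.
q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, np.deg2rad(90.0)])
for _ in range(100):
    q = quat_propagate(q, omega, 0.01)
# A 90-degree rotation about z gives q = [cos(45 deg), 0, 0, sin(45 deg)].
```

In a full Kalman filter this propagation forms the prediction step, with the magnetometer and sun-sensor measurements correcting the predicted quaternion and estimating the gyro biases.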
A Kalman filter will be employed to estimate the attitude quaternion and gyroscope biases from sensor measurements. The Attitude Determination and Control Subsystem operating modes are the nominal mode, with geocentric-pointing attitude control, and the stabilization mode, in which only the satellite angular velocity is controlled. The nominal mode will be split into 2 sub-modes: reaction wheel control plus magnetic wheel desaturation, and 3-axis magnetic attitude control. Simulation results have shown that the attitude can be controlled to 1-degree accuracy in nominal mode with the reaction wheels, but errors grow to 20 degrees or more with 3-axis magnetic control. A sample of 152 accidents and incidents involving Remotely Piloted Aircraft Systems, more commonly referred to as "drones", has been analysed. The data were collected over a 10-year period, 2006 to 2015, conveniently sourced from a limited population owing to the scarcity of reports. Results indicate that safety occurrences involving Remotely Piloted Aircraft Systems (RPAS) have a significantly different distribution of contributing factors when sorted into distinct categories. This provides a thorough and up-to-date characterization of the safety deficiencies specific to RPAS. In turn, this contributes to the development of adequate safety management systems applicable to the RPAS sector. The majority of RPAS occurrences involved system component failures that were the result of equipment problems. Therefore, airworthiness, rather than pilot licensing, needs to be considered first when regulating the Remotely Piloted Aircraft System industry. "Human factors" and "loss of control in-flight" were found to be the second most common "contributing factor" and "occurrence category", respectively; Remotely Piloted Aircraft pilot licensing will help reduce the probability of these secondary occurrences. 
The most significant conclusion is that reporting systems must be implemented to address RPAS accidents and incidents specifically, so that more useful data become available and further analysis is possible, facilitating improved understanding and greater awareness. Uncertainty-based multidisciplinary design optimization considers probabilistic variables and parameters and provides an approach to account for sources of uncertainty in design optimization. The aim of this study was to apply a decoupled uncertainty-based multidisciplinary design optimization method without any dependence on probability mathematics. Existing approaches to uncertainty-based multidisciplinary design optimization are based on probability mathematics (transformation to standard space), calculating an approximation of the constraint functions in standard space and finding the most probable point, which is the best possible one. The approach used in this paper was inspired by interval modeling, making it suitable when there is insufficient data to develop a good estimate of the shape or parameters of the probability density function. This approach has been implemented for an existing Unmanned Aerial Vehicle (UAV, Global Hawk) design for purposes of comparison and validation. The advantages of the proposed approach are independence from probability mathematics, suitability when there is insufficient data to approximate the uncertain variables, appropriate speed in calculating the best reliable response, and a proper success rate in the presence of uncertainties. An experimental and numerical investigation of the flow patterns around the fore-body section of a microsatellite launch vehicle, in development at the Instituto de Aeronáutica e Espaço, is performed and presented. 
The experimental investigation, with a VLM-1 model at 1:50 scale, is carried out at the Brazilian Pilot Transonic Wind Tunnel, located in the Aerodynamics Division of that institute, using the classical schlieren flow visualization technique. Schlieren images are obtained for nominal Mach numbers varying from 0.9 to 1.01. Numerical simulation using Stanford's SU2 code is conducted together with the experimental investigation in order to improve the understanding of the complex physical phenomena associated with the experimental results in this particular regime. The combination of the 2 techniques allowed the assessment of some important aspects of the flow field around the vehicle under the conditions considered in this study, such as shock wave/boundary-layer interaction. The numerical simulation is also very important, allowing the quantification of some important parameters and confirming the shock wave formation patterns observed in the schlieren images. A good agreement regarding the position of the shock wave, when compared with the schlieren images, with a maximum error of about 6%, is observed over the VLM model. The flow field around rotors in axial flight is known to be complex, especially in steep descent, where the rotor is operating inside its own wake. It is often reported that, in this flight condition, the rotor is susceptible to severe wake interactions causing unsteady blade loads, severe vibration, loss of performance, and poor control and handling. So far, little data from experimental and numerical analyses are available for rotors in axial flight. In this paper, the steady Reynolds-Averaged Navier-Stokes Computational Fluid Dynamics solver Helicopter Multi-Block was used to predict the performance of rotors in axial flight. The main objective of this study was to improve basic knowledge of the subject and to validate the flow solver used. 
The results obtained are presented in the form of surface pressure, rotor performance parameters, and vortex wake trajectories. The detailed velocity field of the tip vortex for a rotor in hover was also investigated, and a strong self-similarity of the swirl velocity profile was found. The predicted results, when compared with available experimental data, showed reasonable agreement for hover and descent rates, suggesting that an unsteady solution is required for rotors in the vortex-ring state. The addition of a Gurney flap changes the nature of the flow around an airfoil by producing asymmetric von Kármán vortices in its wake. Most investigations of Gurney-flapped airfoils have modeled the flow using a quasi-steady approach, resulting in time-averaged values with no information on the unsteady features of the flow. Among these, some investigations have shown that the quasi-steady approach does a good job of predicting the aerodynamic coefficients and the physics of the flow. Previous studies on the Gurney flap have shown that aerodynamic coefficients such as the lift and drag coefficients calculated with the quasi-steady approach are in good agreement with the time-averaged values of these quantities in time-accurate computations. However, these investigations were conducted in regimes of medium to high Reynolds numbers, where the flow is turbulent. Whether this holds in the ultra-low Reynolds number regime is open to question. Therefore, it is deemed necessary to re-examine the previous investigations in the regime of ultra-low Reynolds numbers. The unsteady incompressible laminar flow over a Gurney-flapped airfoil is investigated using three approaches: unsteady accurate, unsteady inaccurate, and quasi-steady. Overall, all the simulations showed that at ultra-low Reynolds numbers the quasi-steady solution does not necessarily show the same agreement with the time-averaged results as the unsteady accurate solution. 
In addition, it was observed that results of the unsteady inaccurate approach with very small time steps can be used to predict time-averaged quantities fairly accurately at a lower computational cost. Recently, there has been growing interest in studies concerning ionization sensors for aerospace applications, power generation, and fundamental research. In aerospace research, they have been used for studies of shock and detonation waves. Two key features of these sensors are their short response time, of the order of microseconds, and the fact that they are activated when exposed to high-temperature air. In this sense, the present paper describes the development of an ionization sensor to be used in shock tube facilities. The sensor consists of 2 thorium-tungsten electrodes insulated by ceramic, with a stainless steel adapter for proper mounting, as well as 2 copper seal rings. An electrical circuit was also built with 2 main purposes: to provide the electrodes with a sufficiently large voltage difference to ease ionization of the air, and to assure a short response time of the sensor. The tests were carried out in a shock tube with the objective of observing the response of the sensor under stagnation conditions. For that, we chose initial driven pressures of 1.0, 1.2, and 1.5 kgf/cm^2, with a constant driver pressure equal to 70 kgf/cm^2. We analyzed the response of the sensor as a function of the initial driven pressure, stagnation temperature, and density. For the studied conditions, the results showed that the mean amplitude of the ionization sensor signal varied from 8.29 to 19.70 mV. Cook-off tests are commonly used to assess the thermal behaviour of energetic materials under external thermal stimuli. Numerical simulation has become a powerful tool for reducing the cost of experimental tests. 
However, numerical simulations are not able to predict the violence of the thermal response; instead, they accurately reproduce radial heat flow in the test vehicle and satisfactorily predict the delay time to ignition and the ignition temperature. This paper describes the slow cook-off simulation of 3 selected PBX based on RDX in a small-scale test vehicle, using the equilibrium equation of Frank-Kamenetskii and testing 2 kinetic models: Johnson-Mehl-Avrami (n) and Sestak-Berggren (m, n). The influence of the successive addition of binder elements (HTPB, DOS, and IPDI) on the slow cook-off results of the selected PBX was assessed. A variation of +/- 10% in the input data was applied to determine its influence on the slow cook-off results. Results showed that the addition of binder elements reduces the delay time to ignition as well as the ignition temperature, and that the Sestak-Berggren (m, n) kinetic model generates smaller values with less deviation linked to the variation of the input data. The selection of the kinetic model, as well as the +/- 10% variation in the input data, had a negligible influence on the slow cook-off results of cured PBX. Multifunctional composites combine structural and other physicochemical properties, with major applications in the aeronautical, space, telecommunication, automotive, and medical areas. This research evaluates the electromagnetic properties of multifunctional composites based on glass fiber woven fabric pre-impregnated with epoxy resin, laminated together with a carbon fiber non-woven veil metalized with Ni, in a search for possible applications as radar-absorbing structures or electromagnetic interference shielding structures. The scattering parameters, in the frequency range of 8.2 to 12.4 GHz, show that the epoxy resin/glass fiber prepreg allows the transmission of electromagnetic waves through its microstructure, independently of the glass fiber reinforcement orientation (98% transmission, S21 = -0.09 dB). 
However, the carbon fiber/Ni veil shows highly reflective behavior (91% reflection, S22 = -0.43 dB). Energy dispersive spectroscopy of the veil, before and after nitric acid attacks, confirmed the removal of the Ni coating from the carbon fiber surface. Still, the scattering parameters show reflective behavior (77% reflection, S22 = -1.13 dB), attributed to the electrical conductivity of the carbon fibers. Multifunctional composites based on glass fiber/epoxy/carbon fiber/Ni veil laminates were processed by hot compression molding. The scattering parameters show that the laminates do not behave as good radar-absorbing structures. Nevertheless, the laminates present promising results for application as lightweight, low-thickness structural composites with electromagnetic interference shielding effectiveness (91.4% reflection for 0.36 mm thickness and 100% for ~1.1 mm) for buildings, aircraft, and space components. This study presents the complex index of refraction and the complex permittivity of a magnetic ceramic material made of copper, cobalt, and iron oxides. The index of refraction and the extinction coefficient of the CuCo-ferrite exhibit an almost frequency-independent behavior and were averaged to n = 3.62 +/- 0.05 and k = 0.06 +/- 0.02, respectively, over the frequency range from 0.2 to 1 THz. The corresponding complex permittivity was epsilon' = 13.12 +/- 0.35 for the real part and epsilon'' = 0.46 +/- 0.15 for the imaginary part. The absorption coefficient and the transmittance of the CuCo-ferrite were also determined. The absorption coefficient exhibits a dip at ~0.35 THz, which corresponds to a peak in transmittance at this frequency. The impact of the observations on the potential realization of novel THz electronic devices is discussed. This article draws on stories of success in higher education by mature-age students of diverse backgrounds to highlight some key implications for institutional support. 
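Two kinds of figures quoted in the composite and ferrite abstracts above can be cross-checked numerically (our arithmetic, not the papers'): the reflection/transmission percentages follow from the S-parameter magnitudes via power fraction = 10^(S_dB/10), and the reported permittivity follows from the optical constants via eps = (n - ik)^2, i.e. eps' = n^2 - k^2 and eps'' = 2nk:

```python
# 1) S-parameter magnitudes in dB vs. quoted power fractions.
def db_to_power_fraction(s_db):
    """Convert a power-ratio S-parameter in dB to a linear power fraction."""
    return 10.0 ** (s_db / 10.0)

for s_db, quoted in ((-0.09, 0.98), (-0.43, 0.91), (-1.13, 0.77)):
    assert abs(db_to_power_fraction(s_db) - quoted) < 0.01

# 2) THz optical constants vs. reported complex permittivity of the CuCo-ferrite.
n, k = 3.62, 0.06
eps_real = n**2 - k**2   # ~13.10 (reported: 13.12 +/- 0.35)
eps_imag = 2 * n * k     # ~0.43  (reported: 0.46 +/- 0.15)
```

Both checks land within the reported uncertainties, so the quoted percentages and permittivities are internally consistent with the S-parameters and optical constants.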
We begin by reviewing the post-World War II background of mature-age study in Australian higher education to provide a context for presenting some major findings from a small, in-depth research project. We examine these findings to focus on the role of institutional support in the success of mature-age students, particularly given recent sectoral factors affecting their access and support. The study findings show that students' primary supports were families and friends. The participants all belonged to equity categories as designated by the Australian government, but many did not use institutional supports. Some lacked the confidence to approach staff; others were unaware that support services existed or lacked the time to access them. The participants' stories demonstrate the complex disadvantages experienced by mature-age students. They highlight universities' need to make support services 'student-centred' in order to deliver improved educational and equity outcomes for their mature-age student populations. The purpose of this paper is to propose the service-dominant logic in marketing as a framework for analysing the value co-creation process in the higher education sector and to present the results of a quantitative study (a survey) conducted among business students from four Polish public universities. The results of the study led to the identification of 40 factors of importance, later classified into seven types of value expected by business students from their universities: functional, relational, intrinsic, epistemic, conditional, extrinsic and emotional value, with the first three types being the most important from the students' perspective. These findings lead to several managerial implications regarding teaching methods and academic curriculum design, which are presented in the final section of the paper. 
Because there is close cooperation on quality assurance among the Scandinavian countries, one would expect convergence of quality assurance policies and practices in Scandinavian higher education. Few studies have analysed these quality assurance policies and practices from a comparative viewpoint. Based on empirical evidence produced in connection with studies of recent quality reforms in Scandinavia, and on an approach linking diffusion and translation theories with institutionalist perspectives focusing on path dependency, the paper contributes to the current debate on Scandinavian quality assurance. The debate is compelling with regard to the Swedish case in particular, with its 'one size fits all' approach and exclusive focus on outcomes, which has been heavily criticised by the higher education institutions and has turned out to be controversial from the European viewpoint. This paper examines institutional governance of the public university in China, investigating the extent to which government has sponsored the autonomy of universities since the inception of the opening-up reforms of 1978. The paper sets out to explain how the party governance system of China is interconnected with aspects of the university's governance, something little commented upon in academic literature outside of China. In particular, it explores how the Presidential Accountability System under the leadership of the University Committee of the Communist Party (UCCP) operates. One focus of gender equity policies in universities has been the creation of 'retention' part-time work for professional staff, which allows employees to move between full-time and part-time hours at their request. This paper examines whether such 'good' part-time jobs can contribute to, or at least not impede, women's career advancement. 
The paper examines the correlation between job classification and part-time work, and whether a period of part-time work acts as a significant 'brake' on a woman's career trajectory. This study uses data from the 2011 Work and Careers in Australian Universities survey. Part-time work is used extensively by lower-classified women, but rarely by those in higher classifications. Part-time work stalls career advancement compared to working full-time, but this brake is reduced if a woman transitions back to full-time work. This article examines academic administrators' attitudes towards the academic evaluation process in the US and the factors that are utilised to improve teaching. We use path regressions to examine satisfaction with evaluation procedures, as well as the direct and indirect effects of these factors on perceptions of whether the evaluation process facilitates quality instruction. With increased pressure for accountability being placed on higher education, it is important to ensure that we are meeting both public and academic expectations. The evaluation process is an important tool for ensuring that the university's goals and values are articulated and that academics can be successful in their individual career paths. The problem is that most research finds flaws with the current method of evaluation, and academics and academic administrators are sceptical about the process and its results. We find that there are environmental factors that influence academic administrators' perceptions of academic evaluations and the ability to improve classroom instruction. This study investigates the influence of the elements of academic culture on quality management system ISO 9001 maintenance within Malaysian universities. There is a dearth of empirical studies on maintaining ISO 9001, particularly in the higher education context. From the literature review, academic culture was classified according to four elements - academic freedom, individualism, professionalism and collegiality. 
Two case studies were conducted within Malaysian universities that had been ISO 9001-certified for 5 years. At the time of this research, these two were the only universities that had certification for their entire organisation (most organisations gain certification only for specific departments). The findings showed that academic freedom, individualism and collegiality had worked against ISO 9001 maintenance, while professionalism had influenced ISO 9001 maintenance both positively and negatively. The opposites of individualism (teamwork) and collegiality (managerialism) had supported ISO 9001 maintenance in one of the cases. Gender equity is increasingly seen as an indicator of development and global acceptance in networks of higher education. Despite this, gender divergence in the research productivity of academics, coupled with the under-representation of women in science, has been reported to beset women's scholarly activities. Previous studies provide differing results, hence the need for each academic institution to know its own status so that it can formulate appropriate policy towards achieving gender equity without trading off productivity. Using a scientometric method, the present study investigates the representation and research productivity of male and female lecturers in the Faculty of Science, University of Ibadan. The study shows that while female lecturers are significantly less represented in the faculty and publish in journals with lower impact factors, their research productivity in terms of number of publications and citation impact is not significantly different from that of their male counterparts. Focusing on Lichtwark's concept of museology, this article shows what role he envisaged for art in public life at a time when the rise of mass consumption and popular culture created new lifestyles. 
Lichtwark's concept of artistic and aesthetic education extended not only to museums and classrooms but also to dilettantism as a basis for educating taste and developing an appreciation of the arts that would have a positive economic impact. The article looks at the contemporary entanglements and different contexts of Lichtwark's ideas and relates them to recent approaches to cultural learning. Generally speaking, it argues that concepts of cultural learning are a bundle of entangled threads that connect and concern not only the sphere of art but also contradictory values and norms, economic production, and the emergence of important new status groups such as consumers. In both England and Japan, art education was viewed as having nothing to do with self-expression; it was considered an efficient means for industrial development. In England, it was designed to train the eyes and hands of artisans. The art critic Ruskin has often been referred to in the context of the transition to self-expression in the history of art education. This article shows, however, that Ruskin was not an advocate of self-expression. In Japan, drawing was introduced into the general education curriculum at the beginning of the Meiji era, and the aim of that instruction was to train the students' eyes and hands. In response to this trend, the Free Drawing Movement was introduced by Kanae Yamamoto. He attempted to introduce the methods of creation used by professional artists into general education. But this aspect was neglected by both his supporters and his opponents, and Yamamoto has been presented instead as an advocate of self-expression. Drawing on the genealogical approach developed by Foucault, this paper re-examines this well-known history of art education. By placing Ruskin and Yamamoto back in the historical context of the transition of art education in their respective countries, the genealogy of self-expression is revealed. 
A review of textual and audio-visual materials reveals the educational vision of Spanish anarchists. Through this research, we have discovered the importance of aesthetic education and art in general for this political protest movement. By studying the three key historical moments of the movement (1868-1939/ 1901-1910/ 1910-1936-1939), we have traced the evolution of the concept and practice of cultural learning. What stands out in the origins of the movement is the concern to introduce art and culture into school subjects while disseminating this knowledge to the whole population. Later, when the Modern School opened, the arts were introduced into the teaching-learning syllabus. The aesthetic principles defended in the first period were turned into literary works with the aim of educating children from rationalist schools. Finally, we identify a time when the materials created by the Modern School were disseminated to working-class schools as a form of resistance against the politics proposed by government parties. The outbreak of the Civil War turned the corpus of aesthetic education into a cultural programme of demilitarised political resistance. In Radical Education and the Common School (2011), Michael Fielding and Peter Moss argue for a radical alternative to the failed and dysfunctional contemporary discourse about education and the school, with its focus on markets, competition, instrumentality, standardisation, and managerialism. They argue that it is necessary, if we are to advance social alternatives in education, to construct micro-histories of schools that have developed as real utopias through radically revising their practice. They call these micro-histories critical case studies of possibilities. In To Hell with Culture (1963), the art educator and anarchist Herbert Read returned to a theme he had been exploring since the early 1930s - the purpose of education. For him, education implied many things, but he saw modern practice as socially disintegrating. 
Instead, Read offered an alternative to the dominant discourse about education under capitalism in the 1960s, one which would create that collective consciousness which is the spiritual energy of a people and the only source of its art and culture. To what extent was Read's conception of education an ideal, a dream unfulfilled? Following Fielding and Moss, this paper will seek to trace critical case studies of possibilities drawn from the past which reflect the fundamental connection identified by Read between school learning, collective consciousness, art, and culture. This article examines how and to what extent Luxembourg society was exposed to visual representations of the prospering steel industries and labour and working-class culture(s) from the 1880s until the 1920s - a period of massive industrialisation - and how it thus gradually learned to labour. Indeed, modern visual media were seen as ideal catalysts for the circulation, transmission, and production of meaning, since they were considered to be appealing, objective, direct, and capable of inspiring the imagination. The article takes the reader through various mundane moments and events of industrial enculturation (annual funfair, slide lecture, vocational school, etc.) and engages with different technologies of display (photographs, fair albums, postcards, scale models, etc.) that subtly calibrated, conveyed, and inculcated the new industrial reality through the eye and, in the process, (re)produced national identifications. By zooming in on these different visual encounters with industry and by bringing these isolated encounters together in one story, the article (re)constructs a learning route - one among many possible pathways through this huge dynamic field of learning resources (or, cultural ecology) - and thus suggests how (informal) cultural learning might have taken place at the time.
While accompanying us on this journey, the reader gains insights into how this field of resources evolved and how the industrial present was (re)framed, visually performed, and (re)configured over time. This article undertakes an exercise in the archaeology of education, focusing on the City of Birmingham (UK) in 1935, when the Education Committee allowed an experiment on the use of classroom film in senior elementary schools. Arrangements were made to provide projectors, films, operators, and screens for a series of exhibitions at 80 schools. The aim of the experiment was to test the value of cinema for class teaching purposes. This article argues that this experiment with sound film could equally be considered an experiment in cultural learning. The first section describes the experiment and the local context in which it took place. The second section broadens the perspective by providing context beyond the local level that puts the experiment in time and place. The third and final section picks up on some of the findings of the first two sections and considers contemporary sources, mainly articles published in the British Film Institute's film magazine Sight and Sound, as well as recent scholarship on both educational and documentary film, in order to discuss the notions of background and excursive film and to show that the experiment was a genuine adventure in cultural learning. During the 1930s and 1940s art increasingly came to be used as a therapeutic tool with children who were perceived as damaged by their experiences of war or displacement. This article explores two related exhibitions - Children's Art from All Countries (1941) and The War as Seen by Children (1943) - which provided a platform for children's impressions and experiences of war as seen through their drawings, whilst also raising money and awareness for refugee children's causes.
Although supported by an influential network of British educators and cultural figures, the exhibitions were conceived and organised by displaced German, Austrian, and Czech artists and cultural educators who were members of the Free German League of Culture in London during the Second World War. The exhibitions are considered as sites of educational and political interventions by adult refugees in the context of therapeutic interventions with refugee children in British educational settings. In so doing, this article argues that the exhibition organisers conceived of cultural and creative learning as a transformative vehicle for supporting and re-forming personal identities, and for the re-imagining of collective democratic futures. The goal of this article is to reveal how, through school theatre activities under authoritarian rule, changes took place in pupil knowledge, skills, attitudes, and behaviour regarding culture, namely, how the process of cultural learning occurs. I use a historical case study, specifically the case of the Valmiera School Theatre, which was the leading theatre group not only in Soviet Latvia but also in the entire Soviet Union. My primary sources are eight unstructured interviews, 20 published memoirs, articles in the press, theatre programmes, and photographs. One part of Soviet pedagogy was aesthetic upbringing, which was implemented through state-funded collectives, including school theatre groups. By participating in theatre activities, students gained knowledge of cultural heritage (literature, theatre, art, etc.), acquired the ability to perform and skills in other practical fields, and developed an appreciation of culture as a value. I argue that cultural learning through theatre was demonstrated by the fact that the students transferred their knowledge, skills, and attitudes to a new context, namely, their places of work and public cultural activities (e.g. amateur theatres).
This case study also reveals the specific role of school theatre in the process of cultural learning, as well as some sensitive issues in the relationship between knowledge-orientated or formal educational environments and the informal creativity of school theatre. The purpose of this study was to determine whether playing Quest for the Code, a computer game designed to teach children about asthma, would help healthy children acquire knowledge about and attitudes towards asthma, and whether the beneficial effects would be maintained over time. The sample consisted of 155 children from four middle schools who were randomly assigned to play Quest for the Code or a game about nutrition serving as the control condition. Data were collected on knowledge and attitudes pre-intervention, post-intervention, and at follow-up four weeks later. The results revealed that children who played Quest for the Code were more knowledgeable and had more positive attitudes than their peers in the control condition, and these benefits were maintained on follow-up tests. Our findings indicate the effectiveness and potential of using Quest for the Code as a tool for asthma education in a classroom setting. The new pedagogical opportunities that massive open online course (MOOC) learning environments offer for the teaching of fee-paying students on university-accredited courses are of growing interest to educators. This paper presents a case study from a postgraduate-taught course at the Open University, UK, where a MOOC performed the dual role of a core teaching vehicle for fee-paying students and a free-to-join course for open learners. An analysis of survey data revealed differences between the two groups with respect to prior experience, knowledge, expectations, and planned time commitment. The nature and experience of interaction was also examined. Fee-paying student feedback revealed four conditions in which MOOCs could be considered a pedagogic option for taught-course designers.
These are: when there is a subject need; when used to achieve learning outcomes; when there is acknowledgement of or compensation for the financial disparity; and when issues of transition and interaction are supported. When faced with excessive detail in an online environment, typical users have difficulty processing all the elements of representation. This in turn creates cognitive overload, which narrows the user's focus to a few select items. In the context of e-learning, we interpreted this as the learner's demand for a system that facilitates the retrieval of learning content - one in which the representation is easy to read and understand. We hypothesized that the representation of content in an e-learning system's design is an important antecedent of learner preferences. The aspects of isolation and distinctiveness were incorporated into the design of e-learning representation in an attempt to promote student cognition. Following its development, the model was empirically validated through a survey of 300 university students. We found that isolation and distinctiveness in the design elements appeared to facilitate students' ability to read and remember online learning content. This in turn was found to drive user preferences for using e-learning systems. The findings provide designers with managerial insights for enticing learners to continue using e-learning systems. Although the importance of boundary spanning in blended and online learning is widely acknowledged, most educational research has ignored whether and how students learn from others outside their assigned group. One potential approach for understanding cross-boundary knowledge sharing is Social Network Analysis (SNA). In this article, we apply four network metrics to unpack how students developed intra- and inter-group learning links, using two exemplary blended case studies in Spain and the UK.
Our results indicate that SNA based upon questionnaires can provide researchers with some useful indicators for a more fine-grained analysis of how students develop these inter- and intra-group learning links, and of which cross-boundary links are particularly important for learning performance. The mixed findings between the two case studies suggest the relevance of pre-existing conditions and learning design. SNA metrics can also provide useful information for qualitative follow-up methods and for future interventions using learning analytics approaches. Augmented reality (AR) technologies could enhance learning in several ways. The quality of an AR-based educational platform is a combination of key features that manifests in usability, usefulness, and enjoyment for the learner. In this paper, we present a multidimensional model to measure the quality of an AR-based application as perceived by students. The model is specified by a second-order factor (perceived quality) and three dimensions: ergonomic quality, learning quality, and hedonic quality. The purpose of this model is to embody previous research in a coherent framework for the evaluation of AR-based educational platforms and to provide guidance for researchers and practitioners. The model was empirically validated on a Chemistry learning scenario, and the results confirm the importance of both the learning and hedonic quality. This study investigated kindergarten teachers' decision-making process regarding the acceptance of computer technology. We incorporated the Technology Acceptance Model framework, in addition to computer self-efficacy, subjective norm, and personal innovativeness in education technology as external variables. The data were obtained from 160 kindergarten teachers from public kindergartens in Daejeon, South Korea. According to the results, subjective norm had the strongest effect on computer acceptance.
In addition, perceived usefulness and computer self-efficacy had a direct effect on computer technology acceptance. On the other hand, perceived ease of use and personal innovativeness in education technology had an indirect effect on computer technology acceptance. The measures accounted for approximately 32% of the variance in intentions to use computers in kindergartens. Prior research has attempted to incorporate different personal variables within extant theories of technology acceptance models (TAMs). This study further extends TAM by incorporating teachers' conceptions of teaching and learning (CoTL) in two forms: constructivist and traditional conceptions. The moderating effects of teachers' demographic variables, including age, gender, teaching experience, teaching level, and technology experience, were tested. Our findings demonstrated that incorporating CoTL could provide a richer and more nuanced understanding of technology acceptance, although no moderating effects of any demographic variables were found. Previous studies of multimedia presentations have determined the effects of the combination of text and pictures on vocabulary learning, but not those of the sound of new words. This study was intended to confirm those previous findings through the integration of mobile technologies and a cognitive load approach. It adopted a within-subjects design and recruited 32 eighth graders in central Taiwan to participate in a vocabulary learning program on mobile phones. During the program the participants needed to learn four sets of target words in four different weeks. Each set was presented in one of four modes: text mode, text-picture mode, text-sound mode, and text-picture-sound mode. Immediately after learning each set, all participants took a vocabulary test and completed a cognitive load questionnaire; two weeks later, they took the vocabulary test again.
Their perceptions of the vocabulary learning program were also collected in a post-program questionnaire. The findings were that audio input helped our participants recall new words' meanings after two weeks and reduced the cognitive load of learning new words. Our participants also provided positive feedback on the mobile-assisted vocabulary learning program featuring multimedia presentations. This experimental study was intended to examine whether the integration of game characteristics in an OpenSimulator-supported virtual reality (VR) learning environment can improve mathematical achievement for elementary school students. In this pre- and posttest experimental comparison study, data were collected from 132 fourth graders through an achievement test. The same math problem-solving tasks were provided to the experimental and control groups in the VR learning setting. Tasks for the experimental group involved the game characteristics of challenges, a storyline, immediate rewards, and the integration of game-play in the learning content. Analysis of covariance with the achievement test results indicated a significant effect of game-based learning (GBL) in the VR environment, in comparison with non-GBL in the VR environment, in promoting math knowledge test performance. This article explores the discourse practices of an Indigenous, community-based charter school and its efforts to create space for Indigenous both/and identities across rural-urban divides. The ethnographic portrait of Urban Native Middle School (UNMS) analyzes the discourse of 'making a space for you', which brings together rural and urban youth to braid binary constructs, such as Indigenous and western knowledge, into a discourse of Indigenous persistence within constraining contexts of schooling. We use the concept of 'reterritorialization' to discuss the significance of UNMS's community effort to create a transformative space and place of educational opportunity with youth.
The local efforts of this small community to reterritorialize schooling were ultimately weakened under the one-size-fits-all accountability metrics of No Child Left Behind policy. This ethnographic analysis 'talks back' to static definitions of identity, space, and learning outcomes which fail to recognize the dynamic and diverse interests of Indigenous communities across rural-urban landscapes. In mainstream discourse, rural generally implies white, while urban signifies not-white. However, what happens when 'rural' communities experience demographic change? This paper examines how students from a rural, New Latino Diaspora community in a Midwestern state complicate traditional notions of rurality. Data from participant observations and ethnographic interviews indicate that students from this near-majority-Latino community do not view it as rural even though its population is under 2,500. Students allude to an alternative youth subculture influenced by incoming Latino students from cities in Mexico, Guatemala, and California. They contrast this with the more stereotypical subcultures they observe in neighboring, rural, majority-white communities. Demographically transitioning rural schools present unique contexts for students to not only encounter their own privilege, but also to learn how to leverage that privilege to further the aims of social justice. However, this will not occur without explicit and careful planning, implementation, reflection, and teacher training. This paper examines representations of rural lesbian lives in three young adult novels: Beauty of the broken by Tawni Waters (2014), Pretend you love me by Julie Anne Peters (2005), and Forgive me if you've heard this one before by Karelia Stetz-Waters (2014). The first of these, by Waters (2014. Beauty of the broken. New York, NY: Simon Pulse), presents a very negative portrait of rural life for queer youth.
Its message is that the only positive queer life is one that is lived in the urban. In contrast, the texts by Peters (2005. Pretend you love me. New York, NY: Little Brown) and Stetz-Waters (2014. Forgive me if you've heard this one before. Portland, OR: Ooligan Press) present rural spaces as potentially both inclusive and exclusive for queer youth. These novels also demonstrate that urban spaces can be equally problematic for queer youth. While we do not discount that the description of rural life by Waters (2014. Beauty of the broken. New York, NY: Simon Pulse) may be the experience of some queer youth, we argue that the novels by Peters (2005. Pretend you love me. New York, NY: Little Brown) and Stetz-Waters (2014. Forgive me if you've heard this one before. Portland, OR: Ooligan Press) offer a more nuanced and complicated notion of place and its relationship to non-normative sexual subjectivities. Through an examination of the visual rhetoric of identity presented by reality shows, especially Here Comes Honey Boo Boo, this paper explores ways in which American reality television and related media images construct, deploy, and reiterate visual stereotypes about whites from rural regions of the United States. Its focus is the relationship between image rhetoric and social identity expression and how they converge to create a discourse loop. Combining identity theory and visual rhetoric studies as a unique methodological lens, this paper focuses on why and how stereotypes circulate in so-called realistic media. The implications of broadening stereotype study to include all varieties of visual artifacts in the analysis of specific tropes are particularly important to the study of stereotypes of white rural others, especially since such imagery has increased in volume in recent years and appears in several different types of media.
This article explores the use of critical and post-critical pedagogies in a rural Australian high school for the purposes of unsettling life-limiting gender beliefs and practices. The paper problematises two examples whereby site-specific knowledges, curriculum dictates, media texts, and critical pedagogies were enmeshed to create politically charged spaces for re-seeing, re-thinking, and re-doing gender. The first example involves a unit of work in which students were required to critically analyse and evaluate a well-known Australian documentary film for the particular version of hypermasculinity that it was valorising. The second example involves the collaborative critiquing of a well-known local text. At the conclusion of the paper, I turn a critically reflexive eye upon myself as a way of considering the ethics and issues for educators of challenging power asymmetries 'from the inside'. It is at this point that I discover it is possibly I who have been disrupted most of all. This paper focuses on the educative role of the farm in the development of relationships between young people and the homeplace they grew up on. The paper is based on qualitative interviews with a cohort of 30 Irish university students (15 men and 15 women) brought up on Irish family farms who would not become full-time farmers. The farm acts as an educational tool through which broader cultural and familial norms of land ownership, succession, and affiliations with the land are transmitted to the next generation. This is manifested through, for example, the creation of foundational stories about their forebears' influence on the physical appearance of the farm. The resulting place attachments are of profound depth and serve a key role in the succession process in helping to build a sense of duty and responsibilisation into the next generation's relationship with the landholding.
This paper critically analyses the legitimation of exploitative human-nonhuman animal relations in online 'farming' simulation games, especially the game Hay Day. The analysis contributes to a wider project of critical analyses of popular culture representations of nonhuman animals. The paper argues that legitimation is effected in Hay Day and cognate games through: the construction of idyllic rural utopias in gameplay, imagery, and soundscape; the depiction of anthropomorphized nonhumans who are complicit in their own subjection; the suppression of references to suffering, death, and sexual reproduction among 'farmed' animals; and the colonialist transmission of Western norms of nonhuman animal use and food practices among the global audience of players. Hay Day thereby resonates with the wider cultural legitimation of nonhuman animal exploitation by establishing emotional connections with idealized representations of nonhuman animals at the same time as it inhibits the development of awareness and empathy about the exploitation of real nonhuman animals. This paper explores migrant women's encounters with formal and informal education in what can be termed new immigration rural destinations. We ask to what extent educational opportunities are realized in these new destinations. We show that education aspirations may be jeopardized because of the desire to achieve economic goals and thus require remedial action. Specifically, we refer to qualitative data collected in rural (and remote) Boddington in Western Australia, and rural Armagh in Northern Ireland. The paper engages with two interrelated dimensions of this migrant/migration experience. English is not a first language for our participants, and we first examine the provision and consumption of informal English language classes. In doing so, we demonstrate the complex social and cultural dimensions of community-based English language instruction.
Second, we attend to migrant mothers' perceptions of and responses to children's formal education. We highlight transnational senses of, and tensions around, 'local/rural' pedagogies and resultant migrant strategies. We argue here that critical educational scholarship is crucial to developing educational analysis attuned to the nuances of place, mobility, and change in rural locations. Critical sociological analysis, we argue, can also nuance and complicate simplistic portrayals of rural communities and their social, economic, and cultural character. Two central narratives in rural education today relate, first, to the economic and social problems faced by challenged 'left behind' communities experiencing depopulation and restructuring, and, second, to 'boomtown' communities that experience rapid infusions of wealth and population. We offer two linked case studies from Australia and Canada that draw on what we call a rural sociological imagination to interrogate how education is framed in contemporary convergences of history and biography in rural locations. These framings complicate and even confound meritocratic and human capital assumptions that underpin contemporary educational policy discourse, particularly as it relates to rural education. There is a growing body of research interrogating the discursive construction of 'rural' in negative terms - as lacking, in decline, or in crisis. This paper contributes to this body of literature by taking as its point of departure skilled trades training in Canada's most easterly province, Newfoundland and Labrador. To meet the labour demand associated with industrial projects in rural and remote areas, the provincial government has invested in strategies to encourage youth to enrol in certified training programmes in the skilled trades.
This paper examines the contradictory and incomplete ways in which individualized labour market subjects are produced through a combination of economic restructuring and government policy initiatives related to training and apprenticeships, and considers what this means for how young people think about and experience the rural. I argue that rural places are largely framed in economic terms, either as in decline and crisis or as industrial sites of resource extraction, and that by discursively linking youth outmigration and skilled labour shortages, the sustainability of rural places and the province is individualized and downloaded onto youth, ignoring the structured inequalities that mediate access to training and employment. This article examines how children collectively appropriate brands as cultural resources. From the New Childhood Studies perspective, an ethnographic study was conducted in schools to investigate the engagement of 10- to 11-year-old children in brand culture. The findings demonstrate that, through a process similar to Corsaro's model of interpretive reproduction, children do not simply reproduce brand culture; instead, they actively use branding to fuel their peer culture. Mastering and manipulating brands are thus sources for integration or exclusion within the peer group and for differentiation from the adult world. We show the paradoxical impacts of branding on children's well-being and participate in the debate on their vulnerability to marketing by highlighting how they deploy brand culture to interact in their social spheres, with the consequence being that their would-be empowerment remains entangled in the brandscape. Last, we contribute to a better understanding of the concept of culturally based brand literacy. Meaning is a fundamental aspect of symbolic consumption and lies at the heart of consumer culture theory (CCT). 
Although consumption meanings are considered dynamic, heterogeneous, and contextual, meaning itself is considered an inherent aspect of consumer culture and a constitutive force of consumer experiences. Utilizing deconstruction as a critical strategy, this paper interrogates the concept of meaning in the CCT literature and contends that meaning is not only present in consumption practices, but also absent. As meaning circulates in the infinite possibilities of language, meaninglessness emerges as an important aspect of this process. The dialectical tension between meaning and meaninglessness, though, does not converge within a particular consumption practice, but continuously diverges in the anti-synthetic space between them. We empirically explore the consumption of this anti-synthetic space in three popular culture exemplars. We conclude by discussing the broader implications of the deconstruction of symbolic consumption for CCT. Gifts are a major part of both economic and social life. This intertwined relationship between the market and moral economies has long been unsettling to those concerned about rationalized marketplace meanings contaminating and eroding the sacred social role of gift giving. Consumer researchers have analysed the important relationship work done through gift giving in the moral economy and the ways that the marketplace facilitates such work (or not). However, little research has explored when, how, and why a store-bought gift, rather than a homemade one, actually became acceptable. This article uses three case studies from the early to mid-1800s to trace the rise of the store-bought gift in the American marketplace. It highlights how the sociocultural context, marketing innovations, retailers, and meanings surrounding gifting all helped to ensconce gift giving as both a central component of the contemporary marketplace and a tool for symbolic communication in social life.
Sustainable consumption manifests and mobilizes one's conscious choice to express a politically implied stance on environmental/cultural/social issues, to address social and/or ecological injustices, to reproduce or restore order and justice, and to fulfill the responsibilities of a citizen consumer. Based on this premise, this paper explores what sustainable consumption means to young adults in Hong Kong. Findings from three focus groups and six follow-up interviews reveal that Hong Kong young adults' sustainable consumption embeds their political ideals to construct collective civil power to fight against structural inequalities, market hegemonies, imperial dominance, and social/ecological injustice locally. The findings point to the need to further define and refine the unspecified concept of reflexivity in the existing literature. The paper also unveils how the concept of sustainable consumption has evolved from the individual, global, rational, remotely moral, and ideological to the communal, local, emotional, politically civil, and actional. What makes a simple wine, grown in a rather mediocre wine-growing region, one of the most famous and magical marketplace icons of today? How did champagne establish such a unique position, against all the odds, and become the global symbol of celebration? In seeking answers to these questions, this marketplace icon contribution elaborates on what 250 years of avant-garde champagne marketing can tell us about champagne's ever-shifting image and role in consumer culture. I argue that the imperishable fame of champagne stems primarily from four epic myth-making moments that came to shape not only a national identity but also modern consumption ideologies in important ways. This article analyzes the tension generated by the admission of Wichi youths to higher education in the province of Salta (Argentina).
The main goal is to show how access to higher education generates continuities and discontinuities in Indigenous social organization. The article is based on ethnographic fieldwork that examined how young Wichi undergraduate students made sense of their schooling experiences. This article presents data from a two-year ethnographic case study exploring how immigrant and refugee youth in the United States made sense of participation in a weekly after-school human rights club. Three types of student responses to human rights education are exemplified through student profiles. The article offers new insights for studies of immigrant youth, as well as possibilities that exist at the intersection of human rights education and the anthropology of education. Drawing on ethnographic data collected in two primary schools, this paper examines the nature of the exclusion experienced by three children of linguistically, culturally, and socioeconomically diverse families labeled as having special education needs. Ambiguities and dilemmas surrounding the intersection of cultural diversity and special education are described, and ways in which the routines performed in mainstream classrooms produce a seemingly harmless, but pervasive, form of exclusion are discussed. There are negative consequences for children and youth when a primary caregiver leaves to migrate. However, there are also unforeseen experiences related to schooling. I compare how Mexican maternal migration has influenced the education experiences of the children left behind in Mexico and their siblings living in the United States. These microcontexts of where and how siblings live in Mexico and in New York City present us with a somewhat surprising picture of the different education experiences. Participation in extracurricular activities has been associated with enhanced academic achievement in Latino youth.
Based on a longitudinal case study of one immigrant adolescent, this article finds that athletic participation is in itself neither a wholly positive nor a wholly negative influence on Latino school achievement. Rather, effects of extracurricular sports on academic outcomes are mediated by complex factors including but not limited to ethnicity and are more contextually dependent and individually variable than existing scholarship suggests. It is argued that once biological systems reach a certain level of complexity, mechanistic explanations provide an inadequate account of many relevant phenomena. In this article, I evaluate such claims with respect to a representative programme in systems biological research: the study of regulatory networks within single-celled organisms. I argue that these networks are amenable to mechanistic philosophy without the need to appeal to some alternative form of explanation. In particular, I claim that we can understand the mathematical modelling techniques of systems biologists as part of a broader practice of constructing and evaluating mechanism schemas. This argument is elaborated by considering the case of bacterial chemotactic networks, where some research has been interpreted as explaining phenomena by means of abstract design principles. In this article we argue for the existence of 'analogue simulation' as a novel form of scientific inference with the potential to be confirmatory. This notion is distinct from the modes of analogical reasoning detailed in the literature, and draws inspiration from fluid dynamical 'dumb hole' analogues to gravitational black holes. For that case, which is considered in detail, we defend the claim that the phenomena of gravitational Hawking radiation could be confirmed in the case that its counterpart is detected within experiments conducted on diverse realizations of the analogue model. A prospectus is given for further potential cases of analogue simulation in contemporary science. 
According to the Gottesman-Knill theorem, quantum algorithms that utilize only the operations belonging to a certain restricted set are efficiently simulable classically. Since some of the operations in this set generate entangled states, it is commonly concluded that entanglement is insufficient to enable quantum computers to outperform classical computers. I argue in this article that this conclusion is misleading. First, the statement of the theorem (that the particular set of quantum operations in question can be simulated using a classical computer) is, on reflection, already evident when we consider Bell's and related inequalities in the context of a discussion of computational machines. This, in turn, helps us to understand that the appropriate conclusion to draw from the Gottesman-Knill theorem is not that entanglement is insufficient to enable a quantum performance advantage, but rather that if we limit ourselves to the operations referred to in the Gottesman-Knill theorem, we will not have used the resources provided by an entangled quantum system to their full potential. We argue that David Lewis's principal principle implies a version of the principle of indifference. The same is true for similar principles that need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism. The article sets out a primitive ontology of the natural world in terms of primitive stuff, that is, stuff that as such has no physical properties at all, but that is not a bare substratum either, being individuated by metrical relations. We focus on quantum physics and employ identity-based Bohmian mechanics to illustrate this view, but point out that it applies all over physics. Properties then enter into the picture exclusively through the role that they play for the dynamics of the primitive stuff. 
We show that such properties can be local (classical mechanics), as well as holistic (quantum mechanics), and discuss two metaphysical options to conceive them, namely, Humeanism and modal realism in the guise of dispositionalism. Philosophers have traditionally addressed the issue of scientific unification in terms of theoretical reduction. Reductive models, however, cannot explain the occurrence of unification in areas of science where successful reductions are hard to find. The goal of this essay is to analyse a concrete example of integration in biology, the developmental synthesis, and to generalize it into a model of scientific unification, according to which two fields are in the process of being unified when they become explanatorily relevant to each other. I conclude by suggesting that this methodological conception of unity, which is independent of the debate on the metaphysical foundations of science, is closely connected to the notion of interdisciplinarity. The aim of this article is to discuss the relation between indigenous and scientific kinds on the basis of contemporary ethnobiological research. I argue that ethnobiological accounts of taxonomic convergence-divergence patterns challenge common philosophical models of the relation between folk concepts and natural kinds. Furthermore, I outline a positive model of taxonomic convergence-divergence patterns that is based on Slater's ([2015]) notion of 'stable property clusters' and Franklin-Hall's ([2015]) discussion of natural kinds as 'categorical bottlenecks'. Finally, I argue that this model is not only helpful for understanding the relation between indigenous and scientific kinds, but also makes substantial contributions to contemporary debates about natural kinds. Category theory has become central to certain aspects of theoretical physics. Bain ([2013]) has recently argued that this has significance for ontic structural realism. We argue against this claim. 
In so doing, we uncover two pervasive forms of category-theoretic generalization. We call these 'generalization by duality' and 'generalization by categorifying physical processes'. We describe in detail how these arise, and explain their significance using detailed examples. We show that their significance is two-fold: the articulation of high-level physical concepts, and the generation of new models. Many theorists have proposed that we can use the principle of indifference to defeat the inductive sceptic. But any such theorist must confront the objection that different ways of applying the principle of indifference lead to incompatible probability assignments. Huemer ([2009]) offers the explanatory priority proviso as a strategy for overcoming this objection. With this proposal, Huemer claims that we can defend induction in a way that is not question-begging against the sceptic. But in this article, I argue that the opposite is true: if anything, Huemer's use of the principle of indifference supports the rationality of inductive scepticism. This article explores some of the roles of computer simulation in measurement. A model-based view of measurement is adopted and three types of measurement (direct, derived, and complex) are distinguished. It is argued that while computer simulations on their own are not measurement processes, in principle they can be embedded in direct, derived, and complex measurement practices in such a way that simulation results constitute measurement outcomes. Atmospheric data assimilation is then considered as a case study. This practice, which involves combining information from conventional observations and simulation-based forecasts, is characterized as a complex measuring practice that is still under development. The case study reveals challenges that are likely to resurface in other measuring practices that embed computer simulation. 
It is also noted that some practices that embed simulation are difficult to classify; they suggest a fuzzy boundary between measurement and non-measurement. A characteristic hallmark of medieval astronomy is the replacement of Ptolemy's linear precession with so-called models of trepidation, which were deemed necessary to account for divergences between parameters and data transmitted by Ptolemy and those found by later astronomers. Trepidation is commonly thought to have dominated European astronomy from the twelfth century to the Copernican Revolution, meeting its demise only in the last quarter of the sixteenth century thanks to the observational work of Tycho Brahe. The present article seeks to challenge this picture by surveying the extent to which Latin astronomers of the late Middle Ages expressed criticisms of trepidation models or rejected their validity in favour of linear precession. It argues that a readiness to abandon trepidation was more widespread prior to Brahe than hitherto realized and that it frequently came as the result of empirical considerations. This critical attitude towards trepidation reached an early culmination point with the work of Agostino Ricci (De motu octavae spherae, 1513), who demonstrated the theory's redundancy with a penetrating analysis of the role of observational error in Ptolemy's Almagest. When did the concept of model begin to be used in mathematics? This question appears at first somewhat surprising since "model" is such a standard term now in the discourse on mathematics and "modelling" such a standard activity that it seems to have been well established for a long time. The paper shows that the term, in the intended epistemological meaning, emerged rather recently and tries to reveal in which mathematical contexts it became established. The paper discusses various layers of argumentations and reflections in order to unravel and reach the pertinent kernel of the issue. 
The specific points of this paper are the difference between this epistemological concept and the usually discussed notions of model, and the difference between conceptions implied in mathematical practices and their becoming conscious in proper reflections of mathematicians. The following article has two parts. The first part recounts the history of a series of discoveries by Otto Neugebauer, Bartel van der Waerden, and Asger Aaboe which step by step uncovered the meaning of Column Φ, the mysterious leading column in Babylonian System A lunar tables. Their research revealed that Column Φ gives the length in days of the 223-month Saros eclipse cycle and explained the remarkable algebraic relations connecting Column Φ to other columns of the lunar tables describing the duration of 1, 6, or 12 synodic months. Part two presents John Britton's theory of the genesis of Column Φ and the System A lunar theory starting from a fundamental equation relating the columns discovered by Asger Aaboe. This article is intended to explain and, hopefully, to clarify Britton's original articles which many readers found difficult to follow. Student evaluations of teaching (SETs) are widely used to measure teaching quality in higher education and compare it across different courses, teachers, departments and institutions. Indeed, SETs are of increasing importance for teacher promotion decisions, student course selection, as well as for auditing practices demonstrating institutional performance. However, survey response is typically low, rendering these uses unwarranted if students who respond to the evaluation are not randomly selected along observed and unobserved dimensions. This paper is the first to fully quantify this problem by analyzing the direction and size of selection bias resulting from both observed and unobserved characteristics for over 3000 courses taught in a large European university. 
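The upward bias that this kind of nonrandom survey response produces can be illustrated with a tiny Monte Carlo sketch. All numbers below (rating distribution, response-propensity curve) are invented for illustration; the study's actual selection model is richer:

```python
import random

random.seed(0)

# Hypothetical population of students, each with a true course rating in [1, 5].
ratings = [min(5.0, max(1.0, random.gauss(3.0, 1.0))) for _ in range(100_000)]

# Selection on an unobservable: satisfied students respond more often.
def responds(r):
    return random.random() < 0.1 + 0.15 * (r - 1.0)

observed = [r for r in ratings if responds(r)]

true_mean = sum(ratings) / len(ratings)
obs_mean = sum(observed) / len(observed)
print(f"true mean {true_mean:.2f}, observed mean {obs_mean:.2f}")
# The observed mean exceeds the true mean: evaluations look upward biased
# even though no individual response was misreported.
```

Because the selection operates on the (unobserved) rating itself, conditioning on observables such as grades would not remove this bias, which is the study's central point.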
We find that course evaluations are upward biased, and that correcting for selection bias has non-negligible effects on the average evaluation score and on the evaluation-based ranking of courses. Moreover, this bias mostly derives from selection on unobserved characteristics, implying that correcting evaluation scores for observed factors such as student grades does not solve the problem. However, we find that adjusting for selection only has small impacts on the measured effects of observables on SETs, validating a large related literature which considers the observable determinants of evaluation scores without correcting for selection bias. Females are underrepresented in certain disciplines, which translates into their having less promising career outlooks and lower earnings. This study examines the effects of socio-economic status, academic performance, high school curriculum and involvement in extra-curricular activities, as well as self-efficacy for academic achievement on choices of academic disciplines by males and females. Disciplines are classified based on Holland's theory of personality-based career development. Different models for categorical outcome variables are compared including: multinomial logit, nested logit, and mixed logit. Based on the findings presented here, first-generation status leads to a greater likelihood of choosing engineering careers for males but not for females. Financial difficulties have a greater effect on selecting scientific fields than engineering fields by females. The opposite is true for males. Passing grades in calculus, quantitative test scores, and years of mathematics in high school as well as self-ratings of abilities to analyze quantitative problems and to use computing are positively associated with choice of engineering fields. Public colleges and universities depend heavily on state appropriations and legislatures must decide how much to fund higher education. 
This study applies punctuated equilibrium theory to characterize the distribution of annual changes in higher education appropriations and defines the threshold for a dramatic budget cut. Using data for the 50 states from years 1980 to 2009, this study investigates the relationship between such unique policy events and state characteristics using a Cox proportional hazards model. Results show that economic and political conditions are most predictive of dramatic budget cuts. High unemployment rates increase the probability of cuts while rapid increases in tax revenue and wider income inequality are protective against cuts. Unified Republican and unified Democratic governments are both more likely to cut spending compared to a divided government. Sensitivity analyses of state characteristics associated with small budget cuts demonstrate that large cuts are indeed unique events catalyzed by different conditions. Our study uses data from the Wabash National Study of Liberal Arts Education to interrogate the affinity disciplines hypothesis through students' perceptions of faculty use of six of Chickering and Gamson's (AAHE Bull 39(7):3-7, 1987) principles of good practice for undergraduate education. Using a proportional scale based on Biglan's (J Appl Psychol 57(3):195-203, 1973) classification of paradigmatic development (with higher scores on the scale corresponding to students taking a higher proportion of courses in 'hard' fields compared to 'soft' fields), our study tests differences by the paradigmatic development of the disciplines or fields in which students take their courses within the first year of college. Our findings suggest that as paradigmatic development increases (toward a higher proportion of courses taken in hard disciplines), student perceptions of both faculty use of prompt feedback and faculty use of high expectations/academic challenge decrease, while student perceptions of cooperative learning increase. 
Further, no statistically significant differences were found between the paradigmatic development of fields in which students take their courses and students' perceptions of faculty use of student-faculty contact, active and collaborative learning, or teaching clarity and organization. This study replicates the findings from Braxton et al. (Res High Educ 39(3):299-318, 1998) using student-level rather than faculty-level reports of faculty use of good teaching practices. Higher education in America is characterized by widespread access to college but low rates of completion, especially among undergraduates at less selective institutions. We analyze longitudinal transcript data to examine processes leading to graduation, using Hidden Markov modeling. We identify several latent states that are associated with patterns of course taking, and show that a trained Hidden Markov model can predict graduation or nongraduation based on only a few semesters of transcript data. We compare this approach to more conventional methods and conclude that certain college-specific processes, associated with graduation, should be analyzed in addition to socio-economic factors. The results from the Hidden Markov trajectories indicate that both graduating and nongraduating students take the more difficult mathematical and technical courses at an equal rate. However, undergraduates who complete their bachelor's degree within 6 years are more likely to alternate between these semesters with a heavy course load and the less course-intense semesters. The course-taking patterns found among college students also indicate that nongraduates withdraw more often from coursework than average, yet when graduates withdraw, they tend to do so in exactly those semesters of the college career in which more difficult courses are taken. These findings, as well as the sequence methodology itself, emphasize the importance of careful course selection and counseling early in a student's college career. 
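The Hidden-Markov classification idea can be sketched minimally: score a short transcript (coded here as heavy versus light semesters) under a "graduate" and a "nongraduate" model with the forward algorithm, and classify by likelihood. All probabilities below are invented toy parameters, not the study's fitted model:

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of an observation sequence via the forward algorithm."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
                 for j in range(len(start))]
    return math.log(sum(alpha))

# Observation per semester: 0 = light load, 1 = heavy (math/technical) load.
# Toy "graduate" model alternates heavy and light semesters; the toy
# "nongraduate" model tends to stick with light loads.
grad = dict(start=[0.5, 0.5],
            trans=[[0.2, 0.8], [0.8, 0.2]],      # strong alternation
            emit=[[0.8, 0.2], [0.2, 0.8]])
nongrad = dict(start=[0.7, 0.3],
               trans=[[0.8, 0.2], [0.6, 0.4]],   # sticks to light loads
               emit=[[0.8, 0.2], [0.2, 0.8]])

transcript = [1, 0, 1, 0, 1]                      # five alternating semesters
ll_g = forward_loglik(transcript, **grad)
ll_n = forward_loglik(transcript, **nongrad)
print("predicted:", "graduate" if ll_g > ll_n else "nongraduate")
# -> predicted: graduate
```

In the study, the state parameters are learned from transcripts rather than hand-set, but the prediction step is the same likelihood comparison over the first few semesters of data.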
In this paper, a reduced-order modeling approach based on computational fluid dynamics is presented for an elastic wing with control surfaces in the transonic regime. To treat the computational fluid dynamics grid around the geometrical discontinuities, due to the deflection of control surfaces, the constant volume tetrahedron method and the transpiration method are combined without deforming the grid. Based on the input-output data from the computational fluid dynamics solver, one multiple-input/multiple-output discrete-time state-space model for the wing is identified via a robust subspace algorithm. For each control surface, one one-input/multiple-output discrete-time state-space model is identified using the same algorithm. With the precomputed state-space models for a few flight parameters, the generalized aerodynamic forces over a range of flight parameters can be computed by interpolating the output data. The methodology is applied to an elastic wing model with two control surfaces and the generalized aerodynamic forces are compared with the results from the computational fluid dynamics solver to validate the reduced-order modeling approach in the transonic regime. (C) 2016 American Society of Civil Engineers. Numerical studies on the applications of the microblowing technique (MBT) on a supercritical airfoil are performed based on a microporous wall model (MPWM) to represent the macroscaled collective characteristics of the huge number of microjets. The influences of microblowing on the aerodynamic characteristics with an MBT zone at different locations are analyzed. It is found that an MBT zone near the leading edge of the airfoil could achieve more reduction of skin-friction drag than a zone near the trailing edge. For pressure drag, in contrast, microblowing does not always incur a penalty and can even reduce the pressure drag if the MBT porous zone is arranged near the trailing edge. 
For the flow field without a shock wave, the MBT zone should be arranged on the lower wall and near the trailing edge. A typical configuration following this guideline can simultaneously decrease the pressure drag and skin-friction drag while also increasing the lift. Numerical results indicate that a 12.8-16.8% reduction of total drag and 14.7-17.8% increase of lift could be achieved by this typical configuration with a blowing fraction of 0.05. However, for the flow field with a shock wave on the upper wall, the performance of the microblowing is obviously suppressed by the existence of the shock wave. (C) 2016 American Society of Civil Engineers. Force measurements and wake surveys have been conducted on two swept NACA 0021 wings. One wing had a smooth leading edge, while the other wing had a tubercled leading edge. The force measurements and the wake survey results were in good agreement. Between angles of attack of 0 and 8 degrees, tubercles reduced the lift coefficient by 4-6%. For the same range of angles of attack, tubercles reduced the drag coefficient by 7-9.5%. Tubercles increased the lift-to-drag ratio of this wing by 2-6%, and increased the maximum lift-to-drag ratio by 3%. At angles of attack higher than 8 degrees, tubercles typically decreased the lift coefficient and the lift-to-drag ratio, while substantially increasing the drag coefficient. The wake surveys revealed that tubercles reduced the drag coefficient near the wingtip and that they also spatially modulated the drag coefficient into local maxima and minima in the spanwise direction. Typically, tubercles reduced the drag coefficient over the peaks where the tubercle vortices produced downwash. Conversely, tubercles increased the drag coefficient over the troughs, where upwash occurred. The majority of the drag coefficient reduction occurred over the tubercled wingspan. (C) 2016 American Society of Civil Engineers. 
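The coefficients compared in the wing study come from nondimensionalizing the measured forces by dynamic pressure and reference area. A short sketch with made-up numbers (not the measured data) shows why a larger relative drag reduction than lift reduction raises the lift-to-drag ratio, as reported for the tubercled wing between 0 and 8 degrees:

```python
def coeff(force_N, rho, V, S):
    """Nondimensionalize a force by dynamic pressure times reference area."""
    q = 0.5 * rho * V**2            # dynamic pressure, Pa
    return force_N / (q * S)

rho, V, S = 1.225, 25.0, 0.08       # air density kg/m^3, speed m/s, area m^2

# Hypothetical smooth-wing forces; the "tubercled" values apply a 5% lift
# reduction and an 8% drag reduction, mirroring the reported trends.
L_smooth, D_smooth = 14.0, 1.1
CL_s, CD_s = coeff(L_smooth, rho, V, S), coeff(D_smooth, rho, V, S)
CL_t, CD_t = 0.95 * CL_s, 0.92 * CD_s

print(f"L/D smooth {CL_s / CD_s:.2f} -> tubercled {CL_t / CD_t:.2f}")
# The ratio improves by 0.95/0.92 - 1, about 3%, within the reported 2-6% band.
```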
In this paper, effects of the surface-catalysis efficiency on aeroheating characteristics are studied. The Navier-Stokes solver with a cell-centered finite-volume scheme, including the finite-rate chemistry and two-temperature thermal nonequilibrium models, is implemented in this work. The ELECTRE flight trajectory at 53 km is used for validating the developed solver. Several cases with different catalytic recombination coefficients are studied to investigate the effects of catalysis efficiency under the condition of radiative equilibrium for the C series of Radio Attenuation Measurement project II flight trajectory at 71 km. It is revealed that distributions of heat flux and wall temperature show a similar tendency for all cases except the Stewart model. Heat flux and wall temperature do not grow endlessly as the catalytic recombination coefficients are increased. It is the gradient of species density, rather than the gradient of temperature, that leads to the significant differences in heat flux and wall temperature among the cases. The results indicate that the zone influenced by catalysis efficiency for atomic oxygen is larger than the one for atomic nitrogen and that the recombination reactions of atomic oxygen are more active. (C) 2016 American Society of Civil Engineers. The present paper reports a numerical study of the aerodynamic properties of a novel disc-shaped airship. Different configurations are considered, some of which present a circular opening connecting the bottom and top surface of the airship. The aim of the study is to understand the flow dynamics, in order to define the aerodynamic efficiency and the stability properties of the flying vehicle. Such information is crucial for the design of the propulsion system and of the mission profile of these innovative airships. Results show that, in general, disc-shaped airships are characterized by large values of drag and small levels of lift. 
Interestingly, it appears that lift keeps increasing up to very high angles of attack. This feature is found to be related to strong tip effects, which induce a significant flow of air from the high-pressure region at the bottom surface to the low-pressure region at the top surface. This air flow energizes the upper boundary layer, thus counteracting flow separation on the top surface. This phenomenon is found to be useful for the stability properties of the airship: in fact, it shifts the center of pressure closer to the geometrical center of the airship, hence implying a reduction of the aerodynamic moment. The role of openings is also addressed and found to positively contribute to the stability properties of the airship, by further reducing the levels of aerodynamic moment. (C) 2016 American Society of Civil Engineers. The platform for resource observation and in situ prospecting for exploration, commercial exploitation, and transportation (PROSPECT) instrument package is under development by the European Space Agency for the upcoming Luna-27 mission to the lunar south pole. The purpose of the instrument is to detect and quantify volatiles on the lunar surface with the processing and analysis unit ProSPA. This paper describes the feasibility study and early breadboarding activities on ProSPA sample ovens during the Phase A study. The review of similar sample oven concepts led to the conclusion that none of these concepts satisfies the new requirements of ProSPA regarding sample size and target temperatures. The trade studies presented in this paper include the estimation of power demands for scaled-up ovens, the influence of oven insulation, the compatibility of the utilized materials, and an experimental validation of the design. Experimental tests showed that the new oven design allows reaching the target temperatures and following most of the specified heating profiles with an imposed maximum power of 70 W. 
During the heating tests with the lunar regolith simulant NU-LHT-2M, sintering of the sample, reduction of the FeO content, and the creation of gas cavities were observed. (C) 2016 American Society of Civil Engineers. Efficient global optimization of aerodynamic shapes with the high-fidelity method is of great importance in the design process of modern aircraft. In this study, modifications are made to the particle swarm optimization (PSO) algorithm and the radial basis function (RBF) model for further improvements of efficiency and accuracy of optimization. Specifically, a PSO algorithm with randomly distributed cognitive and social parameters, exponential decrease of maximum velocity and inertia weight, and periodic mutation of particle position is proposed. Furthermore, a nonuniform shape-parameter strategy is introduced for the RBF surrogate model. Validations on test functions show that the new PSO has remarkable speed of convergence, and the new RBF model has superior approximation accuracy. The PSO algorithm and RBF model are then combined to construct the computational fluid dynamics (CFD)-based optimization framework. Finally, optimizations of a transonic airfoil and a supersonic launch vehicle are performed and results show that the drag coefficients in the two cases are significantly reduced (14 and 15%, respectively). The successful applications also indicate that the proposed PSO and RBF as a whole have clear advantages over their original versions, and the optimization framework is effective and practical for design of aerodynamic shapes. (C) 2016 American Society of Civil Engineers. In this paper, a spacecraft radiator formed in a honeycomb structure is designed to enhance the thermal performance while reducing its mass. Examples of design guidelines for radiator configurations, such as the distance between heat pipes, facesheet thickness, and honeycomb core density, are suggested. 
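The flavor of the modified PSO described above (randomly drawn cognitive and social parameters, exponentially decaying inertia weight and velocity limit) can be sketched as follows. All constants are our own illustrative choices, and the periodic mutation step and RBF surrogate are omitted:

```python
import math
import random

random.seed(1)

def sphere(x):                      # simple test function, minimum at origin
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0):
    """Minimal PSO sketch in the spirit of the modifications above."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        w = 0.9 * math.exp(-3.0 * t / iters)          # exponential inertia decay
        vmax = (hi - lo) * math.exp(-3.0 * t / iters)  # shrinking velocity cap
        for i in range(n):
            # cognitive/social parameters redrawn randomly each update
            c1, c2 = random.uniform(1.0, 2.5), random.uniform(1.0, 2.5)
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(sphere)
print("best value:", sphere(best))
```

In the CFD framework of the paper, `f` would be an RBF surrogate fitted to flow-solver evaluations rather than an analytic test function.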
To derive the analytic solution of the governing equation, a linear approximation is used and the accuracies of the solutions are verified with a fourth-order finite-difference method. There exist optimal combinations of design parameters that minimize the radiator mass while maintaining its heat rejection capacity. The heat rejection rate that minimizes the mass per unit heat rejection and the pertinent radiator shape are also presented. The combinations of optimal design are different among the three surface treatments and their characteristics are investigated. (C) 2016 American Society of Civil Engineers. Textbooks state that the lift, drag, and thrust coefficients CL, CD, and CT are numbers that must be dimensionally proper (i.e., dimensionless) for their use in comparing different data sets. This paper posits that for the meaningful comparison of different data sets, numbers must be dimensionally and physically proper. The physical propriety is satisfied by expressing the number as a ratio of work and the energy available at an aerodynamic system during the generation of lift, drag, or thrust. The aerodynamic systems addressed in this paper are aircraft, propellers and lift rotors, cylinders in the Magnus effect, and flapping wings. This paper introduces the normalized lift eta(L), normalized drag eta(D), and normalized thrust eta(T) numbers that are dimensionally and physically proper and can evaluate the ability of generating lift, drag, and thrust of these aforementioned systems and compare this ability between different systems. These numbers are shown to act like the thermal efficiency in thermodynamics, as they represent the ratio of work exerted onto the surrounding flowfield and the kinetic energy available at the system, are associated with a maximum value eta(max) (that may exceed 1) and can be read on a stand-alone basis. Their common mathematical format facilitates crosspollination between engineering, biomechanics, and biology. 
A numerical calculation of the normalized lift eta(L) of the blades of the record-breaking quadcopter AeroVelo Atlas is presented and compared with its lift coefficient CL as calculated by the AeroVelo group. (C) 2016 American Society of Civil Engineers. To improve the overall performance of mission planning for unmanned aerial vehicles (UAVs), a multistage path prediction (MPP) for trajectory generation with rendezvous is addressed in this paper. The proposed real-time algorithm consists of four stages: path estimation, path planning, flyable trajectory generation, and trajectory modification for rendezvous. In every planning horizon, each UAV utilizes the local A* algorithm to estimate all probable paths and then the results serve as input for the task assignment system. A simple assignment algorithm is briefly introduced to validate the effectiveness of MPP. Based on the assignment, the polygonal paths are further obtained by using the global A* algorithm. Then these paths are smoothed to be flyable by using the cubic B-spline curve. In the last stage, the trajectories are modified for rendezvous of the UAVs to execute specific many-to-one tasks. Results of different stages are continuously revised and delivered to the task assignment system as feedback in the whole mission process. Numerical results demonstrate the performance of the proposed approach for stochastic scenarios. (C) 2016 American Society of Civil Engineers. The lightweight design of the sandwich mirror, as a commonly used space primary mirror structure, is one of the key topics for the design of space-based optomechanical systems. Owing to the limitation of traditional manufacturing capabilities, the induced holes on the mirror back are usually of the open or half-open form, which compresses the optimization design space. 
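The grid-based A* search underlying the path-estimation and path-planning stages described above can be sketched with a generic 4-connected implementation using a Manhattan heuristic. The paper's local/global variants, task assignment, and B-spline smoothing are omitted; the grid below is a made-up example:

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None                                              # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],    # wall with a single gap at the right edge
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)              # shortest route detouring through column 3
```

In the MPP pipeline, the resulting polygonal path would then be smoothed with a cubic B-spline to make it flyable.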
With the rapid development of additive manufacturing (AM) technologies, it is possible to fabricate a closed-back sandwich mirror with a complex internal structure to achieve outstanding performance. In this paper, a novel topology optimization model for a closed-back primary mirror of a large-aperture space telescope is proposed. First, extrusion constraints are considered in the optimization model to obtain the layout design of stiffening webs inside the mirror core. Then, a simply connected constraint, as one type of constraint in AM, is considered to avoid enclosed voids in the structures. Through solving the proposed model, a new closed-back sandwich mirror configuration with nonclosed treelike vertical stiffening webs is achieved. In addition, the thicknesses of the internal stiffening webs are optimized for minimizing the weight with the constraint of the surface shape error of the mirror face. Compared with the classical and existing sandwich mirror configurations, the optimized mirror shows significant advantages in optical performance and lightweight ratio, which illustrates the effectiveness of the presented method. The method represents a prospective approach to the design of space mirrors fabricated using AM. (C) 2016 American Society of Civil Engineers. This paper presents a robust impact angle guidance law for maneuvering target interception with autopilot dynamics compensation based on a systematic backstepping control technique. Since the future course of action of the target, an independent entity, cannot be predicted beforehand, an adaptive-control approach is introduced in guidance law derivation. In order to address the problem of explosion of terms caused by analytic differentiation of the virtual control laws in the standard backstepping method, a tracking differentiator is adopted as an alternative way to obtain the derivatives of the virtual control laws. 
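A tracking differentiator of the kind mentioned above can be sketched with a linear second-order variant: one state tracks the input signal and the other tracks its derivative, avoiding analytic differentiation. The paper's differentiator may be a nonlinear design; the gain and test signal here are illustrative only:

```python
import math

def track_diff(signal, dt, r=50.0):
    """Linear second-order tracking differentiator: x1 tracks the input,
    x2 tracks its derivative; bandwidth is set by the gain r."""
    x1, x2 = signal[0], 0.0
    derivs = []
    for v in signal:
        # forward-Euler step of x1' = x2, x2' = -r^2 (x1 - v) - 2 r x2
        x1, x2 = (x1 + dt * x2,
                  x2 + dt * (-r * r * (x1 - v) - 2.0 * r * x2))
        derivs.append(x2)
    return derivs

dt = 0.001
t = [k * dt for k in range(5000)]
sig = [math.sin(ti) for ti in t]
d = track_diff(sig, dt)

# After the initial transient, x2 approximates cos(t), the true derivative.
err = abs(d[-1] - math.cos(t[-1]))
print(f"terminal derivative error: {err:.4f}")
```

The same mechanism, applied to a virtual control law instead of a sine wave, yields its time derivative without symbolically differentiating it, which is how the backstepping "explosion of terms" is sidestepped.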
Stability analysis shows that both the line-of-sight angular rate and the impact angle error can be stabilized in a small region around zero asymptotically. Simulation results explicitly show that accurate interception is achieved over a wide range of impact angles. (C) 2016 American Society of Civil Engineers. In this paper, an adaptive backstepping controller, which can attenuate external disturbance, is proposed for flight vehicle attitude control subject to amplitude and rate saturations of the elevator. First, the elevator rate saturation is incorporated into the amplitude saturation, which results in realistic amplitude-saturation bounds. Then, an attitude-control law is proposed based on the adaptive-backstepping scheme in which the comprehensive elevator amplitude saturation is explicitly considered. In this design approach, two adaptive gains, related to the external disturbance and the realistic saturation bounds, respectively, are tuned in the control law. The resulting closed-loop system is proved to be uniformly ultimately bounded by the Lyapunov stability analysis approach. Simulation results show the effectiveness of the proposed method. (C) 2016 American Society of Civil Engineers. This paper proposes a scale-dependent model to investigate the dynamic pull-in characteristics of a functionally graded carbon nanotube (FG-CNT) reinforced nanodevice with a piezoelectric layer. Based on nonlocal beam theory, the nonlinear thermoelectromechanical coupled dynamic governing equation of an electrostatically actuated nanodevice is derived. The material properties of the functionally graded layer depend on the temperature environment and on the thickness, volume ratio, and distribution of the carbon nanotube (CNT) reinforcement. The van der Waals interaction and Casimir force are considered in the dynamic pull-in analysis.
The homotopy perturbation method is used to obtain a second-order approximate analytical expression for the natural frequency with respect to the initial amplitude. The influences of the piezoelectric effect, temperature change, nonlocal parameters, CNT distribution and volume ratio, and initial amplitude on the dynamic pull-in behaviors and natural frequencies of the nanodevice are discussed. The results show that the system has one stable focus point in the domain of small initial amplitude, together with a particular homoclinic orbit that originates from and ends at an unstable saddle point. (C) 2016 American Society of Civil Engineers. In this paper, a new guidance law, called the virtual sliding target (VST) guidance law, is designed based on the concept of the virtual target (VT). The presented law is applicable to short- and medium-range missiles. It is shown that by using proportional navigation (PN) and considering the aerodynamic characteristics of the missiles, this law leads to better performance than the PN law. In this approach, motion of the VT starts from a position higher than the real target. By controlling the speed of the VT, which slides toward a predicted intercept point (PIP), the speed, position, and trajectory of the missile can be controlled. Because the arrival time of the missile at the VT equals that of the VT at the real target, the collision will happen. Furthermore, a new optimal guidance law is presented for long-range missiles based on the concepts of the waypoint (WP) and the VT. In the mentioned law, there are two important points: the first is a constant point called the waypoint, and the second is the virtual target, which slides toward the PIP. A teaching-learning-based optimization (TLBO) algorithm is then used to find the optimal initial positions of the WP and the VT to maximize the intercept speed. Simulation results illustrate the better performance of the VST law over the PN law in terms of intercept speed.
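The VST abstract above builds on classical proportional navigation, in which the commanded lateral acceleration is a = N * Vc * (LOS rate): navigation constant times closing speed times line-of-sight rate. A minimal planar simulation of that baseline PN law against a nonmaneuvering target (all speeds, geometry, gains, and step sizes are illustrative assumptions, not values from the paper):

```python
import math

def pn_intercept(mx, my, mv, tx, ty, tvx, tvy, N=4.0, dt=0.01, t_max=60.0):
    """Planar proportional navigation: a_cmd = N * Vc * LOS_rate, applied
    perpendicular to the missile velocity. Returns the closest approach [m]."""
    heading = math.atan2(ty - my, tx - mx)   # launch pointing at the target
    los_prev = heading
    miss = math.hypot(tx - mx, ty - my)
    t = 0.0
    while t < t_max:
        rx, ry = tx - mx, ty - my
        r = math.hypot(rx, ry)
        miss = min(miss, r)
        if r < 1.0:                          # effectively an intercept
            break
        los = math.atan2(ry, rx)
        los_rate = (los - los_prev) / dt     # finite-difference LOS rate
        los_prev = los
        rvx = tvx - mv * math.cos(heading)   # relative velocity components
        rvy = tvy - mv * math.sin(heading)
        vc = -(rx * rvx + ry * rvy) / r      # closing speed (minus range rate)
        a_cmd = N * vc * los_rate            # PN lateral acceleration command
        heading += (a_cmd / mv) * dt         # rotate the velocity vector
        mx += mv * math.cos(heading) * dt
        my += mv * math.sin(heading) * dt
        tx += tvx * dt
        ty += tvy * dt
        t += dt
    return miss
```

The VST idea in the abstract keeps this PN core but steers it at a virtual target sliding toward the predicted intercept point rather than at the real target.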
(C) 2016 American Society of Civil Engineers. The authors have developed an electrostatic sampler for the reliable and autonomous collection of regolith particles on asteroids. The sampler, which employs Coulomb and dielectrophoresis forces to capture regolith particles and transport them to a collection capsule, can collect a lunar regolith simulant containing particles of various sizes less than approximately 1.0 mm in diameter in a low-gravity environment. However, there might be large particles with diameters of 1.0 mm or larger on asteroid surfaces. The authors conducted a numerical calculation and a model experiment to confirm whether the sampler can collect particles larger than 1.0 mm in diameter in a low-gravity environment. The numerical calculation, performed using the distinct element method, predicted the effect of the particle diameter on the sampler performance, indicating that particles 1.0 mm in diameter or larger could be successfully sampled in a low-gravity environment. Glass particles 2 mm in diameter were experimentally sampled in a 0.01 g environment reproduced by a parabolic aircraft flight, and rocks 4 mm in diameter were agitated under 0.01 g and successfully sampled under microgravity. (C) 2016 American Society of Civil Engineers. Targeting one kind of on-orbit servicing spacecraft that contains fuel containers, momentum actuators, space manipulators, and captured unknown objects, a specific attitude control problem during orbital maneuvers is investigated in this paper. To overcome strong uncertainties during the onboard service, a parameter-identification algorithm is developed to estimate all inertial parameters of each part of the spacecraft system, with the identification results obtained based solely on measurements of the inertial navigation system. The disturbance from the torque coupling of the orbital engine can thus be effectively compensated.
Based on the identified parameters, the attitude stabilization problem is then addressed to tackle the strong disturbance from the orbital engine. To specifically suppress this disturbance, a hybrid control strategy based on the momentum actuators and thrusters is proposed, which not only provides better precision than the conventional thruster-based method but also protects the momentum actuators against momentum saturation. Numerical simulations are conducted to verify the effectiveness of the proposed algorithms. (C) 2016 American Society of Civil Engineers. The nonlinear static deflections of a functionally graded carbon nanotube (FG-CNT) reinforced flat composite panel are examined under a uniform thermal environment for different end conditions. The temperature-dependent material properties of the matrix and the fiber (carbon nanotubes) are considered in conjunction with different gradients of carbon nanotube concentration through the thickness direction. The mathematical model of the carbon nanotube reinforced flat composite panel is formulated for different types of gradients using the Green-Lagrange geometrical strain in the framework of the shear deformable higher-order kinematic theory. Further, the equilibrium equation of the panel is obtained using the variational method and discretized through the finite-element concept. The nonlinear deflections are worked out numerically using the direct iterative method via a suitable computer code. The validation and the convergence performance of the present numerical results were checked. Finally, the significance and necessity of such a higher-order nonlinear model for the analysis of gradient carbon nanotube structures are established by solving different numerical examples for various parameters (types of grading, aspect ratios, thickness ratios, volume fractions, thermal field, and end conditions) and discussed in detail.
(C) 2016 American Society of Civil Engineers. In this paper, an alternative approach was developed for the form-finding of cable-membrane structures. The prestress of a triangular finite element was first made equivalent to axial force densities and uniform vertical loads on the three edges. The static equations and tangent stiffness matrices of the cable-membrane were then formulated with the aid of the nonlinear force density method. To obtain a highly accurate surface of cable-membrane structures, an iterative strategy was presented to solve the nonlinear force density equations and optimize the initial geometrical shape based on the Newton-Raphson method. Finally, the proposed method was applied to the form-finding of the cable-membrane structure of a hoop truss antenna, and a numerical example was conducted to confirm the efficiency of the method. (C) 2016 American Society of Civil Engineers. This paper investigates two robust finite-time controllers for the attitude control of spacecraft based on the rotation matrix. The first controller can compensate external disturbances with known bounds, whereas the second one can deal with both external disturbances and input saturation by using the hyperbolic tangent function and an auxiliary system. Both controllers avoid singularity and converge to zero in finite time by using a novel fast nonsingular terminal sliding mode control. Because the controllers are designed based on a rotation matrix, which represents the set of attitudes both globally and uniquely, the system can overcome the drawback of unwinding. Numerical simulations are presented to demonstrate the effectiveness of the proposed control schemes. (C) 2016 American Society of Civil Engineers. The orbital maneuvering requirements of microsatellites and nanosatellites can be efficiently addressed by planar nozzles, in which real-time thrust control is gained by actuating on the nozzle throat area.
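The linear core of the force density method referenced above reduces form-finding to a linear equilibrium system: each free node sits at the force-density-weighted average of its neighbors. A toy sketch of that core (the node layout, connectivity, and force densities are invented for illustration, and a Gauss-Seidel sweep stands in for the paper's Newton-Raphson treatment of the nonlinear equations):

```python
def form_find(nodes, fixed, edges, q, sweeps=200):
    """Force-density form-finding: repeatedly move every free node to the
    q-weighted average of its neighbors (Gauss-Seidel on the linear system).

    nodes: list of [x, y, z]; fixed: set of pinned node indices;
    edges: list of (i, j) index pairs; q: one force density per edge.
    """
    adj = {i: [] for i in range(len(nodes))}
    for (i, j), qe in zip(edges, q):
        adj[i].append((j, qe))
        adj[j].append((i, qe))
    pts = [list(p) for p in nodes]
    for _ in range(sweeps):
        for i in range(len(pts)):
            if i in fixed:
                continue
            qsum = sum(qe for _, qe in adj[i])
            for k in range(3):  # equilibrium holds per coordinate
                pts[i][k] = sum(qe * pts[j][k] for j, qe in adj[i]) / qsum
    return pts
```

With a single free node tied to four fixed corners by equal force densities, the node settles at the corners' centroid regardless of its initial guess.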
In the present work, the method of characteristics is used to design a hypersonic contour; the resulting profile is then laser cut, assembled, and built into a propulsion system, and tests are finally performed in a vacuum chamber under several working conditions. Results are used to validate the profile and modeling assumptions and to investigate overall behavior. Tests showed that the thrust-inlet pressure ratio is a linear function of the separation between nozzle contours and that no additional losses are introduced in the system. The corresponding design/manufacture/assembly tools can be used to provide low-cost, low-weight controllable propulsion systems to the small-satellite industry with no efficiency loss. (C) 2016 American Society of Civil Engineers. In recent years, the study of wing-body-tail interference has gained greater significance in aerodynamics because of its importance in modern aircraft design. One consequence of this interference is the downwash phenomenon. Downwash reduces the wing's effective angle of attack, thereby reducing lift and producing induced drag. Downwash changes the flow field downstream of the main wing and consequently changes the aerodynamic coefficients of the airplane's tail. These changes directly affect the stability derivatives and control coefficients of an airplane. Because of the importance of the effects of the downwash rate on the stability derivatives of an airplane, a three-dimensional airplane with a T-shaped tail in incompressible and compressible subsonic flows was studied numerically in this paper. A fully structured grid was used for the whole domain around the airplane. The aerodynamic coefficients of the horizontal tail of the airplane, together with the downwash and downwash rate with respect to the angle of attack of the horizontal tail, were investigated. In addition, the compressibility effects on the downwash rate of the airplane's tail were also studied.
(C) 2017 American Society of Civil Engineers. The flow regimes associated with a 2:1 aspect ratio, elliptical planform cavity in a turbulent flat-plate boundary layer have been systematically examined for various depth/width ratios (0.1-1.0) and yaw angles (0-90 degrees), using a combination of wind tunnel experiments (including particle image velocimetry) and computational fluid-dynamics (CFD) simulations. For each of the three categories specified according to yaw angle, which include the symmetric flow regime (yaw angle = 0 degrees), the straight vortex regime (yaw angle = 90 degrees), and the asymmetric flow regime (15 degrees <= yaw angle <= 60 degrees), different flow structures are found to exist depending on cavity depth. For each combination of yaw angle and depth, the flow has been analyzed through investigation of shear layer parameters, three-dimensional (3D) vortex structure, pressure distribution and drag, and wake flow. While the elliptical cavity flows have been found to have some similarities with those of nominally two-dimensional and rectangular cavities, the 3D effects due to the low aspect ratio and curvature of the walls give rise to features exclusive to low-aspect-ratio elliptical cavities, including the formation of cellular structures at intermediate depths and distinct vortex structures within and downstream of the cavity. The 3D structure of the flow is most pronounced in the asymmetric regimes with large yaw angles (45 and 60 degrees). The dominant feature in this regime is the formation of a trailing vortex that is associated with high drag. (C) 2017 American Society of Civil Engineers. In this paper, the effects of temperature and oxidation on the cyclic-fatigue life of two-dimensional (2D) woven SiC/SiC ceramic-matrix composites (CMCs) have been investigated. The relationships between fatigue life, fiber failure, interface shear stress and fiber strength degradation, temperature, and oxidation have been established using a micromechanical approach.
The tension-tension fatigue life S-N curves of a 2D SiC/SiC composite at room and elevated temperatures in moisture, argon, air, and steam have been predicted. It was found that the presence of steam causes accelerated damage evolution and degradation in fatigue life compared with that in air. (C) 2017 American Society of Civil Engineers. Improved guaranteed cost control and quantum adaptive control are developed in this study for a quadrotor helicopter with state delay and actuator faults. Improved guaranteed cost control is designed to eliminate disturbance effects and guarantee the robust stability of a quadrotor helicopter with state delay. The inapplicability of guaranteed cost control to the quadrotor linear model is addressed by combining guaranteed cost control with a model reference linear quadratic regulator. In the event of actuator faults, quadrotor tracking performance is maintained through quantum adaptive control. Finally, the effectiveness of the proposed scheme is verified through numerical simulation. (C) 2017 American Society of Civil Engineers. This research numerically characterizes the impact of sweep angle on supersonic sharp-corner flows. For this purpose, the effect of sweep angle on the variations of flow parameters such as shear stress, temperature, and pressure on the corner surface, and also on the shock structure, is examined. To ensure the accuracy of the current numerical model, validation is performed against previous studies and reasonable agreement is observed. Results indicate that the inclusion of sweep angle has a significant effect on the Mach reflection strength and thus on the variations of the flow parameters on the corner surface. It is found that the role of sweep angle in changing the flow parameter variations is less significant in the zones near the trailing edge than in those close to the leading edge.
It is shown that the inclusion of sweep angle has little effect on the separation zone size, although it tends to slightly increase the separation region length along the wing span at the trailing edge. Results demonstrate that an increase in sweep angle causes the shock structure, and thus the Mach stem, to move closer to the wing geometry. It is also seen that an increase in sweep angle leads to an increase in the Mach stem length. (C) 2017 American Society of Civil Engineers. Against a background of various techniques for gust load alleviation (GLA), this paper proposes an improved linear quadratic Gaussian (LQG) method, which is robust to variations of flight parameters, structural parameters, and modeling errors and is suitable for application in structure/control design optimization. This new technique differs from the traditional LQG methodology by the introduction of properly constructed fictitious high-frequency noise. Furthermore, to accurately measure the stability margins of the multi-input multi-output (MIMO) controllers, a variable-structure mu analysis method is proposed. The parameters of the Dryden continuous gust model are adjusted according to the structural natural frequencies to meet the design requirements, and model reduction combined with input signal scaling is applied to reduce the controller order. Using a general transport aircraft (GTA) model, the robust performance and robust stability of the improved LQG method are compared with those of modern robust controllers, including the H-infinity controller and the mu-synthesis controller. The numerical results demonstrate the successful application of this new technique. (C) 2017 American Society of Civil Engineers. Bistable tape springs are suitable as deployable structures thanks to their high packaging ratio, self-deployment ability, low cost, light weight, and stiffness.
A deployable boom assembly composed of four 1-m-long bistable glass fiber tape springs was designed for the electromagnetically clean 3U CubeSat Small Explorer for Advanced Missions (SEAM). The aim of the present study was to investigate the deployment dynamics and reliability of the SEAM boom design after long-term stowage using on-ground experiments and simulations. A gravity offloading system (GOLS) was built and used for the on-ground deployment experiments. Two boom assemblies were produced and tested: a prototype and an engineering qualification model (EQM). The prototype assembly was deployed in a GOLS of small height, whereas the EQM was deployed in a GOLS of tall height to minimize the effects of the GOLS. A simple analytical model was developed to predict the deployment dynamics and to assess the effects of the GOLS and the combined effects of friction, viscoelastic relaxation, and other factors that act to decrease the deployment force. Experiments and simulations of the deployment dynamics indicate significant viscoelastic energy relaxation phenomena, which depend on the coiled radius and stowage time. In combination with friction effects, these viscoelastic effects decreased the deployment speed and the end-of-deployment shock vibrations. (C) 2017 American Society of Civil Engineers. In this paper, numerical simulations are performed to investigate the influence of aerodisk size on the drag reduction and thermal protection of highly blunted bodies flying at hypersonic speeds. The compressible, axisymmetric Navier-Stokes equations are solved with a one-equation turbulence model for a free-stream Mach number of 5.75. To ensure the validity of the numerical model, results are compared with data from analytical and experimental works, and good agreement is observed. Results show that an increase in the aerodisk size initially decreases the drag coefficient and then increases the drag force.
It is found that the flat-faced spike leads to a lower drag than the spherical one at small aerodisk sizes, whereas the reverse is observed at large aerodisk sizes. The findings show that the spike length plays a crucial role in the drag coefficient. Moreover, results reveal that an increase in the aerodisk size significantly reduces the heat load over the cone, while the heat load of the whole assembly increases at large aerodisk sizes. Numerical findings show that the heat load of the flat-faced spike is always lower than that of the spherical spike. According to the numerical results, the optimum aerodisk in terms of lower drag and heating level is obtained. (C) 2017 American Society of Civil Engineers. A robust unscented Kalman filter based on a multiplicative quaternion-error approach is proposed for high-precision spacecraft attitude estimation using quaternion measurements under measurement faults. The global attitude parameterization is given by a quaternion, whereas the local attitude error is defined using a generalized three-dimensional attitude representation. To guarantee quaternion normalization in the filter, the unscented Kalman filter is formulated with a multiplicative quaternion-error approach derived from the local attitude error. A standard unscented Kalman filter provides sufficiently good estimation results even for initial large error conditions. However, in the case of measurement sensor malfunctions, the unscented Kalman filter fails to provide the required estimation accuracy and may even collapse over time. The proposed algorithm uses a statistical function including measurement residuals to detect measurement faults and then uses an adaptation scheme based on a multiple scale factor so that the filter remains robust against faulty measurements. The proposed algorithm is demonstrated for attitude estimation of a spacecraft using quaternion measurements in three measurement fault cases.
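The multiplicative quaternion-error formulation named above avoids subtracting quaternions componentwise: the local error is formed by quaternion multiplication with the conjugate of the estimate, dq = q_meas (x) q_est*. A minimal sketch of that operation (Hamilton product, scalar-first ordering assumed; the filter machinery around it is not reproduced):

```python
def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def quat_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_error(q_meas, q_est):
    """Multiplicative attitude error dq = q_meas (x) q_est*.
    When measurement and estimate agree, dq is the identity (1, 0, 0, 0)."""
    return quat_mul(q_meas, quat_conj(q_est))
```

The vector part of dq is what gets mapped to the three-dimensional local error representation that the filter propagates, which is how unit-norm quaternions survive the update step.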
The estimation performance of the proposed algorithm is also compared with those of the standard extended Kalman filter and unscented Kalman filter under the same simulation conditions. (C) 2017 American Society of Civil Engineers. A splitter plate, which divides the whole captured air flow between the turbojet and ramjet engines, is a key component of the inlet system in turbine-based combined-cycle engines. The aerodynamic force acting on the thin splitter plate with a single pivot may engender vibration and, in turn, flow-field variations at the start and end of the mode transition phase. A loosely coupled method was used to simulate the process of fluid-structure interaction. The results showed that the deformation of the splitter plate is, in fact, a process in which the elastic restoring force struggles against the aerodynamic force under the action of damping. At turbojet mode, the splitter plate can attain a maximum displacement of 7.20 mm. The terminal shock was observed to move back and forth in the flowpath. The mass flow rates in the turbojet and ramjet flowpaths varied by 5.91 and 44.34%, respectively. At ramjet mode, the inlet fell into the unstart state with a greater displacement of 8.95 mm. The mass flow rates in the turbojet flowpath, ramjet flowpath, and slot-coupled cavity varied by 1.69, 23.91, and 51.85%, respectively. (C) 2017 American Society of Civil Engineers. This paper studies the use of a fuzzy proportional integral derivative (PID) controller based on a genetic algorithm (GA) in a docking maneuver of two spacecraft in the space environment. The docking maneuver consists of two parts: translation and orientation. To derive the governing equations for the translational phase, the Hill linear equations in a local vertical-local horizontal (LVLH) frame will be used. In the fuzzy PID (FPID) controller design, two fuzzy inference motors will be utilized.
The first, the single-input fuzzy inference motor (SIFIM), has only one input, and a separate SIFIM is defined for each state variable. The second, the preferrer fuzzy inference motor (PFIM), represents the control priority order of each state variable and plays a supervisory role under large deviations. This FPID controller handles the translational phase of the servicer's docking maneuver with a stable, nonrotating target. The conflicting objective functions are the distance errors from the set point and the control efforts. To include the control limit in the optimization problem, the maximal value of the thrust force is constrained. Considering these objective functions, a statistical analysis of the GA parameters will be performed, and the test with the minimum fuel consumption and minimum deviations of the servicer from the equilibrium point will be chosen as the best test. The three-dimensional (3D) Pareto frontiers corresponding to the best test will be plotted, the optimal points related to the objective functions will be demonstrated on them, and the time response figures corresponding to these points will then be generated. The results prove that this controller shows efficient performance in the docking maneuver of the servicer spacecraft. In comparison with similar work, a number of system performance parameters (e.g., settling time) will be improved, and overshoot (a critical parameter in the docking maneuver) will be reduced. (C) 2017 American Society of Civil Engineers. Towing spinning debris in space is subject to the risk of twist, which increases the likelihood of collision. To reduce the twist potential, a twist suppression method is proposed for the viscoelastic tether in this paper. A two-dimensional model of the system is developed, taking into account the attitude of the two end bodies and the tether slackness. A twist model describing the twist length is established to investigate the tension change during twist.
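The FPID scheme in the docking abstract above wraps fuzzy inference and GA tuning around an ordinary PID loop. Stripped of those layers, the underlying regulator on a single translational axis can be sketched as below; the double-integrator plant and hand-picked gains are illustrative assumptions, not values from the paper:

```python
def pid_step(state, err, kp, ki, kd, dt):
    """One update of a discrete PID controller.
    state = (integral, previous_error); returns (u, new_state)."""
    integral, prev_err = state
    integral += err * dt
    deriv = (err - prev_err) / dt
    return kp * err + ki * integral + kd * deriv, (integral, err)

def regulate_axis(x0=10.0, v0=0.0, kp=1.0, ki=0.0, kd=2.0, dt=0.1, steps=600):
    """Drive one relative-position axis (a double integrator, i.e. thrust
    directly sets acceleration) toward the set point 0."""
    x, v = x0, v0
    state = (0.0, -x0)  # seed prev_err so the first derivative term is zero
    for _ in range(steps):
        u, state = pid_step(state, -x, kp, ki, kd, dt)  # err = setpoint - x
        v += u * dt     # thrust acceleration
        x += v * dt
    return x, v
```

In the paper's setup, the SIFIM/PFIM fuzzy layer would adjust the equivalent of kp, ki, and kd online, and the GA would search the tuning parameters against the fuel and deviation objectives.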
An impedance-based tension controller is designed to regulate tension actively by altering the tether's unstretched length. Three cases, with thrusts of 1, 2, and 5 N, are studied to validate the feasibility of the method. It is shown that without tension control, towing removal is challenging for the system owing to the twist potential and the large tension induced. However, the twist length is reduced dramatically and the relative distance of the two bodies is maintained under tension control. It is also shown that the controller is robust to measurement noise, and its performance is better for the larger thrust. (C) 2017 American Society of Civil Engineers. An adaptive predictor-corrector guidance algorithm is presented to improve the performance of atmospheric entry guidance. In the predictor, a virtual entry terminal (VET) is employed instead of the actual terminal to reduce the computational burden. The VET is adaptively selected from a reference trajectory according to the vehicle's authority to cancel disturbances; thus, it is reachable by the vehicle in different cases. In the corrector, an effective modification method for the bank angle and the angle of attack is designed. With the predictor-corrector algorithm, the vehicle is able to reach the VET. After the VET, a reference trajectory tracking law is employed to fly the vehicle to the actual entry terminal. The adaptive entry guidance method is verified using Monte Carlo simulations. A comparison with the conventional predictor-corrector algorithm shows that the computation time is significantly reduced by employing the VET in the adaptive algorithm. (C) 2017 American Society of Civil Engineers. The authors investigate the mechanical properties of sintered lunar regolith.
Using JSC-1A and DNA lunar simulants, they study the influence of changes in glass content, main plagioclase series, and ilmenite content on a defined sintering process and on the mechanical properties of the resulting sintered samples. Ilmenite addition up to 20 wt% of the regolith showed a negligible effect on the sintered product. The anorthite plagioclase endmember cannot be replaced by albite, which is responsible for the low sintering temperature of DNA and covers up the effect of the glass phase. The vacuum environment was shown to have a positive effect on JSC-1A sintering: the grains bond at a lower temperature than in air, thus preventing the formation of additional porosity and increasing the compression strength up to 152 MPa, compared with only 98 MPa for sintering JSC-1A in air. (C) 2017 American Society of Civil Engineers. In a wind tunnel, the Mach number in the test section is an important parameter that should be predicted quickly and accurately. In building a Mach number prediction model, the main issue is the large-scale, high-dimensional data. To solve this issue, the feature subsets ensemble (FSE) method has been proposed. However, a major drawback of the FSE method is that a large number of submodels must be combined. In this paper, the maximum entropy pruning (MEP) method is proposed to overcome this drawback in the FSE Mach number prediction model. The MEP method finds a subset of submodels that best approximates the full set of submodels by maximizing the quadratic Renyi entropy criterion. Experiments demonstrate that, with far fewer submodels than the FSE and other Mach number models, the MEP-FSE Mach number model improves the prediction performance (i.e., the generalization) and meets the requirements on forecasting speed and root mean square error (less than 0.002). (C) 2017 American Society of Civil Engineers. 
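The pruning criterion named in the MEP abstract above, quadratic Renyi entropy, has the closed form H2(p) = -log(sum of p_i squared). A small self-contained illustration of the criterion itself (how the paper maps submodels to the probabilities p is its own contribution and is not reproduced here):

```python
import math

def quadratic_renyi_entropy(p):
    """Order-2 (quadratic) Renyi entropy: H2(p) = -log(sum(p_i ** 2)).
    It is maximal, log(n), for the uniform distribution over n outcomes,
    and 0 when all probability mass sits on a single outcome."""
    return -math.log(sum(pi * pi for pi in p))
```

Selecting a subset that keeps such an entropy term large favors diverse, non-redundant members, which is the intuition behind pruning an ensemble without losing its coverage.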
In this paper, a robust optimal controller design problem is investigated for the longitudinal dynamics of generic hypersonic vehicles. The vehicle dynamics involve parametric uncertainties, nonlinear and coupling dynamics, unmodeled uncertainties, and external atmospheric disturbances, which are considered as equivalent disturbances. A linear time-invariant robust controller is proposed with two parts: an optimal controller to achieve the desired tracking performance and a robust compensator to restrain the influence of the equivalent disturbances. The robustness properties and the optimal tracking control performance can be achieved simultaneously without compromise. Theoretical analysis and simulation results are given to demonstrate the advantages of the proposed control approach. (C) 2017 American Society of Civil Engineers. A two-axis gimbal-type X-band antenna used to transmit real-time image data from a satellite to a ground station is a potential source of microjitter, which can degrade image quality from high-resolution observation satellites. Activation of discontinuous stepper motors during pointing of the antenna mechanism in the azimuth and elevation directions is one of the primary sources of microjitter disturbances. It is desirable to enhance microjitter attenuation capability by isolating disturbances from stepper motor activation with a reliable technical solution not requiring major design modification of the antenna. In this study, the application of a low-torsional-stiffness spring-blade isolator on the output shaft of the stepper actuator is proposed. The spring-blade isolators were designed based on the derived torque budgets of the antenna, ensuring structural safety of the blades. The design of the spring-blade isolators was verified through structural analysis and a torque measurement test. In addition, the effectiveness of the isolators was demonstrated in a microjitter measurement test of the X-band antenna. 
(C) 2017 American Society of Civil Engineers. This study investigates an onboard aeroengine model (OBEM)-tuning system (OTS) based on a hybrid Kalman filter, which is mostly used to estimate the aeroengine performance. The OTS structure comprises two parts. One is an OBEM, the other a linear parameter-varying Kalman filter. The Kalman filter is used as a regulator to minimize the mismatch of the measured outputs between the OBEM and the actual engine. This study describes the method and procedure used to construct the OTS. The results from the application of the technique to the aeroengine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in online Kalman filter estimation accuracy. (C) 2017 American Society of Civil Engineers. In this paper, a novel model-free fuzzy adaptive control (MFFAC) scheme is developed to control the heading angle of fixed-wing unmanned aerial vehicles (UAVs). It is common knowledge that the aerodynamics of the heading angle of fixed-wing UAVs are difficult to accurately model and are subject to wind disturbances. Therefore, it is difficult to implement conventional model-based control to the heading angle control problem. To overcome this difficulty, the authors propose a novel data-driven control approach. First, an adaptive neuro-fuzzy inference system (ANFIS) is designed to estimate the pseudo partial derivative (PPD), which is described as an equivalent dynamic linearization (EDL) technique for unknown nonlinear systems. Secondly, an extended model-free adaptive control (MFAC) strategy is proposed to control the heading angle of fixed-wing UAVs with wind disturbances. Finally, a discrete Lyapunov-based stability analysis is presented to prove the globally asymptotic stability of the proposed control scheme. 
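Model-free adaptive control of the kind described in the heading-angle abstract above maintains an online estimate of a pseudo partial derivative (PPD) linking input and output increments, then uses it in an incremental control law. A compact-form sketch follows; the first-order plant, gains, and set point are illustrative assumptions, and a projection-type update stands in for the ANFIS estimator the paper proposes:

```python
def make_plant(a=0.7, b=0.3):
    """A first-order plant y(k+1) = a*y(k) + b*u(k), unknown to the controller."""
    state = {"y": 0.0}
    def plant(u):
        state["y"] = a * state["y"] + b * u
        return state["y"]
    return plant

def mfac_track(plant, y_ref, steps=300, eta=0.5, mu=1.0, rho=0.6, lam=0.1):
    """Compact-form MFAC: estimate the PPD phi from I/O increments, then
    apply u(k) = u(k-1) + rho * phi * (y_ref - y(k)) / (lam + phi**2)."""
    phi, u_prev, du_prev = 1.0, 0.0, 0.0
    y_prev = plant(0.0)
    y = y_prev
    for _ in range(steps):
        if abs(du_prev) > 1e-8:  # projection-type PPD update
            phi += eta * du_prev * ((y - y_prev) - phi * du_prev) / (mu + du_prev ** 2)
        u = u_prev + rho * phi * (y_ref - y) / (lam + phi ** 2)
        y_prev, y = y, plant(u)
        du_prev, u_prev = u - u_prev, u
    return y
```

With this toy plant the output settles near the set point without the controller ever seeing the plant coefficients, which is the data-driven property the abstract emphasizes.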
The high-fidelity semiphysical simulations illustrate that accurate and stable control is achieved with the designed control strategy. (C) 2017 American Society of Civil Engineers. This investigation highlights the performance of epoxy-novolac interpenetrating polymer network (IPN) adhesive bonding of surface-modified polyether imide (PEI) to titanium (Ti) under elevated temperatures and an aggressive chemical environment as well as humid conditions. The results of physical and thermomechanical properties reveal that epoxy-novolac (4:1) IPN adhesive is the best combination. X-ray photoelectron spectroscopic (XPS) analysis demonstrates the formation of oxide functional groups on the PEI surface due to low-pressure plasma treatment, and the formation of oxide as well as nitride functional groups due to anodization and plasma nitriding on the titanium surfaces. The surface energy of the anodized titanium is higher than that of plasma-nitrided titanium owing to the higher polarity of oxygen compared to nitrogen, and it consequently shows higher adhesive joint strength. Epoxy-novolac (4:1) IPN adhesive bonding of polyether imide to titanium reveals that anodized titanium to plasma-treated polyether imide adhesive joints show higher bond strength in ambient conditions. However, when exposed to an aggressive environment, the plasma-nitrided titanium to polyether imide adhesive joint shows higher bond strength because the nitride layer is more stable than the oxide layer and consequently results in a durable interface between titanium and adhesive, leading to cohesive failure of the adhesive joints. (C) 2017 American Society of Civil Engineers. The aim of this work is to investigate the analytical solutions for the equations of motion of a rigid body about a fixed point through the process of decoupling Euler's dynamic equations.
This body is acted upon by a gyrostatic torque l = (l1, l2, l3) about the axes of rotation, in the presence of a moment about the same axes that depends on an external loading whose components are expressed as harmonic functions of time. The analytical solutions for the equations of motion are obtained under conditions consistent with the physical nature of the body, and the uniqueness of the solution is proved. Some new theoretical applications are presented when the body is symmetric about one of its principal axes and when the body is in complete symmetry. Graphical representations of the motion of the body are presented to show the effectiveness of the physical parameters of the body. Moreover, the phase plane plots are given to ensure that the considered motion is free of chaos. (C) 2017 American Society of Civil Engineers. This study investigated the influence of pre-service teachers' (n=142) perceived endogenous/exogenous instrumentality, goal commitment, and intrinsic/extrinsic motivation on their use of self-regulation strategies (effort regulation, management of time and study environment) for their teacher-education courses. Data were drawn from a customised survey and were statistically analysed using hierarchical multiple regressions. Results demonstrated that pre-service teachers' endogenous instrumentality was a significant contributor for explaining their use of self-regulation strategies. To facilitate pre-service teachers' use of self-regulation strategies for learning, our findings suggest that, in addition to having intrinsic motivation for learning in their teacher-education courses, they need appropriate understandings of how their current course content connects to their future goals as teachers.
Research into the experiences of casual relief teachers (CRTs) (substitute or supply teachers) across Australia and internationally has reported feelings of marginalisation among participants. These findings are concerning when one considers that students might be in the care of CRTs for an equivalent of 1 year or more throughout their schooling. When CRTs describe such feelings, there is a suggestion that they do not feel part of the community of practice in which they work. Accordingly, their opportunities for professional learning are often compromised, which has implications for their ability to maintain pedagogical knowledge and skills. This study used cluster sampling survey data to offer insights into professional challenges faced by CRTs. The discussion examines the self-determined skills of 59 Australian CRTs and the way schooling is organised, which may leave them feeling excluded rather than members of what should be their communities of practice. Teachers' capacity to learn intentionally and responsively in the classroom is particularly vulnerable during the first years in the profession. This study investigated the interrelations between early career teachers' turnover intentions, perceived inadequacy in teacher-student interaction, and sense of professional agency in the classroom. The survey data were collected from 284 in-service teachers with not more than 5 years of experience and analysed by structural equation modelling (SEM). The results showed that the negative relation between turnover intentions and early career teachers' sense of professional agency was completely mediated by perceived inadequacy in teacher-student interaction. The results indicate that experiences of insufficient abilities to solve pedagogically and socially challenging student situations have a crucial effect on early career teachers' capacity for adaptive reflection and active transformation of instruction.
This study examined whether prospective teachers' teaching-specific hopes significantly predicted their sense of personal responsibility. A total of 503 prospective teachers voluntarily participated in the study. Correlation and structural equation modelling analyses were conducted to examine the links between prospective teachers' teaching-specific hopes and sense of personal responsibility. Results showed that the relationships between prospective teachers' teaching-specific hopes and sense of personal responsibility were significant. Furthermore, the relationship between responsibility for teaching and hope for teaching was significantly more discernible than the relationships between responsibility for teaching and hopes for student achievement, student motivation, and relationships with students. Results also showed that, regardless of the effects of career choice satisfaction, prospective teachers' teaching-specific hopes for student motivation, relationships with students, and teaching significantly and selectively predicted the diverse aspects of their sense of personal responsibility. Implications for teacher education and directions for further studies were also discussed in the study. Educating for sustainability has been a key principle underpinning the primary/middle undergraduate teacher education programme at an Australian University for the past decade. Educating for sustainability seeks to provide knowledge and understanding of the physical, biological, and human world, and involves students making decisions about a range of ethical, social, environmental and economic issues, and acting upon them. This study (a part of the ongoing evaluation of our courses) focuses on pre-service teachers (PSTs) who have selected a minor in science and mathematics. 
Participatory and inclusive learning processes, transdisciplinary collaborations, experiential learning, and the use of local environment and community as learning resources as outlined by Sterling (2001) have formed the basis of much of our practice to develop PSTs' confidence and competence to teach science. This paper explores one pedagogical practice, environmental pledges which the preservice teachers undertook for 15 weeks. The focus is on the impact that undertaking an environmental pledge has had on the personal and professional lives of two groups, first, four cohorts of final-year science and mathematics pathway PSTs, and second, a small group of early-career teachers who had completed the course in previous years. Data have been collected from final-year science and mathematics students and early-career teachers using ethnographic methods to provide insight into their experiences of using the pledge. School physical education (PE) aims to develop students' knowledge and skills for lifelong participation in physical activity (PA). Unfortunately, many PE teachers report that motivating students is a significant challenge. The purpose of this study was to explore PE teacher perceptions about the effectiveness and acceptability of three self-determination theory-based motivational strategies on students' PA, motivation, and learning during PE lessons. Thirteen PE teachers from five schools in Western Sydney, Australia, participated in this study. We carried out semi-structured post-lesson interviews with PE teachers to gather information about the perceived effectiveness and acceptability of the three intervention strategies and whether these were sustainable teaching methods: (1) explaining relevance; (2) providing choice; and (3) complete free choice. Analysis of interview data revealed that teachers believed each strategy successfully enhanced student PA, enjoyment, motivation, and student learning. 
The findings also showed that our motivational teaching strategies were acceptable when embedded within certain PE contexts. Overall, the results have implications for future pre-service and in-service PE teaching practice. This work is in honour of Franz Kossmat (1871-1938) and his esteemed paper Gliederung des varistischen Gebirgsbaues, published in 1927 in Abhandlungen des Sächsischen Geologischen Landesamts, Volume 1, pages 1 to 39. It constitutes the foundation of the general subdivision of the Central European Variscides into several geotectonic zones and the idea of large-scale nappe transport of individual units. In the English translation presented here, an attempt is made to provide a readable text that still reflects Kossmat's style but is also accessible to a non-German-speaking community either working in the Variscan Mountains or having specific interests in historical aspects of geosciences. Supplementary notes provide information about Kossmat's life and the content of the text. Kossmat's work is a superb example of how important geological fieldwork and mapping are for progress in geoscientific research. This study aims to examine the value of personal norms in addition to the theory of planned behavior (TPB) variables (i.e., attitude toward behavior, subjective norm, perceived behavioral control, and behavioral intention) in explaining consumers' pro-environmental purchasing behavior. The hypotheses and model were formulated and tested with structural equation modeling using the data from 281 consumers who are active members of a U.S.-based recycling company. Model fit statistics indicate a good fit of empirical data and model structure for pro-environmental purchasing behavior. The findings suggest that while personal and subjective norms, attitudes toward behavior, and intention explain consumers' pro-environmental purchasing behavior, perceived behavioral control has no power in explaining behavioral intention.
Policy makers and marketing professionals are advised to adopt various social and sustainability marketing strategies that focus on communicating different normative aspects of purchasing decisions to promote pro-environmental consumer behaviors. The normative concerns covered in environmental behavior studies are mostly limited to subjective norms as represented in the TPB, which has been widely adopted in behavioral studies. By extending the TPB with personal norms, this study contributes to a better explanation of consumers' environmentally relevant purchase behaviors. Social marketing has gained popularity in health, where it is used to achieve social good. However, there is a lack of research beyond aggregated sales data exploring the debate on the socioeconomic and cultural relevance of such programs in public health in a low- and middle-income context. The purpose of this article is therefore (a) to understand the importance of the socioeconomic context that moderates the acceptability, affordability, accessibility, and availability (4As) of social marketing for HIV/AIDS prevention in India and (b) to analyze the barriers they may pose during the operationalization of the programs. This article performed a theory-driven evaluation using qualitative research tools such as documentary analysis and in-depth interviews of relevant stakeholders. Thematic analysis was followed to classify the emergent data. The 4As, comprising acceptability, affordability, availability, and accessibility, proposed in this article encapsulate the important role played by socioeconomic and cultural dimensions in shaping social marketing programs for HIV/AIDS prevention in India. Social marketing for whom, at what cost, and under what circumstances emerges as an interesting debate in this article.
An enhanced understanding of how social marketing harnesses the prevalent sociocultural and economic dynamics in a heterogeneous context like India not only refines social marketing as a theory but also provides opportunities for effective health policy making resulting in better health outcomes. While the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) provides numerous benefits to many enrolled families across the United States, including access to nutritious foods, some recent drops in maternal participation in Kentucky resulted from failures to retrieve those benefits. We explored perceived benefits of and encountered barriers to food benefit retrieval. Journey mapping included direct observations of client appointments, clinic lobby areas, and a shopping experience and was augmented with focus groups conducted in two urban and two rural areas. Major touchpoints before WIC appointments, during those appointments at clinics, and after appointments when redeeming food benefits were identified. Across touchpoints, mothers identified childcare, transportation issues, long waits, confusion regarding eligibility, problems scheduling appointments, and stigma as barriers to their ability to retrieve food instruments. Despite these barriers mothers value the benefits of WIC, especially access to healthy foods, infant formula, and nutrition education. This work demonstrates a method by which WIC mothers' experiences shed light on client service shortfalls and possible opportunities to improve client services. This study examines Latino parent-child interactions about foods and beverages requested in food retail environments in San Diego, CA. 
It seeks to extend our understanding of parent-child request interactions and purchases by studying how the number of product request interactions and purchases differ based on four factors that have been understudied in previous parent-child interaction research: parent gender, child gender, product healthfulness, and who initiated the request interaction (parent or child). By unobtrusively observing Latino parent-child dyads for the duration of a brief shopping trip, we found that parent and child gender are related to the number of request interactions initiated by parents and children. For gender-specific child-initiated request interactions, sons initiated more request interactions with fathers while daughters initiated more request interactions with mothers. Most request interactions were for products that were categorized as calorie dense, and a higher percentage of these products were purchased as a result of parent-initiated (vs. child-initiated) request interactions. The results provide important considerations for practitioners and researchers working on improving nutrition and reducing obesity. Assumptions about who is influencing whom in food store request interactions are challenged, requiring more research. A clear understanding of the knowledge, perceptions, beliefs, attitudes, and values of those whose behavior we hope to change is an essential task in a social marketing project. This formative research used mixed methods to identify consumer psychographics about high-fat milk among a low-income priority audience. High-fat milk users believe they choose the type of milk that is the healthiest for them and their families but had poor milk nutrition knowledge and an inaccurate appraisal of low-fat milk. Habit and taste preferences reinforced high-fat milk use but 2% milk users, a group that already adapted to reduced-fat milk, were the audience segment most willing to consider low-fat milk use. 
Low-fat milk use can be promoted by raising awareness that 2% milk is not low fat and by addressing knowledge deficiencies, while assuring the priority audience that 1% milk has all of the vitamins and minerals of whole or 2% milk but with less fat. The resonance of the promotion can be enhanced by incorporating words that work, such as vitamins, minerals, healthy, choice, and habit, and by avoiding the words switch and skim, which were viewed negatively by this low-income audience. Thirty-five percent of forestland in the United States is owned by individuals. The purpose of this research was to identify woodland owners' barriers to harvesting trees using the advice of a forester. Harvesting trees with the advice of a forester ensures a sustainable harvest that meets the needs of the woodland owner, as the forester makes recommendations depending on what the woodland owner wants to gain from their land. The research further informed the marketing mix by identifying woodland owners' perceptions about trusted communication channels, providing a framework for segmenting the audience, and pointing to viable outreach strategies for rural interventions. Results of mail (New England) and telephone (Mississippi) surveys indicated that selling trees for income was the lowest rated land use activity reported by woodland owners. Additionally, across both regions, the surveys indicated that the primary barrier to using a forester involved some form of distrust. When comparing trusted sources of information across the two regions, forestry experts were rated similarly, but family and other woodland owners were perceived as more trustworthy in New England compared to Mississippi. Both groups preferred to receive information in written form, a preference that was almost twice as high as receiving an e-mail.
This research provides the foundation for a marketing mix, improves the conservation community's body of knowledge regarding woodland owners' barriers to sustainable forest management actions, and provides broad recommendations for practitioners to use going forward. The competition between the chess computer Deep Blue and the former chess world champion Garry Kasparov in 1997 was a spectacle staged for the media. However, the chess game, like other games, was also a test field for artificial intelligence research. On the one hand, Deep Blue's victory was called a "milestone" for AI research; on the other hand, a dead end, since the superiority of the chess computer was based on pure computing power and had nothing to do with "real" AI. The article questions the premises of these different interpretations and places Deep Blue and its way of playing chess in the history of AI. This also requires an analysis of the underlying concepts of thinking. Finally, the essay calls for assuming different "ways of thinking" for man and computer. Instead of fundamental discussions of concepts of thinking, we should ask about the consequences of the human-machine division of labor. The long-established German Society for Internal Medicine (DGIM) profoundly incriminated itself through its actions and positions during the National Socialist era. The German clinical physician Paul Martini took on the task of reorganizing the DGIM prior to its first post-war convention in 1948 in Karlsruhe. Martini, who had himself opposed the Nazi regime, adopted a course of comprehensive integration. He strived to incorporate into the DGIM both physicians who had been persecuted by the Nazi regime and former moderate National Socialists. At the same time, he campaigned to preserve the pan-German nature of the conferences and aimed to rapidly make the DGIM compatible again with international research.
However, this path led to an allegedly apolitical focus on science and decades of largely failing to confront its Nazi past. This article discusses Michel Foucault's main writings on "madness and psychiatry" from his early works up to the 1970s. On the one hand, we reconstruct the overall theoretical and methodological development of his positions over the course of the different periods in his oeuvre. On the other hand, we also take a closer look at Foucault's philosophical considerations regarding the subjects of his investigations. After an initial introduction of our conceptual approach, we draw on the most recent research on Foucault to show to what extent the phenomenological description of the topic at hand and the historical-critical perspective that are reflected in his early writings of 1954 (the Introduction to Binswanger's Dream and Existence and Mental Illness and Personality) laid the ground for his later work. Moving on to Foucault's work during the 1960s, we look at the core features and methodological bases of his 1961 classic Folie et déraison (History of Madness). His propositions regarding the "absence of madness" in modernity are conceptualized as an inherently contradictory attempt to liberate the topic under study from the common assumptions at that time. We then situate his 1973/74 lectures on Psychiatric Power in the context of his shift towards analyzing the dynamics of power and highlight the renewed shift of focus in his statements on the "productivity" of madness as an effect of power. Finally, we sum up our critique by taking into account the history of the reception of Foucault's writings and ask about their potential significance for the contemporary philosophy and history of psychiatry. aDNA studies are a cooperative field of research with a broad range of applications including evolutionary biology, genetics, anthropology and archaeology. Scientists are using ancient molecules as source material for historical questions.
Colleagues from the humanities are observing this with both interest and concern because aDNA research is affecting academic identities as well as concepts of history and historiography. aDNA research developed in a way that can be described as a Hype Cycle (Jackie Fenn). Technological triggers such as Sanger sequencing and the polymerase chain reaction kicked off a multitude of experiments with ancient DNA during the 1980s and 1990s. Geneticists, microbiologists, anthropologists and many more euphorically joined a "molecule hunt". aDNA was promoted as a time machine. Media attention was enormous. As experiments and implementations began to fail and contamination was discovered to be a tremendous problem, media interest waned and many labs lost their interest. Some turned their disillusionment into systematic research into methodology and painstakingly established lab routines. The authenticity problem was first addressed by control-oriented measures but later approached from a more cognitive theoretical perspective as the pitfalls and limits of aDNA became clearer. By the end of the 2000s the field reached its current plateau of productivity. Cross-disciplinary debates, conflicts and collaborations are increasing critical reflection among all participants. Historians should consider joining the field in a kind of critical friendship to both make the most of its possibilities and give an input from a constructivist perspective. Most of the important genomic regions, especially the G/C-rich gene promoters, consist of sequences with potential to form G/C-tetraplexes on both of the DNA strands. In this study, we used three C-rich oligonucleotides (11Py, 21Py, and HTPy), of which 11Py and 21Py are located at various transcriptional regulatory elements of the human genome while HTPy is a C-rich strand of the human telomere sequence.
These C-rich oligonucleotides formed i-motif structures, as verified by circular dichroism (CD), UV absorption melting experiments, and native gel electrophoresis. The CD spectra revealed that 11Py and 21Py form i-motif structures at acidic pH values of 4.5 and 5.7 in the presence of 100 mM NaCl but remain unstructured at pH 7.0. However, 21Py can form a stable i-motif structure even at neutral pH in the presence of 1 mM MgCl2. UV thermal melting studies showed stabilization of the 21Py i-motif at pH 5.7 in the presence of Na+ or K+ with increasing concentration of MgCl2 or CaCl2 from 1 to 10 mM. A significant shift in the CD peaks of the HTPy sequence was observed: the positive peak shifted from 286 nm to 276 nm, while the negative peak shifted from 265 to 254 nm. Further, 1 mM Mg2+ was found to be indispensable for forming the i-motif structure at neutral pH. Under similar ionic conditions and neutral pH, all three C-rich sequences were able to form stable i-motif structures (11Py, 21Py) or altered i-motif/homoduplex structures (HTPy) in the presence of MgCl2 and cell-mimicking molecular crowding conditions of 40 wt% PEG 200. It is concluded that the presence of Mg2+ ions and molecular crowding agents induces and stabilizes i-motif structures in a physiological solution environment. Rising ecological concerns and the potentially harmful environmental impacts of rubber products are of prime importance to the industry. Therefore, implementation of sustainable, greener materials is required to minimize these detrimental influences. In this research, we investigated the beneficial influence of a naturally derived bio-resin in association with zinc oxide nanoparticles in highly dispersible silica (HDS) reinforced natural rubber (NR)/epoxidized natural rubber (ENR)-based composites.
This novel green composite offers impressive properties, which were analyzed based on bound rubber content, transmission electron microscopy, physico-mechanical, dynamic mechanical, and cure characteristics. Nanoindentation studies demonstrated the enhanced hysteresis phenomenon of the green composites. The small-angle X-ray scattering (SAXS) characterization was carried out using a Beaucage model, and the results corroborate that the insertion of bio-resin yields an ameliorated state of silica dispersion in the green composites. Overall, the study of the bio-resin has provided the impetus for employing it as an alternative to the expensive synthetic route of silane coupling agents and toxic process oils. This paper re-examines the low participation of young people from deprived communities through the lens of the capability approach. A fundamental problem for tackling widening participation is that much of the thinking of policymakers is grounded in the flawed 'poverty of aspiration' thesis. This paper contends that Sen's [1992. Inequality Re-Examined. Oxford: Clarendon Press; 1999. Development as Freedom. 1st ed. New York: Oxford University Press] capability approach offers a better way of theorising and understanding the persistent under-representation in higher education of young people from deprived communities. A comparative case study approach was conducted in two secondary schools in Scotland, each serving a deprived area and each of which has an intervention programme that promotes higher education. The study employed mixed methods (i.e. questionnaires and interviews) to investigate young people's aspirations and perceptions of their capabilities. The findings corroborate previous studies critical of the 'poverty of aspirations' thesis, which suggest that young people have high aspirations.
However, an understanding of this is enriched when appraised within the framework of the capability approach, as aspirations are rationalised against findings which demonstrate that these same young people are also confident in their capabilities and that social arrangements are instrumental in supporting capability development. This article argues that to better investigate the enduring relationship between social class background and inequalities in post-compulsory education necessitates a more comprehensive approach to thinking with Bourdieu, but also a need to move beyond his seminal, much used concepts. Through meta-analysis, we review how Bourdieusian theory has been used in widening participation research in mainly Anglophone contexts, and consider how including concepts from his wider 'toolbox' can aid this pursuit. We consider new theories and concepts that have emerged largely after Bourdieu and their appropriateness for research in Australian higher education. We explore how a 'practice-based' theory of widening participation might be developed, drawing on the work of Schatzki and Kemmis, which permits researchers to usefully consider the internal goods of a practice and the role of institutions and the non-human. We also suggest that incorporating intersectionality, as both a social theory of knowledge and an approach to analysis, facilitates exploration of routine practices and struggles and reveals the complexities, provisionality and becomingness of social positioning, subjectivities and change. Such theoretical extensions to Bourdieu's legacy enable more nuanced understandings of how complex and intersecting social inequalities in higher education are realised or challenged in countries beyond the global north. This paper discusses the layered nature of lifelong learning participation, bringing together fragmented insights into why adults do or do not participate in lifelong learning activities.
The paper will discuss the roles and responsibilities of individual adults, education and training providers and countries' social education policies, often labelled as the micro-, meso- and macro-level. The aim of this work is to add to the knowledge base a new model that attempts to integrate separate insights at the three different levels. Apart from discussing the relevance of the micro-, meso- and macro-level, together with a comprehensive model, the paper provides some recommendations for future research in the area of adult lifelong learning participation, such as the adoption of multilevel models, the need for more data linkage and the desire for more diversification of research in terms of geographical spread and types of educational activities adults can undertake. Comparative research has often emphasized the importance of external barriers (e.g. enrolment costs) to explain inequalities in lifelong learning participation. However, individuals, in particular the low educated, are often prevented from participation not only by external barriers but also by negative psychological dispositions about learning. In this article, we study how dispositions about learning as measured in PIAAC (2012) vary between countries. In particular, we assess how these cross-country differences are related to a number of design characteristics of the initial school system. We improve on the cross-sectional research design by controlling attitudes among adults for attitudes collected among primary school students, making use of diff-in-diff and pseudo-panel techniques. Overall, we find that strong external differentiation mechanisms, in particular tracking students at a young age and making extensive use of grade retention, are associated with less positive attitudes towards learning among adults. However, a number of methodological issues, related to small country samples and differences in data definition between surveys, call for further investigation.
This article provides some insight into the constraints on the potential of recognition of prior learning (RPL) to widen access to educational qualifications. Its focus is on a conceptual framework that emerged from a South African study of RPL practices across four different learning contexts. Working from a social realist perspective, it argues that RPL needs to be seen as a specialised form of pedagogy that enables navigation across different cultures of knowledge; this is inevitably a contested process because it questions dominant forms of knowledge and modes of knowledge production. The research found that a range of contextual factors impact on the feasibility of RPL, including the nature of the disciplinary domain and its associated knowledge structures, but the 'inner workings' of the practice also need to be taken into account. Drawing on Cultural Historical Activity Theory, the article presents a conceptual model of RPL as a specialised, boundary-crossing practice for engaging complex sociologies of knowledge, and it offers three generic configurations of practice, each requiring its own 'artistry of practice'. It concludes that further theoretical work is required in order to adequately conceptualise what is identified as the 'specialised discourses of experiential knowledge'. When attempting to use data to inform practice and policy, the availability, accuracy and relevance of that data are paramount. This article maps the range of users interested in data relating to the UK widening participation (WP) agenda. It explores some challenges associated with identifying, defining, obtaining and using data to inform decisions about targeting and monitoring WP initiatives associated with student access, achievement and progression. It considers the pragmatic and strategic response by different users of institutional WP data within the UK.
We use examples from previous institutional and commissioned WP research and evaluations undertaken over the past decade to illustrate some of the tensions concerning the access and assessment of WP data. We argue that whilst the increasing interest in WP participation data and evaluative feedback is commendable, attempts to establish a causal link between WP activity and changes in student awareness, aspiration, access and achievement are not straightforward. The diversity of producers, uses and users of WP data working in different sectors and institutions produces many challenges. The paper concludes with suggestions on ways the data could be improved. The Volume-Synchronized Probability of Informed Trading (VPIN) metric is proposed by Easley et al. (2011, 2012) (Journal of Portfolio Management, 37:118-128; Review of Financial Studies, 25:1457-1493) as a real-time measure of order flow toxicity in an electronic trading market. This study examines the performance of VPIN around inventory announcements and price jumps in crude oil and natural gas futures markets with a sample period from January 2009 to May 2015. We obtain several interesting results: (i) VPIN increases significantly around inventory announcements with price jumps as well as at jumps not associated with any scheduled announcements. (ii) VPIN does not peak prior to the events but shortly after them. (iii) A minor variation of VPIN based on exponential smoothing significantly improves the early warning signal property of VPIN, and this estimate of toxicity returns faster to the pre-event level. (c) 2017 Wiley Periodicals, Inc. Jrl Fut Mark 37:542-577, 2017 We address an important yet unanswered question: what are the economic determinants of implied volatility during zero-lower-bound periods? To answer this question, we examine time variations of the cap market implied volatility and investigate the economic determinants of the slopes and curvatures of the implied volatility curves. 
We find that unexpected unemployment and inflation shocks play an important role in explaining implied volatility curves for different maturities. We associate negative jumps in the volatility dynamics (Jarrow, Li, & Zhao, 2007) with two unexpected macroeconomic shocks. Our results provide an important implication for practitioners who prepare future exit strategies. (c) 2016 Wiley Periodicals, Inc. Jrl Fut Mark 37:578-598, 2017 There is a close link between prices of equity options and the default probability of a firm. We show that in the presence of positive expected equity recovery, standard methods that assume zero equity recovery at default misestimate the option-implied default probability. We introduce a simple method to detect stocks with positive expected equity recovery by examining option prices and propose a method to extract the default probability from option prices that allows for positive equity recovery. We demonstrate possible applications of our methodology with examples that include financial institutions in the United States during the 2007-09 subprime crisis. (c) 2016 Wiley Periodicals, Inc. Jrl Fut Mark 37:599-613, 2017 Cumulative prospect theory argues that the human decision-making process tends to improperly weight unlikely events. Another behavioral phenomenon, anchoring bias, is the failure to update beliefs away from established anchor points. In this study, we find evidence that equity option market investors both anchor to prices and incorporate a probability weighting function similar to that proposed by cumulative prospect theory. The biases result in inefficient prices for put options when firms have relatively high or relatively low implied volatilities. This has implications for the cost of hedging long portfolios and long individual equity positions. (c) 2017 Wiley Periodicals, Inc. 
Jrl Fut Mark 37:614-638, 2017 We prove that one cannot algorithmically decide whether a finitely presented Z-extension admits a finitely generated base group, and we use this fact to prove the undecidability of the BNS invariant. Furthermore, we show the equivalence between the isomorphism problem within the subclass of unique Z-extensions and the semi-conjugacy problem for deranged outer automorphisms. (C) 2016 Elsevier B.V. All rights reserved. Let I be a height two perfect ideal with a linear presentation matrix in a polynomial ring R = k[x, y, z]. Assume furthermore that, modulo an ideal generated by two variables, the presentation matrix has rank one. We describe the defining ideal of the Rees algebra R(I) explicitly and we show that R(I) is Cohen-Macaulay. (C) 2016 Elsevier B.V. All rights reserved. In this paper, we show that it is possible for a commutative ring with identity to be non-atomic (that is, there exist nonzero nonunits that cannot be factored into irreducibles) and yet have a strongly atomic polynomial extension. In particular, we produce a commutative ring with identity, R, that is antimatter (that is, R has no irreducibles whatsoever) such that R[t] is strongly atomic. What is more, given any nonzero nonunit f(t) ∈ R[t], there is a factorization of f(t) into irreducibles of length no more than deg(f(t)) + 2. (C) 2016 Elsevier B.V. All rights reserved. In this paper, we study certain properties of the stable homology groups of modules over an associative ring, which were defined by Vogel [12]. We compute the kernel of the natural surjection from stable homology to complete homology, which was itself defined by Triulzi [21]. This computation may be used in order to formulate conditions under which the two theories are isomorphic. Duality considerations reveal a connection between stable homology and the complete cohomology theory defined by Nucinkis [19]. 
Using this connection, we show that the vanishing of the stable homology functors detects modules of finite flat or injective dimension over Noetherian rings. As another application, we characterize the coherent rings over which stable homology is balanced, in terms of the finiteness of the flat dimension of injective modules. (C) 2016 Elsevier B.V. All rights reserved. The irreducible spin character values of the wreath products of the hyperoctahedral groups with an arbitrary finite group are determined. (C) 2016 Elsevier B.V. All rights reserved. Using the theory of D-modules, we prove some finiteness properties for local cohomology modules with respect to a family of supports on the spectrum of a k-algebra R which satisfies certain conditions introduced by L. Nunez-Betancourt in [15]. These results generalize in a sense most of the finiteness properties established by G. Lyubeznik in [9]. We also introduce the class of quasi-holonomic D-modules which properly contains the class of holonomic D-modules and is closed under taking submodules, quotients, extensions, direct limits, localizations at multiplicative subsets of R and local cohomology. (C) 2016 Elsevier B.V. All rights reserved. We apply the Nash-Moser theorem for exact sequences of R. Hamilton to the context of deformations of Lie algebras and we discuss some aspects of the scope of this theorem in connection with the polynomial ideal associated to the variety of nilpotent Lie algebras. This allows us to introduce the space H^2_{k-nil}(g, g), and certain subspaces of it, that provide fine information about the deformations of g in the variety of k-step nilpotent Lie algebras. Then we focus on degenerations and rigidity in the variety of k-step nilpotent Lie algebras of dimension n with n ≤ 7 and, in particular, we obtain rigid Lie algebras and rigid curves in the variety of 3-step nilpotent Lie algebras of dimension 7. 
We also recover some known results and point out a possible error in a published article related to this subject. (C) 2016 Elsevier B.V. All rights reserved. In this paper, we prove that relation-extensions of quasi-tilted algebras are 2-Calabi-Yau tilted. With the objective of describing the module category of a cluster-tilted algebra of euclidean type, we define the notion of reflection so that any two local slices can be reached one from the other by a sequence of reflections and coreflections. We then give an algorithmic procedure for constructing the tubes of a cluster-tilted algebra of euclidean type. Our main result characterizes quasi-tilted algebras whose relation-extensions are cluster-tilted of euclidean type. (C) 2016 Elsevier B.V. All rights reserved. Let A be a commutative ring containing the rationals. Let S be a multiplicatively closed subset such that 1 ∈ S and 0 ∉ S, T a cone in A such that S ⊆ T, and I an ideal in A. Then ρ_{S,T}(I) = {a | sa^{2m} + t ∈ I^{2m} for some m ∈ N, s ∈ S and t ∈ T} is an ideal. For a commutative ring the collection of non-reduced orders (total cones) is a fibration of the real spectrum. Both concepts carry information regarding multiple solutions in the constructible set associated with I, T and S. When the ring is a real regular domain, a non-reduced Nullstellensatz is presented that extends the real Nullstellensatz and relates these concepts. The notion of real multiplicity is proposed and examined for elements that are either positive definite (PD) or positive semi-definite (PSD) on the real spectrum. (C) 2016 Published by Elsevier B.V. Let K be an algebraically closed field of characteristic p ≥ 0. 
A generalized Fermat curve of type (k, n), where k, n ≥ 2 are integers (for p ≠ 0 we also assume that k is relatively prime to p), is a non-singular irreducible projective algebraic curve F_{k,n} defined over K admitting a group of automorphisms H ≅ Z_k^n so that F_{k,n}/H is the projective line with exactly (n + 1) cone points, each one of order k. Such a group H is called a generalized Fermat group of type (k, n). If (n - 1)(k - 1) > 2, then F_{k,n} has genus g_{n,k} > 1 and it is known to be non-hyperelliptic. In this paper, we prove that every generalized Fermat curve of type (k, n) has a unique generalized Fermat group of type (k, n) if (k - 1)(n - 1) > 2 (for p > 0 we also assume that k - 1 is not a power of p). Generalized Fermat curves of type (k, n) can be described as a suitable fiber product of (n - 1) classical Fermat curves of degree k. We prove that, for (k - 1)(n - 1) > 2 (for p > 0 we also assume that k - 1 is not a power of p), each automorphism of such a fiber product curve can be extended to an automorphism of the ambient projective space. In the case that p > 0 and k - 1 is a power of p, we use tools from the theory of complete projective intersections in order to prove that, for k and n + 1 relatively prime, every automorphism of the fiber product curve can also be extended to an automorphism of the ambient projective space. In this article we also prove that the set of fixed points of the non-trivial elements of the generalized Fermat group coincides with the hyper-osculating points of the fiber product model under the assumption that the characteristic p is either zero or p > k^{n-1}. (C) 2016 Elsevier B.V. All rights reserved. For a pivotal finite tensor category C over an algebraically closed field k, we define the algebra CF(C) of class functions and the internal character ch(X) ∈ CF(C) for an object X ∈ C by using an adjunction between C and its monoidal center Z(C). 
We also develop the theory of integrals and the Fourier transform in a unimodular finite tensor category by using the same adjunction. Our main result is that the map ch : Gr_k(C) → CF(C) given by taking the internal character is a well-defined injective homomorphism of k-algebras, where Gr_k(C) is the scalar extension of the Grothendieck ring of C to k. Moreover, under the assumption that C is unimodular, the map ch is an isomorphism if and only if C is semisimple. As an application, we show that the algebra Gr_k(C) is semisimple if C is a non-degenerate pivotal fusion category. If, moreover, Gr_k(C) is commutative, then we define the character table of C based on the integral theory. It turns out that the character table is obtained from the S-matrix if C is a modular tensor category. Generalizing corresponding results in finite group theory, we prove the orthogonality relations and the integrality of the character table. (C) 2016 Elsevier B.V. All rights reserved. Let C ⊂ P^2 be an irreducible and reduced curve of degree e. Let X be the blow-up of P^2 at r distinct smooth points p_1, ..., p_r ∈ C. Motivated by results in [10,11,7], we study line bundles on X and establish conditions for ampleness and k-very ampleness. (C) 2016 Elsevier B.V. All rights reserved. A smooth complex projective curve is called pseudoreal if it is isomorphic to its conjugate but is not definable over the reals. Such curves, together with real Riemann surfaces, form the real locus of the moduli space M_g. This paper deals with the classification of pseudoreal curves according to the structure of their automorphism group. We follow two different approaches existing in the literature: one coming from number theory, dealing more generally with fields of moduli of projective curves, and the other from complex geometry, through the theory of NEC groups. 
Using the first approach, we prove that the conformal automorphism group Aut(X) of a pseudoreal Riemann surface X is abelian if X/Z(Aut(X)) has genus zero, where Z(Aut(X)) is the center of Aut(X). This includes the case of hyperelliptic Riemann surfaces, already known by results of B. Huggins. By means of the second approach and of elementary properties of group extensions, we show that X is not pseudoreal if the center of G = Aut(X) is trivial and either Out(G) contains no involutions or Inn(G) has a group complement in Aut(G). This extends and gives an elementary proof (over C) of a result by P. Debes and M. Emsalem. Finally, we provide an algorithm, implemented in MAGMA, which classifies the automorphism groups of pseudoreal Riemann surfaces of genus g ≥ 2, once a list of all groups acting for such genus, with their signature and generating vectors, is given. This program, together with the database provided by J. Paulhus in [33], allowed us to classify pseudoreal Riemann surfaces up to genus 10, extending previous results by E. Bujalance, M. Conder and A. F. Costa. (C) 2016 Elsevier B.V. All rights reserved. Previous research has adopted various approaches to examining teachers' and students' relationships to mathematics. The current study extended this line of research and investigated six prospective elementary school teachers' experiences in mathematics and how they saw themselves as learners of mathematics. One-on-one interviews with the participants were conducted, and their written reflections were collected. A grounded-theory approach and a framework for analyzing mathematics identities were adopted in data analysis. The findings showed that the participants' development of obligations-to-oneself was associated with not only their opportunity to exercise conceptual agency but also their aesthetic experience with mathematics. Their views on themselves as learners of mathematics had cognitive, affective, and aesthetic dimensions. 
The findings suggest that teachers and students can engage in a reflection on their aesthetic involvement in doing mathematics. There is a need for a local theory of aesthetics in K-12 mathematics. Researchers have argued that integrating early algebra into elementary grades will better prepare students for algebra. However, currently little research exists to guide teacher preparation programs on how to prepare prospective elementary teachers to teach early algebra. This study examines the insights and challenges that prospective teachers experience when exploring early algebraic reasoning. Results from this study showed that developing informal representations for variables and unknowns and learning about the two interpretations of the equal sign were meaningful new insights for the prospective teachers. However, the prospective teachers found it a conceptual challenge to identify the relationships contained in algebraic expressions, to distinguish between unknowns and variables, to bracket their knowledge of formal algebra and to represent subtraction from unknowns or variables. These findings suggest that exploring early algebra is non-trivial for elementary prospective teachers and likely necessary to adequately prepare them to teach early algebra. This study investigated a highly accomplished third-grade teacher's noticing of students' mathematical thinking as she taught multiplication and division. Through an innovative method, which allowed for documenting in-the-moment teacher noticing, the author was able to explore teacher noticing and reflective practices in the context of classroom teaching as opposed to professional development environments. Noticing was conceptualized as both attending to different elements of classroom instruction and making sense of classroom events. The teacher paid most attention to student thinking and was able to offer a variety of rich interpretations of student thinking which were presented in an emergent framework. 
The results also indicated how the teacher's noticing might influence her instructional decisions. Implications for both research methods in studying noticing and teacher learning and practices are discussed. This article uses a self-study research methodology to explore teaching an online course for mathematics specialists. The course included weekly videoconferencing sessions and focused on supporting their development as mathematics coaches working with K-8 teachers to enhance mathematics teaching and learning. The central question for the self-study was about the design of the course and the characteristics of the learning environment that resulted from the design. The study included journal reflections and survey data. Three themes emerged in the analysis of the instructional decision making for the course: student autonomy and engagement, authenticity and practicality, and fostering community. Sparked by the conjunction of food, fuel, and financial crises, there has been an increasing awareness in recent years of the scarce and finite character of natural resources. Productive resources such as agricultural land have been touted by financial actors-such as merchant banks, pension funds, and investment companies-as providing the basis for a range of new "alternative" financial asset classes and products. While the drivers, motives, and rationales behind the increasing interest of turning farmland into a financial asset class have been traced by a number of scholars, the interpretations of, and interactions with, financial actors at the community level have received less attention. Based on qualitative research in rural Australia, this paper reveals the grounds on which finance-backed investments have been accepted and accommodated by communities in rural Australia and delineates the reasons that have led to feelings of unease or refusal. 
The paper thereby demonstrates that the financialization of farmland is neither abstract nor one-sided but rather a multidimensional process that not only includes financial actors but also the impacted rural populations in various ways. Positioning the activities of financial actors in Australia within the emerging research on the financialization of farmland, the paper endorses context-sensitive analyses to better interpret these recent transformations of the agri-food system. The treaties established between the United States federal government and American Indian nations imply U.S. recognition of Native political sovereignty. Political sovereignty encompasses not only the ability to govern oneself but also self-determination regarding resource use, including food. This paper addresses The White Pine Treaty of 1837, which acknowledges the Ojibwe people's right to hunt, fish, and harvest wild rice in their traditional landscape. This acknowledgement by extension recognizes the Ojibwe's right to food sovereignty. From the perspective of the Ojibwe, continuing these activities requires not simply controlling access to important food resources but also protecting their rights to maintain traditional relationships with the plants and animals that provide food and to manage the landscapes that provision them. Therefore, true food sovereignty necessitates protecting a people's relationships with the landscape. Appropriation of wild rice over the past century, however, has threatened food sovereignty among the Ojibwe because it has compromised their ability to maintain their traditional relationship with a staple food resource that is also central to their identity. In light of the White Pine Treaty, this threat to the Ojibwe's food sovereignty is effectively a threat to their political sovereignty and, we argue, a violation of the treaty agreement. 
The United Kingdom's approach to encouraging environmentally positive behaviour has been three-pronged, through voluntarism, incentives and regulation, and the balance between the approaches has fluctuated over time. Whilst financial incentives and regulatory approaches have been effective in achieving some environmental management behavioural change amongst farmers, ultimately these can be viewed as transient drivers without long-term sustainability. Increasingly, there is interest in 'nudging' managers towards voluntary environmentally friendly actions. This approach requires a good understanding of farmers' willingness and ability to take up environmental activities and the influences on farmer behavioural change. The paper aims to provide insights from 60 qualitative farmer interviews undertaken for a research project into farmers' willingness and ability to undertake environmental management, particularly focusing on social psychological insights. Furthermore, it explores farmers' level of engagement with advice and support networks that foster a genuine interest, responsibility and a sense of personal and social norm to sustain high quality environmental outcomes. Two conceptual frameworks are presented for usefully exploring the complex set of inter-relationships that can influence farmers' willingness to undertake environmental management practices. The research findings show how an in-depth understanding of farmers' willingness and ability to adopt environmental management practices and their existing level of engagement with advice and support are necessary to develop appropriate engagement approaches to achieve sustained and durable environmental management. Recent attention to communities "localizing" food systems has increased the need to understand the perspectives of people working to foster collaboration and the eventual transformation of the food system. 
University Cooperative Extension Educators (EEs) increasingly play a critical role in communities' food systems across the United States, providing various resources to address local needs. A better understanding of EEs' perspectives on food systems is therefore important. Inspired by the work of Stevenson, Ruhf, Lezberg, and Clancy on the social food movement, we conducted national virtual focus groups to examine EEs' attitudes about how food system change should happen, for what reasons, and who has the resources, power, and influence to effect change. The institutions within which EEs are embedded shape their perceptions of available resources in the community, including authority and power (and who holds them). These resources, in turn, structure EEs' goals and strategies for food system change. We find that EEs envision working within the current food system: building market-centric alternatives that address inequity for vulnerable consumers and producers. EEs bring many resources to the table but do not believe they can influence those who have the authority to change policy. While these findings could suggest EEs' limited ability to be transformative change agents, EEs can potentially connect their efforts with new partners that share perceptions of food system problems and solutions. As EEs increasingly engage in food system work and with increasingly diverse stakeholders, they can access alternative, transformational frames within which to set goals and organize their work. In the mid-1990s, fairtrade-organic registration data showed that only 9% of Oaxaca, Mexico's organic coffee 'farm operators' were women; by 2013 the female farmer rate had increased to 42%. Our research investigates the impact of this significant increase in women's coffee association participation among 210 members of two coffee producer associations in Oaxaca, Mexico. 
We find that female coffee organization members report high levels of household decision-making power and they are more likely than their male counterparts to report control over their coffee income. These significant advances in women's agency within the household are offset by the fact that the women experience significant time poverty as they engage in coffee production while bearing a disproportionate share of domestic labor obligations. The women coffee producers view organizational labor as a third burden on their time, after their reproductive and productive labor. The time poverty they experience limits their ability to fully participate in coffee organizational governance and consequently there are few women leaders at all levels of the coffee producer businesses. This is problematic because it limits women's ability to fully benefit from organizational membership: when women fully participate in governance they gain valuable business and leadership skills and producer associations with active female members may also be more likely to develop and maintain programs and policies that enhance gender equity. Our findings indicate that targeted agricultural development programs to improve gender equity among agricultural smallholders should involve creative ways to ease women's labor burdens and reduce their time poverty in order to facilitate full organizational participation. The research findings fill a gap in existing studies of agricultural global value chains (GVCs) by demonstrating how the certified coffee GVC depends on women's under- and unpaid labor not only within the household but also within producer organizations. This paper examines farmer intentions to adapt to global climate change by analyzing responses to a climate change scenario presented in a survey given to large-scale farmers (n = 4778) across the US Corn Belt in 2012. 
Adaptive strategies are evaluated in the context of decision making and farmers' intention to increase their use of three production practices promoted across the Corn Belt: no-till farming, cover crops, and tile drainage. This paper also provides a novel conceptual framework that bridges a typology of adaptation with concepts that help predict intentionality in behavior change models. This conceptual framework was developed to facilitate examination of adaptive decision making in the context of agriculture. This research effort examines key factors that influence farmers' intentions to increase their use of the practices evaluated given a climate change scenario. Twenty-two covariates are examined across three models developed for no-till farming, cover crops, and tile drainage. Findings highlight that farmers who believed they should adjust their practices to protect their farm from the negative impacts of increased weather variability were more likely to indicate that they would increase their use of each of the practices in response to climate change. Additionally, visiting with other farmers to observe their practices was positively associated with farmers' intentions to increase their use of the adaptive strategies examined. Farmers who were currently using no-till farming, cover crops, and tile drainage were also more likely to plan to increase their use of these practices in response to increased weather variability associated with climate change. However, farmers who reported high levels of confidence in their current practices were less likely to plan on changing their use of these practices in response to climatic changes. Smallholder farmers in Rattanakmondol District, Battambang Province, Cambodia face challenges related to soil erosion, declining yields, climate change, and unsustainable tillage-based farming practices in their efforts to increase food production within maize-based systems. 
In 2010, research for development programs began introducing agricultural production systems based on conservation agriculture (CA) to smallholder farmers located in four communities within Rattanakmondol District as a pathway for addressing these issues. Understanding gendered practices and perspectives is integral to adapting CA technologies to the needs of local communities. This research identifies how gender differences regarding farmers' access to assets, practices, and engagement in intra-household negotiations could constrain or facilitate the dissemination of CA. Our mixed-methods approach includes focus group discussions, semi-structured interviews, farmer field visits, and a household survey. Gender differences in access to key productive assets may affect men's and women's individual ability to adopt CA. Farmers perceive the practices and technologies of CA as labor-saving, with the potential to reduce men's and women's labor burden in land-preparation activities. However, when considered in relation to the full array of productive and reproductive livelihood activities, CA can disproportionately affect men's and women's labor. Decisions about agricultural livelihoods were not always made jointly, with socio-cultural norms and responsibilities structuring an individual's ability to participate in intra-household negotiations. While gender differences in power relations affect intra-household decision-making, men and women household members collectively negotiate the transition to CA-based production systems. This paper uses a multiple case study approach to researching people's everyday lives and experiences of six community farms and gardens in diverse settings in China and England. We argue that collective understandings of community are bound up in everyday action in particular spaces and times. 
Successful community farms and gardens are those that are able to provide suitable spaces and times for these actions so that their members can enjoy multiple benefit streams. These benefits are largely universal: in very different situations in both England and China, CSA members make strong connections with the land, the farmers and other members, even in cases where they rarely visit the farms and gardens. This suggests that community farming and gardening initiatives possess multi-dimensional transformational potential. Not only do they offer a buffer against industrialised and remote food systems, but they also represent therapeutic landscapes valued by those who have experienced time spent at or in connection with them. Our findings indicate that-regardless of location or cultural context-these benefits are durable, so that people who have been engaged in multiple activities at a community farm or garden continue to enjoy these benefits long after most of their engagement has ceased. The occurrence of genetic erosion in local maize varieties in Mexico is intensely debated. Recent publications from Mexico show contradicting results about the loss of local varieties. Genetic erosion is a complex process, and well-documented examples of actual genetic erosion are not common in the literature. We worked in a region in which adoption of improved varieties was negligible, but other factors affecting maize agriculture were at play. The objectives of the study were to describe changes in maize diversity in the last 10 years and to associate them with socio-economic and environmental changes in a region in Mexico's Central Highlands. We used richness and abundance of local varieties and diversity indices of races as indicators of maize diversity changes over time. We analyzed statistics and, based on interviews, evaluated maize diversity changes between 2005 and 2015. We interviewed 113 farmers on two occasions with intervals from 5 to 10 years. 
According to climate statistics, rainfall has declined and temperature has increased. We also found a decrease in the lake level during the past 35 years. The total population in the region has doubled since the 1960s. The indigenous population has not changed significantly. The number of people working in agriculture has decreased since the 1960s. Rain-fed agriculture decreased by 8.1% from 1990 to 2007. In the four villages studied, farmed land area had decreased between 1995 and 2015. This reduction varies between 22% and 39% depending on the village. Maize planted area decreased from 9675 to 8115 ha from 2003 to 2014. In the same period, avocado plantations grew from 34 to 786 ha. Despite these changes, we did not find significant changes in average landraces per farmer (2.13 ± 0.28 in 2015) or per village (4.15 ± 1.26 in 2015). Significant changes in maize races were not found either (1.91 ± 0.26 per farmer, 2.85 ± 0.86 per village in 2015). These results show that maize landrace diversity in the region is resilient but dynamic. The pig sector is struggling with citizens' negative attitudes. This may be the result of conflicting attitudes toward pig husbandry between citizens and other stakeholders. To obtain knowledge about these attitudes, the objectives of this study were (1) to determine and compare attitudes of various stakeholders toward animals, humans and the environment in the context of pig husbandry and (2) to determine and compare the acceptability, to various stakeholders, of publicly discussed issues related to pig husbandry. A questionnaire was distributed to citizens, conventional pig farmers, organic pig farmers, pig husbandry advisors and pig veterinarians. Respondents could indicate their attitude toward aspects related to animals, humans and the environment in the context of pig husbandry, and their opinion about the acceptability of issues in pig husbandry, e.g. piglet mortality and indoor pig housing. 
Based on measured attitudes and the acceptability of issues, the studied stakeholders could be divided into three distinctive groups. The group of citizens and organic pig farmers showed negative attitudes toward all aspects of pig husbandry; the group of conventional pig farmers and pig husbandry advisors showed negative attitudes only toward aspects related to economics; and the group of pig veterinarians showed negative attitudes toward specific aspects of pig husbandry. This indicates that stakeholders have different interests and different perspectives with regard to pig husbandry. The pig sector should learn to understand citizens' perspectives and take these into account in their line of work, in the implementation of animal welfare measures, and in their communication. In this paper, we explore the entrepreneurial leadership strategies and routine work of actors located across a diverse array of organizational settings (i.e., farmers' markets, community farms, community-supported agriculture programs, food and seed banks, local food print media) that combine to shape and sustain the Southern Arizona (AZ) local food system (LFS). We use the theoretical principles of institutional entrepreneurship and logic multiplicity to show how the strategies and routine work of local food actors at the organizational level combine to negotiate system-level meaning and structure within and across the Southern AZ LFS, which is an otherwise seemingly fragmented and contentious social space. We illustrate how the entrepreneurial work performed within multiple organizations and organizational types converges to form a hybrid (or blended) local food logic. Implications are discussed and recommendations for practice are proposed. This paper explores the movements, meanings and potential mobility of men and women as they seek to secure food resources. 
Using a gendered mobilities framework, we draw on 66 in-depth interviews in the Kongwa district of rural Tanzania, illustrating how people move; their motivations for and understandings of these movements; and the taboos, rituals, and cultural characteristics of movement that hold implications for men's and women's food security needs. Results show that male potential mobility and female relative immobility are critical factors in understanding how mobility affects food security differentially for men and women. We identify the links between mobilities and the development of social capital, particularly amongst men. We also illustrate problems with greater integration of women into the agricultural sector when these women risk stigma and censure from the increased physical movement that this integration requires. Implications from this study are examined in light of gender transformative approaches to agricultural interventions in sub-Saharan Africa. In the U.S. there has been considerable interest in connecting low-income households to alternative food networks like Community Supported Agriculture (CSA). To learn more about this possibility we conducted a statewide survey of CSA members in California. A total of 1149 members from 41 CSAs responded. Here we answer the research question: How do CSA members' (1) socioeconomic and demographic backgrounds, (2) household conditions potentially interfering with membership, and (3) CSA membership experiences vary between lower-income households (LIHHs) and higher-income households (HIHHs)? We divided members into LIHHs (making under $50,000 annually) and HIHHs (making over $50,000 annually). 
We present comparisons of LIHHs' and HIHHs' (1) employment, race/ethnicity, household composition and education, use of food support, and enjoyment of food-related activities; (2) conditions interfering with membership and major life events; and (3) sources of information influencing the decision to join, reasons for joining, ratings of importance of and satisfaction with various CSA attributes, gaps between importance of and satisfaction with various CSA attributes, valuing of the share and willingness to pay more, and impacts of membership. We find that LIHHs are committed CSA members, often more so than HIHHs, and that CSA members in California are disproportionately white, but that racial disproportionality decreases as incomes increase. We conclude by considering: (1) the economic risks that LIHHs face in CSA membership; (2) the intersection of economic risks with race/ethnicity and cultural coding in CSA; and (3) the possibilities of increasing participation of LIHHs in CSA. Agriculture remains the backbone of most African economies, yet land degradation severely hampers agricultural productivity. Over the last decades, scientists and development practitioners have advocated integrated soil fertility management (ISFM) practices to improve soil fertility. However, their adoption rates are low, partly because many farmers in sub-Saharan Africa are not fully aware of the principles of this system innovation. This has been attributed to a wide communication gap between farmers and other agricultural actors in agricultural knowledge and innovation systems (AKIS). We add to the literature by applying innovation system approaches to ISFM awareness processes. This study aims to assess if AKIS are effectively disseminating ISFM knowledge by comparing results from two sites in Kenya and Ghana, which differ in the uptake of ISFM. Social network measures and statistical methods were employed using data from key formal actors and farmers. 
Our results suggest that the presence of weak knowledge ties is important for the awareness of ISFM at both research sites. However, in Kenya AKIS are more effective, as there is a network of knowledge ties crucial not only for the dissemination but also for the learning of complex innovations. This is largely lacking in Ghana, where integration of formal and informal agricultural knowledge systems may be enhanced by fostering the function of informal and formal innovation brokers. Biofuels have transitioned from a technology expected to deliver numerous benefits to a highly contested socio-technical solution. Initial hopes about their potential to mitigate climate change and to deliver energy security benefits and rural development, particularly in the Global South, have unravelled in the face of numerous controversies. In recognition of the negative externalities associated with biofuels, the European Union developed sustainability criteria which are enforced by certification schemes. This paper draws on the literature on stewardship to analyse the outcomes of these schemes in two countries: the UK and Guatemala. It explores two key issues: first, how has European Union biofuels policy shaped biofuel industries in the UK and Guatemala? And second, what are the implications for sustainable land stewardship? By drawing attention to the outcomes of European demand for biofuels, we raise questions about the ability of European policy to drive sustainable land practices in these two cases. The paper concludes that, rather than promoting stewardship, the current governance framework effectively rubberstamps existing agricultural systems and serves to further embed existing inequalities. As ecologically and socially oriented food initiatives proliferate, the significance of these initiatives with respect to conventional food systems remains unclear. 
This paper addresses the transformative potential of alternative food networks (AFNs) by drawing on insights from recent research on food and embodiment, diverse food economies, and more-than-human food geographies. I identify several synergies between these literatures, including an emphasis on the pedagogic capacities of AFNs; the role of the researcher; and the analytical and political value of using assemblage and actor-network thinking to understand the far-reaching forces and power disparities confronting proponents of more ethical and sustainable food futures. Histories of dynamic psychotherapy in the late 19th century have focused on practitioners in continental Europe, and interest in psychological therapies within British asylum psychiatry has been largely overlooked. Yet Daniel Hack Tuke (1827-95) is acknowledged as one of the earliest authors to use the term 'psycho-therapeutics', including a chapter on the topic in his 1872 volume, Illustrations of the Influence of the Mind upon the Body in Health and Disease. But what did Tuke mean by this concept, and what impact did his ideas have on the practice of asylum psychiatry? At present, there is little consensus on this topic. Through in-depth examination of what psycho-therapeutics meant to Tuke, this article argues that late-19th-century asylum psychiatry cannot be easily separated into somatic and psychological strands. Tuke's understanding of psycho-therapeutics was extremely broad, encompassing the entire field of medical practice (not only psychiatry). The universal force that he adopted to explain psychological therapies, the 'Imagination', was purported to show the power of the mind over the body, implying that techniques like hypnotism and suggestion might have an effect on any kind of symptom or illness. 
Acknowledging this aspect of Tuke's work, I conclude, can help us better understand late-19th-century psychiatry - and medicine more generally - by acknowledging the lack of distinction between psychological and somatic in 'psychological' therapies. This article explores the history of 'subordination-authority-relation' (SAR) psychotherapy, a brand of psychotherapy largely forgotten today that was introduced and practised in inter-war Vienna by the psychiatrist Erwin Stransky (1877-1962). I situate 'SAR' psychotherapy in the medical, cultural and political context of the inter-war period and argue that - although Stransky's approach had little impact on historical and present-day debates and reached only a very limited number of patients - it provides a particularly clear example for the political dimensions of psychotherapy. In the early 20th century, the emerging field of psychotherapy was largely dominated by Freudian psychoanalysis and its Adlerian and Jungian offshoots. Psychotherapists' relations with academic psychiatry were often uneasy, but the psychodynamic schools succeeded in establishing independent institutions for training and treatment. However, as this article shows, the gulf between mainstream psychiatry and psychotherapy was not as wide as many histories of the psy-disciplines in the early 20th century suggest. In inter-war Vienna, where these conflicts raged most fiercely, Stransky's 'SAR' psychotherapy was intended as an academic psychiatrist's response to the challenge posed by the emerging competitors. Moreover, Stransky also proposed a political alternative to the existing psychotherapeutic schools. Whereas psychoanalysis was a liberal project, and Adlerian individual psychology was closely affiliated with the socialist movement, 'SAR' psychotherapy, with its focus on authority, subordination and social hierarchy, tried to translate a right-wing, authoritarian understanding of society into a treatment for nervous disorders. 
The First International Congress for Analytical Psychology was held in Zurich from 7 to 12 August 1958. On this occasion a small group of Israeli psychologists, represented by Erich Neumann, was accepted as a charter group member of the International Association for Analytical Psychology (IAAP), which marked the foundation of the Israel Association of Analytical Psychology. The history leading up to this official birth date is mainly associated with the efforts of Erich Neumann - and rightly so; however, a number of other therapists, scholars and patients have been forgotten or deleted from this historical narrative, to their detriment. While I was working on the edition of the correspondence between C. G. Jung and Erich Neumann I came across their names, which were often only casually mentioned in connection with some episode, and I have since tried to find out their stories and what happened to them. In this article I discuss the contributions to the development of analytical psychology in British Mandate Palestine, later Israel, of two such figures, Max M. Stern (1895-1982) and Margarete Braband-Isaac (1892-1986). Both had been in personal contact with C. G. Jung and built a bridge between the isolated Jewish therapists in British Mandate Palestine and the Zurich circles. In Tel Aviv they collaborated for a while with Neumann, with whom for different reasons both fell out. The article traces the causes of these controversies with Neumann and explores why these two figures were historically marginalized. The Netherne Hospital in Surrey is perhaps the most prestigious site in the history of British art therapy, associated with the key figures Edward Adamson and Eric Cunningham Dax, whose pioneering work involved the setting-up of a large studio for psychiatric patients to create expressive paintings. 
What is little-known, however, is the work of the designated scientist for psychiatric research, Hungarian Jewish emigre Francis Reitman, who was charged with an overall scientific analysis of the artistic products of the studio. Schooled in the biological psychiatric tradition of Ladislas J. Meduna in Budapest prior to his exile to the Maudsley Hospital in 1938 - and committed to treatments such as leucotomy and electro-convulsive therapy (ECT) - Reitman was an unusual candidate for research into the unconscious processes behind art and psychosis. Yet he authored two highly popular and widely reviewed books on his analyses of the abundant artistic output created by patients with schizophrenic diagnoses at the Netherne. In his Psychotic Art (1950) and Insanity, Art and Culture (1954), Reitman compared such schizophrenic images with those produced by artists under the influence of mescaline and examined the artistic output of patients having undergone leucotomy. This article draws on archival materials and Reitman's original research publications in order to reconstruct his theory of schizophrenic art within the complex context of postwar British psychiatry, negotiating as he did between biologically reductive understandings of Freudian and Jungian psychoanalytic categories, and ultimately synthesizing concepts from both. It also analyses Reitman's implicit theory of the therapeutic mechanism of art in the treatment of psychiatric patients. This article investigates the changing justifications of one of the hallmarks of orthodox psychoanalytic practice, the neutral and abstinent stance of the psychoanalyst, during the middle decades of the 20th century. 
To call attention to the shifting rationales behind a supposedly cold, detached style of treatment still today associated with psychoanalysis, explanations of the clinical utility of neutrality and abstinence by 'classical' psychoanalysts in the United States are contrasted with how intellectuals and cultural critics understood the significance of psychoanalytic abstinence. As early as the 1930s, members of the Frankfurt School discussed the cultural and social implications of psychoanalytic practices. Only in the 1960s and 1970s, however, did psychoanalytic abstinence become a topic within broader intellectual debates about American social character and the burgeoning 'therapy culture' in the USA. The shift from professional and epistemological concerns to cultural and political ones is indicative of the changing appreciation of psychoanalysis as a clinical discipline: for psychoanalysts as well as cultural critics, I argue, changing social mores and the professional decline of psychoanalysis infused the image of the abstinent psychoanalyst with nostalgic longing, making it a symbol of resistance against a culture seen to be in decline. This article outlines the emergence of ABA (Applied Behaviour Analysis) in the mid-20th century, and the current popularity of ABA in the anglophone world. I draw on the work of earlier historians to highlight the role of Ole Ivar Lovaas, the most influential practitioner of ABA. I argue that reception of his initial work was mainly positive, despite concerns regarding its efficacy and use of physical aversives. Lovaas' work, however, was only cautiously accepted by medical practitioners until he published results in 1987. Many accepted the results as validation of Lovaas' research, though both his methods and broader understanding of autism had shifted considerably since his early work in the 1960s. 
The article analyses the controversies surrounding ABA since the early 1990s, considering in particular criticisms made by autistic people in the 'neurodiversity movement'. As with earlier critics, some condemn the use of painful aversives, exemplified in the campaign against the use of shock therapy at the Judge Rotenberg Center. Unlike earlier, non-autistic critics, however, many in this movement reject the ideological goals of ABA, considering autism a harmless neurological difference rather than a pathology. They argue that eliminating benign autistic behaviour through ABA is impermissible, owing to the individual psychological harm and the wider societal impact. Finally, I compare the claims made by the neurodiversity movement with those made by similar 20th- and 21st-century social movements. This article traces the history of the US FDA regulation of nutrition labeling, identifying an 'informational turn' in the evolving politics of food, diet and health in America. Before nutrition labeling was introduced, regulators actively sought to segregate food markets from drug markets by largely prohibiting health information on food labels, believing such information would 'confuse' the ordinary food consumer. Nutrition labeling's emergence, first in the 1970s as consumer empowerment and then later in the 1990s as a solution to information overload, reflected the belief that it was better to manage markets indirectly through consumer information than directly through command-and-control regulatory architecture. By studying product labels as 'information infrastructure', rather than a 'knowledge fix', the article shows how labels are situated at the center of a legally constructed terrain of inter-textual references, both educational and promotional, that reflects a mix of market pragmatism and evolving legal thought about mass versus niche markets. 
A change to the label reaches out across a wide informational environment representing food and has direct material consequences for how food is produced, distributed, and consumed. One legacy of this informational turn has been an increasing focus by policymakers, industry, and arguably consumers on the politics of information in place of the politics of the food itself. This article contains the first detailed historical study of one of the new high-frequency trading (HFT) firms that have transformed many of the world's financial markets. The study, of Automated Trading Desk (ATD), one of the earliest and most important such firms, focuses on how ATD's algorithms predicted share price changes. The article argues that political-economic struggles are integral to the existence of some of the 'pockets' of predictable structure in the otherwise random movements of prices, to the availability of the data that allow algorithms to identify these pockets, and to the capacity of algorithms to use these predictions to trade profitably. The article also examines the role of HFT algorithms such as ATD's in the epochal, fiercely contested shift in US share trading from 'fixed-role' markets towards 'all-to-all' markets. In this paper we reflect on a project called 'Synthetic Aesthetics', which brought together synthetic biologists with artists and designers in paired exchanges. We - the STS researchers on the project - were quickly struck by the similarities between our objectives and those of the artists and designers. We shared interests in forging new collaborations with synthetic biologists, 'opening up' the science by exploring implicit assumptions, and interrogating dominant research agendas. But there were also differences between us, the most important being that the artists and designers made tangible artefacts, which had an immediacy and an ability to travel, and which seemed to allow different types of discussions from those initiated by our academic texts. 
The artists and designers also appeared to have the freedom to be more playful, challenging and perhaps subversive in their interactions with synthetic biology. In this paper we reflect on what we learned from working with the artists and designers on the project, and we argue that engaging more closely with art and design can enrich STS work by enabling an emergent form of critique. Contributing to recent scholarship on the governance of algorithms, this article explores the role of dignity in data protection law addressing automated decision-making. Delving into the historical roots of contemporary disputes between information societies, notably European Union and Council of Europe countries and the United States, reveals that the regulation of algorithms has a rich, culturally entrenched, politically relevant backstory. The article compares the making of law concerning data protection and privacy, focusing on the role automation has played in the two regimes. By situating diverse policy treatments within the cultural contexts from which they emerged, the article uncovers and examines two different legal constructions of automated data processing, one that has furnished a right to a human in the loop that is intended to protect the dignity of the data subject and the other that promotes and fosters full automation to establish and celebrate the fairness and objectivity of computers. The existence of a subtle right across European countries and its absence in the US will no doubt continue to be relevant to international technology policy as smart technologies are introduced in more and more areas of society. This article discusses the co-production of search technology and a European identity in the context of the EU data protection reform. The negotiations of the EU data protection legislation ran from 2012 until 2015 and resulted in a unified data protection legislation directly binding for all European member states. 
I employ a discourse analysis to examine EU policy documents and Austrian media materials related to the reform process. Using the concept of the 'sociotechnical imaginary', I show how a European imaginary of search engines is forming in the EU policy domain, how a European identity is constructed in the envisioned politics of control, and how national specificities contribute to the making and unmaking of a European identity. I discuss the roles that national technopolitical identities play in shaping both search technology and Europe, taking as an example Austria, a small country with a long history in data protection and a tradition of restrained technology politics. Using an analysis of the British Medical Journal over the past 170 years, this article describes how changes in the idea of a population have informed new technologies of medical prediction. These approaches have largely replaced older ideas of clinical prognosis based on understanding the natural histories of the underlying pathologies. The 19th-century idea of a population, which provided a denominator for medical events such as births and deaths, was constrained in its predictive power by its method of enumerating individual bodies. During the 20th century, populations were increasingly constructed through inferential techniques based on patient groups and samples seen to possess variable characteristics. The emergence of these new virtual populations created the conditions for the emergence of predictive algorithms that are used to foretell our medical futures. Why did the recumbent bicycle never become a dominant design, despite the fact that it was faster than the safety bicycle on the racetrack? Hassaan Ahmed et al. argue in their recently published paper that the main reason for the marginalization of the recumbent bicycle was semiotic power deployed by the Union Cycliste Internationale (UCI). 
Here, I demonstrate that the authors drew their conclusions from an incomplete application of the Social Construction of Technology (SCOT) framework. Understanding the diffusion of alternative bicycle designs requires considering more than speed, and more than the UCI as a powerful actor. The recumbent bicycle was fast, but rather tricky to ride, and was not really feasible for the transport needs of the working classes, which constituted the most relevant social group of bicycle users during the 1930s. Globally, there is increasing pressure on schools to enact change, and the literature indicates that transformational leadership is positively associated with school leaders' effectiveness at implementing positive reforms. Here, we report on a study conducted in the United Arab Emirates (UAE) within the current context of intense educational restructuring in the K-12 system. The purpose was to investigate whether school principals in the UAE practise transformational leadership, and whether they and their teachers perceived principals' leadership styles differently to their western counterparts. This study adopted a mixed methodology, and revealed variation in perceptions between principals and teachers related to whether principals were practising transformational leadership. However, when analysed using Hofstede's cultural framework, this variation may be related to cultural differences between the western orientation of the leadership model adapted by Emirati principals and the Islamic orientation of the population. Therefore, a new model of transformational leadership is proposed, based on a paradigm that may be more appropriate for Middle Eastern/Islamic contexts. This Modified Transformational Model may be useful to those leaders who wish to adopt transformational leadership with cultural accommodations. 
This work explores how mindful leadership practice can inform school and district leadership specifically as it occurs in professional learning communities (PLC). When school and district leaders create PLC cultures that encourage rich thinking and intentional practice, individual and organizational mindfulness is present. As leaders work to craft informed responses to the demands before them, it is argued that such mindfulness places them in a position to maximize learning from the experiences of the moment. As an organizational structure, the PLC provides school members with a location for the practice of mindful leadership. Yet, the PLC can become an end in itself, a structure that organizes participants but lacks the necessary goal orientation for actors to engage in motivating and purposeful work and create meaningful outcomes. We assert that the mindful institutionalization of PLCs orients the school toward cultural change embodied in a collective attention that guides the work of its members. We argue that to do so requires attention to deeply developed explanations of activities within the school setting, opportunities for formative, substantive data use, and on-the-ground real-time orientation to communal learning. We argue that attention to these themes would further enhance the knowledge and skill set available to PLC members. This article explores qualitative shadowing as an interpretivist methodology, and explains how two researchers, simultaneously collecting data with a video recorder, contextual interviews and video-stimulated recall interviews, conducted a qualitative shadowing study at six early childhood centres in Norway. This paper emerged through the discussion of this experience with another researcher, who had shared interests in early childhood leadership, about the benefits of this research methodology in studying leadership practices in early childhood centres. 
We argue that qualitative shadowing methodology is a powerful resource that can enrich leadership, learning and development within the early childhood sector. By facilitating reflective engagement between practitioners and researchers through qualitative shadowing, it is possible to enhance the exploration of complex phenomena such as early childhood leadership practice. In recent years, a principal supply shortage crisis has emerged in the USA. This problem has been exacerbated by an increase in principal departures, which has been found to be negatively related to school outcomes. While research exists on several determinants of principal turnover, any examination of the relationship between principals' affective reaction to pay relative to their intent to leave their position at a particular school is missing from the literature. This research seeks to fill this void by examining the association between California (USA) high school principals' pay satisfaction and turnover intentions (n=156). The importance of potential referent sources (i.e., teachers within the school districts, other high school principals within the school district, and other high school principals in different school districts) for pay satisfaction and the relationship between achievement and turnover intentions were also examined. This study uses a two-stage structural equation modelling approach and finds evidence to suggest that high school principals' pay satisfaction is influenced by the salaries of comparative peers and is negatively associated with principals' turnover intentions. Achievement was not found to be related to turnover intentions. Policy implications and future research recommendations are discussed. The necessity for schools to implement human resources management (HRM) is increasingly acknowledged. Specifically, HRM holds the potential of increasing student outcomes through the increased involvement, empowerment and motivation of teachers. 
In educational literature, however, little empirical attention is paid to the ways in which different HRM practices could be bundled into a comprehensive HRM system (content) and how HRM could best be implemented to attain positive teacher and student outcomes (process). Regarding the content, and following the 'AMO theory of performance', it is argued that HRM systems should comprise (A) ability-, (M) motivation- and (O) opportunity-enhancing HRM practices. Regarding the process, and based on the 'HRM system strength' literature, it is argued that when teachers perceive HRM as distinctive and consistent, and if they perceive consensus, this will enhance teachers' and schools' performance. By combining insights from educational studies on single HRM practices with HRM theories, this paper builds a conceptual framework which can be used to design HRM systems and to understand the way they operate. The notion of schools as 'loosely coupled' organizations has been widely discussed in the research literature. Many argue it is either a protective mechanism for schools to buffer external pressure or a barrier for implementing new reforms. Against the backdrop of systemic change and accountability, we applied a two-level hierarchical linear model to nationally representative data in the US, testing the 'loosely coupled' theory through examining the association between data-informed improvement efforts at the school level and data-informed instruction at the classroom level. Statistically significant associations were identified but with a small proportion of variance explained, indicating that the top-down systemic change strategy failed to tighten the system as intended. Alternatively, bottom-up strategies, such as professional learning communities, which operate under the assumption of working with loose coupling, should be considered. Effective education reform depends on its successful realization by the school leadership carrying out the reform. 
School principals and middle leaders in the 21st century re-examine their traditional role so as to understand complexities and ambiguities characterizing their various responsibilities within the context of school reform. As critical change agents and system players, formal leaders interpret reform demands and translate them into school practices through a process of sense-making. Though sense-making is an ongoing process that school leaders undergo personally and collectively during policy reforms, little attention has been paid to the role principals and middle leaders perform as sense-makers. This literature review article explores sense-making in school leadership through a holistic approach. It demonstrates how sense-making is framed in both theoretical and empirical studies as well as suggests implications and avenues for future research. Systems thinking is a holistic approach that puts the study of wholes before that of parts. This study explores systems thinking among school middle leaders - teachers who have management responsibility for a team of teachers or for an aspect of the school's work. Interviews were held with 93 school coordinators, among them year heads, heads of departments, evaluation coordinators, instruction coordinators, and information and communications technology coordinators. Data analysis revealed that systems thinking among school middle leaders consists of four characteristics: (1) seeing wholes; (2) using a multidimensional view; (3) influencing indirectly; and (4) assessing significance. The findings of this study expand the existing knowledge on systems thinking in school leadership, discussing practical implications as well as further research avenues. A continuing challenge for the education system is how to evaluate the wider outcomes of schools. 
Wider measures of success, such as citizenship or lifelong learning, influence each other and emerge over time from complex interactions between students, teachers and leaders, and the wider community. Unless methods are found to evaluate these broader outcomes, methods able to do justice to learning and achievement as emergent properties of the learner's engagement with his or her world, the education system will continue to focus on narrow measures of school effectiveness which do not properly account for complexity. In this article we describe the rationale and methodology underpinning a pilot research project that applied hierarchical process modelling to a group of schools as complex living systems, using software developed by engineers at the University of Bristol, called Perimeta. The aim was to generate a stakeholder-owned systems design which was better able to account for the full range of outcomes valued by each school, and for the complex processes which facilitate or inhibit them, thus providing a more nuanced leadership decision-making analytic. The project involved three academies in the UK. Launched in December 2014 atop a Delta IV Heavy from Cape Canaveral Air Force Station, the Orion vehicle's Exploration Flight Test 1 successfully completed the objective to stress the system by placing the uncrewed vehicle on a high-energy parabolic trajectory, replicating conditions similar to those that would be experienced when returning from an asteroid or a lunar mission. Unique challenges associated with designing the navigation system for Exploration Flight Test 1 are presented with an emphasis on how redundancy and robustness influenced the architecture. Two inertial measurement units, one GPS receiver, and three barometric altimeters comprise the navigation sensor suite.
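The core idea behind such a navigation architecture, refining an inertially propagated state with external measurements, can be illustrated with a scalar Kalman measurement update. This is a toy sketch only, not the Orion filter, which carries a full vehicle state and, as described in the flight-test paper, uses an extended Kalman filter with a factorized covariance; all numbers below are invented.

```python
import numpy as np

# Toy scalar Kalman measurement update: an inertially propagated position
# estimate (x_pred, with variance P_pred) is refined by a GPS-like
# measurement z (with noise variance R). Illustrative values only.

def kalman_update(x_pred, P_pred, z, R):
    """Refine the prediction with a measurement; return posterior mean/variance."""
    K = P_pred / (P_pred + R)          # Kalman gain: weight on the innovation
    x = x_pred + K * (z - x_pred)      # correction toward the measurement
    P = (1.0 - K) * P_pred             # posterior variance shrinks
    return x, P

x, P = kalman_update(x_pred=100.0, P_pred=25.0, z=104.0, R=5.0)
print(x, P)   # the estimate moves toward the more certain measurement
```

Because the measurement variance (5) is smaller than the prediction variance (25), the gain is large and the posterior leans toward the measurement while the variance drops below both inputs.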
The sensor data are multiplexed, using conventional integration techniques, and the state estimate is refined by the GPS pseudo- and delta-range measurements in an extended Kalman filter that employs UDU factorization. The performance of the navigation system during flight is presented to substantiate the design. An aggregate route model for strategic air traffic flow management is presented. It is an Eulerian model, describing the flow between segments of unidirectional point-to-point routes. Aggregate routes are created from flight trajectory data based on similarity measures. Spatial similarity is determined using the Fréchet distance and temporal similarity by comparing average ground speeds. The aggregate routes approximate actual traffic patterns. By specifying the model resolution, an appropriate balance between model accuracy and model dimension can be achieved. The dynamics of the traffic flow on the network of aggregate routes take the form of a discrete-time linear time-invariant system. The traffic flow controls are ground holding and predeparture rerouting. Strategic planning, to use the controls to modify the future traffic flow when local capacity violations are anticipated, is posed as an integer linear programming problem of minimizing a weighted sum of flight delays subject to capacity constraints. Two examples demonstrate the model formulation and results of strategic planning. First, ground delays are introduced to manage high demand in the Los Angeles center; second, ground holding and predeparture rerouting are used to manage a convective weather scenario in the same center. The main-belt asteroids are of great scientific interest and have become one of the primary targets of planetary exploration. In this paper, the accessibility of more than 600,000 main-belt asteroids is investigated. A computationally efficient approach based on Gaussian process regression is proposed to assess the accessibility.
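As a rough, self-contained sketch of how Gaussian process regression can act as a fast surrogate of this kind, the snippet below fits a zero-mean GP with a squared-exponential kernel to synthetic training pairs (a one-dimensional "orbital element" input and a made-up "delta-v" target). The data, kernel choice, and hyperparameters are all assumptions for illustration, not the authors' model.

```python
import numpy as np

# Minimal Gaussian process regression with a squared-exponential kernel.
# X are synthetic 1-D inputs (think: an orbital element); y is a synthetic
# smooth "delta-v" response. Illustration only, not the paper's surrogate.

def sq_exp_kernel(A, B, length_scale=1.0, sigma_f=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return sigma_f**2 * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X, y, X_star, length_scale=1.0, sigma_f=1.0, sigma_n=1e-3):
    """Posterior mean and variance of a zero-mean GP at test points X_star."""
    K = sq_exp_kernel(X, X, length_scale, sigma_f) + sigma_n**2 * np.eye(len(X))
    K_s = sq_exp_kernel(X, X_star, length_scale, sigma_f)
    alpha = np.linalg.solve(K, y)
    mean = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = sigma_f**2 - np.sum(K_s * v, axis=0)
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(2.1, 3.3, size=(40, 1))      # e.g. semimajor axis, AU (assumed)
y = 4.0 + 1.5 * np.sin(2.0 * X[:, 0])        # synthetic "delta-v", km/s
mean, var = gp_predict(X, y, X, length_scale=0.5)
print(float(np.max(np.abs(mean - y))))       # near-interpolation of the data
```

Once trained, such a regressor answers queries in milliseconds, which is the property that makes a sweep over hundreds of thousands of candidate targets tractable.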
Two transfer models consisting of globally optimal two-impulse and Mars gravity-assist transfers are established, which serve as a source of training samples for Gaussian process regression. Multistart and deflection techniques are incorporated into the numerical optimization solver to avoid local minima, thereby guaranteeing the quality of the training samples. The covariance function and hyperparameters, which dominate the regression process, are chosen carefully based on the correlation between samples. Numerical simulations demonstrate that the proposed method can achieve the accessibility assessment within tens of seconds, and the average relative error is only 1.33%. Mars gravity assist exhibits a significant advantage for the accessibility of main-belt asteroids because it reduces the total velocity increment by an average of 1.23 km/s compared with the two-impulse transfer. Furthermore, it is observed that 3976 candidate targets have potential mission opportunities with a total velocity increment of less than 6 km/s. The minimum-time constant-thrust circular orbit rephasing problem is studied using a curvilinear relative motion description for the dynamics. The resulting optimal control problem in the thrust orientation is formulated using the indirect method and investigated both analytically and numerically. By linearizing the relative motion equations, a key nondimensional parameter is identified, which characterizes the duration of the maneuver and its qualitative structure when changing from a short-maneuver to a long-maneuver time regime, passing through a transition zone. Approximate analytical solutions are obtained for the short- and long-maneuver regimes, providing an accurate estimation of the minimum maneuver time and a clear understanding of the evolution of the optimal thrust profile, which approaches two opposite bang-bang structures in the limit cases.
The nonlinear problem is investigated numerically and shown to be fully characterized by two nondimensional parameters instead of one. Finally, a comparison with different two-impulse maneuvers is conducted. The results can be used for constructing good initial guesses in more complex optimization problems and to support preliminary mission design. A mission of the U.S. Air Force Space Command is the positioning of Global Positioning System satellites to improve the accuracy of receiver navigation solutions. To that end, a technique is developed that, given a configuration specifying the number of satellites in each orbital plane, combines a nonlinear program with a simulation to find the constellation that optimizes performance. This approach is greatly expedited by a reduced set of configuration classes, which results in significantly increased efficiency in computer runtime and constellation management. Additionally, a metric is proposed for constellation performance that normalizes receiver dilution-of-precision accuracy values and weights trouble spots. A tool implementing the technique and the metric is constructed and applied to three configurations. The first includes the nominal 24-satellite operational constellation, and the other two are for 27 and 31 satellites. For the 24-satellite configuration, satellite positions similar to the nominal constellation are obtained (less than 4 degrees average shift in argument of latitude) and show improved performance according to at least two metrics, including the new one. This suggests that this approach will be useful for determining unique operationally realistic placement of satellites for other configurations. This paper deals with the precision control of tumbling multibody systems with uncertainties present in the descriptions of their mathematical models. A generic tumbling multibody system consisting of a rigid body with internal degrees of freedom is used. A two-step control methodology is developed.
First, a nominal system is conceived that best approximates the actual physical system. An analytical dynamics-based control methodology is used to obtain the nominal control force that ensures that this nominal system satisfies the control requirements. This is done using the control methodology proposed by Udwadia ("Optimal Tracking Control of Nonlinear Dynamical Systems," Proceedings of the Royal Society of London, Series A: Mathematical and Physical Sciences, Vol. 464, 2008, pp. 2341-2363). Second, an additional compensating generalized control force is designed to ensure that the actual controlled (uncertain) system tracks the trajectories of the nominal system so that the control requirements are also met by the actual system. This paper deals primarily with the second step and its combination with the first. Uncertainties in both the description of the system and the forces acting on it are considered. No linearizations or approximations are made in either of the steps, and the full nonlinear dynamical system is considered. The efficacy of the control methodology is demonstrated by applying it to two tumbling uncertain multibody dynamical systems. This paper discusses the feasibility of using drag and solar radiation pressure for a collision-avoidance maneuver. Usually, an alert about possible collision with another satellite or space debris is received a few days in advance, giving ample time to design and perform an avoidance maneuver. For a propulsionless satellite, drag and solar radiation pressure are the only natural forces that can be used for an orbit maneuver. The maneuver is performed by orienting the satellite such that the combined effect of both forces maximizes the change in the semimajor axis relative to the no-maneuver case. This causes a change in the orbital period, and eventually, after enough revolutions, the satellite avoids the collision.
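The magnitude of this period-change effect can be checked with a back-of-the-envelope calculation: a semimajor-axis offset da shifts the Keplerian period by dT = (3/2)(T/a)da per revolution, and the timing offset accumulates into an along-track displacement. The orbit and the 100 m offset below are assumed example values, not figures from the study.

```python
import math

MU = 398600.4418          # Earth's gravitational parameter, km^3/s^2

def period(a_km):
    """Keplerian orbital period in seconds for semimajor axis a (km)."""
    return 2.0 * math.pi * math.sqrt(a_km**3 / MU)

a = 6778.0                # km (~400 km altitude circular orbit), assumed
da = 0.1                  # km: a 100 m semimajor-axis change, assumed
T = period(a)
dT = 1.5 * (T / a) * da   # first-order period change: dT/da = (3/2) T/a
v = math.sqrt(MU / a)     # circular orbital speed, km/s
n_revs = 3 * 86400 / T    # revolutions flown in three days
drift = v * n_revs * dT   # accumulated along-track displacement, km
print(f"dT per rev = {dT:.3f} s, along-track drift after 3 days = {drift:.1f} km")
```

Even a 100 m semimajor-axis change produces an along-track drift of tens of kilometres over three days (the calculation assumes the offset is in place from the start), which is why small natural forces acting over many revolutions can suffice for collision avoidance.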
The control algorithm requires knowledge of the satellite's cross-sectional areas from all directions, as well as its drag and solar radiation properties. For these natural forces to provide control authority, the maximum and minimum cross-sectional areas must differ. Numerical examples for a real satellite show that the method is feasible. The along-track deviation that is accumulated after no more than three days of maneuvering is sufficient to reduce the collision probability to an acceptable level. Traditional flying qualities metrics use first-order descriptors of the "classical" aircraft modes such as the phugoid, short-period, Dutch roll, etc. These modes are often difficult to distinguish from one another in modern aircraft for which the dynamics are usually coupled, leaving designers to develop lower-order equivalent systems as approximations to the behavior of the real aircraft. On the other hand, modern optimal or robust control techniques directly address multiple-input/multiple-output systems with coupled dynamics, and they function well with higher-order measures of system behavior like state covariances or signal norms. This work presents a new method of approximating the traditional flying qualities requirements for piloted aircraft through a flight-condition-dependent recasting of these requirements as upper bounds on the state variances. A linear matrix inequality feasibility problem is developed to compute a control law that stabilizes the system for the given flight condition while satisfying both the state variance and actuator constraints. Devising the planar routes of minimal length that are required to pass through predefined neighborhoods of target points plays an important role in reducing the mission's operating cost. Two versions of the problem are considered. The first one assumes that the ordering of the targets is fixed a priori.
In such a case, the optimal route is devised by solving a convex optimization problem formulated either as a second-order cone program or as a sum-of-squares optimization problem. Additional route properties, such as continuity and minimal curvature, are considered as well. The second version allows the ordering of the targets to be optimized to further reduce the route length. We show that such a problem can be solved by introducing additional binary variables, which allows the route to be designed using off-the-shelf mixed-integer solvers. A case study demonstrates that the proposed strategy is computationally tractable. Gravitational and third-body perturbations can be modeled with sufficient precision for most applications in low Earth orbit. However, owing to severe uncertainty sources and modeling limitations, computational models of satellite aerodynamics and solar radiation pressure are bound to be biased. Aiming at orbital propagation consistent with observed satellite orbital dynamics, real-time estimation of these perturbations is desired. In this paper, a particle filter for the recursive inference and prediction of nongravitational forces is developed. Specifically, after assuming a parametric model for the desired perturbations, the joint probability distribution of the parameters is inferred by using a prescribed number of weighted particles, each consisting of one set of orbital elements and one set of parameters. The particle evolution is carried out by means of an underlying orbital propagator, and the Bayes rule is used to recursively update weights by comparing propagated orbital elements with satellite observations. The proposed formulation uses mean orbital elements as the only available measurements.
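A toy particle filter in this spirit (propagate each particle's state, reweight by Bayes' rule against an observed mean element, resample when the weights degenerate) can be sketched as follows. The one-parameter dynamics, a constant drag-like decay of the mean semimajor axis, and all noise levels are invented for illustration; the paper's propagator and parametric perturbation model are far richer.

```python
import numpy as np

# Toy particle filter: infer a single nongravitational-force parameter
# (a constant per-step decay rate of the mean semimajor axis) from noisy
# mean-element "observations". Dynamics and numbers are illustrative only.

rng = np.random.default_rng(1)
true_rate = -0.02          # km per step: the unknown "drag" parameter
a0 = 6778.0                # initial mean semimajor axis, km
sigma_obs = 0.05           # observation noise, km

steps = 50
obs = a0 + true_rate * np.arange(1, steps + 1) + rng.normal(0, sigma_obs, steps)

n_p = 500
rates = rng.uniform(-0.1, 0.1, n_p)   # particles: candidate parameter values
states = np.full(n_p, a0)             # each particle carries an orbit state
weights = np.full(n_p, 1.0 / n_p)

for z in obs:
    states = states + rates                                   # propagate
    lik = np.exp(-0.5 * ((z - states) / sigma_obs) ** 2) + 1e-300
    weights = weights * lik                                   # Bayes update
    weights = weights / weights.sum()
    # systematic resampling when the effective sample size degenerates
    if 1.0 / np.sum(weights**2) < n_p / 2:
        idx = rng.choice(n_p, size=n_p, p=weights)
        states, rates = states[idx], rates[idx]
        rates = rates + rng.normal(0, 1e-4, n_p)   # jitter keeps diversity
        weights = np.full(n_p, 1.0 / n_p)

estimate = float(np.sum(weights * rates))
print(f"estimated rate = {estimate:.4f} (truth {true_rate})")
```

Because a wrong rate produces a state error that grows with every revolution, the likelihood quickly concentrates the particle cloud around the true parameter, which is the mechanism that lets mean-element observations alone pin down the perturbation.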
This feature makes the algorithm a potentially valuable resource for space situational awareness applications, such as space-debris trajectory prediction from two-line elements, or for onboard force estimation from Global Positioning System data. High-fidelity simulations show that nongravitational perturbations can be estimated with 20% accuracy. The paper proposes two guidance algorithms for the target-attacker-defender scenario. The first is a combined guidance algorithm for the attacker that simultaneously achieves evasion from the defender and pursuit of the target. The second is a cooperative guidance algorithm for the target to evade the attacker and for the defender to pursue the evader. Both algorithms are derived under the assumption that performance is prescribed in the sense of the required maximum miss distance for pursuit and the required minimum miss distance for evasion, and the algorithms minimize the effort required to achieve the requirements. It turns out that both the combined guidance algorithm and the cooperative guidance algorithm share a very similar structure. The performance of the proposed algorithms is tested in numerical simulations. Research summary: Using a productivity technique (VCA model), we estimate the economic value created by a firm and appropriated by its stakeholders in two specific empirical contexts. In the first application, we use publicly available data from the U.S. airline industry to illustrate how the VCA model can be used with multiple stakeholder groups. In the second application, we provide estimates for three global automobile companies (GM, Toyota and Nissan), showing how the model can be reformulated using value added. In both industries we find substantial heterogeneity among firms in the creation and distribution of value.
We discuss strengths and limitations of the VCA model and implications for strategic management research. Managerial summary: Firms create value not only for shareholders, but also for other stakeholders, including employees, customers and suppliers. This article applies a method to quantify the new economic value created by a firm over an interval of time; the method also reveals the distribution of that value among the stakeholders. The proposed method gives managers some means to assess changes in the economic value created and distributed. We find that the creation and distribution of value has varied greatly among major U.S. airlines and global automakers in recent decades. Moreover, returns to shareholders typically accounted for only a small proportion of firms' total value creation and often had little relation to broader changes in the magnitude and distribution of value. Copyright (c) 2016 John Wiley & Sons, Ltd. Research summary: We take a microfoundational approach to understanding the origin of heterogeneity in firms' capacity to adapt to technological change. We develop a computational model of individual-level learning in an organizational setting characterized by interdependence and ambiguity. The model leads to organizational outcomes with the canonical properties of routines: constancy, efficacy, and organizational memory. At the same time, the process generating these outcomes also produces heterogeneity in firms' adaptive capacity to different types of technological change. An implication is that exploration policy in the formative period of routine development can influence a firm's capacity to adapt to change in maturity. This points to a host of strategic trade-offs, not only between performance and adaptive capacity, but also between adaptive capacities to different forms of change. Managerial summary: Why are firms differentially effective at adapting to technological change?
We argue that firms differ in the adaptive capacity of the routines that underlie their capabilities. These differences arise well before change occurs, and result because firms build routines that are differentially responsive to signals of performance decline associated with technological change. Thus, early managerial efforts to build superior productive efficiency must be complemented by efforts to build superior adaptive capacity. Our theory suggests that managers can prepare for technological change by implementing policies, in the formative period of organizational development, that promote individuals' exploration of novel actions. However, there are trade-offs because preparation aimed at building adaptive capacity to one type of technological change may limit adaptive capacity to other types of change. Copyright (c) 2016 John Wiley & Sons, Ltd. Research summary: Losing key employees to competitors allows an organization to engage in external boundary-spanning activities. It may benefit the organization through access to external knowledge, but may also increase the risks of leaking knowledge to competitors. We propose that the destination of departed employees is a crucial contingency: benefits or risks only materialize when employees leave for competitors that differ from the focal organization along significant dimensions, such as country or status group. In the context of the global fashion industry, we find that key employees' moves to foreign competitors may increase (albeit at a diminishing rate) their former employers' creative performance. Furthermore, firms may suffer from losing key employees to higher- or same-status competitors, but may benefit from losing them to lower-status competitors. Managerial summary: Losing key employees to competitors can provide organizations with access to external knowledge, but increase risks of leaking knowledge to competitors.
We find that an organization's access to external knowledge and its risks of knowledge leakage through employee mobility may be affected by whether its employees leave for competitors in a foreign country or in a different status group. In the context of the global fashion industry, we show that key employees' moves to foreign competitors increase (up to a point) their former employers' creative performance. Furthermore, firms may suffer from losing key employees to higher- or same-status competitors, but benefit from losing them to lower-status competitors. Hence, executives in creative industries and possibly beyond could welcome losing employees to competitors in foreign countries or to lower-status competitors. Copyright (c) 2016 John Wiley & Sons, Ltd. Research summary: We investigate the effect of incumbents' stock of downstream complementary assets on their product innovation during a disruptive technological change. We theorize that a firm's stock of downstream complementary assets, by providing critical information about shifting demand conditions, will play a catalytic role in firm adaptation during such a change. Using the advent of disruptive computer numerical control machine tools in the U.S. machine tool industry during the 1970s and 1980s as the context, we find that firms with greater stocks of downstream complementary assets are likely to be product innovation leaders during such a change. Managerial summary: Disruptive changes are challenging firms across industries. We concentrate on the U.S. machine tool industry during the 1970s and 1980s when Japanese manufacturers with disruptive computer numerical control systems challenged the U.S. manufacturers. We find that, under the threat of disruption, the greater the stock of downstream complementary assets a U.S. machine tool manufacturer has, the more likely it is to be the product innovation leader with the disruptive technology.
Our findings provide novel insights for managers in companies that face disruptive changes and can help them avoid the consequences of such changes as predicted by prior research. Copyright (c) 2016 John Wiley & Sons, Ltd. Research summary: Agency theory suggests that external governance mechanisms (e.g., activist owners, the market for corporate control, securities analysts) can deter managers from acting opportunistically. Using cognitive evaluation theory, we argue that powerful expectations imposed by external governance can impinge on top managers' feelings of autonomy and crowd out their intrinsic motivation, potentially leading to financial fraud. Our findings indicate that external pressure from activist owners, the market for corporate control, and securities analysts increases managers' likelihood of financial fraud. Our study considers external governance from a top manager's perspective and questions one of agency theory's foundational tenets: that external pressure imposed on managers reduces the potential for moral hazard. Managerial summary: Many of us are familiar with stories about top managers cooking the books in one way or another. As a result, companies and regulatory bodies often implement strict controls to try to prevent financial fraud. However, cognitive evaluation theory describes how those external controls could actually have the opposite of their intended effect because they rob managers of their intrinsic motivation for behaving appropriately. We find this to be the case. When top managers face more stringent external control mechanisms, in the form of activist shareholders, the threat of a takeover, or zealous securities analysts, they are actually more likely to engage in financial misbehavior. Copyright (c) 2016 John Wiley & Sons, Ltd. Research summary: Research on the link between financial and environmental performance implicitly assumes that firms will pursue profitable environmental actions.
Yet, clearly, factors beyond profitability influence firms' environmental choices. We treat these choices as organizational change decisions and hypothesize that adoption of environmental initiatives is influenced by a combination of profit, level of disruption caused, and external influences. We test our hypotheses by examining firms' choices regarding implementation of energy-savings initiatives. We find that degree of disruption, number of prior local adopters, and strength of environmental norms affect the adoption decisions. In addition, the effect of disruption is amplified by the implementation costs, but is mitigated by the number of prior local adopters. Managerial summary: Often, in trying to improve firms' environmental performance, academics and stakeholders have focused on actions that simultaneously improve environmental and financial performance. This assumes that firms will undertake projects that offer such dual benefits. We consider what might prevent firms from pursuing such 'win-win' initiatives. We focus on how the degree of disruption of an energy-saving initiative affects its probability of adoption. We find that firms are significantly more likely to adopt moderately profitable, but easy initiatives than more profitable but disruptive ones. We also examine internal and external factors that moderate the effect of disruption. Our findings suggest that in order to incentivize firms to improve environmental performance, it might be more beneficial to make these activities less disruptive than to make them more profitable. Copyright (c) 2016 John Wiley & Sons, Ltd. Research summary: Integrating the behavioral and institutional perspectives, we propose that a country's formal institutions, particularly its legal frameworks, affect managers' deployment of slack resources. Specifically, we explore the moderating effects of creditor and employee rights on the performance effects of slack.
Using longitudinal data from 162,633 European private firms in 26 countries, we find that financial slack enhances firm performance at diminishing rates, whereas human resource (HR) slack lowers performance at diminishing rates. However, financial slack has a more positive effect on firm performance in countries with weaker creditor rights, whereas HR slack has a more negative effect on performance in countries with stronger employee rights. The results provide a richer view of the relationship between slack and firm performance than currently assumed in the literature. Managerial summary: A key dilemma managers often encounter is whether, on the one hand, they should build in excess resources to buffer their firms from internal and external shocks and to pursue new opportunities or whether, on the other hand, they should develop lean firms. Our study suggests that excess cash resources, which are usually viewed as easy to redeploy, benefit firm performance, especially when firms operate in countries with weaker creditor rights. However, excess human resources, which are usually viewed as more difficult to redeploy, hamper firm performance, particularly when firms operate in countries with stronger labor protection laws. Thus, the management of slack resources critically depends on the characteristics of these resources (e.g., redeployability) and the institutional context in which managers operate. Copyright (c) 2016 John Wiley & Sons, Ltd. Research summary: This study employs longitudinal multilevel modeling to re-examine the relative importance of business unit, corporation, industry, and year effects on business unit performance. Total variance in performance is partitioned into stable variance and dynamic variance. Sources of these two parts of variance are explored.
Empirical results indicate that (1) stable effects of corporation-industry interaction are substantially important, but were unequally confounded with stable effects of business unit, corporation, and industry in results of previous studies; (2) stable effects of corporation, industry, and corporation-industry interaction, taken together, are of similar relative magnitude to stable effects of business unit; and (3) random and nonlinear year effects are very important in explaining dynamic variance. These findings extend our theoretical and empirical understanding of performance variability. Managerial summary: Whether stable or changing, business units themselves, corporate-parents, and industries influence business unit operations. This article investigates the relative effects of these factors on business unit performance. Although the traditional wisdom is that business unit is critical, this research finds that corporate-parent, industry, and interactions between these, taken together, are as influential as business unit. Specifically, interactions between corporate-parent and industry are important for over-time average business unit performance, indicating that a given corporate-parent unevenly influences its business units in different industries and that a particular industry unevenly influences business units within itself from different corporate-parents. This study also demonstrates that changes in business unit, corporate-parent, and industry are important drivers of over-time volatility of business unit performance and that effects of these changes differ. Copyright (c) 2016 John Wiley & Sons, Ltd. Research summary: We use a variance decomposition methodology to assess the degree to which board chairs may influence their companies' performance. To isolate the board chair effect, we focus on firms in which the CEO and board chair positions are separated. Using a U.S.
sample of 6,290 firm-year observations representing 1,828 board chairs in 308 different industries, our results indicate that the board chair effect is substantial at about nine percent. Drawing on resource dependency theory, we also theorize and show how this board chair effect is contingent on the task environment in which firms operate. Our results add to the literature examining the role and influence of board chairs and the context in which chairs may have a greater impact on performance. Managerial summary: Following institutional and regulatory changes, more firms are separating the CEO and board chair positions. With an increasing number of individuals separate from the CEO serving as board chairs, a critical question becomes: What influence do these separate board chairs have on firm performance? Prior research suggests that separate board chairs can provide important resources, including advice and counsel, legitimacy, information linkages, and preferential access to external commitments and support, to their CEOs, other top managers, and overall firms. In turn, who the board chair is and the individual's ability (or lack thereof) to provide these resources may have a significant impact on firm performance. Offering support for this perspective, we find that separate board chairs explain nine percent of the variance in firm performance. Copyright (c) 2016 John Wiley & Sons, Ltd. Research summary: Since Nickerson and Zenger (2002) proposed how vacillation may lead to organizational ambidexterity, large-sample empirical tests of their theory have been missing. In this paper, we empirically examine the performance implications of vacillation. Building upon vacillation theory, we predict that the frequency and scale of vacillation will have inverted U-shaped relationships with firm performance.
We test our hypotheses using patent-based measures of exploration and exploitation in the context of technological innovation and knowledge search. Managerial summary: Firms often shift their focus on technological innovation and knowledge search from seeking new and novel knowledge (i.e., exploration) to extending and refining existing knowledge (i.e., exploitation) or vice versa. We examine how the frequency and scale of firms' vacillating between exploration and exploitation may affect their performance. We find that changes that are either too infrequent or too frequent, as well as changes on too small or too large a scale, are undesirable. Copyright (c) 2016 John Wiley & Sons, Ltd. This Commentary looks at 'Globalization with Chinese Characteristics' (quanqiu hua yu zhongguo tese) as revealed through the lens of President Xi Jinping's recent speech to the World Economic Forum in Davos, Switzerland, in January 2017. In this, he sets out a positive role for the PRC in the 'Globalization' stakes. He also puts himself forward as 'Expert', rather than 'Red', in the ongoing polemic on the benefits of further reductions in barriers to doing business. But whether this may be taken at face value remains to be seen. Whilst the Chinese appear to promote more of 'Globalization' and the Americans seem to retreat from the model, the world economic community may well suspend its judgement. The People's Republic of China has achieved remarkable progress in the internationalization of the RMB by introducing a number of concrete measures to boost the RMB's status on the world stage since 2009. The ongoing RMB internationalization is being promoted against the background of deepening economic and financial integration in East Asia. In this article, we attempt to analyse RMB internationalization from the perspective of East Asian regional integration. We hypothesize that East Asian regional integration lays a broad foundation for China to push RMB internationalization forward. 
An internationalized RMB, we argue, will play a more important role in the process of East Asian regionalization. Thus, RMB regionalization could be an important and necessary step toward internationalization. The Chinese authorities should not only push the RMB toward internationalization under China's framework of domestic financial system reform, but should also integrate RMB internationalization into the process of East Asian economic and financial integration. Therefore, a win-win strategy of RMB internationalization for both China and East Asian countries is needed. In this study, we examined the role of guanxi as entrepreneurs' resource-obtaining mechanism in private sector firms, using a dataset of 184 publicly listed firms in China. We found that guanxi indeed played a positive role that helped private sector firms gain easier access to resources. We also found that guanxi exerted an even greater positive effect on private sector firms' resource obtaining than entrepreneurs' political participation, as guanxi is the lifeblood of business conduct and social interaction in Chinese culture. In this article, we explore the different roles that knowledge sharing and exploitative learning play in employees' innovative behaviour, and investigate the different moderating effects of employees' espoused national cultural values on the relationship between exploitative learning and innovative behaviour in Chinese IT-enabled global service firms with different ownership structures. We propose a theoretical model to characterize these antecedents of innovative behaviour. A structured research survey was conducted and data were collected from a sample of 484 full-time employees in 3 IT-enabled global service firms in the PRC. 
Results indicate that knowledge sharing is positively associated with innovative behaviour in multinational corporations and private IT-enabled global service firms; espoused power distance has a significant positive moderating effect on the relationship between exploitative learning and innovative behaviour in state-owned and private firms; and espoused collectivism has a significant moderating effect only in state-owned firms in China. Lastly, we explore the implications of our findings for the theory and practice of innovation. This study investigates whether the eight ancient principles of Javanese statesmanship (Asta Brata) can be employed as the basis for analysing managerial leadership excellence in Javanese organizations. Factor analysis, regression modelling and structural modelling are used to explain what constitutes leadership excellence in Javanese organizations. These findings, based on the perceptions of 312 Javanese managers, suggest they favour a paternalistic leadership style that is nurturing but not authoritative. This study highlights the importance of understanding Indonesia's bapak-ism, or reverence for the leader as a father figure, and its familial orientation of interdependency between management and employees. This study examines the effect of host country Internet infrastructure on multinational corporation (MNC) foreign expansion. Using Heckman's selection model on a sample of 2589 subsidiaries of 487 Korean MNCs between 1990 and 2011, we find that host country Internet infrastructure is important in MNC expansion decisions. In addition, we find that a well-developed Internet infrastructure within a host country attracts more investment from MNCs producing consumer rather than industrial goods and is more attractive to domestic market followers than market leaders. 
We find that the host country's Internet infrastructure is important for an MNC's foreign expansion decisions, suggesting that efficient communication within an MNC is critical in coordinating globalized MNC subsidiary operations. Switching intentions in the business-to-business (B2B) relationship context have a powerful impact on a firm's performance and are often considered in relation to perceived switching costs. This factor has also been considered a good predictor of actual turnover behaviour resulting in reduced market share and profitability of firms. However, despite the importance of switching intentions in B2B relationships, there is still no evidence either to support linkages to switching costs as a key driver of decision-making or to demonstrate interrelationships with firm performance. The author empirically examines the theoretical process of the cognitive assessment - behavioural intentions - performance linkage that explains a firm's likelihood of terminating B2B relationships, using three moderating variables. The results suggest that switching intentions are driven by switching costs and, similarly, have a direct effect on firm performance. Meanwhile, personal relationship loss costs, rather than other types of switching costs, serve an important role in determining the reduction of switching intentions. Finally, the author discusses insights about the present results and suggests future research directions. Using technological capabilities as embodied in machinery, organization, processes and products, this study examines the links with host-site institutions and regional production linkages. The statistical results show no relationship between these variables. In-depth interviews complement the quantitative findings. Overall, the results show that the government's localization efforts failed because too many joint-venture assemblers were approved in the 1990s when the domestic market was small. 
The lack of economies of scale also affected the growth of national suppliers. Hence, national producers are confined to low value-added segments and lack the quality to compete in export markets. Background: In a learning environment blending collaborative learning and project-based learning (PBL) based on Wolff's (2003) design categories, students developed their technology integration practices as well as their technological and collaborative skills. Purpose: The study aims to understand how seventh grade students perceive a collaborative web-based science project in light of Wolff's design categories. The goal of the project is to develop their technological and collaborative skills, to educate them about technology integration practices, and to provide an optimum collaborative, PBL experience. Sample: Seventh grade students aged 12-14 (n=15) were selected from a rural K-12 school in Turkey through purposeful sampling. Design and methods: The current study applied proactive action research since it focused on utilizing a new way to enhance students' technological and collaborative skills and to demonstrate technology integration into science coursework. Data were collected qualitatively through interviews, observation forms, forum archives, and website evaluation rubrics. Results: The results found virtual spaces such as online tutorials, forums, and collaborative and communicative tools to be beneficial for collaborative PBL. The study supported Wolff's design features for a collaborative PBL environment, applying features appropriate for a rural K-12 school setting and creating a digitally-enriched environment. 
As the forum could not be used as effectively as expected because of school limitations, more flexible spaces independent of time and place were needed. Conclusions: This study's interdisciplinary, collaborative PBL was efficient in enhancing students' advanced technological and collaborative skills, as well as exposing them to practices for integrating technology into science. The study applied design features for a collaborative PBL environment with certain revisions. Background: Numerous studies have been conducted to investigate the factors related to science achievement. In these studies, the classroom goal structure perceptions, engagement, and self-efficacy of the students have emerged as important factors to be examined in relation to students' science achievement. Purpose: This study examines the relationships between classroom goal structure perception variables (motivating tasks, autonomy support, and mastery evaluation), engagement (behavioral, emotional, cognitive, and agentic engagement), self-efficacy, and science achievement. Sample: The study participants included 744 seventh-grade students from 9 public schools in two districts of Gaziantep in Turkey. Design and methods: Data were collected through the administration of four instruments: Survey of Classroom Goals Structures, Engagement Questionnaire, Motivated Strategies for Learning Questionnaire, and Science Achievement Test. The obtained data were subjected to path analysis to test the proposed model. Results: Students' perceptions of classroom goal structures (i.e. motivating tasks, autonomy support, and mastery evaluation) were found to be significant predictors of their self-efficacy. Autonomy support was observed to be positively linked to all aspects of engagement, while motivating tasks were found to be related only to cognitive engagement. 
In addition, mastery evaluation was shown to be positively linked to all engagement variables except cognitive engagement, and self-efficacy and engagement (i.e. behavioral, emotional, and cognitive engagement) were observed to be significant predictors of science achievement. Finally, results revealed reciprocal relations among engagement variables, except for agentic engagement. Conclusions: Students who perceive mastery goal structures tend to show higher levels of engagement and self-efficacy in science classes. The study found that students who have high self-efficacy and who are behaviorally, emotionally, and cognitively engaged are more successful in science classes. Accordingly, it is recommended that science teachers utilize inquiry-based and hands-on science activities in science classes and focus on the personal improvement of the students. Furthermore, it is also recommended that they provide students with opportunities to make their own choices and decisions and to control their own actions in science classes. Background: Children generally adopt the behaviours and attitudes they see in their home environment. Because of this, education provided in the school can be effective as long as it is supported at home and, by extension, throughout the entire environment where the child interacts. Isolating the family from school undermines the continuance of the school's educational impact. 
In this sense, families do have a significant impact on their child's attitudes about science and technology. Purpose: The objective of this study is to determine how parents view science and technology, the factors that influence their views (gender, age, educational level), and the relationship between these views and the students' science academic achievement. Sample: The present study was conducted with the parents of 169 students attending randomly chosen primary schools in a city in western Turkey. Design and methods: 'The Scale for Determining Views of Parents regarding Science and Technology' (SFDVPAST) was developed by the researchers and used in the present study. The scale's reliability was 0.88. Data obtained from SFDVPAST were analysed with SPSS 11.5 using frequency (f), percentage (%), mean (X), standard deviation (SD), one-way MANOVA, a univariate ANOVA for each dependent variable as a follow-up test, and simple linear regression analysis to determine the relationships. Results: At the completion of this study, findings indicated that gender does not have an impact on how parents view science and technology, but age and educational level do impact parents' views on this topic. The science academic achievement of the student correlates with the views of his/her parents on science and technology. Conclusions: Parents' age and education level affected their views towards science and technology, but their gender did not. In addition, parents' positive views towards science and technology were associated with higher science academic achievement among their children. Background: Development of scientific understanding of secondary school students is considered to be one of the goals of environmental education. 
However, it is not quite clear what instructional strategies and what other factors contribute to the effectiveness of environmental education programs promoting this goal. Purpose: The aim was to analyze whether the instructional strategy applied in an outdoor environmental education program was successful in developing the students' scientific understanding and their interest in studying science at university, and whether the students' scientific understanding is influenced by their gender or intention to study science at university. Program description: The investigated program was 3-5 days long and was based on principles of an inquiry-based learning approach applied in the outdoor setting of an environmental education center. Sample: For the analysis, data were collected from 83 students (60 girls, 23 boys) of three grammar schools who participated in the program (mean age of 16.45 years). The control group consisted of 93 students (59 girls and 34 boys), with a mean age of 17.5 years. Design and methods: The research applied a quasi-experimental non-equivalent control group design in which both groups received the same instruments in the same time span. The instrument consisted of three parts: a 1-item Science and Engineering Indicator, 1 Likert-type item for assessing students' intention to study science, and a problem-based task for assessing students' understanding of the procedure of scientific inquiry. Results: The program seemed to positively affect students' understanding of scientific principles and procedures; however, no effect on their intention to study science at university was found. Conclusions: The evaluated strategy, which consisted of elements such as the application of mobile technology, balancing between teacher- and student-directed approaches, and emotion-based activities, was proven effective for developing students' scientific understanding. However, in order to increase students' intention to study science, a different or better developed strategy is needed. 
Background: From previous research among science teachers it is known that teachers' attitudes to their subjects affect important aspects of their teaching, including their confidence and the amount of time they spend teaching the subject. In contrast, less is known about technology teachers' attitudes. Purpose: Therefore, the aim of this study is to investigate Swedish technology teachers' attitudes toward their subject, and how these attitudes may be related to background variables. Sample: Technology teachers in Swedish compulsory schools (n=1153) responded to a questionnaire about teachers' attitudes, experiences, and background. Methods: Exploratory factor analysis was used to investigate attitude dimensions of the questionnaire. Groupings of teachers based on attitudes were identified through cluster analysis, and multinomial logistic regression was performed to investigate the role of teachers' background variables as predictors of cluster membership. Results: Four attitudinal dimensions were identified in the questionnaire, corresponding to distinct components of attitudes. Three teacher clusters were identified among the respondents, characterized by positive, negative, and mixed attitudes toward the subject of technology and its teaching, respectively. The most influential predictors of cluster membership were being qualified to teach technology, having participated in in-service training, teaching at a school with a proper overall teaching plan for the subject of technology, and teaching at a school with a defined number of teaching hours for the subject. Conclusions: The results suggest that efforts to increase technology teachers' qualifications and to establish a fixed number of teaching hours and an overall teaching plan for the subject of technology may yield more positive attitudes among teachers toward technology teaching. In turn, this could improve the status of the subject as well as students' learning. 
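The cluster-analysis step described above assigns each teacher's attitude profile to one of three clusters. A minimal sketch of the assignment step is given below; the centroid values, dimension scores, and the specific distance rule are hypothetical illustrations, not the study's data or method:

```python
# Illustrative sketch: assigning attitude profiles to the nearest cluster
# centroid, loosely mirroring a cluster analysis with three clusters.
# All numbers and labels here are fabricated for illustration.

def assign_cluster(profiles, centroids):
    """Assign each profile (list of attitude-dimension scores) to the
    nearest centroid by squared Euclidean distance."""
    labels = []
    names = list(centroids)
    for profile in profiles:
        dists = [sum((p - c) ** 2 for p, c in zip(profile, centre))
                 for centre in centroids.values()]
        labels.append(names[dists.index(min(dists))])
    return labels

# Hypothetical centroids on four attitude dimensions (1-5 Likert scale)
centroids = {
    "positive": [4.5, 4.3, 4.4, 4.2],
    "mixed":    [3.5, 2.8, 3.9, 2.6],
    "negative": [2.0, 2.2, 1.9, 2.4],
}

teachers = [[4.6, 4.1, 4.5, 4.0], [2.1, 2.5, 2.0, 2.2]]
print(assign_cluster(teachers, centroids))  # ['positive', 'negative']
```

In practice the centroids themselves would be estimated from the data (e.g., by k-means) rather than fixed in advance as they are in this toy example.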
Background: Textbooks are integral tools for teachers' lessons. Several researchers have observed that school teachers rely heavily on textbooks as informational sources when planning lessons. Moreover, textbooks are an important resource for developing students' knowledge, as they contain various representations that influence students' learning. However, several studies report that students have difficulties understanding models in general, and chemical bonding models in particular, and that students' difficulties understanding chemical bonding are partly due to the way it is taught by teachers and presented in textbooks. Purpose: This article aims to delineate the influence of textbooks on teachers' selection and use of representations when teaching chemical bonding models and to show how this might cause students' difficulties in understanding. Sample: Ten chemistry teachers from seven upper secondary schools located in Central Sweden volunteered to participate in this study. Design and methods: Data from multiple sources were collected and analysed, including interviews with the 10 upper secondary school teachers, the teachers' lesson plans, and the contents of the textbooks used by the teachers. Results: The results revealed strong coherence between how chemical bonding models are presented in textbooks and by teachers, indicating that textbooks influence teachers' selection and use of representations for their lessons. As discussed in the literature review, several of the selected representations were associated with alternative conceptions of, and difficulties understanding, chemical bonding among students. Conclusions: The study highlights the need to close the gap between research and teaching practices, focusing particularly on how representations of chemical bonding can lead to students' difficulties in understanding. The gap may be closed by developing teachers' pedagogical content knowledge regarding chemical bonding and scientific models in general. 
Background: Correct identification of misconceptions is an important first step in gaining an understanding of student learning. More recently, four-tier multiple choice tests have been found to be effective in assessing misconceptions. Purpose: The purposes of this study are (1) to develop and validate a four-tier misconception test to assess the misconceptions of pre-service physics teachers (PSPTs) about geometrical optics, and (2) to assess and identify PSPTs' misconceptions about geometrical optics. Sample: The Four-Tier Geometrical Optics Test (FTGOT) was developed based on findings from interviews (n=16), open-ended testing (n=52), and pilot testing (n=53), and was administered to 243 PSPTs from 12 state universities in Turkey. Design and methods: The first phase of the study was the development of the four-tier test. In the second phase, a cross-sectional survey design was used. Results: Validity of the FTGOT scores was established by means of several qualitative and quantitative methods: (1) content and face validation by six experts; (2) positive correlations between the PSPTs' correct scores considering only the first tiers of the FTGOT and their confidence scores for this tier (r=.194), and between correct scores considering the first and third tiers and confidence scores for both of those tiers (r=.218), as evidence for construct validity; (3) false positive (3.5%), false negative (3.3%), and lack of knowledge (5.1%) percentages all below 10%, as evidence for content validity of the test scores; and (4) exploratory factor analysis conducted over correct and misconception scores, which yielded meaningful factors for the correct scores, as further evidence for construct validity. Additionally, Cronbach alpha coefficients were calculated for correct scores (r=0.59) and misconception scores (r=0.42) to establish the reliability of the test scores. 
Six misconceptions about geometrical optics, which were held by more than 10% of the PSPTs, were identified and considered to be significant. Conclusions: The results from the present investigation demonstrate that the FTGOT is a valid and reliable instrument for assessing misconceptions in geometrical optics. The increased use of the internet and information technology to enable online transactions, distribute information and customer reviews through e-commerce and social networking sites, online advertising, and data mining is both creating efficiencies and challenging our privacy. This paper highlights the growing fear that current federal and state laws in the United States are not adequate to protect the privacy of the data collected while we process electronic transactions or browse the internet for information. The notions of efficiency and cost-benefit are used to justify a certain level of privacy loss, thus treating privacy as a commodity to be transacted rather than a right to be defended. To address developing concerns about personal privacy invasions, we discuss the role and limits that both government regulation and self-regulation play in protecting our privacy. The recommendation system is a useful tool that can be employed to identify potential relationships between items and users in electronic commerce systems. Consequently, it can remarkably improve the efficiency of a business. The topic of how to enhance the accuracy of a recommendation has attracted much attention from researchers over the past decade. As such, many methods to accomplish this task have been introduced. However, more complex calculations are normally necessary to achieve a higher accuracy, which is not suitable for a real-time system. Hence, in this paper, we propose a weight-based item recommendation approach to provide a balanced formula between the recommended accuracy and the computational complexity. 
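As a rough illustration of a neighbour-based item recommendation of this kind: the paper's newly defined distance is not reproduced here, so the weighted mean-absolute-difference distance below is a hypothetical stand-in used only to show the overall prediction flow:

```python
# Illustrative sketch of weight-based item recommendation. The distance
# function is a generic stand-in (weighted mean absolute rating difference
# over co-rating users), not the paper's actual definition.

def item_distance(ratings_a, ratings_b, weight=1.0):
    """Weighted mean absolute rating difference over users who rated both
    items; infinite if the items share no raters."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return float("inf")
    return weight * sum(abs(ratings_a[u] - ratings_b[u]) for u in common) / len(common)

def predict(user, item, ratings, k=2):
    """Predict a rating as the mean of the user's ratings on the k items
    closest to the target item."""
    others = [(item_distance(ratings[item], ratings[o]), o)
              for o in ratings if o != item and user in ratings[o]]
    nearest = sorted(others)[:k]
    if not nearest:
        return None
    return sum(ratings[o][user] for _, o in nearest) / len(nearest)

# ratings[item][user] = rating; MovieLens-style toy data
ratings = {
    "m1": {"u1": 5, "u2": 4, "u3": 1},
    "m2": {"u1": 5, "u2": 5, "u3": 2},
    "m3": {"u1": 1, "u2": 2, "u3": 5},
}
print(predict("u3", "m1", ratings))       # → 3.5 (mean of m2 and m3 ratings)
print(predict("u3", "m1", ratings, k=1))  # → 2.0 (nearest neighbour m2 only)
```

The trade-off the abstract describes shows up in `k` and in the distance computation: fewer neighbours and a cheaper distance lower the computational cost at some expense in accuracy.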
The proposed methods employ a newly defined distance to describe the relationship between the users and the items, after which the recommendation and prediction algorithms are developed. A data analysis based on the MovieLens datasets indicates that the methods applied can obtain suitable prediction accuracy and maintain a relatively low computational complexity. The growing participation in social networking sites is altering the nature of social relations and changing the character of political and public dialogue. This paper contributes to the current debate on Web 2.0 technologies and their implications for local governance, identifying the perceptions of policy makers on the use of Web 2.0 in providing public services and on the changing roles that could arise from the resulting interaction between local governments and their stakeholders. The results obtained suggest that policy makers are willing to implement Web 2.0 technologies in providing public services, but preferably under the Bureaucratic model framework, thus retaining a leading role in this implementation. The learning curve of local governments in the use of Web 2.0 technologies is a factor that could influence policy makers' perceptions. In this respect, many research gaps are identified and further study of the question is recommended. Consumers' risk perception and trust are considered among the most important psychological states that influence online behavior. Despite the number of empirical studies that have explored the effects of trust and risk perceptions on consumer acceptance of e-services, the field remains fragmented and the posited research models are contradictory. To address this problem, we examined how trust and risk influence consumer acceptance of e-services through a meta-analysis of 67 studies, followed by tests of competing causal models. The findings confirm that trust and risk are important to e-services acceptance but that trust has a stronger effect size. 
We found that certain effect sizes were moderated by factors such as the consumer population under study, the type of e-service, and the object of trust under consideration. The data from the meta-analysis best support the causal logic that positions trust as antecedent to risk perceptions. Risk partially mediates the effects of trust on acceptance. A leading cause of identity theft is that attackers gain access to the victim's personal credentials. We are warned to protect our personal identifiers, but we need to share our credentials with various organizations in order to obtain services from them. As a result, the safety of our credentials depends on both the ability and diligence of the various organizations with which we interact. However, recent data breach incidents are clear proof that existing approaches are insufficient to protect the privacy of our credentials. Using a Design Science methodology, we propose a new technology, veiled certificates, which includes features that prevent fraudulent use of users' credentials and provides a degree of user anonymity. We also incorporate biometric authentication so that service providers know they are dealing with the owner of the credentials. Results of a bench-scale test that demonstrates the feasibility of the approach are reviewed. We also suggest four major applications which could take advantage of these certificates. More people have access to Multimedia Messaging Service (MMS, a.k.a. mobile picture messaging) than to the Internet, but mobile education markets have yet to adopt MMS as a content delivery mechanism. This paper investigates the role of carrier interoperability as an enabler of MMS in mobile multimedia distance learning. Using instructor reuse of content and learner access to content as feasibility criteria, we empirically evaluate the performance, user adoption, and commercial market of MMS-based mobile education. 
This study deployed a value-added service that broadcasts videos via MMS to cell phones, and conducted a 9-month public education campaign with weekly broadcasts on breast cancer. We selected a video format and markup language that is compatible with domestic carriers and cell phones, and supports existing educational material. To contrast behaviors between participants with and without access to the Internet, we offered participants the same content via MMS, email and the Web. 277 participants enrolled in the campaign; 120 opted to receive the videos via mobile messaging, and 157 had Internet access and opted to receive videos via email or the Web. Campaign analytics reveal that all participants without Internet access successfully received the MMS video broadcasts, and significantly, one-third of participants with Internet access opted to receive the videos via MMS as well. We conclude with a discussion of why participants with Internet access may have chosen MMS over Internet-based alternatives. We also estimate the size of the market for MMS-based mobile education, and distinguish it from the person-to-person messaging market. This research is beneficial to educators targeting diverse demographics and education disparities, and to mobile commerce economists evaluating emerging markets. The field of "coevolutionary studies" is the origin of many evocative stories in evolutionary biology, as well as a demonstration of the value of studying the ecological interactions of whole organisms and populations. This field exploded after the publication of "Butterflies and Plants: A Study in Coevolution," a 1964 paper co-authored by entomologist Paul Ehrlich and botanist Peter Raven. However, this paper argues that the foundation for "Butterflies and Plants" was laid in the previous decades, in the work of economic entomologists, crop-plant breeders, and insect physiologists. Using the work of an influential insect physiologist, Gottfried S. 
Fraenkel, this paper examines the prehistory of coevolutionary studies, showing that practical research on insect feeding in the 1940s and 1950s transformed plant chemicals into active biological molecules: causal forces modeled on hormones. Insect physiologists were the first to study the effects of these molecules on insects. Yet, rather than redefining insect-plant interactions in terms of reductionist molecular causation, they sought a more integrative explanation. Not only did these insect biologists see plants as active participants in their ecological and evolutionary landscapes, but they also came to see evolutionary history as the "raison d'être" of plant molecules and insect feeding behavior. This paper expands our understanding of the generative role that physiology and molecular methods played in the development of concepts and practices in evolutionary biology. Furthermore, it contributes to a growing literature that undermines the historical division between proximate and ultimate causation in biology. Exploration has always centered on claims: for country, for commerce, for character. Claims for useful scientific knowledge also grew out of exploration's varied activities across space and time. The history of the Canadian Arctic Expedition of 1913-18 exposes the complicated process of claim-making. The expedition operated in and made claims on many spaces, both material and rhetorical, or, put differently, in several natural and discursive spaces. In making claims for science, the explorer scientists navigated competing demands on their commitments and activities from their own predilections and from external forces. Incorporating Arctic spaces into the Canadian polity had become a high priority during the era when the CAE traversed the Arctic. Science through exploration, in practices on the ground and especially through scientific and popular discourse, facilitated this integration. 
So, claiming space was something done on the ground, through professional literature, and within popular narratives, and not always for the same ends. The resulting narrative tensions reveal the messy material, political, and rhetorical spaces where humans do science. This article demonstrates how explorer-scientists claimed material and discursive spaces to establish and solidify their scientific authority. When the CAE claimed its spaces in nature, nation, and narrative, it refracted a reciprocal process whereby the demands of environment, state, and discourse also claimed the CAE. Anton Pannekoek (1873-1960) was both an influential Marxist and an innovative astronomer. This paper will analyze the various innovative methods that he developed to represent the visual aspect of the Milky Way and the statistical distribution of stars in the galaxy through a framework of epistemic virtues. Doing so will not only emphasize the unique aspects of his astronomical research, but also reveal its connections to his left-radical brand of Marxism. A crucial feature of Pannekoek's astronomical method was the active role ascribed to astronomers. They were expected to use their intuitive ability to organize data according to the appearance of the Milky Way, even as they had to avoid the influence of personal experience and theoretical presuppositions about the shape of the system. With this method, Pannekoek produced results that went against the Kapteyn Universe and instead made him the first astronomer in the Netherlands to find supporting evidence for Harlow Shapley's extended galaxy. After exploring Pannekoek's Marxist philosophy, it is argued that both his astronomical method and his interpretation of historical materialism can be seen as strategies developed to make optimal use of his particular conception of the human mind. Individual differences such as social anxiety and extraversion have been shown to influence education outcomes. 
However, there has been limited investigation of the relationship between individual differences and attitudes towards online and offline learning. This study aimed to investigate for the first time how social anxiety and extraversion influence student attitudes to online and offline learning, specifically in relation to tertiary level practical activities. Based on the social compensation hypothesis, it was predicted that students with higher levels of extraversion and lower levels of social anxiety would report more favourable attitudes to face-to-face learning activities. It was further predicted that less extraverted and more socially anxious students would have more favourable attitudes to online learning activities. Undergraduate students (N = 322, 67% female) completed the HEXACO-60 personality inventory, the Mini Social Phobia Inventory, and measures of attitudes towards online and offline activities. Two hierarchical multiple regressions were conducted. The first revealed that neither extraversion nor social anxiety contributed significantly to preference for online practical activities. The second regression revealed that greater emotionality, greater extraversion, greater conscientiousness, and lower levels of social anxiety were associated with more favourable attitudes towards face-to-face practical activities. In contrast to predictions, extraversion and social anxiety did not significantly contribute to attitudes to online learning activities. However, in line with predictions, greater extraversion and lower levels of social anxiety were associated with more favourable attitudes towards face-to-face practical activities. These findings indicate that online learning activities have limited compensatory effects for students who experience social discomfort, and that the social compensation hypothesis may apply within an educational framework, but in unexpected ways. (C) 2017 Elsevier Ltd. All rights reserved. 
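The hierarchical multiple regression procedure described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' analysis: the synthetic data, the choice of predictors per block, and the effect sizes are all assumptions, and the incremental-F test is one standard way of assessing whether a second block of predictors adds explained variance.

```python
# Sketch of a hierarchical multiple regression: a baseline personality block,
# then an extended block adding social anxiety, compared via an incremental F.
# All data here are simulated for illustration only.
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

def incremental_f(X_base, X_full, y):
    """Delta R^2 and F statistic for the gain of the full over the base model."""
    r2_b, r2_f = r_squared(X_base, y), r_squared(X_full, y)
    n = len(y)
    k_b = X_base.shape[1] + 1   # parameters incl. intercept
    k_f = X_full.shape[1] + 1
    f = ((r2_f - r2_b) / (k_f - k_b)) / ((1.0 - r2_f) / (n - k_f))
    return r2_f - r2_b, f

rng = np.random.default_rng(0)
n = 322                                   # sample size from the study
extraversion = rng.normal(size=n)
conscientiousness = rng.normal(size=n)
social_anxiety = rng.normal(size=n)
# Simulated attitude to face-to-face practicals: higher extraversion and lower
# social anxiety -> more favourable (the direction reported in the abstract).
attitude = 0.4 * extraversion - 0.3 * social_anxiety + rng.normal(scale=1.0, size=n)

block1 = np.column_stack([extraversion, conscientiousness])
block2 = np.column_stack([extraversion, conscientiousness, social_anxiety])
delta_r2, f_stat = incremental_f(block1, block2, attitude)
print(f"delta R^2 = {delta_r2:.3f}, incremental F = {f_stat:.1f}")
```

In a real analysis the F statistic would be compared against an F distribution with (k_f - k_b, n - k_f) degrees of freedom to decide whether the added block contributes significantly.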
The present investigation aims to fill some of the gaps revealed in the literature regarding the limited access to more advanced and novel assessment instruments for measuring students' ICT literacy. In particular, this study outlines the adaptation, further development, and validation of the Learning in Digital Networks ICT literacy (LDN-ICT) test. The LDN-ICT test comprises an online performance-based assessment in which real-time student-student collaboration is facilitated through two different platforms (i.e., GoogleDocs and chat). The test attempts to measure students' ability to handle digital information and to communicate and collaborate during problem solving. The data are derived from 144 students in grade 9 and are analyzed using item response theory models (unidimensional and multidimensional Rasch models). The appropriateness of the models was evaluated by examining the item fit statistics. To gather validity evidence for the test, we investigated the differential item functioning of the individual items and correlations with other constructs (e.g., self-efficacy, collective efficacy, perceived usefulness and academic aspirations). Our results supported the hypothesized structure of LDN-ICT as comprising four dimensions. No significant differences across gender groups were identified. In support of existing research, we found positive relations to self-efficacy, academic aspirations, and socio-economic background. In sum, our results provide evidence for the reliability and validity of the test. Further refinements and the future use of the test are discussed. (C) 2017 Elsevier Ltd. All rights reserved. Undergraduate STEM instruction increasingly uses educational technologies to support problem-solving activities. Educational technologies offer two key features that may make them particularly effective. 
First, most problem-solving activities involve multiple visual representations, and many students have difficulties in understanding, constructing, and connecting these representations. Educational technologies can provide adaptive support that helps students make sense of visual representations. Second, many problems with visual representations involve collaboration. However, students often do not collaborate effectively. Educational technologies can provide collaboration scripts that adaptively react to student actions to prompt them to engage in specific effective collaborative behaviors. These observations lead to the hypothesis we tested: that an adaptive collaboration script enhances students' learning of content knowledge from visual representations. We conducted a quasi-experiment with 61 undergraduate students in an introductory chemistry course. A control condition worked on a traditional worksheet that asked students to collaboratively make sense of connections among multiple visual representations. An experimental condition worked on the same problems embedded in an educational technology that provided an adaptive collaboration script. The experimental condition showed significantly higher learning gains on a transfer test immediately after the intervention and on complex concepts on a midterm exam three weeks later. (C) 2017 Elsevier Ltd. All rights reserved. Acceptance and intention to use mobile learning is a topic of growing interest in the field of education. Although there is a considerable number of studies investigating mobile learning acceptance, little research exists that investigates the driving factors that influence students' intention to use mobile technologies for assessment purposes. The aim of this study is to provide empirical evidence on the acceptance of Mobile-Based Assessment (MBA), the assessment delivered through mobile devices and technologies. 
The proposed model, the Mobile-Based Assessment Acceptance Model (MBAAM), is based on the Technology Acceptance Model (TAM). MBAAM extends TAM in the context of MBA by adding to Perceived Ease of Use and Perceived Usefulness the constructs of Facilitating Conditions, Social Influence, Mobile Device Anxiety, Personal Innovativeness, Mobile-Self-Efficacy, Perceived Trust, Content, Cognitive Feedback, User Interface, and Perceived Ubiquity Value, and investigates their impact on the Behavioral Intention to Use MBA. 145 students from a European senior-level secondary school experienced a series of mobile-based assessments for a three-week period. Structural equation modeling was used to analyze quantitative survey data. According to the results, MBAAM explains and predicts approximately 47% of the variance of Behavioral Intention to Use Mobile-Based Assessment. The study provides a better understanding towards developing mobile-based assessments that support learners, enhance the learning experience and promote learning, taking advantage of the distinctive features that mobile devices may offer. Implications are discussed within the wider context of mobile learning acceptance research. (C) 2017 Elsevier Ltd. All rights reserved. This study aims to explore factors associated with cyberbullying perpetration on social media among children and adolescents in Singapore, based on the theory of reasoned action and the parental mediation theory. More specifically, the relationships between attitude, subjective norms, descriptive norms, injunctive norms, and active and restrictive parental mediation with cyberbullying perpetration on social media were investigated. Moreover, we examined the moderating effect of age on the relationship between parental mediation and cyberbullying perpetration. Multi-stage cluster sampling was used, in which 635 upper primary school children (i.e., Primary 4 to 6 students) and 789 secondary school adolescents participated in our survey. 
The results revealed that attitude, subjective norms, and the two parental mediation strategies (active and restrictive mediation) were negatively associated with cyberbullying perpetration on social media. Age was a significant moderator of the relationship between both parental mediation strategies and cyberbullying perpetration. Implications and limitations of this study are discussed. (C) 2017 Elsevier Ltd. All rights reserved. The study focuses on integration aids (i.e., signals) and their effect on how students process different types of graphical representations (representational pictures vs. organizational pictures vs. diagrams) in standardized multiple-choice items assessing science achievement. Based on text-picture integration theories, each type of pictorial representation holds different cognitive requirements concerning the integration of the two representations. Further, depending on the type of representation, not every picture is needed to answer an item correctly. Students from fifth and sixth grade (N = 60) worked through 12 multiple-choice items while their eye movements were recorded. Results showed that students achieved higher test scores when items were presented in an integrated format than in a non-integrated format; however, this was only true for diagrams. Eye movement data revealed that students looked longer at the graphical representations in items presented in the integrated format condition compared to the non-integrated format condition. Furthermore, relations between looking at the diagrams and achievement in the integrated format emerged. (C) 2017 The Authors. Published by Elsevier Ltd. In this paper the efficiency of using the VNS (Variable Neighborhood Search) algorithm for forming four-member heterogeneous groups within CSCL (computer-supported collaborative learning) is analyzed. A mathematical model, based on Kagan's instructions, was created and then the VNS algorithm, a metaheuristic for solving mathematical optimization problems, was applied to the model. 
The proposed VNS method is tested on a set of problem instances and results are compared with the optimal results obtained by the CPLEX solver applied to the proposed formulation. The VNS method showed better performance in terms of execution time and was able to solve large problem instances. CSCL was applied to three groups of first-year college students, each consisting of 172 students. These three groups were divided into smaller ones of four students: by using the VNS algorithm in 2015 (group E), by using Kagan's instructions in 2014 (group K), and randomly in 2013 (group R). The students were tested before and after CSCL of calculus contents. The statistical analysis shows that the students divided by the VNS algorithm had significantly better results than the students divided randomly. But the students divided by the VNS algorithm were as successful as the students divided without a computer. This means that students' learning achievement in calculus contents is better when they are divided by VNS than randomly, and as good as with cooperative learning in heterogeneous groups formed without VNS. (C) 2017 Elsevier Ltd. All rights reserved. E-leadership is defined as a social influence process mediated by information and communication technology to produce change in behavior and performance in individuals and groups in an organization. This study investigates e-leadership practices among users of a school virtual learning environment. It was performed in two stages. First, semi-structured interviews with school administrators, teachers, students, parents and school software experts were conducted. The qualitative data collected from the interviews were coded and analyzed using open and axial coding procedures. As a result, an e-leadership model emerged from the data that consisted of eight themes: e-leadership quality and seven core factors, namely readiness, practices, strategies, support, culture, needs and obstacles. 
Second, the validity and reliability of the model were further ascertained with a quantitative survey study involving 320 school administrators. The findings of this study established a grounded model for e-leadership practices in schools. (C) 2017 Elsevier Ltd. All rights reserved. This paper presents a 10-year review study that focuses on the investigation of the use of Digital Card Games (DCGs) as learning tools in education. Specific search terms keyed into 10 large scientific electronic databases identified 50 papers referring to the use of non-commercial DCGs in education during the last decade (2003-2013). The findings revealed that the DCGs reported in the reviewed papers: (a) were used for the learning of diverse subject disciplines across all educational levels and leaning towards the school curriculum, in two ways: game-construction and game-play, (b) were mainly proposed by their designers as meaningful, familiar and appealing learning contexts, in order to motivate and engage players/students and also to promote social, rich and constructivist educational experiences while at the same time integrating modern technologies and innovative game-based approaches, (c) were implemented using a plethora of digital tools, (d) mainly adopted social and constructivist views of learning during their design and use, although the views were explicitly reported in only a few of these, (e) were evaluated, in more than half of the studies, with positive results in terms of student learning, attitudes towards DCGs, and enrichment of social interaction and collaboration, (f) appeared to support students in acquiring essential thinking skills through DCG-play. However, despite the rich DCG-game experiences reported in the reviewed papers, some essential but under-researched topics were also specified. (C) 2017 Elsevier Ltd. All rights reserved. 
Based on a framework of computational thinking (CT) adapted from the Computer Science Teachers Association's standards, an instrument was developed to assess fifth-grade students' CT. The items were contextualized in two types of CT application (coding in robotics and reasoning about everyday events). The instrument was administered as a pre- and post-measure in an elementary school where a new humanoid robotics curriculum was adopted by its fifth grade. Results show that the instrument has good psychometric properties and has the potential to reveal student learning challenges and growth in terms of CT. (C) 2017 Elsevier Ltd. All rights reserved. Pedagogical agents are virtual characters embedded within a learning environment to enhance student learning. Researchers are beginning to understand the conditions in which pedagogical agents can enhance learning, but many questions still remain. Namely, the field has few options in terms of measurement instruments, and limited research has investigated the influence of pedagogical agent persona, or the way the agent is perceived by students, on learning outcomes. In this study, we re-examine the Agent Persona Instrument (API) using confirmatory factor analysis and Rasch methods. We then examine the influence of agent persona on learning outcomes using path analysis. The results confirmed the four-factor structure of the instrument, and the fit of items with the Rasch model demonstrates construct validity in our context. However, the analyses indicated that revisions to the instrument are warranted. The path analysis revealed that affective interaction significantly influenced information usefulness variables; however, perceptions measured by the API had no significant impact on learning outcomes. Suggestions for revising the API are provided. (c) 2017 Elsevier Ltd. All rights reserved. Regular recreational book reading is a practice that confers substantial educative benefit. 
However, not all book types may be equally beneficial, with paper book reading more strongly associated with literacy benefit than screen-based reading at this stage, though research in this area remains scarce. While children in developed countries are gaining ever-increasing levels of access to devices at home, relatively little is known about the influence of access to devices with eReading capability, such as Kindles, iPads, computers and mobile phones, on young children's reading behaviours, and the extent to which these devices are used for reading purposes when access is available. Young people are gaining increasing access to devices through school-promoted programs; parents face aggressive marketing to stay abreast of educational technologies at home; and schools and libraries are increasingly expanding their eBook collections, often at the expense of paper book collections. Data from the 997 children who participated in the 2016 Western Australian Study in Children's Book Reading were analysed to determine children's level of access to devices with eReading capability, and their frequency of use of these devices in relation to their recreational book reading frequency. Respondents were found to generally underutilise devices for reading purposes, even when they were daily book readers. In addition, access to mobile phones was associated with reading infrequency. It was also found that reading frequency was lower when children had access to a greater range of these devices. (c) 2017 Elsevier Ltd. All rights reserved. Since its inception, one of the primary goals of the Internet has been the open access to information and documents online. This openness aims to allow access to universal knowledge. The Open Educational Resources (OER) have promoted this goal in the context of education. The OER of higher education are supported by means of the Open Course Ware (OCW) initiative. OCW aims to provide access to the knowledge produced by universities. 
However, the levels of access to and use of OCW do not meet expectations. For this reason, it is necessary to provide solutions to increase the accessibility and usability of OCW. As a result, this paper presents a methodology for the evaluation of the accessibility and usability of OCW sites, as well as a framework for improving their accessibility and usability. This methodology and framework have been applied to evaluate and improve the accessibility and usability of a real case study, the OCW initiative of the Universidad Tecnica Particular de Loja (UTPL). This case study has allowed us to validate the methodology and the framework in a real setting in order to determine if they were able to identify and suggest improvements for the accessibility and usability of OCW when required. (c) 2017 Elsevier Ltd. All rights reserved. Recent technological advancements have enabled the widespread application of simulations in organizations, particularly for training contexts. Two important simulation elements, environment and control, have often been shown to improve trainee outcomes. I argue that environment and control are reliant on each other, and their combined effects are explained by extending the Uncanny Valley Theory. The Uncanny Valley Theory proposes that individuals are comfortable with experiences that are very dissimilar or similar to reality, but are uncomfortable with experiences that fall between these conditions. In simulations, perceptions of realism are created through observations (environment) and interactions (control). Users are comfortable with experiences when these elements are in agreement; however, an Uncanny Valley effect may occur when these elements are in disagreement. In the current article, two studies analyze the realism of environment and control in predicting trainee reactions and learning outcomes. Both studies support the extension of the Uncanny Valley Theory to simulations. 
Simulations with only "low" or only "high" environment and control produce the greatest outcomes, and those with mixed "low" and "high" elements produce the worst outcomes; however, trainees did not differ in reactions to the simulations, indicating that the Uncanny Valley phenomenon in simulations may operate subconsciously. (c) 2017 Elsevier Ltd. All rights reserved. Teachers usually implement their pedagogical ideas in Virtual Learning Environments (VLEs) in a continuous refinement approach also known as "bricolage". Recently, different proposals have enabled ubiquitous access to VLEs, thus extending the bricolage mode of operation to other learning spaces. However, such proposals tend to present several limitations for teachers to orchestrate learning situations conducted across different physical and virtual spaces. This paper presents an evaluation study that involved the across-spaces usage of Moodle in bricolage mode and learning buckets (configurable containers of learning artifacts) in multiple learning situations spanning five months in a course on Physical Education in the Natural Environment for pre-service teachers. The study followed a responsive evaluation model, in which we conducted an anticipatory data reduction using an existing orchestration framework (called "5 + 3 aspects") for structuring data gathering and analysis. The results showed that learning buckets helped the teachers in the multiple aspects of orchestration, overcoming the limitations of alternative approaches in some specific orchestration aspects: helping the involved teachers to connect different physical and virtual spaces, while supporting technologies and activities of their everyday practice, and transferring part of the orchestration load from teachers to students. The results also suggested lines of future improvement, including the awareness of outdoor activities. (c) 2017 Elsevier Ltd. All rights reserved. 
Underwater exploration has become an active research area over the past few decades. Image enhancement is one of the challenges for computer-vision-based underwater research because of the degradation of images in the underwater environment. Scattering and absorption are the main causes of reduced visibility in underwater images, for example blur, low contrast, and reduced visual range. To tackle the aforementioned problems, this paper presents a novel method for underwater image enhancement inspired by the Retinex framework, which simulates the human visual system. The term Retinex is a combination of "retina" and "cortex". The proposed method, namely LAB-MSR, is achieved by modifying the original Retinex algorithm. It utilizes the combination of the bilateral filter and the trilateral filter on the three channels of the image in CIELAB color space according to the characteristics of each channel. With real-world data, experiments are carried out to demonstrate both the degradation characteristics of underwater images in different turbidities and the competitive performance of the proposed method. (C) 2017 The Authors. Published by Elsevier B.V. Given that single image dehazing is an ill-posed problem, it can be challenging to control the enhancement of haze images. In this paper, we propose a fast and accurate dehazing algorithm based on a learning framework. Using randomly generated training samples, we tackle the difficult problem of sampling hazy/clear image pairs. Seven haze-relevant features based on image quality are extracted and analyzed. A regression model is learned using support vector regression (SVR), which can estimate the transmission map accurately. Further, a new method is presented to estimate the dynamic atmospheric light, which improves the performance in the sky and shadow regions. 
Experimental results demonstrate that the proposed approach has a lower computational complexity, and the dehazing results are visually appealing even on extremely challenging photos, such as street views, thick fog, and sky regions. Subjective analysis and objective quality assessments demonstrate that the proposed method generates results superior to those of state-of-the-art methods. (C) 2017 Published by Elsevier B.V. In this paper we propose a method for logo recognition using deep learning. Our recognition pipeline is composed of a logo region proposal followed by a Convolutional Neural Network (CNN) specifically trained for logo classification, even when logos are not precisely localized. Experiments are carried out on the FlickrLogos-32 database, and we evaluate the effect on recognition performance of synthetic versus real data augmentation, and image pre-processing. Moreover, we systematically investigate the benefits of different training choices such as class-balancing, sample-weighting and explicitly modeling the background class (i.e. no-logo regions). Experimental results confirm the feasibility of the proposed method, which outperforms state-of-the-art methods. (C) 2017 Elsevier B.V. All rights reserved. Vision-based object detection is essential for a multitude of robotic applications. However, it is also a challenging task due to the diversity of the environments in which such applications are required to operate, and the strict constraints that apply to many robot systems in terms of run-time, power and space. To meet these special requirements of robotic applications, we propose an efficient deep network for vision-based object detection. More specifically, for a given image captured by a robot-mounted camera, we first introduce a novel proposal layer to efficiently generate potential object bounding-boxes. The proposal layer consists of efficient on-line convolutions and effective off-line optimization. 
Afterwards, we construct a robust detection layer which contains a multiple population genetic algorithm-based convolutional neural network (MPGA-based CNN) module and a TLD-based multi-frame fusion procedure. Unlike most deep-learning-based approaches, which rely on GPUs, all of the on-line processes in our system run efficiently without GPU support. We perform several experiments to validate each component of our proposed object detection approach and compare the approach with some recently published state-of-the-art object detection algorithms on widely used datasets. The experimental results demonstrate that the proposed network exhibits high efficiency and robustness in object detection tasks. (C) 2017 Elsevier B.V. All rights reserved. In this paper, we investigate neural-network-based adaptive guaranteed cost control for continuous-time affine nonlinear systems with dynamical uncertainties. Through theoretical analysis, the guaranteed cost control problem is transformed into designing an optimal controller of the associated nominal system with a newly defined cost function. The approach of adaptive dynamic programming (ADP) is employed to implement the guaranteed cost control strategy with the neural network approximation. The stability of the closed-loop system with the guaranteed cost control law, the convergence of the critic network weights and the approximate boundary of the guaranteed cost control law are all analyzed. Two simulation examples were conducted, and the results indicate the good performance of the developed guaranteed cost control strategy. (C) 2017 Elsevier B.V. All rights reserved. Recently, graph-based dimensionality reduction methods have attracted much attention due to their wide application in many practical tasks such as image classification and data clustering. 
However, an inappropriate graph which cannot accurately reflect the underlying structure and distribution of input data will dramatically deteriorate the performance of these methods. In this paper, we propose a novel algorithm termed Locality Constrained Graph Optimization Dimensionality Reduction (LC-GODR) to address the limitations of existing graph-based dimensionality reduction methods. Firstly, unlike most graph-based dimensionality reduction methods in which the graphs are constructed in advance and kept unchanged during dimensionality reduction, our LC-GODR combines graph optimization and projection matrix learning into a joint framework. Therefore, the graph in the proposed algorithm can be adaptively updated during the procedure of dimensionality reduction. Secondly, by introducing locality constraints into our LC-GODR, the local information of high-dimensional input data can be discovered and well preserved, which makes the proposed algorithm distinct from other graph optimization based dimensionality reduction methods. Moreover, an effective updating scheme is also provided to solve the proposed LC-GODR. Extensive experiments on two UCI and five image databases are conducted to demonstrate the effectiveness of our algorithm. The experimental results indicate that the proposed LC-GODR outperforms other related methods. (C) 2017 Elsevier B.V. All rights reserved. In this paper, a class of impulsive inertial neural networks with time-varying delays is considered. By choosing a proper variable transformation, the original inertial neural networks can be rewritten as first-order differential equations. Based on the Lyapunov function method and inequality techniques, some sufficient conditions are derived to guarantee global exponential convergence of the discussed inertial neural networks with impulsive effects. Meanwhile, the framework of the exponential convergence ball in the state space with a pre-specified convergence rate is also given. 
Here, the existence and uniqueness of the equilibrium points need not be considered. Finally, some numerical examples with simulations are presented to show the effectiveness of the obtained results. (C) 2017 Elsevier B.V. All rights reserved. Rule-based classification systems constructed upon linguistic terms in the antecedent and consequent of the rules lack sufficient generalization capabilities. This paper proposes a new multivariate fuzzy system identification algorithm to design binary rule-based classification structures by making use of the repulsive forces between the cluster prototypes of different class labels. This approach is coupled with the potential discrimination power of each dimension in the feature space to increase the generalization potential. To this end, first the multivariate variant of a newly proposed soft clustering algorithm, along with its mathematical foundations, is proposed. Next, the discriminatory power of each individual feature is computed, using the multivariate membership values in the proposed clustering algorithm to achieve the most accurate firing degree in each rule. The main advantage of this method is its ability to handle unbalanced datasets, yielding a superior true-positive measure while keeping the false-positive rate low enough to avoid the natural bias toward class labels containing a larger number of training samples. To validate the proposed approaches, a series of numerical experiments on publicly available datasets and a real clinical dataset collected by our team were conducted. Simulation results demonstrated achievement of the primary goals of this research. (C) 2017 Elsevier B.V. All rights reserved. Finite-time stability of a class of fractional-order complex-valued memristor-based neural networks with both leakage and time-varying delays is investigated in this paper. By employing the set-valued map and differential inclusions, the solutions of memristor-based systems are considered in Filippov's sense. 
By using the Hölder inequality, the Gronwall-Bellman inequality, and inequality scaling techniques, sufficient conditions that guarantee the stability of the system are derived for 0 < alpha < 1/2 and 1/2 <= alpha <= 1, respectively. Finally, two numerical examples are designed to illustrate the validity and feasibility of the obtained results. (C) 2017 Elsevier B.V. All rights reserved. The topic of non-fragile observation for memristive neural networks with both continuous-time and discrete-time cases is addressed in this paper. By employing the Lyapunov technique, sufficient criteria for stability are furnished in the form of linear matrix inequalities (LMIs), from which the desired observer gains can be calculated. The difference lies in the fact that the memristive neural networks are recast into models with interval parameters, since the parameters of the memristive model are state-dependent, which leads to a parameter mismatch issue when different initial values are given. Thus, a new robust control method is introduced to tackle the target model. Finally, the analytical design is substantiated with numerical results. (C) 2017 Elsevier B.V. All rights reserved. In this paper, a novel reinforcement learning (RL) based approach is proposed to solve the optimal tracking control problem (OTCP) for continuous-time (CT) affine nonlinear systems using general value iteration (VI). First, the tracking performance criterion is described in a total-cost manner without a discount term, which can ensure the asymptotic stability of the tracking error. Then, some mild assumptions are made to relax the restriction of an initial admissible control required in most existing references. Based on these assumptions, the general VI method is proposed and three situations are considered to show the convergence with any initial positive performance function. 
To validate the theoretical results, the proposed general VI method is implemented by two neural networks on a nonlinear spring-mass-damper system, and two situations are considered to show its effectiveness. (C) 2017 Elsevier B.V. All rights reserved. This paper investigates the problem of the exponential synchronization of complex dynamical networks with time-varying inner coupling via event-triggered communication. The network topology is assumed to have a spanning tree. A sufficient condition is derived to guarantee exponential synchronization by employing a special Lyapunov stability analysis method, which combines the difference and the differential of the Lyapunov function rather than using the difference or the differential alone. The main advantage of this approach is that it avoids continuous communication between network nodes, which decreases the number of information updates, reduces network congestion and avoids the waste of network resources. Moreover, Zeno behavior is excluded as well by the strictly positive sampling intervals. Finally, a simulation example is given to show the effectiveness of the proposed exponential synchronization criteria. (C) 2017 Elsevier B.V. All rights reserved. Associative classification is a rule-based approach involving candidate rules as criteria of classification that provides both highly accurate and easily interpretable results to decision makers. An important phase of associative classification is rule evaluation, consisting of rule ranking and pruning, in which bad rules are removed to improve performance. Existing association rule mining algorithms rely on frequency-based rule evaluation methods such as support and confidence, failing to provide sound statistical or computational measures for rule evaluation, and often suffer from many redundant rules. In this research we propose predictability-based collective class association rule mining based on cross-validation with a new rule evaluation step. 
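The cross-validation-based rule evaluation just described can be sketched as follows; this is a minimal Python illustration with hypothetical record and rule structures (item sets with a class label), not the authors' implementation:

```python
import random

def rule_accuracy(rule, records):
    """Accuracy of a rule on the records its antecedent matches."""
    antecedent, label = rule
    matched = [r for r in records if antecedent <= r["items"]]
    if not matched:
        return 0.0
    return sum(r["class"] == label for r in matched) / len(matched)

def inner_cv_score(rule, training, k=3, seed=0):
    """Average predictive accuracy of a candidate rule over k inner folds."""
    rng = random.Random(seed)
    data = training[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    return sum(rule_accuracy(rule, fold) for fold in folds) / k

# Toy transactions: a set of items plus a class label.
data = [{"items": {"a", "b"}, "class": 1},
        {"items": {"a"}, "class": 1},
        {"items": {"b", "c"}, "class": 0},
        {"items": {"c"}, "class": 0}]
rule = (frozenset({"a"}), 1)      # candidate rule "a => class 1"
score = inner_cv_score(rule, data)
```

Rules whose inner-fold score falls below a cutoff would then be pruned before building the classifier.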
We measure the prediction accuracy of each candidate rule in inner cross-validation steps. We split a training dataset into inner training sets and inner test sets and then evaluate the candidate rules' predictive performance. Through several experiments, we show that the proposed algorithm outperforms some existing algorithms while maintaining a large number of useful rules in the classifier. Furthermore, by applying the proposed algorithm to a real-life healthcare dataset, we demonstrate that it is practical and has the potential to reveal important patterns in the dataset. (C) 2017 Elsevier Ltd. All rights reserved. Recommender systems try to predict the preferences of users for specific items, based on an analysis of previous consumer preferences. In this paper, we propose SCoR, a Synthetic Coordinate based Recommendation system, which is shown to outperform the most popular algorithmic techniques in the field, such as matrix factorization and collaborative filtering. SCoR assigns synthetic coordinates to nodes (users and items), so that the distance between a user and an item provides an accurate prediction of the user's preference for that item. The proposed framework has several benefits. It is parameter free, thus requiring no fine-tuning to achieve high performance, and it is more resistant to the cold-start problem than other algorithms. Furthermore, it provides important annotations of the dataset, such as the detection of users and items with common and unique characteristics, as well as the identification of outliers. SCoR is compared against nine other state-of-the-art recommender systems, seven of them based on the well-known matrix factorization and two on collaborative filtering. The comparison is performed on four real datasets, including a brief version of the dataset used in the well-known Netflix challenge. 
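The core distance-based idea behind SCoR - placing users and items in a shared space so that small distance means high preference - can be sketched with a simple stochastic spring relaxation. The gradient update, the 2-D embedding and the 5-point rating scale below are illustrative assumptions, not the published algorithm:

```python
import random

def train_coords(ratings, dim=2, steps=2000, lr=0.05, seed=1):
    """Embed users and items so Euclidean distance ~ dissimilarity (5 - rating)."""
    rng = random.Random(seed)
    users = {u for u, _, _ in ratings}
    items = {i for _, i, _ in ratings}
    pos = {k: [rng.uniform(-1, 1) for _ in range(dim)] for k in users | items}
    for _ in range(steps):
        u, i, r = rng.choice(ratings)
        target = 5.0 - r                      # high rating -> small distance
        d = max(1e-9, sum((a - b) ** 2 for a, b in zip(pos[u], pos[i])) ** 0.5)
        g = (d - target) / d                  # spring-like force along the segment
        for k in range(dim):
            delta = lr * g * (pos[u][k] - pos[i][k])
            pos[u][k] -= delta
            pos[i][k] += delta
    return pos

def predict(pos, u, i):
    d = sum((a - b) ** 2 for a, b in zip(pos[u], pos[i])) ** 0.5
    return 5.0 - d

ratings = [("u1", "m1", 5), ("u1", "m2", 1), ("u2", "m1", 4), ("u2", "m2", 2)]
pos = train_coords(ratings)
```

After training, liked items sit close to their users, so `predict` recovers the preference ordering.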
The extensive experiments show that SCoR outperforms previous techniques while demonstrating improved stability and high performance. (C) 2017 Elsevier Ltd. All rights reserved. Empty or limited storage capacities between machines introduce various types of blocking constraints in industries with a flowshop environment. While large applications demand flowshop scheduling with a mix of different types of blocking, research in this area mainly focuses on using only one kind of blocking in a given problem instance. In this paper, using makespan as the criterion, we study permutation flowshops with zero-capacity buffers operating under mixed blocking conditions. We present a very effective scatter search (SS) algorithm for this problem. At the initialisation phase of SS, we use a modified version of the well-known Nawaz, Enscore and Ham (NEH) heuristic. For the improvement method in SS, we use an Iterated Local Search (ILS) algorithm that adopts a greedy job selection and a powerful NEH-based perturbation procedure. Moreover, in the reference set update phase of SS, with small probabilities, we accept worse solutions so as to increase the search diversity. On standard benchmark problems of varying sizes, our algorithm very significantly outperforms well-known existing algorithms in terms of both solution quality and computing time. Moreover, our algorithm has found new upper bounds for 314 out of 360 benchmark problem instances. (C) 2017 Elsevier Ltd. All rights reserved. Recent work has been devoted to studying the use of multiobjective evolutionary algorithms (MOEAs) in stock portfolio optimization, within a common mean-variance framework. This article proposes the use of a more appropriate framework, the mean-semivariance framework, which takes into account only adverse return variations instead of overall variations. It also proposes the use and comparison of established technical analysis (TA) indicators in pursuing better outcomes within the risk-return relation. 
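The downside-only risk measure that the mean-semivariance framework relies on can be computed directly. A minimal sketch, using semivariance about the mean return (a common convention; the return series is invented):

```python
def semivariance(returns, target=None):
    """Downside semivariance: mean squared shortfall below the target
    (the mean return by default); only adverse deviations are penalised."""
    if target is None:
        target = sum(returns) / len(returns)
    downside = [(target - r) ** 2 for r in returns if r < target]
    return sum(downside) / len(returns)

def variance(returns):
    m = sum(returns) / len(returns)
    return sum((r - m) ** 2 for r in returns) / len(returns)

rets = [0.02, -0.05, 0.01, 0.03, -0.01]
print(semivariance(rets) <= variance(rets))   # True: downside risk never exceeds total variance
```

Replacing variance with this quantity in the portfolio objective is what distinguishes the mean-semivariance from the mean-variance framework.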
Results show there is some difference in the performance of the two selected MOEAs - the non-dominated sorting genetic algorithm II (NSGA II) and the strength Pareto evolutionary algorithm 2 (SPEA 2) - within portfolio optimization. In addition, when used with four TA-based strategies - relative strength index (RSI), moving average convergence/divergence (MACD), contrarian Bollinger bands (CBB) and Bollinger bands (BB) - the two selected MOEAs achieve solutions with interesting in-sample and out-of-sample outcomes for the BB strategy. (C) 2017 Elsevier Ltd. All rights reserved. Under normality and homoscedasticity assumptions, Linear Discriminant Analysis (LDA) is known to be optimal in terms of minimising the Bayes error for binary classification. In the heteroscedastic case, LDA is not guaranteed to minimise this error. Assuming heteroscedasticity, we derive a linear classifier, the Gaussian Linear Discriminant (GLD), that directly minimises the Bayes error for binary classification. In addition, we also propose a local neighbourhood search (LNS) algorithm to obtain a more robust classifier if the data is known to have a non-normal distribution. We evaluate the proposed classifiers on two artificial and ten real-world datasets that cut across a wide range of application areas, including handwriting recognition, medical diagnosis and remote sensing, and then compare our algorithm against existing LDA approaches and other linear classifiers. The GLD is shown to outperform the original LDA procedure in terms of classification accuracy under heteroscedasticity. While it compares favourably with other existing heteroscedastic LDA approaches, the GLD requires as much as 60 times less training time on some datasets. Our comparison with the support vector machine (SVM) also shows that the GLD, together with the LNS, requires as much as 150 times less training time to achieve an equivalent classification accuracy on some of the datasets. 
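To make the heteroscedastic setting concrete: for a fixed direction w and threshold t, each class projects to a univariate Gaussian with its own variance, so the Bayes error of the linear rule can be written with the normal CDF. The sketch below evaluates that error and grid-searches the threshold; it illustrates the quantity the GLD minimises, not the authors' optimisation procedure, and the class parameters are invented:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def projected_bayes_error(w, t, mu1, cov1, mu2, cov2):
    """Error of the rule 'class 1 iff w.x < t' when each class is Gaussian
    with its own covariance (equal priors, the heteroscedastic setting)."""
    def proj(mu, cov):
        m = sum(wi * mi for wi, mi in zip(w, mu))
        s2 = sum(w[i] * cov[i][j] * w[j]
                 for i in range(len(w)) for j in range(len(w)))
        return m, math.sqrt(s2)
    m1, s1 = proj(mu1, cov1)   # class 1 misclassified when w.x >= t
    m2, s2 = proj(mu2, cov2)   # class 2 misclassified when w.x < t
    return 0.5 * (1.0 - phi((t - m1) / s1)) + 0.5 * phi((t - m2) / s2)

mu1, cov1 = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
mu2, cov2 = [3.0, 0.0], [[4.0, 0.0], [0.0, 4.0]]   # same direction, larger spread
w = [1.0, 0.0]
best_t = min((t / 100.0 for t in range(-100, 400)),
             key=lambda t: projected_bayes_error(w, t, mu1, cov1, mu2, cov2))
```

Note that the optimal threshold is pulled toward the tighter class rather than sitting at the midpoint 1.5, which is exactly where homoscedastic LDA loses optimality.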
Thus, our algorithms can provide a cheap and reliable option for classification in many expert systems. (C) 2017 The Authors. Published by Elsevier Ltd. The literature on supply base segmentation has increasingly adopted multi-criteria decision making (MCDM) techniques into recently proposed models. However, most proposals segment the supply base from the standpoint of the purchased item, which prevents them from providing guidelines that are specific to each supplier. Some authors have attempted to overcome these limitations by putting forward portfolio models based on the relationship with suppliers. These approaches use fuzzy variables and MCDM methods that take qualitative judgements by experts as the only input for decision making. However, many companies have databases with historical data about the performance of past transactions with suppliers that should be considered by expert systems aiming to comprehensively evaluate suppliers' performance. This paper seeks to address this gap by proposing a segmentation model based on the relationship with suppliers that is capable of aggregating quantitative and qualitative criteria. The Analytic Hierarchy Process (AHP) was used to determine the relative importance of each criterion. Fuzzy 2-tuple, a prominent computing with words (CWW) approach, was used to evaluate suppliers with a mixture of historical quantitative data and qualitative judgements by purchasing experts. An illustrative application of the proposed model was carried out in the pharmaceutical supply center (PSC) of a teaching hospital. The proposed model can be viewed as a decision support system capable of aggregating the qualitative judgements of experts and quantitative historical performance measures, thus providing guidelines to improve the relationship between suppliers and the buyer firm. (C) 2017 Elsevier Ltd. All rights reserved. This paper presents a new and straightforward system for bearing fault detection. 
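The AHP weighting step mentioned above can be approximated with the standard row geometric mean method; the pairwise comparison matrix below is an invented three-criterion example, not data from the hospital application:

```python
def ahp_weights(pairwise):
    """Approximate the AHP priority vector by the row geometric mean method,
    then normalise the weights to sum to one."""
    n = len(pairwise)
    gms = []
    for row in pairwise:
        p = 1.0
        for v in row:
            p *= v
        gms.append(p ** (1.0 / n))
    total = sum(gms)
    return [g / total for g in gms]

# Criterion A judged 3x as important as B and 5x as important as C.
matrix = [[1.0, 3.0, 5.0],
          [1 / 3.0, 1.0, 2.0],
          [1 / 5.0, 1 / 2.0, 1.0]]
weights = ahp_weights(matrix)
```

These weights would then scale each criterion's score before the fuzzy 2-tuple aggregation.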
The system computes the stability of two vibration signals by using the direct matching points (DMP) of an elastic, nonlinear alignment function. It is able to find discriminant properties in the stability of fault-free and faulty bearing vibration signals from the early and late stages of faults in critical bearing parts. Because training data constitute one of the critical challenges in most expert and intelligent systems, one of the novelties of the proposed stability-based system is that it requires neither training nor fine-tuning. The robustness of the system is demonstrated using two publicly available vibration signal databases under several load conditions, with real faults, during multiple machine working states. Experimental results validate the use of the proposed stability-based system for predictive maintenance in bearings. (C) 2017 Elsevier Ltd. All rights reserved. Feature subset selection is essentially an optimization problem of choosing the most important features from various alternatives in order to facilitate classification or mining problems. Though many algorithms have been developed so far, none is considered the best for all situations, and researchers are still trying to come up with better solutions. In this work, a flexible and user-guided feature subset selection algorithm, named FCTFS (Feature Cluster Taxonomy based Feature Selection), is proposed for selecting a suitable feature subset from a large feature set. The proposed algorithm falls under the genre of clustering-based feature selection techniques, in which features are initially clustered according to their intrinsic characteristics following the filter approach. In the second step, the most suitable feature is selected from each cluster to form the final subset following a wrapper approach. The two-stage hybrid process lowers the computational cost of subset selection, especially for large feature sets. 
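The two-stage filter-then-select idea can be sketched as follows; correlation-based greedy clustering and correlation with the target as the per-cluster selection criterion are simplifying stand-ins for the paper's taxonomy-based procedure, and the feature data are invented:

```python
def corr(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def cluster_then_select(features, target, sim=0.9):
    """Stage 1 (filter): greedily cluster mutually correlated features.
    Stage 2 (wrapper stand-in): keep the most target-relevant feature per cluster."""
    clusters = []
    for name, col in features.items():
        for c in clusters:
            if abs(corr(col, features[c[0]])) >= sim:
                c.append(name)
                break
        else:
            clusters.append([name])
    return [max(c, key=lambda f: abs(corr(features[f], target)))
            for c in clusters]

feats = {"f1": [1, 2, 3, 4],
         "f2": [2.1, 3.9, 6.2, 7.8],   # nearly redundant with f1
         "f3": [4, 1, 3, 2]}           # independent
y = [1, 2, 3, 4]
print(cluster_then_select(feats, y))   # -> ['f1', 'f3']
```

Only one representative survives from the redundant pair, which is what keeps the subset small and the wrapper stage cheap.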
One of the main novelties of the proposed approach lies in the process of determining the optimal number of feature clusters. Unlike currently available methods, which mostly employ a trial and error approach, the proposed method characterises and quantifies the feature clusters according to the quality of the features inside them and defines a taxonomy of the feature clusters. The selection of individual features from a feature cluster can then be done judiciously, considering both relevancy and redundancy according to the user's intention and requirements. The algorithm has been verified by simulation experiments on different benchmark datasets containing from 10 to more than 800 features, and compared with other currently used feature selection algorithms. The simulation results show the superiority of our proposal in terms of model performance, flexibility of use in practical problems and extendibility to large feature sets. Though the current proposal is verified in the domain of unsupervised classification, it can easily be used in the case of supervised classification. (C) 2017 Elsevier Ltd. All rights reserved. Interactive image segmentation has remained an active research topic in image processing and graphics, since user intention can be incorporated to enhance performance. It can be employed on mobile devices, which now allow user interaction as an input, enabling various applications. Most interactive segmentation methods assume that the initial labels are correctly and carefully assigned to some parts of the regions to segment. Inaccurate labels, such as foreground labels placed in background regions, lead to incorrect segments even when only a small number of labels are inaccurate, which is not acceptable for practical usage such as mobile applications. In this paper, we present an interactive segmentation method that is robust to inaccurate initial labels (scribbles). 
To address this problem, we propose a structure-aware labeling method using the occurrence and co-occurrence probability (OCP) of color values for each initial label in a unified framework. The occurrence probability captures a global distribution of all color values within each label, while the co-occurrence probability encodes a local distribution of color values around the label. We show that nonlocal regularization together with the OCP enables image segmentation that is robust to inaccurately assigned labels and alleviates the small-cut problem. We analyze the theoretical relations of our approach to other segmentation methods. Intensive experiments with synthetic and manual labels show that our approach outperforms the state of the art. (C) 2017 Elsevier Ltd. All rights reserved. In this paper, starting from a comprehensive mathematical model of Collaborative Reputation Systems (CRSes), we present a research study within the Cultural Heritage domain. The main goal of this study has been the evaluation and classification of visitors' behaviour during a cultural event. By means of mobile technological instruments, opportunely deployed within the environment, it is possible to collect data representing the knowledge to be inferred and to give a reliable rating for both visitors and exhibited artworks. The discussed results confirm the reliability and usefulness of CRSes for deeply understanding the dynamics of people's visiting styles. (C) 2017 Elsevier Ltd. All rights reserved. The flower pollination algorithm (FPA) is a recent addition to the field of nature-inspired computing. The algorithm has been inspired by the pollination process in flowers and has been applied to a large spectrum of optimization problems. But it has certain drawbacks which prevent its application as a standard algorithm. This paper proposes new variants of FPA employing new mutation operators, dynamic switching and improved local search. 
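A Levy-flight global pollination step, the ingredient behind the adaptive-Levy variant, can be sketched as follows. The population size, switching probability and Mantegna step generator are textbook FPA choices, and the sphere function is a stand-in objective; this is not the exact variant proposed in the paper:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def fpa_minimise(f, lo, hi, dim=2, n=15, iters=300, p=0.8, seed=3):
    """Bare-bones flower pollination: global pollination via Levy flights
    toward the best flower, local pollination between random pairs."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=f)[:]
    for _ in range(iters):
        for x in pop:
            if rng.random() < p:                 # global (Levy) pollination
                cand = [xi + levy_step(rng) * (bi - xi)
                        for xi, bi in zip(x, best)]
            else:                                # local pollination
                a, b = rng.sample(pop, 2)
                eps = rng.random()
                cand = [xi + eps * (ai - bi) for xi, ai, bi in zip(x, a, b)]
            cand = [min(hi, max(lo, c)) for c in cand]
            if f(cand) < f(x):                   # greedy acceptance
                x[:] = cand
        best = min(pop, key=f)[:]
    return best

sphere = lambda v: sum(c * c for c in v)
best = fpa_minimise(sphere, -5, 5)
```

The variants in the paper replace the mutation operator and make the switching probability p dynamic; the skeleton above is the baseline they modify.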
A comprehensive comparison of the proposed algorithms has been carried out for different population sizes when optimizing seventeen benchmark problems. The best variant among these is the adaptive-Levy flower pollination algorithm (ALFPA), which has been further compared with well-known algorithms such as artificial bee colony (ABC), differential evolution (DE), the firefly algorithm (FA), the bat algorithm (BA) and the grey wolf optimizer (GWO). Numerical results show that ALFPA gives superior performance on standard benchmark functions. The algorithm has also been subjected to statistical tests, and again its performance is better than that of the other algorithms. (c) 2017 Elsevier Ltd. All rights reserved. The peak of participants indicates the success probability of collective actions. A mathematical model is built to explore the mechanism behind peaks and to predict their values. In addition to utility heterogeneity, cost heterogeneity is added to simulate the multiple heterogeneities found in reality. Each simulation is run one hundred times repeatedly to get stable expectations and standard deviations of peaks under each combination of parameter values. Based on the simulation results, the effects of related factors on peaks are investigated and estimated statistically, making it possible to predict peaks. In addition to forecasting the mean of peaks, the variability of peaks is estimated as well. Therefore, the distribution of peaks is predicted. Utility heterogeneity, cost heterogeneity and the jointness of supply (J) exert significant effects on the distribution of peaks. The results indicate that both utility heterogeneity and cost heterogeneity reduce the values and increase the variability (standard deviation) of peaks. By facilitating chain actions among individuals, heterogeneity promotes the outbreak of collective actions. 
However, it reduces the peaks and decreases the success probability of collective actions, while homogeneity increases the peak of participants and enhances the success chance of collective actions. (c) 2017 Elsevier Ltd. All rights reserved. Neuro-fuzzy systems have been proven to be an efficient tool for modelling real-life systems. They are precise and have the ability to generalise knowledge from presented data. Neuro-fuzzy systems use fuzzy sets - most commonly type-1 fuzzy sets. Type-2 fuzzy sets model uncertainties better than type-1 fuzzy sets because of their fuzzy membership functions. Unfortunately, the computational complexity of type reduction in general type-2 systems is high enough to hinder their practical application. This burden can be alleviated by applying interval type-2 fuzzy sets. The paper presents an interval type-2 neuro-fuzzy system with interval type-2 fuzzy sets both in the premises (Gaussian interval type-2 fuzzy sets with uncertain fuzziness) and in the consequences (trapezoid interval type-2 fuzzy sets). The inference mechanism is based on the interval type-2 fuzzy Lukasiewicz, Reichenbach, Kleene-Dienes, or Brouwer-Godel implications. The paper is accompanied by numerical examples. The system can elaborate models with a lower error rate than a type-1 neuro-fuzzy system with an implication-based inference mechanism. The system outperforms some known type-2 neuro-fuzzy systems. (c) 2017 Elsevier Ltd. All rights reserved. There are several commercial financial expert systems that can be used for trading on the stock exchange. However, their predictions are somewhat limited, since they primarily rely on time-series analysis of the market. With the rise of the Internet, new forms of collective intelligence (e.g. Google and Wikipedia) have emerged, representing a new generation of "crowd-sourced" knowledge bases. They collate information on publicly traded companies, while capturing web traffic statistics that reflect the public's collective interest. 
Google and Wikipedia have become important "knowledge bases" for investors. In this research, we hypothesize that combining disparate online data sources with traditional time-series and technical indicators for a stock can provide a more effective and intelligent daily trading expert system. Three machine learning models - decision trees, neural networks and support vector machines - serve as the basis for our "inference engine". To evaluate the performance of our expert system, we present a case study based on the AAPL (Apple NASDAQ) stock. Our expert system had an 85% accuracy in predicting the next-day AAPL stock movement, which outperforms the rates reported in the literature. Our results suggest that: (a) the knowledge base of financial expert systems can benefit from data captured from nontraditional "experts" like Google and Wikipedia; (b) diversifying the knowledge base by combining data from disparate sources can help improve the performance of financial expert systems; and (c) the use of simple machine learning models for inference and rule generation is appropriate with our rich knowledge database. Finally, an intelligent decision-making tool is provided to assist investors in making trading decisions on any stock, commodity or index. (c) 2017 Elsevier Ltd. All rights reserved. Segmentation is considered the central part of an image processing system due to its high influence on posterior image analysis. In recent years, the segmentation of magnetic resonance (MR) images has attracted the attention of the scientific community with the objective of assisting the diagnosis of different brain diseases. Among several techniques, thresholding represents one of the most popular methods for image segmentation. Currently, a large number of contributions have been proposed in the literature, where thresholding values are obtained by optimizing relevant criteria such as the cross entropy. 
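The cross-entropy criterion and the exhaustive search it implies can be sketched directly on a histogram. The Li-Lee style objective below is one common formulation of minimum cross-entropy thresholding, and the bimodal toy histogram is invented:

```python
import math

def eta(hist, t):
    """Minimum cross-entropy objective for threshold t (constant terms dropped):
    -sum(g*h(g))*log(mean) over each side of the threshold."""
    lo = [(g, hist[g]) for g in range(1, t) if hist[g]]
    hi = [(g, hist[g]) for g in range(t, len(hist)) if hist[g]]
    if not lo or not hi:
        return float("inf")
    def term(region):
        mass = sum(g * h for g, h in region)
        mean = mass / sum(h for _, h in region)
        return -mass * math.log(mean)
    return term(lo) + term(hi)

def best_threshold(hist):
    """Exhaustive search for the minimum cross-entropy threshold;
    metaheuristics such as CSA replace exactly this loop."""
    return min(range(2, len(hist)), key=lambda t: eta(hist, t))

# Bimodal toy histogram: dark mode around level 3, bright mode around level 12.
hist = [0] * 16
for g, c in [(2, 5), (3, 9), (4, 5), (11, 4), (12, 8), (13, 4)]:
    hist[g] = c
t = best_threshold(hist)
```

On a full 256-level histogram with several thresholds this search grows combinatorially, which is the computational cost the evolutionary approach avoids.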
However, most such approaches are computationally expensive, since they conduct an exhaustive search strategy to obtain the optimal thresholding values. This paper presents a general method for image segmentation. To estimate the thresholding values, the proposed approach uses the recently published evolutionary method called the Crow Search Algorithm (CSA), which is based on the behavior of flocks of crows. In contrast to other optimization techniques used for segmentation purposes, CSA presents a better performance, avoiding critical flaws such as premature convergence to sub-optimal solutions and a limited exploration-exploitation balance in the search strategy. Although the proposed method can be used as a generic segmentation algorithm, its characteristics allow it to obtain excellent results in the automatic segmentation of complex MR images. Under such circumstances, our approach has been evaluated using two sets of benchmark images; the first set is composed of general images commonly used in the image processing literature, while the second set corresponds to MR brain images. Experimental results, statistically validated, demonstrate that the proposed technique obtains better results in terms of quality and consistency. (c) 2017 Elsevier Ltd. All rights reserved. Indoor scene classification is usually approached from a computer vision perspective. However, in some fields like robotics, additional constraints must be taken into account. Specifically, in systems with low resources, state-of-the-art techniques (CNNs) cannot be successfully deployed. In this paper, we try to close this gap between theoretical approaches and real-world solutions by performing an in-depth study of the factors that influence classifier performance, that is, size and descriptor quality. 
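The CSA position update that the segmentation method relies on can be sketched as follows; the awareness probability and flight length are the usual defaults from the CSA literature, and the sphere function stands in for the cross-entropy objective:

```python
import random

def csa_minimise(f, lo, hi, dim=2, n=20, iters=200, ap=0.1, fl=2.0, seed=7):
    """Skeleton Crow Search Algorithm: each crow either follows another crow's
    memorised food position or, with awareness probability ap, is fooled and
    the follower flies to a random position."""
    rng = random.Random(seed)
    crows = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    memory = [c[:] for c in crows]        # best position each crow remembers
    for _ in range(iters):
        for i in range(n):
            j = rng.randrange(n)          # crow i tails crow j
            if rng.random() >= ap:        # j unaware: move toward j's memory
                step = rng.random() * fl
                cand = [ci + step * (mj - ci)
                        for ci, mj in zip(crows[i], memory[j])]
            else:                         # j aware: random relocation
                cand = [rng.uniform(lo, hi) for _ in range(dim)]
            if all(lo <= c <= hi for c in cand):
                crows[i] = cand           # infeasible moves are simply skipped
            if f(crows[i]) < f(memory[i]):
                memory[i] = crows[i][:]
    return min(memory, key=f)

sphere = lambda v: sum(c * c for c in v)
best = csa_minimise(sphere, -10, 10)
```

In the segmentation application each crow would encode a vector of candidate thresholds, and f would be the cross-entropy criterion evaluated on the image histogram.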
To this end, we perform a thorough evaluation of the visual and depth data obtained with an RGB-D sensor in order to propose techniques for building robust descriptors that can enable real-time indoor scene classification. Those descriptors are obtained by properly selecting and combining visual and depth information sources. (C) 2017 Elsevier Ltd. All rights reserved. Color-based visual object tracking is one of the most commonly used tracking methods. Among many tracking methods, the mean shift tracker is used most often because it is simple to implement and consumes little computational time. However, mean shift trackers exhibit several limitations when used for long-term tracking. In challenging conditions that include occlusions, pose variations, scale changes, and illumination changes, the mean shift tracker does not work well. In this paper, an improved tracking algorithm based on a mean shift tracker is proposed to overcome the weaknesses of existing mean shift tracker-based methods. The main contributions of this paper are to integrate the mean shift tracker with an online learning-based detector and to define a new Kalman filter-based validation region that reduces the computational burden of the detector. We combine the mean shift tracker with the online learning-based detector, and integrate the Kalman filter, to develop a novel tracking algorithm. The proposed algorithm can reinitialize the target when it converges to a local minimum, and it can cope with scale changes, occlusions and appearance changes by using the online learning-based detector. It updates the target model for the tracker in order to ensure long-term tracking. Moreover, the validation region obtained by using the Kalman filter and the Mahalanobis distance is used in order to operate the detector in real time. 
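The Kalman-filter validation region amounts to a Mahalanobis-distance gate around the predicted position: the detector runs only where the gate admits a measurement. A minimal sketch with an invented 2-D measurement model (the gate value 9.21 is the standard chi-square 99% quantile with 2 degrees of freedom):

```python
def mahalanobis2(z, pred, S):
    """Squared Mahalanobis distance of measurement z from prediction pred
    under a 2x2 innovation covariance S."""
    dx, dy = z[0] - pred[0], z[1] - pred[1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    return (dx * (inv[0][0] * dx + inv[0][1] * dy)
            + dy * (inv[1][0] * dx + inv[1][1] * dy))

def in_validation_region(z, pred, S, gate=9.21):
    """True when z falls inside the chi-square gate around the prediction."""
    return mahalanobis2(z, pred, S) <= gate

pred = (100.0, 50.0)                    # Kalman-predicted target position
S = [[25.0, 0.0], [0.0, 25.0]]          # innovation covariance (std dev 5 px)
print(in_validation_region((108.0, 53.0), pred, S))   # True: inside the gate
print(in_validation_region((160.0, 90.0), pred, S))   # False: far outside
```

Restricting the detector's search window to this ellipse is what makes running the learning-based detector feasible in real time.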
Through a comparison against various mean shift tracker-based methods and other state-of-the-art methods on eight challenging video sequences, we demonstrate that the proposed algorithm is efficient and superior in terms of accuracy and speed. Hence, it is expected that the proposed method can be applied to various applications that need to detect and track an object in real time. (C) 2017 Elsevier Ltd. All rights reserved. Mining periodic patterns in time series databases is a challenging research task that plays a significant role in decision making in real-life applications. There are many algorithms for mining periodic patterns in time series in which all patterns are treated as uniformly important. However, in real-life applications, such as market basket analysis, gene analysis and network fault experiments, different types of items are found with several levels of importance. Moreover, the existing algorithms generate a huge number of periodic patterns in dense databases or at low minimum support, where most of the patterns are not important enough to participate in decision making. Hence, a pruning mechanism is essential to reduce these unimportant patterns. With the aim of mining only important patterns in minimal time, we propose a weight-based framework that assigns different weights to different items. Moreover, we develop a novel algorithm, WPPM (Weighted Periodic Pattern Mining Algorithm), for time series databases, built on an underlying suffix trie structure. To the best of our knowledge, ours is the first proposal that can mine three types of weighted periodic patterns (i.e. single, partial, and full) in a single run. A pruning method is introduced, following the downward closure property with respect to the maximum weight of a given database, to discard unimportant patterns. The proposed algorithm offers flexibility to the user by allowing unimportant intermediate patterns to be skipped and different starting positions to be set in the time series sequence. 
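Weighted periodic support, the kind of quantity such pruning operates on, can be sketched for single patterns as follows. The '*' wildcard notation for partial patterns and the average-weight scaling are illustrative conventions, not the paper's exact definitions:

```python
def periodic_support(seq, pattern, period):
    """Fraction of full periods in seq that match the pattern, where '*'
    matches anything (a partial periodic pattern)."""
    cycles = len(seq) // period
    hits = sum(all(p == "*" or p == seq[c * period + i]
                   for i, p in enumerate(pattern))
               for c in range(cycles))
    return hits / cycles if cycles else 0.0

def weighted_support(seq, pattern, period, weights):
    """Support scaled by the average weight of the pattern's concrete symbols;
    low-weight (unimportant) patterns can then be pruned early."""
    syms = [p for p in pattern if p != "*"]
    w = sum(weights[s] for s in syms) / len(syms) if syms else 0.0
    return periodic_support(seq, pattern, period) * w

seq = "abcabcabdabc"
weights = {"a": 1.0, "b": 0.4, "c": 0.9, "d": 0.1}
print(weighted_support(seq, "a**", 3, weights))   # 'a' recurs in every period
print(weighted_support(seq, "**c", 3, weights))   # 'c' appears in 3 of 4 periods
```

A pattern whose weighted support cannot reach the threshold even at the database's maximum weight can be discarded without extending it, which is the downward-closure pruning idea.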
The performance of our proposed algorithm is evaluated on real-life datasets by varying different parameters. At the same time, a comparison between the proposed and an existing algorithm is shown, where the proposed approach outperforms the existing algorithm in terms of time and pattern generation. (C) 2017 Elsevier Ltd. All rights reserved. Point-of-interest (POI) recommender systems encourage users to share their locations and social experience through check-ins in online location-based social networks. A recent algorithm for POI recommendation takes into account both location relevance and diversity. The relevance measures users' personal preference, while the diversity considers location categories. There exists a dilemma in weighting these two factors in the recommendation. The location diversity is weighted more when a user is new to a city and expects to explore the city in the new visit. In this paper, we propose a method to automatically adjust the weights according to the user's personal preference. We focus on investigating a function between the number of location categories and a weight value for each user, where the Chebyshev polynomial approximation method using binary values is applied. We further improve the approximation by exploring the similar behavior of users within a location category. We conduct experiments on five real-world datasets, and show that the new approach can strike a good balance in weighting the two factors, thereby providing better recommendations. (C) 2017 Published by Elsevier Ltd. Today, graphology is seen as an experimental field of science dedicated to suggesting ideas about the diseases, profession choices, mood and characteristics of a person by investigating his/her handwriting. Graphology operates in cooperation with medicine, psychology, sociology and other disciplines that are based on observation. 
Graphology is used for staff recruitment in business, diagnoses in medicine, identification of criminals in forensics, choosing a profession in education, guidance and counseling, and other practices at every level of social structure. It is quite interesting that the number of scientific studies on graphology around the world is limited, that there are no specific institutions providing education in graphology, and that, except for a few international corporations, institutions do not benefit from graphology at all. In terms of demographic properties, many statistical and mathematical analyses investigate similar and different variables. In particular, differences regarding gender have become a subject of research. Therefore, detecting gender through handwriting can accelerate research in other disciplines. Moreover, the research can be useful in any field where gender detection is needed. This study fulfills two objectives. The first one is to find out whether a writer can identify his/her own handwriting. The second objective is to detect the gender of the writer of a text with the help of graphology and computer science. The impact of the study lies in the fact that the findings can be used in fields where gender detection is needed, and that the detection is done with the help of expert and intelligent systems. In the study, gender detection was performed for the individuals by making use of 133 attributes. Then, a decision tree and lists of rules were created with several algorithms. The purpose was to detect the gender of the person by making a character analysis of the handwriting with the help of decision tree formation methods in data mining. The analysis showed that it is possible to detect the gender of a person with the use of the specified attributes. The study reached a success level of 93.75% with the ID3 algorithm. (C) 2017 Elsevier Ltd. All rights reserved. 
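The splitting criterion behind ID3 is information gain; a minimal sketch on invented handwriting-style attributes (the slant/letter-size features and labels below are illustrative, not the study's 133 attributes):

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(rows, labels, attr_index):
    """ID3's criterion: entropy reduction from splitting on one attribute."""
    base = entropy(labels)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(lab)
    n = len(labels)
    remainder = sum(len(part) / n * entropy(part)
                    for part in by_value.values())
    return base - remainder

# Toy handwriting attributes: (slant, letter_size) -> gender label.
rows = [("right", "small"), ("right", "large"),
        ("left", "small"), ("left", "large")]
labels = ["F", "F", "M", "M"]
print(information_gain(rows, labels, 0))   # slant separates perfectly: 1.0
print(information_gain(rows, labels, 1))   # letter size is uninformative: 0.0
```

ID3 builds the tree by repeatedly choosing the attribute with the highest gain, which is how the 133 handwriting attributes would be ranked.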
This research is focused on the prediction of ICU readmissions using fuzzy modeling and feature selection approaches. There are a number of published scores for assessing the risk of readmission, but their poor predictive performance renders them unsuitable for implementation in the clinical setting. In this work, we propose the use of feature engineering and advanced computational intelligence techniques to improve the performance of current models. In particular, we propose an approach that relies on transforming raw vital signs, laboratory results and demographic information into more informative pieces of data, selecting a subset of relevant and non-redundant variables, and applying fuzzy ensemble modeling to the feature-engineered data to derive important nonlinear relations between variables. Different criteria for selecting the best predictor from the ensemble, as well as novel evaluation measures, are explored. In particular, the area under the sensitivity curve and the area under the specificity curve are investigated. The ensemble approach combined with feature transformation and feature selection showed increased performance, being able to predict early readmissions with an AUC of 0.77 +/- 0.02. To the best of our knowledge, this is the first computational intelligence technique allowing the prediction of readmissions on a daily basis. The high balance between sensitivity and specificity shows its strength and suitability for the management of the patient discharge decision-making process. (C) 2017 Elsevier Ltd. All rights reserved. The arrival of new technologies related to smart grids and the resulting ecosystem of applications and management systems pose many new problems. The databases of the traditional grid and the various initiatives related to new technologies have given rise to many different management systems with several formats and different architectures. 
A heterogeneous data source integration system is necessary to update these systems for the new smart grid reality. Additionally, it is necessary to take advantage of the information smart grids provide. In this paper, the authors propose a heterogeneous data source integration system based on IEC standards and metadata mining. Additionally, an automatic data mining framework is applied to model the integrated information. (C) 2017 Elsevier Ltd. All rights reserved. An unequal area facility layout problem (UA-FLP) is a typical optimization problem that occurs when constructing an efficient layout within given areas. In this research, a harmony search (HS)-based heuristic algorithm is presented to solve UA-FLPs. In this study, the facility layout is represented as an allocation of blocks with restrictions in terms of unequal areas and rectangular shape. A more effective facility layout representation is proposed. This is done via a slicing tree representation as a form of layout structure, and via the HS-based algorithm, which generates a quality solution. Once the basic HS solution is generated, modifications are introduced to facilitate improvements. Specifically, the structure of the slicing tree representation is modified, and a re-adjustment operation is added to diversify the possible range of solutions. A penalty scheme is also proposed to improve the feasible-region searching capabilities. The effects of the alterations are evaluated by testing well-known problems from previous studies. The proposed algorithm generates solutions as proficiently as the best results provided by previous research. The proposed method is robust in terms of process, and it determines a favorable solution within a short amount of time. (C) 2017 Elsevier Ltd. All rights reserved. Hybridization of two or more algorithms has long been of keen research interest owing to the resulting improvement in search capability. 
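The core improvisation loop of harmony search, the metaheuristic behind the UA-FLP study above, maintains a memory of candidate solutions and improvises new ones from it using the standard HMS/HMCR/PAR parameters. The sketch below runs on a toy continuous objective; a real UA-FLP application would instead score a slicing-tree layout, and all parameter values here are illustrative only:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=1):
    """Minimal harmony search: keep a memory (size hms) of candidates,
    improvise new ones coordinate by coordinate, and replace the worst
    member whenever the improvisation improves on it."""
    rng = random.Random(seed)
    dim = len(bounds)
    memory = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                              # random consideration
                x = rng.uniform(*bounds[d])
            lo, hi = bounds[d]
            new.append(min(max(x, lo), hi))
        worst = max(memory, key=objective)
        if objective(new) < objective(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=objective)

# Toy stand-in for a layout cost function, minimized at the origin
best = harmony_search(lambda v: sum(x * x for x in v), [(-5, 5)] * 2)
print(sum(x * x for x in best))  # small residual near the optimum
```

The study's additions (modified slicing-tree structure, a re-adjustment operation, and a penalty scheme for infeasible layouts) would slot into the improvisation and objective steps of this loop.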
By taking the positive insights of both algorithms, the resulting hybrid algorithm tries to minimize their substantial limitations. Clustering is an unsupervised learning method, which groups data according to their similar or dissimilar properties. Fuzzy c-means (FCM) is one of the most popular clustering algorithms and performs better than other clustering techniques such as k-means. However, FCM possesses certain limitations, such as premature trapping at local minima and high sensitivity to cluster center initialization. Taking these issues into consideration, this research proposes a novel hybrid approach of FCM with a recently developed chemical-based metaheuristic for obtaining optimal cluster centers. The performance of the proposed approach is compared in terms of cluster fitness values, inter-cluster distance and intra-cluster distance with other evolutionary and swarm optimization based approaches. Rigorous experiments reveal that the proposed hybrid approach performs better than the other approaches. (C) 2017 Elsevier Ltd. All rights reserved. Existing methods for extracting titles from HTML web pages mostly rely on visual and structural features. However, this approach fails in the case of service-based web pages because advertisements are often given more visual emphasis than the main headlines. To improve the current state-of-the-art, we propose a novel method that combines statistical features, linguistic knowledge, and text segmentation. Using an annotated English corpus, we learn the morphosyntactic characteristics of known titles and define part-of-speech tag patterns that help to extract candidate phrases from the web page. To evaluate the proposed method, we compared two datasets, Titler and Mopsi, and evaluated the extracted features using four classifiers: Naive Bayes, k-NN, SVM, and clustering. 
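The FCM algorithm hybridized in the clustering study above alternates between two updates: recompute each point's cluster memberships from its distances to the centers, then recompute each center as a membership-weighted mean. A minimal 1-D sketch with hypothetical data (the study's metaheuristic-based center initialization is not shown; simple min/max initialization is used instead):

```python
def fcm(points, c=2, m=2.0, iters=50):
    """Fuzzy c-means in 1-D: alternate membership and center updates.
    m is the fuzzifier; memberships use inverse-distance weighting."""
    centers = [min(points), max(points)]  # simple deterministic initialization
    for _ in range(iters):
        # Membership of point i in cluster j
        u = []
        for p in points:
            d = [max(abs(p - v), 1e-12) for v in centers]  # avoid divide-by-zero
            u.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for j in range(c)])
        # Centers: membership-weighted means
        centers = [sum((u[i][j] ** m) * points[i] for i in range(len(points)))
                   / sum(u[i][j] ** m for i in range(len(points)))
                   for j in range(c)]
    return sorted(centers)

# Two well-separated hypothetical groups
data = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
print(fcm(data))  # centers settle near the two group means, 1.0 and 9.0
```

The sensitivity to initialization that the study targets shows up exactly in the first line of this loop: a poor starting choice of centers can trap the alternation in a local minimum.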
Experimental results show that the proposed method outperforms the solution used by Google, improving from 0.58 to 0.85 on the Titler corpus and from 0.43 to 0.55 on the Mopsi dataset, and offers a readily available solution for the title extraction problem. (C) 2017 Elsevier Ltd. All rights reserved. Recommendation is the process of identifying and recommending items that are more likely to be of interest to a user. Recommender systems have been applied in a variety of fields, including e-commerce web pages, to increase sales by making relevant recommendations to users. In this paper, we pose the problem of recommendation as an interpolation problem, which is not a trivial task due to the high dimensional structure of the data. Therefore, we deal with the issue of high dimension by representing the data with lower dimensions using a High Dimensional Model Representation (HDMR) based algorithm. We combine this algorithm with the collaborative filtering philosophy to make recommendations using an analytical structure as the data model based on the purchase history matrix of the customers. The proposed approach is able to produce a recommendation score for each item that has not been purchased by a customer, which extends the power of classical recommendations. Rather than using benchmark data sets for experimental assessments, we apply the proposed approach to a novel industrial data set obtained from an e-commerce web page in the apparel domain to present its potential as a recommendation system. We test the accuracy of our recommender system against several pioneering methods in the literature. The experimental results demonstrate that the proposed approach makes recommendations that are of interest to users and shows better accuracy compared to state-of-the-art methods. (C) 2017 Elsevier Ltd. All rights reserved. Surface EMGs have been the primary sources for control of prosthetic hands due to their comfort and naturalness. 
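The recommendation study above scores unpurchased items from a purchase-history matrix. As a much simpler stand-in for its HDMR construction, a plain item-based collaborative-filtering score over a hypothetical purchase matrix illustrates the same input/output shape:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend_scores(history, user):
    """Score each item the user has not bought by how similar its
    purchase column is to the columns of items the user did buy."""
    n_items = len(history[0])
    cols = [[row[j] for row in history] for j in range(n_items)]
    bought = [j for j in range(n_items) if user[j]]
    return {j: sum(cosine(cols[j], cols[k]) for k in bought)
            for j in range(n_items) if not user[j]}

# Hypothetical purchase-history matrix (rows = customers, columns = items)
history = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
user = [1, 0, 0, 0]  # new customer bought only item 0
scores = recommend_scores(history, user)
print(max(scores, key=scores.get))  # item 1: it co-occurs most with item 0
```

The HDMR approach replaces this pairwise-similarity scoring with an analytical low-dimensional decomposition of the same matrix, which is what lets it interpolate scores for every unpurchased item.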
Recent advances in the development of prosthetic hands with many degrees of freedom and many actuators require many EMG channels to take full advantage of these complex prosthetic terminals. Several wearable EMG devices able to detect multiple gestures have been developed recently. However, the main drawbacks of these systems are cost, size, and system complexity. In this paper, we suggest a simple, fast and low-cost system which can recognize up to 4 gestures with a single-channel surface EMG signal. Gestures include hand closing, hand opening, wrist flexion and double wrist flexion. These gestures can be used to control a prosthetic terminal based on predefined grasp postures. We show that by using a high-dimensional feature space, together with a support vector machine algorithm, it is possible to classify these four gestures. Overall, the system showed satisfactory results in terms of classification accuracy, real time gesture recognition, and tolerance to hand movements through integration of a lock gesture. Calibration took only 30 seconds, and session independence was demonstrated by high classification accuracy on different test sessions without repeating the calibration. As a case study we use this system to control a previously developed soft prosthetic hand. This is particularly interesting because we show that simple hardware with only a single EMG channel can afford the control of a multi-DOF prosthetic hand. In addition, such a system may be used as a general purpose Human Machine Interface for gaming, for controlling multimedia devices, or to control robots. (C) 2017 Elsevier Ltd. All rights reserved. Passenger car equivalents for heavy vehicles are required to carry out capacity calculations and perform operational analysis of any road entity (roadway segments or intersections). 
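The gesture-recognition pipeline above extracts a feature vector from each EMG signal window and classifies it with an SVM. As a minimal stand-in, the sketch below uses two classic time-domain EMG features (mean absolute value and zero crossings) and a nearest-centroid classifier instead of an SVM; the windows and gesture names are hypothetical:

```python
def features(window):
    """Two classic time-domain EMG features for one signal window."""
    mav = sum(abs(x) for x in window) / len(window)               # mean absolute value
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)  # zero crossings
    return (mav, zc)

def train_centroids(samples):
    """Per-gesture centroid of the training feature vectors."""
    groups = {}
    for label, window in samples:
        groups.setdefault(label, []).append(features(window))
    return {g: tuple(sum(col) / len(col) for col in zip(*fs))
            for g, fs in groups.items()}

def classify(centroids, window):
    """Assign the gesture whose centroid is nearest in feature space."""
    f = features(window)
    return min(centroids,
               key=lambda g: sum((a - b) ** 2 for a, b in zip(centroids[g], f)))

# Hypothetical windows: 'open' = low-amplitude oscillation, 'close' = strong burst
train = [("open", [0.1, -0.1, 0.1, -0.1]), ("open", [0.2, -0.2, 0.1, -0.1]),
         ("close", [1.0, 0.9, 1.1, 1.0]), ("close", [0.9, 1.1, 0.8, 1.2])]
model = train_centroids(train)
print(classify(model, [0.15, -0.1, 0.12, -0.14]))  # classified as "open"
```

A real single-channel system would compute many such features per window (hence the paper's high-dimensional feature space) and train an SVM on the 30-second calibration data instead of centroids.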
At single-lane roundabouts, the constraints on vehicular trajectories imposed by the curvilinear geometric design and the driver's gap acceptance behaviour are expected to produce an impact of heavy vehicles on the quality of traffic flow different from that produced on freeways and two-lane highways or other at-grade intersections. This is also because entering flow is opposed by the circulating flow, which has priority and travels in an anticlockwise direction on a single-lane path around the central island. This paper addresses the question of how to estimate the passenger car equivalents for heavy vehicles on single-lane roundabouts. First, a comparison was performed between the empirical capacity functions based on a meta-analytic estimation of the critical and follow-up headways and the simulation outputs manually obtained for a single-lane roundabout built in the Aimsun microscopic simulator. A genetic-algorithm-based calibration procedure was therefore used to reach a better convergence between the simulation outputs and the empirical capacities. Based on the calibrated model, the passenger car equivalents were determined by comparing the capacity functions built for a fleet of passenger cars with the capacity functions calculated for different percentages of heavy vehicles. Unlike HCM 2010, which assumes a heavy vehicle to be equivalent to two passenger cars and sets the passenger car equivalent for heavy vehicles at roundabouts to 2.0, a higher PCE effect would be expected on the quality of traffic conditions when the traffic stream contains a high number of heavy vehicles; this effect should be accounted for when calculating capacity and level of service. (C) 2017 Elsevier Ltd. All rights reserved. The simulation and analysis of structure within a decision-making unit (DMU) is the basis on which network data envelopment analysis (DEA) opens the "black box" and evaluates the efficiency of systems with complex internal structure. 
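The roundabout study above derives PCEs by comparing capacities for an all-car fleet against capacities for mixed fleets. One common capacity-based PCE formula from the literature (not necessarily the study's exact estimator) asks how many passenger cars each heavy vehicle must count as so that the mixed stream matches the base stream; the capacities below are hypothetical:

```python
def passenger_car_equivalent(base_capacity, mixed_capacity, heavy_share):
    """Capacity-based PCE: E = 1 + (C_base / C_mixed - 1) / p,
    where p is the proportion of heavy vehicles in the mixed stream.
    E is the number of passenger cars one heavy vehicle displaces."""
    return 1 + (base_capacity / mixed_capacity - 1) / heavy_share

# Hypothetical single-lane roundabout capacities (veh/h), 10% heavy vehicles
pce = passenger_car_equivalent(base_capacity=1100, mixed_capacity=980,
                               heavy_share=0.10)
print(round(pce, 2))  # about 2.22, above the HCM 2010 fixed value of 2.0
```

This makes the abstract's point concrete: if heavy vehicles depress capacity by more than the fixed HCM assumption implies, the computed PCE exceeds 2.0 and grows with the heavy-vehicle share.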
Efficiency measurement of systems with sub-DMUs in series or in parallel covers two common cases in the theory development and applications of two-stage DEA. However, research on parallel-series hybrid systems remains limited. This paper develops a set of DEA models to treat a two-stage system comprised of three sub-DMUs in hybrid form with additional inputs to the second stage. The proposed models simulate precisely the system's parallel-series internal structure, employ additive and multiplicative DEA approaches synthetically to estimate and decompose the efficiencies of the system, and adopt a heuristic method to convert the nonlinear program arising from the additional inputs into a linear program. This approach gives more information about the sources of inefficiency by penetrating into the depth of the system and modeling the efficiency formation mechanism. A model application is provided. (C) 2017 Elsevier Ltd. All rights reserved. The applications of artificial intelligence (AI) have considerably expanded over recent years. A new class of industrial systems is beginning to evolve that uses high-volume data and advanced analytics to better optimize product quality while reducing energy consumption. Artificial neural networks (ANNs), when combined with advanced modeling and control, begin to form an AI platform that can be further enhanced for factories of the future. This paper provides a demonstration of such initial work that can be further developed for future systems in a generic way. When considering polymer processing such as plastic injection molding, the mold cavity temperature (MCT) profile directly relates to part quality and part reject rates. Therefore, it is desirable to optimize the mold cooling process using real-time control of MCT, as it directly affects part quality. However, MCT is affected by a number of interacting nonlinear dynamic parameters that are often neglected due to the challenge of quantifying such parameters. 
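The DEA models in the two-stage study above require linear programming once multiple inputs, outputs, and stages interact. In the degenerate single-input, single-output case, however, the basic CCR efficiency reduces to each unit's output/input ratio relative to the best ratio, which makes the underlying idea easy to see; the decision-making units below are hypothetical:

```python
def dea_single_ratio(inputs, outputs):
    """Efficiency with one input and one output: each DMU's
    output/input ratio normalized by the best observed ratio
    (the no-LP special case of the CCR model)."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical decision-making units
inputs  = [10, 20, 30]
outputs = [ 8, 20, 18]
print([round(e, 2) for e in dea_single_ratio(inputs, outputs)])  # [0.8, 1.0, 0.6]
```

Network DEA, as in the study, replaces this single ratio with coupled programs over the sub-DMUs, which is what lets it attribute inefficiency to specific stages inside the "black box".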
Advanced model-based control algorithms are often used to provide improved control of complex systems. However, they depend on good model formulations, and purely analytical formulations are often insufficient. An online intelligent system identification approach for the mold cooling process is developed and tested. An ANN is designed to adjust online the sub-space parameters that govern a mold cooling model. Results demonstrate that this online ANN approach can be used to accurately predict the dynamic behavior of mold cavity surface temperature. This is key for many industrial systems whose states are not directly observable and whose uncertainties are unknown. The methodology can be readily adapted for different operating conditions, as in this case of polymer processing, and has good potential for integration with advanced model-based control schemes and cloud computing approaches for the next generation of machines. (C) 2017 Elsevier Ltd. All rights reserved. The myriad controversies embroiling the mental health field-heightened in the lead-up to the release of DSM-5 (2013)-merit a close analysis of the field and its epistemological underpinnings. Using the DSM as a starting point, this paper develops an overview of the entire mental health field. Beginning with a history of the field and its recent crises, the troubles of the past "external crisis" are compared to the contemporary "internal crisis." In an effort to examine why crises have recurred, the internal dynamics of the field are assessed: applying Kuhn's paradigmatic framework, crises are appraised to situate the differences between the natural sciences and the mental health field. Next, a Foucauldian analysis examines the functioning of the field's power over the body, which is disproportionate in comparison to its scientific grounding. 
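The online identification idea in the mold-cooling study above, adapting model parameters from streaming measurements, can be shown in miniature by adapting a single gain with gradient steps on the prediction error; an ANN adapts many weights in exactly the same fashion. The process gain and samples below are hypothetical:

```python
def online_identify(samples, lr=0.1):
    """Online identification in miniature: adapt one gain w so that
    the prediction y_hat = w * u tracks the observed output y,
    one (input, output) sample at a time."""
    w = 0.0
    for u, y in samples:
        y_hat = w * u
        w += lr * (y - y_hat) * u  # gradient step on the squared prediction error
    return w

# Hypothetical process with true gain 2.0, sampled over repeated input cycles
data = [(u, 2.0 * u) for u in [1.0, 0.5, 1.5, 1.0, 0.8, 1.2] * 10]
w = online_identify(data)
print(round(w, 2))  # converges toward the true gain 2.0
```

Because the update runs per sample, the estimate keeps tracking if the true gain drifts, which is the property that makes such identification useful under changing operating conditions.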
This is followed by investigating the field's combination of contested scientific grounding and significant power, through a Latourian consideration of the assumptions and meaning behind the mental health field's deployment of science. This includes scrutinizing the history of the classification of Post-Traumatic Stress Disorder (PTSD). The paper closes by assessing the field's potential to address these issues effectively. An important task in mathematical sciences is to make quantitative predictions, which is often done via the solution of differential equations. In this paper, we investigate why, to perform this task, scientists sometimes choose to use numerical methods instead of analytical solutions. Via several examples, we argue that the choice for numerical methods can be explained by the fact that, while making quantitative predictions seems at first glance to be facilitated by analytical solutions, this is actually often much easier with numerical methods. Thus we challenge the widely presumed superiority of analytical solutions over numerical methods. At least since Kuhn's Structure, philosophers have studied the influence of social factors in science's pursuit of truth and knowledge. More recently, formal models and computer simulations have allowed philosophers of science and social epistemologists to dig deeper into the detailed dynamics of scientific research and experimentation, and to develop very seemingly realistic models of the social organization of science. These models purport to be predictive of the optimal allocations of factors, such as diversity of methods used in science, size of groups, and communication channels among researchers. In this paper we argue that the current research faces an empirical challenge. The challenge is to connect simulation models with data. We present possible scenarios about how the challenge may unfold. 
In the recent philosophy of explanation, a growing attention to and discussion of non-causal explanations has emerged, as there seem to be compelling examples of non-causal explanations in the sciences, in pure mathematics, and in metaphysics. I defend the claim that the counterfactual theory of explanation (CTE) captures the explanatory character of both non-causal scientific and metaphysical explanations. According to the CTE, scientific and metaphysical explanations are explanatory by virtue of revealing counterfactual dependencies between the explanandum and the explanans. I support this claim by illustrating that CTE is applicable to Euler's explanation (an example of a non-causal scientific explanation) and Loewer's explanation (an example of a non-causal metaphysical explanation). The epistemology of studies addressing questions about historical and prehistorical phenomena is a subject of increasing discussion among philosophers of science. A related field of inquiry that has yet to be connected to this topic is the epistemology of climate science. Branching these areas of research, I show how variety-of-evidence reasoning accounts for scientific inferences about the past by detailing a case study in paleoclimate reconstruction. This analysis aims to clarify the logic of historical inquiry in general and, by focusing on a case study about climate change, it offers an epistemic account of a particular discipline that is of environmental and social importance. We investigate how Dutch Book considerations can be conducted in the context of two classes of nonclassical probability spaces used in philosophy of physics. In particular we show that a recent proposal by B. Feintzeig to find so called "generalized probability spaces" which would not be susceptible to a Dutch Book and would not possess a classical extension is doomed to fail. 
Noting that the particular notion of a nonclassical probability space used by Feintzeig is not the most common employed in philosophy of physics, and that his usage of the "classical" Dutch Book concept is not appropriate in "nonclassical" contexts, we then argue that if we switch to the more frequently used formalism and use the correct notion of a Dutch Book, then no probability space is susceptible to a Dutch Book. We also settle a hypothesis regarding the existence of classical extensions of a class of generalized probability spaces. If there is a 'platonic world' of mathematical facts, what does it contain, precisely? I observe that if it is too large, it is uninteresting, because the value is in the selection, not in the totality; if it is smaller and interesting, it is not independent of us. Both alternatives challenge mathematical platonism. I suggest that the universality of our mathematics may be a prejudice and illustrate contingent aspects of classical geometry, arithmetic and linear algebra, making the case that what we call "mathematics" is always contingent. This paper elaborates on relationalism about space and time as motivated by a minimalist ontology of the physical world: there are only matter points that are individuated by the distance relations among them, with these relations changing. We assess two strategies to combine this ontology with physics, using classical mechanics as an example. The Humean strategy adopts the standard, non-relationalist physical theories as they stand and interprets their formal apparatus as the means of bookkeeping of the change of the distance relations instead of committing us to additional elements of the ontology. The alternative theoretical strategy seeks to combine the relationalist ontology with a relationalist physical theory that reproduces the predictions of the standard theory in the domain where these are empirically tested. 
We show that, as things stand, this strategy cannot be accomplished without compromising a minimalist relationalist ontology. The core idea of statistical accounts of biological functions is that to function normally is to provide a statistically typical contribution to some goal state of the organism. In this way, statistical accounts purport to naturalize the teleological notion of function in terms of statistical facts. Boorse's (Philosophy of Science, 44(4), 542-573, 1977) original biostatistical account was criticized for failing to distinguish functions from malfunctions. Recently, many have attempted to circumvent the criticism (Boorse, Journal of Medicine and Philosophy, 39, 683-724, 2014; Kraemer, Biology and Philosophy, 28, 423-438, 2013; Garson and Piccinini, The British Journal for the Philosophy of Science, 65, 1-20, 2014; Hausman, Philosophy of Science, 79(4), 519-541, 2012, Journal of Medicine and Philosophy, 39, 634-647, 2014). Here, I review such attempts and find them inadequate. The reason, ultimately, is that functional attribution depends on how traits would behave in relevant situations, a condition that resists statistical characterizations in terms of how they typically behave. This, I conclude, undermines the attempt to naturalize functions in statistical terms. Among philosophical analyses of Darwin's Origin, a standard view says the theory presented there had no concrete observational consequences against which it might be checked. I challenge this idea with a new analysis of Darwin's principal geographical distribution observations and how they connect to his common ancestry hypothesis. Inference to the Best Explanation (IBE) is usually employed in the Scientific Realism debates. As far as particular scientific theories are concerned, its most ready usage seems to be that of a theory of confirmation. 
There are, however, more uses of IBE, namely as an epistemological theory of testimony and as a means of categorising and justifying the sources of evidence. In this paper, I will present, develop and exemplify IBE as a theory of the quality of evidence - taking examples from medicine and showing that IBE can thereby provide the epistemological underpinning and justify the criteria for grading the quality of mechanistic evidence that have recently been provided in the Clarke et al. (2014) paper on how evidence of medical mechanisms is to be construed alongside population studies. Evolutionary developmental biology (evo-devo) represents a paradigm shift in the understanding of the ontogenesis and evolutionary progression of the denizens of the natural world. Given the empirical successes of the evo-devo framework, and its now widespread acceptance, a timely and important task for the philosophy of biology is to critically discern the ontological commitments of that framework and assess whether and to what extent our current metaphysical models are able to accommodate them. In this paper, I argue that one particular model is a natural fit: an ontology of dispositional properties coherently and adequately captures the crucial causal-cum-explanatory role that the fundamental elements of evo-devo play within that framework. The significance of the genetic resources of plants, animals, poultry, and useful and harmful microorganisms can hardly be overestimated. Living organisms, carrying a specific set of genes of their own, guarantee life on the Earth. In Russia, the genetic resources of grains, fodder, fruit, and vegetable crops; cattle; sheep; horses; and poultry have been collected for many hundred years due to monastery lands, estates of progressive landowners, exchange of animals between governorates, and imports of cattle and horses. 
Gradually, the best samples were concentrated in scientific organizations and experimental enterprises and were used to obtain high indicators of productivity. The author draws attention to the mobilization, investigation, and use of genetic resources of cultivated plants to create state-of-the-art adaptive stress-resistant varieties and hybrids of various (grain, legume, oil-bearing, fruit, vegetable, feed, and medicinal) crops. It is emphasized that constant replenishing of the plant genetic pool and the rational use of plants in breeding are the basis for the country's food security. Priority tasks in research on genetic resources are outlined. As a result of recent basic research by teams of research institutions of the Federal Agency for Scientific Organizations (FASO Russia) in coordination with institutes of the Russian Ministry of Agriculture, the Russian Academy of Sciences, and regional pedigree enterprises, new breeding forms in cattle raising, hog farming, sheep farming, horse breeding, fishery, and poultry farming have been created and assimilated. Traditional breeds that ensure import substitution of genetic resources of animals necessary to intensify meat and dairy production have been improved. In recent decades, more than 40 new and improved breeding forms of animals have been created, providing for about two-thirds of the total gain of livestock products per year. Although metagenomics is a relatively new scientific trend, it has managed to become popular in many countries, including Russia, over its 20-year history. This division of molecular genetics studies ecosystem-extracted nucleic acids (DNA and RNA), which contain full information about the microbial community of a habitat. Owing to metagenomic methods, soil microbiology has undertaken to study not only known cultivated types of microorganisms but also noncultivated forms, the biological properties of which can be suggested exclusively from the genetic information coded in their DNA. 
It turns out that such "phantom" types constitute the overwhelming majority within soil microbial communities; to all appearances, they actively participate in ensuring soil fertility, and, hence, in the opinion of the authors of this paper, study of them is topical for both basic research and agricultural practice. The development of metagenomic technologies will help understand biological phenomena determined by close plant-microbe interactions, such as increasing the productivity of agricultural crops and protecting them against phytopathogens. However, the introduction of new methods has always presented difficulties; in metagenomics, they are associated with the acquisition, storage, and bioinformational analysis of a huge array of genetic information. State-of-the-art approaches to obtaining new plant varieties based on the potential of traditional breeding and the use of modern methods and achievements in genetics and genomics are considered. The opportunities and advantages of marker-assisted and genomic selection, as well as the importance of developing advanced methods in phenomics and genome editing, are discussed. Activity within the food and manufacturing industries is based on factors that affect the state of the food market and improve the quality of life. A subject of inquiry for scientists who work in this sphere is systems of transforming plant and animal raw materials into foods. In the course of production, the nutrition and energy value of foods, their safety for humans and the environment, and consumer properties are controlled. The success of studies is determined by the multidisciplinary approach, perfectly interfacing different scientific trends-medicine, biology, physics, chemistry, agriculture, etc. 
The efficiency of targeted breeding of new varieties of fruit and berry crops and grapevines with characters that meet modern cultivation technologies and consumer demand depends on the inclusion of donors and sources with prominent agronomic characters in hybridization. This paper emphasizes the importance of studying, preserving, and replenishing genetic collections to accelerate and increase breeding effectiveness and the high significance of donors and sources with identified genes responsible for the presence of characters in new genotypes. The results of improving the assortment of fruit and berry crops to reach a new level of fruit and berry production are shown. The results obtained by the research team of the All-Russia Research Institute of Selection and Seed Breeding for Vegetable Crops are presented. Specific examples demonstrate the efficacy of methods and technologies that create genetic resources for vegetable crops: the cytogenetic GISH and FISH methods, the doubled haploid technology in the culture of unpollinated ovules, and molecular marking used as a specific and varietal identifier of vegetable crops and for the study and identification of genes responsible for agronomic characters. The results of regular phytosanitary monitoring are considered separately: identifying the composition of pathogenic species not recorded previously in vegetable crops in individual regions and throughout the country in general and developing the scientific basis for the evaluation and selection of vegetable crops with a highly effective antioxidant system to design socially and economically important functional food products. The role of basic science in qualitative improvement of the national defense potential is considered. The exceptional strategic importance of the results obtained during basic research for military security and the necessity to expand their practical application for these purposes are shown. 
The authors propose to increase the efficiency of the organizational mechanism of control of defense-oriented basic research and discuss possible consequences of the reorganization of the Russian Academy of Sciences in terms of Russia's defense potential. Many new developments in laser medical equipment, as well as laser diagnostic technologies and treatment, are based on the results of basic research conducted at the Prokhorov Institute of General Physics, RAS. The introduction of these results into the clinical practice of Russian health care and their use in ophthalmology, neurosurgery, urology, cosmetology, dermatology, photodynamic therapy, autofluorescence and photodynamic diagnostics, and minimally invasive diagnostics of exhaled air are considered. The latest results of anthropological studies of bone remains from the earliest Upper Paleolithic burial discovered on Russian territory, the Markina Gora site (Kostenki 14), are described. Multivariate statistical methods and parallel studies of the buried skull structure and dentition established that their morphological characteristics undoubtedly belonged to the Caucasian complex. In combination with paleogenetic data, the findings contradict the earlier hypothesis of the southern origin of the Kostenki 14 individual and its similarity to the population of the Australo-Melanesian region. A brief history of reserve management and study in Russia is presented; the background of modern ideas of reserves that established various approaches to the formation of a network of specially protected natural territories is analyzed. The author holds to the classical ideas of reserves, which were formulated by domestic scientists V.V. Dokuchaev, I.P. Borodin, G.A. Kozhevnikov, and V.P. and A.P. Semenov-Tyan-Shanskii, and insists on the necessity to retain their status as "inviolable territories of virgin nature taken under protection for ever and ever." 
This article is dedicated to the Year of Specially Protected Natural Territories of Russia (2017), announced by a decree of the Russian President. The author calls for a moratorium on the withdrawal of conservation territories for economic, recreational, and tourist purposes. The problem of the effectiveness of external sanctions as an instrument of pressure on three states that once adhered to, but have since abandoned, the course of self-reliance is considered. The constructive base of this study is the idea of a conservative project that finds its imperative in reindustrialization. It is shown that the experience of new Russia has yielded the worst results, while China has demonstrated the best outcomes. Iran falls in between. Although the importance of primary schools in the long term is of interest in educational effectiveness research, few studies have examined the long-term effects of schools over the past decades. In the present study, the long-term effects of primary schools on the educational positions of students 2 and 4 years after starting secondary education are investigated. Moreover, it is examined which school factors play a role in this process. We specifically investigated whether effective primary schools make a difference in the long term. This study uses data from the longitudinal SiBO project, which followed 6,000 pupils in primary education in Flanders, Belgium, and has follow-up data during secondary education. Two-level models and cross-classified multilevel models show that primary schools have long-term effects on the educational positions of students in secondary education. This paper discusses methods for benchmarking vocational education and training colleges and presents results from a number of models. It is conceptually difficult to benchmark vocational colleges. The colleges typically offer a wide range of course programmes, and the students come from different socioeconomic backgrounds. 
We solve the comparability problem by focusing on effects in terms of retention and employment rates as opposed to the intermediate outcomes like grades. We neutralize cost differences using alternative cost measures. And we use detailed register data to account for student backgrounds. From a methodological point of view, we combine average methods (multilevel analysis) with frontier methods. We thus combine the key methods of school effectiveness research and school efficiency and productivity research. The analyses show that the efficiency of Danish vocational colleges varies considerably. Adopting best practices could lead to cost savings of between 9 and 33%. A key assumption of accountability policies is that educators will use data to improve their instruction. In practice, however, data use is quite hard, and more districts are looking to instructional coaches to support their teachers. The purpose of this descriptive analysis is to examine how instructional coaches in elementary and middle school science across 3 school districts worked with science teachers around data use. Science teachers and instructional coaches were interviewed about their data use practices and preferences. The findings highlight that coaches play diverse roles in supporting teachers, and that teachers' data use practices closely align with coaches' practices and preferences. The discussion concludes with implications for our understanding of data use in specific content areas and for future research. Although data-based decision making can lead to improved student achievement, data are often not used effectively in schools. This paper therefore focuses on conditions for effective data use. We studied the extent to which school organizational characteristics, data characteristics, user characteristics, and collaboration influenced data use for (1) accountability, (2) school development, and (3) instruction. 
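The frontier side of the benchmarking described above can be illustrated in its simplest form. The sketch below is a toy single-input/single-output DEA-style calculation under constant returns to scale; the college `costs` and `outcomes` values are hypothetical, not from the study:

```python
# Minimal frontier-benchmarking sketch (not the paper's actual model):
# with one input (cost) and one output (e.g., employment rate), constant-
# returns-to-scale DEA efficiency reduces to each college's productivity
# divided by the best observed productivity.

def dea_efficiency(costs, outcomes):
    """Return CRS efficiency scores in (0, 1] for single-input/single-output units."""
    productivities = [y / x for x, y in zip(costs, outcomes)]
    best = max(productivities)
    return [p / best for p in productivities]

# Hypothetical colleges: (annual cost per student, employment rate)
costs = [100.0, 120.0, 90.0]
outcomes = [0.80, 0.84, 0.81]

scores = dea_efficiency(costs, outcomes)
# The college with the highest outcome per unit cost scores 1.0;
# 1 - score is the proportional cost saving implied by best practice.
```

In the study's terms, a score of 0.7 would correspond to a potential cost saving of 30% if that college adopted best practice.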
The results of our hierarchical linear modeling (HLM) analysis from this large-scale quantitative study (N = 1073) show that, on average, teachers appear to score relatively high on data use for accountability and school development. Regarding instruction, however, several data sources are used only on a yearly basis. Among the factors investigated, school organizational characteristics and collaboration have the greatest influence on teachers' data use in schools. This study aims to investigate how teachers' trust in their students relates to reading comprehension achievement in socially and ethnically segregated elementary schools in Flanders (Belgium) by taking into account class composition characteristics. It is examined how student variables, ethnic diversity and the proportion of non-native students in the class, and teachers' trust in their students relate to reading comprehension achievement and learning growth. A 3-level multivariate repeated measures analysis was conducted. At 2 measurement occasions, reading tests and questionnaires were administered to a sample (n = 417) of 7- and 8-year-old students in 32 classes. Teachers' trust in their students was found to be a key factor relating to learning growth in reading comprehension, and mediated the relationship between the level of ethnic diversity in the class and learning growth. Teachers with a higher level of trust in their students seem to foster more learning growth in reading comprehension. Within comparative school effectiveness research facilitated by large-scale data across countries, this article presents the results of the testing for measurement invariance of the latent concept of Professional Community (PC) across 23 European countries and more than 35,000 teachers in secondary schools. 
The newly proposed Multiple-Group Factor Analysis Alignment method is applied to obtain the mean differences of the latent PC factor, while also identifying the non-invariant parameters in these different countries. After performing a number of robustness checks, including the Multiple-Group Confirmatory Factor Analysis (MGCFA) exact scalar method, we conclude that teachers in Eastern European countries report the highest average perceived PC practices. The countries with smaller PC mean differences tend to shift places in the ranking when the 2 methods are applied; however, very few changes among the groups with significant latent mean differences are observed. Policies about reducing class size have been implemented in the US and Europe in the past decades. Only a few studies have discussed the effects of class size at different levels of student achievement, and their findings have been mixed. We employ quantile regression analysis, coupled with instrumental variables, to examine the causal effects of class size on 4th-grade mathematics achievement at various quantiles. We use data from 14 European countries from the 2011 sample of the Trends in International Mathematics and Science Study (TIMSS). Overall, there are no systematic patterns of class-size effects across quantiles. Class-size effects are generally non-significant and uniform at different achievement levels, which suggests that in most European countries class-size reduction does not have an impact on student achievement and does not close the achievement gap. However, combined estimates across countries indicate that high achievers may benefit more from class-size reduction. This study examined the contribution of classroom format on teaching effectiveness and achievement in English language arts (ELA) and mathematics. Secondary data analyses of the Measures of Effective Teaching database included 464 US classrooms. 
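The quantile-regression step of the class-size analysis above can be sketched on simulated data, fitting a linear model at several quantiles by minimizing the pinball loss. This is only an illustration: the instrumental-variables correction used in the study is omitted, and all data values are made up:

```python
import numpy as np
from scipy.optimize import minimize

def quantile_regression(X, y, tau):
    """Fit a linear model at quantile tau by minimizing the pinball (check) loss."""
    X1 = np.column_stack([np.ones(len(y)), X])   # prepend an intercept column

    def pinball(beta):
        r = y - X1 @ beta
        return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

    beta0 = np.linalg.lstsq(X1, y, rcond=None)[0]   # OLS start, near the optimum
    res = minimize(pinball, beta0, method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 20000})
    return res.x

# Simulated data: achievement drops by 0.5 points per additional student
rng = np.random.default_rng(0)
class_size = rng.uniform(15, 30, 500)
score = 80.0 - 0.5 * class_size + rng.normal(0.0, 2.0, 500)

slopes = {tau: quantile_regression(class_size, score, tau)[1]
          for tau in (0.25, 0.5, 0.75)}
# Roughly equal slopes across tau indicate a uniform class-size effect
# across the achievement distribution, as reported for most countries.
```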
Classrooms were defined as self-contained if a generalist teacher provided instruction on all subjects and departmentalized if a specialist teacher provided instruction on a specific subject. Beginning-of-the-year classroom-level covariates were compared. Both ELA and mathematics self-contained classrooms had larger class sizes, served more students of color, served students with lower initial achievement, and had teachers with fewer years of teaching experience but who were more likely to hold a Master's degree. Regression models were used to determine if classroom format predicted teaching effectiveness and achievement while controlling for beginning-of-the-year classroom-level covariates. Departmentalization had a small positive association with higher teaching effectiveness ratings in ELA classes. Classroom format was not a significant predictor of achievement in ELA or math. This paper provides an overview of the National Jet Fuels Combustion Program led by the Federal Aviation Administration, the U.S. Air Force Research Laboratory, and NASA. The program builds on basic research funded by the U.S. Air Force Office of Scientific Research and on results from the engine-company-led Combustion Rules and Tools program funded by the U.S. Air Force. The overall objective of this fuels program was to develop combustion-related generic test and modeling capabilities that can improve the understanding of the impact of fuel chemical composition and physical properties on combustion, leading to accelerating the approval process of new alternative jet fuels. In this paper, the motivation and objectives for the work, participating universities, gas-turbine-engine companies, other federal agencies, and international partners are described. This paper provides an in-depth discussion on the benefits to the fuels approval process, the rationale in selecting conventional and alternative fuels to study, the referee rig used for fuel testing, and the modeling approaches. 
High-level results are also briefly discussed, and will be covered in detail in separate university-led papers. Lastly, an Appendix reviewing past programs, events, and workshops that laid the groundwork for this program is also included for reference. The background-oriented schlieren technique is applied to visualize Mach 2.5 isolator shock train structures in 3.0- and 6.0-aspect-ratio rectangular cross-section ducts. To optimize the background pattern, a parametric study is performed by using a 10 deg compression ramp calibration flowfield producing a reference density gradient. In conjunction with a multigrid cross-correlation algorithm, the optimized background pattern adheres to three classic particle image velocimetry design rules: 1) interrogation window width four times the expected particle displacement, 2) magnified particle diameter between 2 and 5 pixels, and 3) image density greater than 10. A modified Q-factor analysis and qualitative observations of the resolved density-gradient range suggest an optimal image density of 75 and a magnified particle diameter range of 2-5 pixels. Background-oriented schlieren visualization of the primary shock train element in both aspect ratio ducts shows the strongest density gradients to be located in the refracted shock region between the oblique shock intersection and the primary normal shock. A previous study confirms that this region is characterized by an increasing shock angle and Mach number as it approaches the duct centerline. To apply pressure-sensitive paint (PSP) to unsteady shock-wave phenomena such as shock-obstacle interactions in shock-tube experiments, anodized-aluminum PSP (AA-PSP) with ultrafast response was fabricated and its response time to a step pressure change was evaluated by using a shock tube. Phosphoric acid was used as the electrolyte in the anodization process, and anodic alumina with pore diameters as large as 160 nm was successfully fabricated. 
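The three background-pattern design rules quoted in the background-oriented schlieren study above can be expressed as a simple check. This is only an illustrative sketch; the function name and example inputs are hypothetical, while the numeric limits are the rule values from the text:

```python
# Sketch of the three classic PIV design rules applied to a BOS dot pattern
# (limits from the text: window >= 4x expected shift, dot diameter 2-5 px,
# image density > 10; the example numbers below are made up).

def check_bos_pattern(window_px, expected_shift_px, dot_diameter_px, image_density):
    """Return which of the three classic PIV design rules the pattern satisfies."""
    return {
        "window_4x_shift": window_px >= 4 * expected_shift_px,
        "dot_diameter_2_to_5": 2 <= dot_diameter_px <= 5,
        "density_over_10": image_density > 10,
    }

# Example: 32 px window, 6 px expected shift, 3 px dots, image density 75
rules = check_bos_pattern(32, 6, 3, 75)
```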
This improved AA-PSP achieved a response time constant of 0.35 μs. This AA-PSP was applied to interactions of a moving shock wave with a circular cylinder. The results show that the improved AA-PSP can visualize the shock reflections and the shock diffractions with the highest spatial and temporal resolution achieved to date. The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the local-correlation-based transition modeling concept represents a valid way to include transitional effects into practical computational fluid dynamics simulations. However, the model involves coefficients to be tuned to match the required application. In this paper, the γ-equation transition model is assessed and calibrated for a wide range of Reynolds numbers at low Mach numbers, as needed for wind turbine applications. Different airfoils are used to evaluate the original model and calibrate it, whereas a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected. A method based on the Delaunay graph mapping mesh movement is proposed to eliminate mesh sensitivity, which is required in the discrete adjoint optimization framework. The method makes use of a one-to-one explicit algebraic mapping between the volume and surface mesh nodes, for which the relative volume coefficients based on the Delaunay graph are computed only once. 
This procedure also results in a straightforward computation of the mesh sensitivity without the need to invert the large sparse matrix generally associated with implicit mesh movements such as the spring analogy, torsional spring, and linear elastic analogy. The advantage of the method comes from the fact that the solution of the second linear mesh adjoint system of equations is no longer required. The method has been verified for the two-dimensional RAE 5243 airfoil and for the three-dimensional ONERA M6 wing for two cases: 1) sensitivities of the objective function with respect to each surface mesh point as a design variable compared with those computed using finite differences, and 2) sensitivities against incidence as a design variable compared with those derived analytically. A comparison of the surface sensitivities with the mesh adjoint method using linear elasticity is also presented. Compressible large-eddy simulations combining high-order methods with a wall model have been developed in order to compute wall-bounded flows at high Reynolds numbers. The high-order methods consist of low-dissipation and low-dispersion implicit finite volume schemes for spatial discretization on structured grids. In the first part, the procedure used to apply these schemes in near-wall regions is presented. It is based on a ghost cell reconstruction. Its validity is assessed by performing the large-eddy simulation of a turbulent channel flow at a friction Reynolds number of Re-tau = 395. In the second part, to consider flows at higher Reynolds numbers, a large-eddy simulation approach using a wall model is proposed. The coupling between the wall model and the high-order schemes is described. The performance of the approach is evaluated by simulating a turbulent channel flow at Re-tau = 2000 using meshes with grid spacings of Delta-x+ = 100, 200, and 300 in the streamwise direction and Delta+ = 50, 100, and 150 in the wall-normal and spanwise directions (in wall units). 
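The one-to-one algebraic mapping at the heart of the Delaunay-graph mesh-movement method described above can be sketched in two dimensions: the barycentric (relative volume) coefficients of a mesh node within its containing graph element are computed once and then reused after the graph nodes move. A toy example with `scipy.spatial.Delaunay` (the point coordinates are made up, and the real method operates on 3D meshes):

```python
import numpy as np
from scipy.spatial import Delaunay

# Coarse "graph" nodes (2D here for clarity; the method itself works in 3D)
graph_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.2]])
tri = Delaunay(graph_pts)

node = np.array([0.25, 0.25])                       # a volume mesh node
simplex = int(tri.find_simplex(node[None])[0])      # containing triangle
T = tri.transform[simplex]                          # affine map to barycentric coords
b = T[:2] @ (node - T[2])
coeffs = np.append(b, 1.0 - b.sum())                # relative volume coefficients

# After a shape update the graph nodes move; the stored coefficients are reused
moved = graph_pts + np.array([0.1, 0.0])            # rigid shift as a simple test
new_node = coeffs @ moved[tri.simplices[simplex]]   # node follows the graph exactly
```

Because the mapping is an explicit linear combination, the mesh sensitivity is just these fixed coefficients, which is why no second mesh adjoint system needs to be solved.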
The effects of the choice of the point used for data exchange between the wall model and the large-eddy simulation algorithm, as well as of the positions of the ghost cells used for the coupling, are examined by performing additional computations in which these parameters vary. The results are in agreement with direct numerical simulation data. In particular, the turbulent intensities obtained in the logarithmic region of the boundary layers of the channel flow are successfully predicted. Experience gained from previous jet noise studies with the unstructured large-eddy simulation flow solver "Charles" is summarized and put to practice for the predictions of supersonic jets issued from a converging-diverging round nozzle. In this work, the nozzle geometry is explicitly included in the computational domain using an unstructured body-fitted mesh. Two different mesh topologies are investigated, with emphasis on grid isotropy in the acoustic source-containing region, either directly or through the use of adaptive refinement, with grid size ranging from 42 to 55 million control volumes. Three different operating conditions are considered: isothermal ideally expanded (fully expanded jet Mach number M-j = 1.5, temperature ratio T-j/T-infinity = 1, and Reynolds number Re-j = 300,000), heated ideally expanded (M-j = 1.5, T-j/T-infinity = 1.74, and Re-j = 155,000), and heated overexpanded (M-j = 1.35, T-j/T-infinity = 1.85, and Re-j = 130,000). Blind comparisons with the available experimental measurements carried out at the United Technologies Research Center for the same nozzle and operating conditions are presented. The results show good agreement for both the flow and sound fields. In particular, the spectra shape and levels are accurately captured in the simulations for both near-field and far-field noise. 
In these studies, sound radiation from the jet is computed using an efficient permeable formulation of the Ffowcs Williams-Hawkings equation in the frequency domain. Its parallel implementation is reviewed and parametric studies of the far-field noise predictions are presented. As an additional step toward best practices for jet aeroacoustics with unstructured large-eddy simulations, guidelines and suggestions for the mesh design, numerical setup, and acoustic postprocessing steps are discussed. A method is presented for imaging rotating broadband sources with a microphone array. The microphone array rotates virtually with the same angular frequency as the source such that the pressure data are mathematically transformed into a rotating reference frame and the influence of the moving source (Doppler shift) is compensated. In contrast with other works, this method works completely in the frequency domain by using the spherical harmonic series expansion method to calculate a modified free-space Green function in the rotating system. An analytical solution for the sound radiation from the rotating sound source, which considers motion compensation and a uniform subsonic axial flow, is deduced. Simulations on a rotating point source considering axial flow are performed. Measurements with a fan are examined for different frequencies with standard delay-and-sum beamforming and high-resolution algorithms. The nonlinear response of acoustic resonators is investigated over a broad range of frequencies and amplitudes. Helmholtz resonators with a symmetric neck and an asymmetric neck, respectively, as well as quarter-wave resonators are considered. Describing functions for impedance and the reflection coefficient of a Helmholtz resonator at various sound pressure levels are determined from compressible flow simulations and validated against experimental data. The particular focus of the present study is the nonlinear scattering to higher harmonics. 
For the Helmholtz resonator with a symmetric neck, a distinct pattern in the amplitudes of the higher harmonics is observed, where the odd harmonics dominate the response, whereas the even harmonics are almost negligible. Such an "odd-harmonics-only" pattern, which was previously observed in experiments on orifices, is explained by a quasi-steady analysis based on the Bernoulli equation, assuming a symmetric flow pattern at the neck. For the Helmholtz resonator with an asymmetric neck, it is observed in computational fluid dynamics simulations that even harmonics contribute noticeably to the resonator response, such that the odd-harmonics-only pattern is less pronounced. For the markedly asymmetric geometry of the quarter-wave resonator, the second harmonic is dominant and the odd-harmonics-only pattern vanishes completely. The quasi-steady analysis is extended successfully to also describe nonlinear scattering to higher harmonics for asymmetric configurations and flow patterns. Overall, the scattering to higher harmonics remains on a moderate level, even at very high excitation levels for the Helmholtz resonator configurations. For the quarter-wave resonator, the scattering is more pronounced and contributes perceptibly to the response at high excitation amplitudes. Thermoacoustic Helmholtz solvers provide a cheap and efficient way of predicting combustion instabilities. However, because they rely on the inviscid Euler equations at zero Mach number, they cannot properly describe the regions where aerodynamics may interact with acoustic waves, in the vicinity of dilution holes and injectors, for example. A methodology is presented to incorporate the effect of non-purely acoustic mechanisms into a three-dimensional thermoacoustic Helmholtz solver. 
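The quasi-steady Bernoulli argument for the odd-harmonics-only pattern can be checked numerically: for a symmetric neck the pressure loss scales as u|u|, and for sinusoidal u this signal contains only odd harmonics. A short sketch (illustrative only, not the study's compressible flow simulation):

```python
import numpy as np

# For a symmetric neck, the quasi-steady Bernoulli loss scales as u*|u|.
# Sample one period of u = sin(2*pi*t) and inspect the spectrum of u*|u|.
N = 4096
t = np.arange(N) / N
u = np.sin(2 * np.pi * t)
p = u * np.abs(u)                       # quasi-steady pressure loss

spec = np.abs(np.fft.rfft(p)) / N       # half-amplitudes of the harmonics
# Odd harmonics (bins 1, 3, 5, ...) are finite; even harmonics vanish,
# reproducing the odd-harmonics-only pattern for the symmetric geometry.
```

An asymmetric neck breaks the antisymmetry of p over a half-period, which is why even harmonics reappear in the asymmetric configurations.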
The zones where these mechanisms are important are modeled as two-port acoustic elements, and the corresponding matrices, which notably contain the dissipative effects due to acoustic-hydrodynamic interactions, are used as internal boundary conditions in the Helmholtz solver. The rest of the flow domain, where dissipation is negligible, is solved by the classical Helmholtz equation. With this method, the changes in eigenfrequency and eigenmode structure introduced by the acoustic-hydrodynamic effects are captured, while keeping the simplicity and efficiency of the Helmholtz solver. The methodology is successfully applied on an academic configuration, first with a simple diaphragm, then with an industrial swirler, with matrices measured from experiments and large-eddy simulation. A new attempt to measure the structure response generated by a turbulent boundary layer in a wind tunnel is proposed in this paper. To develop a scaling procedure that accommodates both the structural vibration and turbulent boundary-layer excitation for predicting the vibration response of a structure using a scaled model in a wind tunnel, a receptance method is used to extend the scaling procedure to a curved plate. Because the scaling equations are developed for a simply supported boundary and a clamped boundary, a modal analysis is used to confirm that the scaling equations are also applicable to a plate with unsupported boundary conditions. The characteristic parameters of the structural vibrations and the flow parameters for a full-scale object are both scaled. In addition, the scaling procedures consider the effects of the material used in the scaled model in predicting the vibrations of the full-scale structure. The scaling procedures are developed from theoretical methods, and numerical techniques and wind-tunnel measurements are used to validate the scaling procedures. 
The structural vibrations of a full-scale body are compared with the results obtained using the scaling procedure and the response of a scaled model to verify that the scaling procedures developed in this study are effective for predicting the structural response of full-scale bodies from wind-tunnel test data. This paper presents a numerical study of the transonic flow over a half wing-body configuration representative of a large civil aircraft. The Mach number is close to cruise conditions, whereas the high angle of attack causes strong separation on the suction side of the wing. Results indicate the presence of shock-wave oscillations inducing unsteady loads that can cause serious damage to the aircraft. Transonic shock buffet is found. Based on exploratory simulations using a baseline grid, the region relevant to the phenomenon is identified and mesh adaptation is applied to significantly refine the grid locally. Time-accurate Reynolds-averaged Navier-Stokes and delayed detached-eddy simulations are then performed on the adapted grid. Both types of simulation reproduce the unsteady flow physics, and much information can be extracted from the results when investigating frequency content, the location of unsteadiness, and its amplitude. Differences and similarities in the computational results are discussed in detail and are also analyzed phenomenologically with respect to recent experimental data. The impact of increased inflow turbulence on the sound generation in forward- and backward-skewed low-pressure axial fans was investigated using a microphone array method. A turbulence grid was mounted at the inlet section to increase the inflow turbulence intensity. The inflow conditions were then characterized using a laser Doppler anemometer. A beamforming algorithm with deconvolution was applied for localizing sound sources on the fans under free and distorted inflow conditions. 
Sound spectra and sound maps showed that distorted inflow conditions had a profound impact on the sound emission and that the extent of this was strongly dependent on the fan blade design. In general, sound sources were found on the fan blade surfaces and the fan blade leading edges. After the installation of the turbulence grid, sound sources on the leading edges were amplified, especially in the case of the forward-skewed fan. The vortical flow structures generated by the flapping wings of the DelFly II micro air vehicle in hovering flight configuration are investigated using particle image velocimetry. Synchronous force measurements are carried out to establish the relation between the unsteady forces and force generation mechanisms: particularly, the leading-edge vortex and the clap-and-peel motion. The formation of conical leading-edge vortices on both wings is revealed, which occurs rapidly at the start of the outstroke as a result of the wing-wing interaction. The leading-edge vortices of the outstroke interact with those of the instroke, which are shed and, by mutual induction, advect upstream as a vortex pair at the end of the previous instroke. The leading-edge vortex pairs induce a strong inflow into the region formed between the upper and lower wings during the peeling phase, resulting in the formation of a low-pressure region. This, together with the leading-edge vortices and a momentum increase formed by the clap, accounts for the generation of relatively higher forces during the outstroke. The cycle-averaged forces are estimated with reasonable accuracy by means of a momentum-based approach using wake velocity information with an average error of 15%. Flapping rotary wing, rotating wing, and flapping wing are feasible wing layouts applicable to micro-air-vehicles capable of hovering flight. 
The numerical study in this paper examines which wing layout is more efficient in terms of aerodynamic power for given kinematic and geometric parameters, with or without a vertical-force constraint. Under typical conditions, the rotating wing layout is the most efficient when only a small vertical force is needed. However, if a much larger vertical force is required, the flapping rotary wing is the only layout that satisfies both requirements, owing to its coupling effect. At relatively high Reynolds numbers (Re > 2000), flapping amplitudes (>70 deg), and aspect ratio (= 6), the comparative relationships among the three wing layouts in terms of vertical force and aerodynamic power efficiency remain unchanged. Nevertheless, at relatively low Re (< 500), flapping amplitude (< 40 deg), or aspect ratio (= 3), the horizontally flapping wing and the flapping rotary wing can achieve higher aerodynamic power efficiency with a larger vertical force. These findings can guide the selection of an appropriate wing layout, based on the design conditions and requirements in the preliminary design phase, to ensure efficiency in micro-air-vehicle design. In this paper, a low-order state-space adaptation of the unsteady lifting line model is analytically derived for a wing of finite aspect ratio, suitable for use in real-time control of wake-dependent forces. Each discretization along the span has between one and six states to represent the local unsteady wake effects, rather than remembering the entire wake history, which would unnecessarily complicate controller design. Sinusoidal perturbations to each system degree of freedom are also avoided. Instead, a state-space model is fit to individual indicial functions for each blade element, allowing the downwash and lift distributions over the span to be arbitrary. The wake geometry is assumed to be quasi-steady (no roll-up) but with fully unsteady vorticity. 
The model supports time-varying surge (a nonlinear effect), dihedral, heave, sweep, and twist along the span. Cross-coupling terms are explicitly derived. This state-space model is then validated through comparison with an analytic solution for elliptic wings, an unsteady vortex lattice method, and experiments from the literature. Operating convergent-divergent nozzles at low pressure ratios can lead to flow separation. Certain nozzle contours are known to produce a discrete acoustic tone under such conditions, which is believed to be generated by a phenomenon known as transonic resonance, involving a standing pressure wave situated between the separation shock and the nozzle exit plane. This paper reports the findings of a dynamic mode decomposition analysis of a perturbed axisymmetric unsteady Reynolds-averaged Navier-Stokes simulation of a nozzle flow known to exhibit this phenomenon. Two cases of different pressure ratios were studied in depth. The results show that the two cases differ in the shape of the standing pressure wave. The lower pressure ratio produces a standing 3/4 pressure wave, whereas the higher pressure ratio produces a 1/4 wave. In both cases, dynamic mode decomposition modes that match the standing pressure wave shape were found to be the least damped and the most energetic modes of the modes produced by the dynamic mode decomposition algorithm. The frequency of the mode for the lower pressure ratio matched the experimentally observed transonic tone frequency extraordinarily well, whereas the higher pressure ratio case was in fair agreement. The dynamic mode decomposition algorithm captured the transonic frequency for five additional pressure ratios. Cold flow analysis is applied in a canonical scramjet combustor to obtain insights into key physics of the symmetric/asymmetric combustion modes under different equivalence ratios. 
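The idea of replacing the wake history with a few states fit to an indicial function can be illustrated with R. T. Jones' classical two-exponential approximation of the Wagner function (an assumption for this sketch; the paper fits models to per-blade-element indicial data). Each exponential corresponds to one first-order state, and the step response of the resulting two-state system reproduces the indicial lift:

```python
import numpy as np

# R. T. Jones' two-exponential approximation of the Wagner indicial function:
#   phi(s) ~ 1 - 0.165*exp(-0.0455*s) - 0.335*exp(-0.3*s)   (s in semichords)
a = np.array([0.165, 0.335])            # exponential weights
b = np.array([0.0455, 0.3])             # decay rates -> one state each
h = 0.1                                 # step in nondimensional time s
s = np.arange(0.0, 60.0, h)

# Two-state system: x_i' = -b_i*x_i + b_i*u,  y = a.x + (1 - sum(a))*u.
# Exact discretization for a unit-step input u = 1:
x = np.zeros(2)
y = []
for _ in s:
    y.append((1.0 - a.sum()) + a @ x)   # output before update -> y[k] = phi(s[k])
    x = np.exp(-b * h) * x + (1.0 - np.exp(-b * h))
y = np.array(y)

phi = 1.0 - a[0] * np.exp(-b[0] * s) - a[1] * np.exp(-b[1] * s)
# y matches phi to near machine precision: two states replace the wake history.
```

This is exactly why a handful of states per spanwise station suffices: the indicial function is well approximated by a short sum of exponentials, each of which is one linear state.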
Systematic experiments have been performed in a single-expanding duct with backpressure produced by a cylinder at Mach 3. Fine structures of the separated flow were captured with a nanoparticle-based planar laser scattering system. High-frequency pressure signals are acquired to study the unsteady behaviors of the separation shock. Velocity profiles obtained from particle image velocimetry are validated by numerical simulations. The results indicate that symmetric backpressure itself can cause asymmetric separation. A steady symmetric separation mode forms when the backpressure is relatively low. An unsteady asymmetric separation mode appears when the backpressure is high enough. A large separated region always dominates at the expansion wall in the asymmetric separation mode. The asymmetric separation is caused by the increase in boundary-layer thickness and the reduction in turbulence intensity after the expansion corner. An interlaced turbulence intensity distribution between the straight and expansion walls accounts for the transition of separation modes under different backpressures. A broadband oscillation of the shock foot occurs in the asymmetric separation mode, which evolves from the low-frequency unsteadiness in the shock-wave/boundary-layer interaction. Meanwhile, asymmetric backpressure is able to restrain the development of the separated region and forces the separation mode to stay symmetric and steady under high backpressure. In an effort to improve the accuracy of current aircraft ice-accretion prediction tools, experimental and analytical studies have been conducted on airfoils roughened by natural ice accretion. Surface roughness introduced by ice accretion and its effect on surface convective heat transfer have been tested and modeled, based on 10 experimental test cases. A novel scaling coefficient relating the Stanton and Reynolds numbers was introduced for heat transfer comparison and modeling in the turbulent regime. 
By coupling the ice roughness and heat transfer models with the LEWICE ice-accretion tool, an improved ice-accretion model has been achieved. Four experimental ice shapes were obtained at the Adverse Environment Rotor Test Stand laboratory for model validation. The new surface-roughness model showed very good agreement in both overall ice shape and ice thickness at the stagnation line (within 5% discrepancy for four experimental cases), whereas the LEWICE prediction consistently underestimated the stagnation ice thickness by 30%. The overprediction of ice-horn lengths was also addressed by the proposed model. In one of the glaze-to-rime-regime cases, LEWICE overpredicted the upper and lower horn lengths by 32 and 22%, respectively, whereas the new model prediction resulted in +/- 3% accuracy. The current experimental study aims at quantifying the power conversion processes involved in a propulsor ingesting a body wake. Stereoscopic particle image velocimetry is employed for the first time to visualize the flowfield at the location of interaction between a propeller and an incoming body wake, as well as to provide experimental data to be used for the power balance method. Besides experimentally quantifying the power conversion processes, the results show that one of the main mechanisms responsible for the claimed efficiency enhancement in the experimental setup is the utilization of body-wake energy by the wake-ingesting propeller. Two-dimensional, direct numerical simulations are used to study the impact of active and natural (passive) suction at forward-facing steps on laminar-turbulent transition. Suction is applied through a gap in front of a step that is located in a flat-plate boundary-layer flow without streamwise pressure gradient. A steady base flow is used with a Mach number of 0.6. Subsequently, Tollmien-Schlichting waves are introduced by suction and blowing at the wall, and their growth over the surface imperfection is evaluated by N factors. 
The investigated step heights are in the range of one to two times the local displacement thickness of the smooth flat plate. Both sharp and rounded step corners are investigated. Cases with and without suction are compared with the smooth flat plate without suction on the basis of Delta-N factors according to the e^N method. Thus, it is shown that suction is capable of compensating or even overcompensating the negative impact of different steps on laminar-turbulent transition. The work concludes with a configuration that allows natural (passive) suction in front of a forward-facing step. Here, a significant reduction of the N factor compared to the smooth flat plate without suction is possible. Further estimations indicate that a net drag reduction may be possible as well. The effect of suction on an airfoil surface at various locations downstream of the leading edge of a thin flat-plate airfoil was studied in a wind tunnel at a low Reynolds number. At poststall angles of attack, substantial lift enhancement and delay of stall can be achieved if a large separation bubble is generated by reattaching the massively separated flow near the trailing edge. The effects of the location and volumetric flow rate of suction were investigated by means of force and velocity field measurements. There is an optimal location of suction around x(s)/c = 0.4, which generates the maximum lift coefficient for suction coefficients less than 3%. When suction is applied closer to the leading edge, it may be possible to reattach the flow for smaller suction coefficients, but the resulting small separation bubble causes a smaller lift increase. Large separation bubbles are needed for the maximum time-averaged lift enhancement; however, they exhibit shear-layer flapping, intermittent reattachment, and larger lift fluctuations. Vortex generators are a widely used means of flow control, and predictions of their influence are vital for efficient designs. 
However, accurate computational fluid dynamics simulations of their effect on the flowfield by means of a body-fitted mesh are computationally expensive. Therefore, the Bender-Anderson-Yagle and jBAY models, which represent the effect of vortex generators on the flow using source terms in the momentum equations, are popular in industry. In this contribution, the ability of the Bender-Anderson-Yagle and jBAY models to provide accurate flowfield results is examined by looking at boundary-layer properties close behind vortex generators. The results are compared with Reynolds-averaged Navier-Stokes simulations of three-dimensional incompressible flows over flat-plate and airfoil geometries using both body-fitted meshes and another source-term model. The influence of mesh resolution and the domain of application on the accuracy of the models is shown, and the influence of the source term on the generated flowfield is investigated. The results demonstrate the grid dependence of the models and indicate the presence of model errors. Furthermore, it is found that the total applied force has a larger influence on both the intensity and shape of the created vortex than the distribution of the source term over the cells. The various separation control mechanisms of burst-mode actuation with a dielectric barrier discharge plasma actuator were experimentally investigated in this study. The control of the separated flow around a NACA 0015 airfoil at a Reynolds number of 6.3 x 10(4) was investigated using a plasma actuator mounted at a distance from the leading edge of 5% of the chord length. A parametric study on the nondimensionalized burst frequency was conducted at three poststall angles of attack and various input voltages using time-averaged pressure measurements and time-resolved particle image velocimetry (PIV) results. 
The measurement results of the trailing-edge pressure, which was selected as the index of separation control, indicate that the optimal burst frequency varies with the angle of attack. Several flowfields are discussed in detail in this paper, and two flow control mechanisms were observed: the use of a large-scale vortex and the promotion of turbulent transition. With regard to the first mechanism, the phase-locked PIV results indicate that a vortex structure, the size of which increases with decreasing burst frequency in the experimental range, is shed from the shear layer for each burst actuation. With regard to the second mechanism, time-averaged pressure and PIV measurements reveal that a burst frequency of F+ = 6-10 is able to promote turbulent transition. Of these two mechanisms, at higher angles of attack, the use of a large-scale vortex structure provides better separation control, whereas near the stall angle, the promotion of turbulent transition provides better separation control. We present a numerical study of plasma dynamics in a three-electrode sliding nanosecond dielectric barrier discharge flow actuator. A two-dimensional self-consistent plasma model including the effects of detailed air plasma chemistry and ultrafast gas heating is used in our studies. When a third electrode is placed downstream of a classical two-electrode nanosecond dielectric barrier discharge actuator and powered by a negative direct-current voltage, a coronalike discharge is formed in its immediate vicinity, which in turn changes the dynamics of the primary streamers propagating from the pulsed electrode. The primary streamer slides and can even extend over the entire interelectrode distance. The potential difference between the third electrode, the positively charged dielectric surface, and the virtual anode formed by the streamer itself is found to be the main reason behind this elongation because of the electric field enhancement at the streamer head. 
Preionization and charge accumulation from the third electrode also contribute to this behavior. The numerical results also indicate that, for high electric fields, a negative streamer can be produced at the third electrode that merges with the positive streamer from the pulsed electrode. Consequently, the plasma channel covers the entire interelectrode space. The elongation of the primary streamer leads to an increase of the effective energy release region, which can result in a more efficient flow actuation mechanism. This paper presents an experimental investigation on the flow control of a stalled NACA 0015 airfoil at a Reynolds number Re = 7.7 x 10(4) using a sawtooth dielectric barrier discharge plasma actuator. The novel electrode configuration involves two sawtooth-shaped electrodes, which are arranged with opposite sawteeth pointing at each other, resulting in a periodic change in the electrode gap. The sawtooth dielectric barrier discharge plasma actuator is investigated for the first time for flow separation control. The flow structures generated by this actuator are found to depend on the height and width of the sawtooth. The application of this actuator at the airfoil leading edge leads to a delay in the stall angle of attack alpha by 5 deg and an increase in the maximum lift coefficient C-Lmax by 9%. Under the same power consumption, the conventional dielectric barrier discharge plasma actuator with straight-edged electrodes achieves a delay in stall alpha by only 3 deg and an increase in C-Lmax by about 3%. The difference is found to be linked to the fact that the former generates a streamwise wall jet and near-wall vortices at the tip and in the trough region of the sawtooth electrode, respectively, whereas the latter produces a streamwise wall jet along the straight-edged electrode. 
In this paper, geometrically exact fully intrinsic equations and variational asymptotic beam sections are used to study the buckling and postbuckling of a column under self-weight. Initial curvature affects both the generalized one-dimensional beam equations and the cross-sectional stiffness properties. The effects of initial curvature on buckling and postbuckling are studied in this paper. This problem is a valuable test case for the utility of the fully intrinsic equations and variational asymptotic beam sections. There has been an increasing effort to improve aircraft performance through the use of tailored composite structures, not only to reduce weight, but also to exploit beneficial aeroelastic couplings. Recent work has considered the ability to tow-steer the composite plies to achieve better performance. Here, the potential wing weight savings of a full-size aeroelastically tailored wing are assessed by optimizing the properties of a three-dimensional finite element model using straight-fiber and tow-steered composites in the skins. One- and two-dimensional thickness and laminate rotation angle variations are considered as design freedoms. The jig shape is updated to maintain a fixed 1g flight shape, and optimization constraints are implemented on the strains and buckling loads due to maneuver and dynamic gust loads, flutter stability, and control effectiveness for different flight conditions. The optimal main fiber direction is rotated forward of the front-spar direction in the outer wing, leading to extension-shear coupling in the skins, which increases the washout behavior of the wing. Washout effects shift the lift forces inboard and allow skin thickness reductions, but also lead to reductions in aileron control effectiveness. The optimized tow-steered laminate configurations achieved larger mass reductions than the optimized straight-fiber configurations. Acoustic metamaterials offer an approach to reducing the dynamic response of sandwich panels. 
The key concept underlying this approach is to consider a metamaterial as a highly distributed system of continuous vibration absorbers that introduces multiple stop bands in which the response of the global structure is reduced. Multiple modal frequencies of the absorber system may be tuned to match global resonance frequencies and/or excitation frequencies. Using the assumed modes method, a metamaterial system is designed for integration into the honeycomb core of a representative sandwich panel. The metamaterial system is modeled as an effective distributed complex mass density in the global sandwich-panel model. The cores for two sandwich panels are fabricated using three-dimensional printing technology and then characterized statically and dynamically to determine the effective elastic properties and natural frequencies of the cores, as well as the loss factors of the vibration absorbers. The sandwich panels, constructed by bonding unidirectional carbon-fiber face sheets to both cores, are tested dynamically for two different boundary conditions: cantilevered and free-free. Experimental results confirm that the metamaterial core reduces the peak dynamic response at the natural frequencies of the sandwich panel, with reasonably good agreement with model predictions. Mesh reflectors are widely employed for large space antennas due to their light weight, compactness, and ease of packaging. To guarantee the reflector shape precision with acceptable manufacturing and assembly tolerances, mesh reflectors are designed with special shape adjustment mechanisms; namely, the length of some cables can be carefully adjusted when the reflectors are assembled. Shape adjustment plays a major role in achieving high reflector shape precision. In this paper, a robust method is proposed for shape adjustment of mesh reflectors that takes into consideration the discrepancy between the mechanical behaviors of numerical models and actual reflectors. 
In this method, the measured nodal positions in each shape adjustment iteration are also employed to update the numerical model so that this discrepancy is gradually decreased. A robust approach is used to determine the cable length changes for the model updating and shape adjustment, whereby the adverse effects of the discrepancy can be reduced. Numerical examples and experiments validate the proposed method. Contemporary research in the field of Foucauldian studies on education has pointed to a growing imbrication between educational practices and neoliberal ideas. The problematization of such a scenario leads to two premises, grounded in a general hypothesis for the analysis of the educational present. The first premise: nowadays, the educational or, to be more precise, educationalizing practices (since they deal not only with the schooling effort, but also with the diffusion of a great number of pedagogical initiatives of a non-formal character) consist in an efficient rationality of the governing of oneself and others. The second premise is that such an educationalizing movement consists not only in the expression but also in the typical modus operandi of the current governmentalization processes, which aim at a large-scale administration of the multiplicity of populations, now in terms of a lifelong educability of citizens. These two premises sustain the hypothesis that present educational practices do not restrict themselves to the mere condition of a reiterative apparatus of imperatives extrinsic to them, but have in fact cemented themselves as a generative locus of the veridiction/subjectivization games capable of overrunning the whole social space. 
Faced with the incessant concern on the part of national and supranational institutions with promoting, expanding, and implementing education on human rights in schools and educational systems, it is necessary to stand back for a moment and review the political and discursive ways in which these projects work and the mechanisms they are based on within the current system. In order to do this, we will use the Foucauldian methodological notion of governmentality. From this point of view, some of the relatively recent practices of subjectivation, production, control, and shaping of the people needed for a project of nation come to light. At the same time, these practices respond to a global project. Human rights education (HRE) is postulated and closely associated with contemporary techniques for citizenship training. Consequently, practices and discourses of human rights and, more recently, of HRE have worked within the governmental modalities of modern states as an exercise of control over populations. This text, drawing directly on Foucault's work and using the concepts of 'care of the self' and biopolitics, questions and analyzes resistance and practices of freedom. Drawing mainly on Foucault's courses at the College de France and the methodological tools found there, I present a discussion of Gilles Deleuze's contributions to Foucault's thought and develop a dialogue in which I try to explain the concepts of domination, power, ethics, and aesthetics, and the relationship of the self with itself. This text reflects on the need to consider an additional institutional alternative that matters not only to those who advocate for pedagogy, but also to all of those involved in different educational processes. 
It is, so to speak, a Paideia that privileges the care of the self as a substantial value and, as such, is not dedicated to a single moment in people's lives and does not correspond to a specific institution, but to the universal and singular spirit of human affairs. The article analyses the boom of self-help discourses and their relationship with pedagogic discourses, with the purpose of marking the centrality of the individual in the practices of contemporary government. Two exercises are important in this analysis from an archaeological-genealogical perspective: on the one hand, comprehending the impact that self-help has on the lives of its readers and practitioners, allowing the consolidation and broad diffusion of tools to guide one's own life and define modes of being within the world; on the other hand, thinking that the techniques provided by self-help may stem from a millenary tradition of practices intended for the government of the self. The study of the series self-help-education-government allows some of the main elements of these discourses to be identified and shows the centrality of the notion of learning among them. Although Foucault did not produce any particular work devoted to teaching or education, following authors like Hoskin, this text aims to show the importance that teaching practices and discourses have in Foucault's analysis, particularly in the analysis of what he called governmentality. If we associate these analyses with the concept of Antropotecnicas developed by the German philosopher Peter Sloterdijk, then we have a transparent toolbox for analyzing learning, recognizing that contemporary society is an educating society. This article discusses the displacement observed in Brazilian mainstream pedagogical ideas and how Foucauldian studies play a role in shaping new ways of thinking, thus contributing to a change in such a scenario. 
In order to do so, a shift in the dominant interpretative educational discourse, mainly in Brazil, is necessary. The main idea is to discuss the points where some Foucauldian tools (such as disciplinary power, norm, device, archive, government, and governmentality, among others) are being intertwined to operate in the way they do. We know that this kind of displacement unsettles the certainties and hopes held mostly by those who are more accustomed to a salvationist and Promethean vision of the school and, in a broader sense, of education. Nevertheless, the statements in this article do not mean that a Foucauldian perspective is, in itself, pessimistic or nihilistic. The multi-component reactions of arylhydrazopyrazoles 2, aromatic aldehydes 3, and cyclohexane-1,3-dione gave pyrazoloquinazolinones 5. In addition, compounds 5 reacted with elemental sulfur and either malononitrile or ethyl cyanoacetate to give the thiophene derivatives 7. Compounds 5 were used to synthesize the 5,6,11,12-tetrahydro-3H-pyrazolo[1',5':1,2]pyrimido[5,4-a]carbazol-2-ol derivatives 9 through their reaction with phenylhydrazine. Similarly, the reaction of the pyrazole derivatives 2 with 3 and dimedone (10) gave the quinoline derivatives 11. The newly synthesized products were evaluated against c-Met kinase, PC-3 prostate cancer cell lines, and six typical cancer cell lines (A549, H460, HT-29, MKN-45, U87MG, and SMMC-7721). The most promising compounds, 5b, 5c, 5e-i, 7e, 7p, 7q, 7r, and 11i, were further investigated against tyrosine kinases (c-Kit, Flt-3, VEGFR-2, EGFR, and PDGFR). Some compounds were selected to examine their Pim-1 kinase inhibition activity, where compounds 5h and 7u showed high inhibition. 
This study synthesized a series of 3-((4-(4-nitrophenyl)-6-aryl-1,6-dihydropyrimidin-2-yl)thio)propanenitriles, and their structures were characterized by spectral (H-1 NMR, C-13 NMR, infrared spectroscopy, liquid chromatography-mass spectroscopy) and elemental analysis. Antimicrobial screening was carried out by the serial broth dilution method on gram-positive bacteria (Staphylococcus aureus and Streptococcus pyogenes), gram-negative bacteria (Escherichia coli and Pseudomonas aeruginosa), and three fungi (Candida albicans, Aspergillus niger, and Aspergillus clavatus) in vitro. Compounds 5d, 5j, 5l, and 5m showed potent antimicrobial activity against the tested microorganisms. Cytotoxicity studies on a human cervical cancer cell line (HeLa) and a 3T3 mouse embryonic fibroblast cell line indicated that compounds 5c, 5d, 5h, and 5j showed low cytotoxicity. In this paper, we report a series of eighteen novel pyrimidine-based thiourea compounds obtained in good to excellent yields (61-88%). The chemical structures of these heterocycles consist of a central pyrimidine ring with phenyl-substituted thiourea motifs. The enzyme inhibitory potential of these compounds was investigated against alpha-glucosidase, as this enzyme is a key target in the treatment of type II diabetes mellitus. Compounds 4i (IC50 = 22.46 +/- 0.65 μM), 4f (IC50 = 25.88 +/- 0.40 μM), 4h (IC50 = 27.63 +/- 0.49 μM), 4c (IC50 = 29.47 +/- 0.42 μM), and 4e (IC50 = 32.01 +/- 0.42 μM) delivered better inhibition than the reference compound acarbose (IC50 = 38.22 +/- 0.12 μM). The quantitative structure-activity relationship of the synthesized compounds was also studied. The purpose of this study was to explore the possibility of synthesizing pyrazolo[3,4-d]pyrimidine derivatives that have antimicrobial activities. 
A novel series of 4-substituted-1-phenyl-1H-pyrazolo[3,4-d]pyrimidine derivatives, including hydrazones (4a-i and 5), pyrazoles (6a, b) and (8), and oxadiazole-2-thiol (7), has been synthesized. Compounds 4g, 4b, 4h, 4d, 6a, 7, and 8 showed moderate to outstanding antimicrobial activity against the Gram-positive bacteria Streptococcus pneumoniae and Bacillus subtilis, the Gram-negative bacteria Pseudomonas aeruginosa and Escherichia coli, and the fungi Aspergillus fumigatus and Candida albicans. Interestingly, compound (7), "5-{[(1-phenyl-1H-pyrazolo[3,4-d]pyrimidin-4-yl)amino]methyl}-1,3,4-oxadiazole-2-thiol", exhibited superior antimicrobial activity compared to ampicillin and gentamicin, suggesting that it can be used as a lead compound for further investigations. A new iridoid glycoside, 6-O-beta-D-xylopyranoside-shanzhiside methyl ester (1), along with six known compounds, shanzhiside methyl ester (2), lamalbid (3), geniposidic acid (4), theveside (5), verbascoside (6), and arenarioside (7), was isolated from the roots of Lantana montevidensis. The structures of the compounds were determined through 1D and 2D NMR spectroscopic data analysis, HRESIMS, electronic circular dichroism, and UPLC-UV/MS methods. The total extract, the chloroformic (F1) and aqueous (F2) fractions, together with the isolated compounds, were tested for their antimicrobial, antiprotozoal, antiplasmodial, anti-inflammatory, monoamine oxidase inhibition, and cell viability activities, in addition to free radical scavenging activity using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay. The phenylpropanoid compounds (6 and 7) showed potent antioxidant activity. The total methanolic extract together with the aqueous fraction (F2) showed decreases in reactive oxidative stress of 57 and 66%, respectively, while the chloroformic fraction (F1), together with the total methanolic extract, showed a decrease in iNOS with IC50 values of 5 and 30 μg/mL, respectively. 
Compounds 1, 2, 3, 6, and 7 showed inhibition of reactive oxidative stress with values of 50, 60, 57, 63, and 52%, respectively. Both the F1 and F2 fractions demonstrated measurable inhibition of MCF-7 breast cancer cell growth, with an IC50 value of 0.3 mg/mL. Compounds 2 and 7 showed mild monoamine oxidase inhibition. None of the tested compounds showed antimicrobial, antiplasmodial, or antiprotozoal activity. A series of 1,3,4-trisubstituted pyrazole derivatives (3a-f), (4a-r), (5a-f), and (6a-f) have been synthesized and evaluated for their inhibitory activity against Mycobacterium tuberculosis (MTB, H37Rv). The structures of the newly synthesized compounds were characterized by IR, H-1 NMR, C-13 NMR, and mass spectral analysis. Among the thirty-six compounds screened for in vitro anti-mycobacterial activity against MTB, three compounds, 3b, 3e, and 3f, demonstrated significant growth inhibitory activity with a minimum inhibitory concentration of 0.78 μg/mL. The selected compounds were further tested for cytotoxic activity against normal human dermal fibroblast cell lines using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and exhibited no significant cytotoxicity. Molecular docking simulation studies were carried out in order to better understand the hypothetical binding interaction with the MTB NADH-dependent enoyl-acyl carrier protein reductase. Herein, we describe the synthesis of 11 new thiazolyl coumarin derivatives and the evaluation of their potential role as antibacterial and antituberculosis agents. The structures of the synthesized compounds were established by extensive spectroscopic studies (Fourier transform infrared spectroscopy, H-1 nuclear magnetic resonance, C-13 nuclear magnetic resonance, 2D nuclear magnetic resonance, and liquid chromatography-mass spectrometry) and elemental analysis. 
All synthesized compounds were assayed for their in vitro antibacterial activity against several Gram-positive and Gram-negative bacteria and for antituberculosis activity against Mycobacterium tuberculosis H37Rv ATCC 25618 by using the colorimetric microdilution assay method. Nine derivatives showed moderate antibacterial and antituberculosis activities against all the tested strains. The highest activity against all the pathogens, including Mycobacterium tuberculosis, was observed for compound 7c, with MIC values ranging between 31.25 and 62.5 μg/mL, indicating that the coumarin skeleton could indeed provide a useful scaffold for the development of new antimicrobial drugs. This work reports the synthesis, protease inhibition, and antileishmanial activity of ten benzoxazole derivatives, which were obtained in a three-step synthetic route from 4-hydroxyacetophenone and 4-hydroxybenzophenone. These benzoxazoles, the synthetic intermediates, and the starting ketones were evaluated for their inhibitory effect on the activity of cysteine (papain, rCPB2.8, and rCPB3.0) and serine (trypsin) proteases. All compounds showed significant IC50 values against these enzymes (in the range of 0.0086-0.7612 μM for papain and 0.0075-0.5032 μM for trypsin), being more active than the standard inhibitors (1.7821 and 7.2318 μM for E64 and TLCK, respectively). Subsequently, all compounds were evaluated in vitro for their leishmanicidal activity against the promastigote form of Leishmania amazonensis. The most active compounds were further evaluated against the amastigote form and for their toxicity against murine macrophages. The benzoxazole 4d, a benzophenone derivative, and the intermediate 4-hydroxy-3-nitroacetophenone 2b showed significant antileishmanial activity (IC50 = 90.3 μM and IC50 = 130.9 μM, respectively), with selectivity indexes (5.22 and 18.09, respectively) comparable to or better than those of two established leishmanicidal drugs, pentamidine (0.58) and amphotericin B (5.31). 
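The selectivity indexes quoted above are, by the usual convention, the ratio of host-cell cytotoxicity to antiparasitic potency. A minimal sketch of that arithmetic (the numerical values below are hypothetical and chosen only for illustration; the study's host-cell CC50 values are not reported in the abstract):

```python
def selectivity_index(cc50_host, ic50_parasite):
    """SI = CC50 (host cells, e.g. murine macrophages) / IC50 (parasite);
    a larger SI means the compound is more selective for the parasite."""
    return cc50_host / ic50_parasite

# hypothetical concentrations (same units, e.g. micromolar), for illustration only
si = selectivity_index(cc50_host=500.0, ic50_parasite=100.0)
```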
The chemistry of Fe(II) complexes showing efficient light-induced DNA cleavage activity, binding propensity to calf thymus DNA, and antibacterial photodynamic therapy has been summarized in this article. Complexes of formulation [Fe(mqt)(B)(2)](PF6) (1)-(3), where mqt is 2-thiol-4-methylquinoline and B is an N,N-donor heterocyclic base, viz. 1,10-phenanthroline, dipyrido[3,2-d:2',3'-f]quinoxaline, and dipyrido[3,2-a:2',3'-c]phenazine, have been prepared and characterized. The DNA-binding behaviors of these three complexes were explored by absorption spectroscopy, viscosity measurements, and thermal denaturation studies. The DNA binding constants for complexes (1), (2), and (3) were determined to be 1.9 x 10(3), 3.4 x 10(4), and 8.1 x 10(4) M-1, respectively. The experimental results suggest that these complexes interact with DNA through a groove-binding mode. The complexes show significant photocleavage of supercoiled DNA, which proceeds via a type-II process forming singlet oxygen as the reactive species. Antimicrobial photodynamic therapy was studied using a photodynamic antimicrobial chemotherapy assay against Escherichia coli, and all complexes exhibited a significant reduction in bacterial growth on photoirradiation. Sampangine is an azaoxoaporphine alkaloid with interesting biological activities. Elucidating the mode of action of sampangine is a topic of continuous research. Recently reported cell-based data have indicated heme dysfunction and subsequent reactive oxygen species production as being responsible for the biological activity of the natural product. By using an in vitro biochemical assay, the ability of sampangine to produce reactive oxygen species was confirmed. The production of reactive oxygen species occurred upon mild chemical reduction of sampangine in the absence of any cellular components. 
In an additional structure-activity relationship study, utilizing synthesized analogs of sampangine, we identified the 1,4-iminoquinone scaffold as the key motif for the observed reactive oxygen species production. To assess the ability of sampangine to induce DNA damage, the direct binding of sampangine to calf thymus DNA was measured using UV-visible spectroscopy. No DNA binding was observed when sampangine was tested against calf thymus DNA up to a ratio of 1:100. This observation rules out the direct involvement of sampangine in DNA binding and damage. The protein tyrosine phosphatase 1B enzyme has been found to be a negative regulator of the insulin and leptin signaling pathways. It has gained considerable attention from medicinal chemists as a new therapeutic target for intervention in the treatment of type 2 diabetes. A series of N-substituted-5-(thiophen-2-ylmethylene)thiazolidine-2,4-dione derivatives were synthesized and screened in vitro for protein tyrosine phosphatase 1B inhibitory activity and in vivo for anti-hyperglycaemic activity. The introduction of an alkyl/haloalkyl moiety onto the amidic nitrogen of the thiazolidine-2,4-dione ring was intended to enhance the inhibitor-enzyme affinity and, hence, protein tyrosine phosphatase 1B inhibition. The nature of the interactions that govern the binding mode of the ligands inside the active site of protein tyrosine phosphatase 1B was further studied by molecular docking simulation. Compound 7 was found to be a potent protein tyrosine phosphatase 1B inhibitor with an IC50 of 9.96 μM. The synthesized compounds also showed significant lowering of blood glucose levels. A comparative molecular field analysis has been developed to study the three-dimensional quantitative structure-activity relationship of a series of triterpene-based gamma-secretase modulators. 
We performed a genetic algorithm on a large set of comparative molecular field analysis fields to select the fields contributing most to the inhibitory activities of these compounds against Alzheimer's disease. The genetic algorithm-selected comparative molecular field analysis fields were introduced into partial least squares and principal component analysis to reduce the dimensionality of the input features. The extracted partial least squares components were used as inputs to build a partial least squares regression (genetic algorithm-partial least squares regression), and the extracted principal components were used as inputs for principal component regression (genetic algorithm-principal component regression) and support vector regression (genetic algorithm-principal component analysis-support vector regression). The classic three-dimensional quantitative structure-activity relationship comparative molecular field analysis (partial least squares regression) was also carried out for the sake of comparison. The results show that, among the constructed models, in terms of root-mean-square errors and leave-one-out cross-validated R^2 (q^2), the combination of principal component analysis and support vector machine can effectively improve the prediction performance (RMSEtrain = 0.231, RMSEtest = 0.360, and q^2 = 0.638) compared with PLSR (RMSEtrain = 0.415, RMSEtest = 0.680, and q^2 = 0.311). The performances of the genetic algorithm-principal component regression and genetic algorithm-partial least squares regression were also comparable to, but less powerful than, genetic algorithm-principal component analysis-support vector regression. Finally, based on the information derived from the comparative molecular field analysis contour map, some key features for increasing the activity of gamma-secretase modulators have been identified to aid the design of new triterpene-based Alzheimer's disease drugs. 
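The genetic-algorithm field selection described above can be sketched in toy form: candidate subsets of CoMFA fields are encoded as bitmasks and evolved against a fitness function (in the study, a regression-quality criterion; here a deliberately simple surrogate). Everything below (function names, population size, the toy fitness) is an illustrative assumption, not the authors' implementation:

```python
import random

def ga_select(n_fields, fitness, pop=20, gens=30, p_mut=0.05, seed=0):
    """Toy genetic algorithm for field selection: evolve bitmasks over
    CoMFA fields, keeping subsets that maximize the supplied fitness."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_fields)] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(popn, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)       # parent selection
            cut = rng.randrange(1, n_fields)  # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if rng.random() < p_mut else g
                             for g in child])  # bit-flip mutation
        popn = elite + children
    return max(popn, key=fitness)

# toy fitness: fields 0-4 are "informative", the rest are noise; a real run
# would score each mask by cross-validated regression quality instead
target = [1] * 5 + [0] * 15
fitness = lambda mask: sum(m == t for m, t in zip(mask, target))
best = ga_select(20, fitness)
```

The selected mask would then feed the dimensionality-reduction and regression stages (PCA plus support vector regression in the best-performing pipeline of the study).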
The present work reports the synthesis of novel 5-oxo-5,6,7,8-tetrahydroquinoline and 2,5-dioxo-5,6,7,8-tetrahydroquinoline derivatives containing an enaminone system and bearing a sulfonamide moiety. The newly synthesized compounds were designed in compliance with the general pharmacophoric requirements for carbonic anhydrase-inhibiting anticancer drugs, as this may play a role in their anticancer activity. Twelve of the newly synthesized compounds were evaluated for their in vitro anticancer activity against a human breast cancer cell line (MCF7). Compounds 5c, 7, 10, and 12c showed IC50 values (0.048, 0.040, 0.041, and 0.044 μM, respectively) comparable to that of the reference drug doxorubicin (IC50 = 0.04 μM). On the other hand, compounds 12a, 12d, and 16b exhibited better activity than doxorubicin, with IC50 values of 0.025, 0.036, and 0.015 μM, respectively. In the present study, the synthesis and characterization of pyrazoline derivatives integrated with a sulfonamide scaffold have been performed. The characterization of the molecules was done by elemental analysis and ultraviolet-visible, infrared, nuclear magnetic resonance (NMR), and mass spectra. The crystal structures of compounds 2e and 2g were determined by single-crystal X-ray diffraction. In compounds 2e and 2g, the intramolecular hydrogen bonds N15-H15...O13 and N14-H14...N1 close to form an S(6) ring motif, whereas the N14-H14...O17 hydrogen bond links pairs of molecules related by inversion, forming the familiar R2(2)(10) ring motif. The Hirshfeld surface analysis, comprising the d(norm) surface plots, electrostatic potentials, and two-dimensional fingerprint plots, was carried out in order to give visual confirmation of the intermolecular interactions. The molecules were screened for their in vitro antitubercular and antimicrobial activity. The molecules 2n and 2m have shown high potency against M. 
tuberculosis, and most of the molecules showed good potential against different bacteria and fungi. Three new series of thiazoles, quinolones, and thiazolidinones merged with benzimidazole, benzoxazole, and benzothiazole nuclei were synthesized. The compounds were subjected to infrared, H-1 nuclear magnetic resonance, C-13 nuclear magnetic resonance, mass spectrometry, and elemental analyses. The cytotoxic activity of the compounds was evaluated against breast (MCF-7) and colon (HCT-116) cell lines. Seven of the compounds had IC50 values between 0.0112 and 0.0198 μM and are hence more potent than the reference drug doxorubicin (IC50 = 0.0084 and 0.0088 μM against MCF-7 and HCT-116, respectively). Biologically active compounds isolated from medicinal herbs have been the center of interest for researchers investigating their possible effects and the mechanisms through which they exert their action. In the current study, we investigated the antiproliferative effect and mechanism of action of deoxypodophyllotoxin, a semi-synthetic compound derived from the extract of a Chinese herbal medicine, Dysosma versipellis (Hance) M. Cheng ex Ying (Berberidaceae). The study was conducted on the MCF-7 and MDA-MB-231 breast cancer cell lines. The antiproliferative effect of deoxypodophyllotoxin was assessed by the Cell Counting Kit-8 assay. Flow cytometry, Annexin V/PI staining, mitochondrial membrane potential and caspase inhibition assays, and western blot analysis were performed to detect, explore, and assess the antiproliferative effect of deoxypodophyllotoxin. Our data revealed that deoxypodophyllotoxin treatment resulted in dose-responsive inhibition of MCF-7 and MDA-MB-231 cell growth with very low IC50 values (10.91 and 20.02 nM, respectively). It disrupted the cytoskeleton and induced significant cell cycle arrest at the G2/M phase in both cell lines through interference with the cell cycle regulatory proteins cyclin B1, cdc25c, and CDK1. 
In MCF-7 cells, cell cycle inhibition was associated with caspase-dependent apoptosis involving elevation of Bax protein and cleavage of PARP; this finding, along with disruption of the mitochondrial membrane potential, confirmed the involvement of the intrinsic pathway in deoxypodophyllotoxin-induced apoptosis in MCF-7 cells. In MDA-MB-231 cells, however, deoxypodophyllotoxin is cytostatic and significantly suppresses proliferation through cell cycle arrest at the G2/M phase without apoptotic induction. Such an activity profile could be expected to spare normal tissues from toxic side effects. This work, based on pharmacophore modeling, aimed to elucidate the three-dimensional structural features of resveratrol derivatives by generating the three-dimensional common pharmacophore responsible for binding to the inflammation-inducible target. A five-point pharmacophore, with three hydrogen bond acceptors and two aromatic rings as pharmacophoric features, was developed. The fit values of the generated pharmacophore are high, except for compound Resv34, which could be due to its lowest activity toward cyclooxygenase-2. This pharmacophore yielded a statistically significant atom-based three-dimensional quantitative structure-activity relationship model with R², F, and standard deviation values of 0.9735, 165.4, and 0.2058, respectively, for a training set of twenty-three molecules. Data obtained for a test set of nine compounds, with values of 0.9343, 0.3676, and 0.9694 for the correlation coefficient Q², root-mean-square error, and Pearson R, respectively, suggested an excellent predictive power of the model. Further, visualization of the three-dimensional quantitative structure-activity relationship model provided information about the structure-activity relationships by revealing the importance of electron-withdrawing and hydrophobic features on the chemical structure of the compounds for inhibition of cyclooxygenase-2 enzyme activity. 
The model therefore provides explicit guidance for the design of better resveratrol analogs as cyclooxygenase-2 inhibitors. Thus, the obtained three-dimensional quantitative structure-activity relationship model can predict novel compounds derived from the low-activity resveratrol analogs, with improved structures allowing a better inhibitory effect toward cyclooxygenase-2. Obesity is the leading cause of global mortality among metabolic disorders. Orlistat, a pancreatic lipase inhibitor, is the only approved drug of choice for the long-term treatment of obesity. However, recent findings have reported severe adverse effects with long-term administration of orlistat. Plant-based natural products represent a vast reservoir of chemical entities with the potential to treat various metabolic disorders. In the present study, we performed a preliminary screening of local flora in a pancreatic lipase inhibition assay, which highlighted the methanol extract of Tabernaemontana divaricata leaves (IC50 of 12.73 μg/mL). Molecular docking of the 38 alkaloids reported from the leaves of T. divaricata into the active site of pancreatic lipase led to the identification of the furan-bridged bis-indole alkaloids conophylline (1), conophyllinine (2), and conophyllidine (3) as potential leads, while taberhanine (4), a monomeric indole alkaloid, exhibited a comparatively poor docking score. Further molecular docking analysis of the top three molecules (1-3) highlighted the importance of hydrophobic interactions of the dimeric extension with the lid domain of pancreatic lipase, which were not found with 4. Molecular dynamics simulations of 1-4 in complex with pancreatic lipase further confirmed the docking results, wherein compound 4 was comparatively less stable (RMSD ≈ 1.5 Å). Liquid chromatography-mass spectrometry analysis of the alkaloid-rich fraction of T. divaricata leaves indicated the presence of 1, while 2-4 were found to be absent. 
Molecule 1 was tested for pancreatic lipase inhibition, exhibiting a potent IC50 of 3.31 μM, comparable to that of orlistat (IC50 of 0.99 μM). Enzyme kinetic studies validated the in silico analyses, wherein compound 1 exhibited competitive reversible inhibition of pancreatic lipase. The present study thus identified the bis-indole alkaloids of T. divaricata leaves as a new class of potent pancreatic lipase inhibitors. In previous work, we presented experimental and theoretical evidence that podophyllum derivatives substituted by a chlorine atom at the 3-position of 2-aminopyridine exhibited significantly elevated potency. In this study, a series of podophyllum derivatives substituted at the 3-position of 2-aminopyridine, including with methyl and fluorine groups, were synthesized. Their chemical structures were confirmed by spectral (H-1 nuclear magnetic resonance, C-13 nuclear magnetic resonance, electrospray ionization mass spectrometry) and elemental analyses. These derivatives were tested for their cytotoxicities in HeLa, BGC-823, A549, Huh7, and MCF-7 cells by MTT assay, and the pharmacological results showed that most of them displayed potent cytotoxicities against at least one of the tested cancer cell lines. A structure-activity relationship study suggested that introduction of a fluorine atom at the 3-position of 2-aminopyridine enhanced the cytotoxicity against numerous tumor cells compared with the chlorine atom, while the methyl group did not. Furthermore, other biological experiments were consistent with the beneficial effect of the fluorine substituent at the 3-position of 2-aminopyridine: when 2-amino-3-fluoropyridine was substituted into podophyllotoxin and 4'-O-demethylepipodophyllotoxin, the derivatives inhibited microtubule polymerization and topoisomerase II activity, acting on these target proteins to induce P53-dependent apoptosis. 
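Several of the screens above report IC50 values from dose-response assays such as MTT. As a hedged illustration of how such a value can be extracted, the sketch below estimates an IC50 by log-linear interpolation between the two concentrations that bracket 50% viability; the concentrations and viabilities are invented examples, not data from any of these studies, and real analyses typically fit a full sigmoidal model instead.

```python
import numpy as np

# Hypothetical MTT dose-response data: concentration (μM) vs. % viability.
conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
viability = np.array([98.0, 90.0, 70.0, 35.0, 8.0])

def ic50_interp(conc, viability, target=50.0):
    """Interpolate log10(concentration) at the target viability level."""
    for i in range(len(conc) - 1):
        v_hi, v_lo = viability[i], viability[i + 1]
        if v_hi >= target >= v_lo:               # bracketing interval found
            frac = (v_hi - target) / (v_hi - v_lo)
            log_c = np.log10(conc[i]) + frac * (np.log10(conc[i + 1]) - np.log10(conc[i]))
            return 10.0 ** log_c
    raise ValueError("target viability not bracketed by the data")

print(f"IC50 ≈ {ic50_interp(conc, viability):.2f} μM")
```

Interpolating in log-concentration rather than raw concentration matters because dose-response curves are approximately sigmoidal on a log axis.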
Phytochemical investigation of the stem bark of Xylopia pierrei Hance led to the isolation of one triterpene, polycarpol (1); three heptenes, (7R)-acetylmelodorinol (2), (7R)-melodorinol (3), and melodienone (8); and four flavonoids, pinocembrin (4), isochamanetin (5), chrysin (6), and dichamanetin (7). All compounds were isolated for the first time from this plant species. The structures of the isolated compounds were characterized by spectroscopic techniques and by comparison of the spectroscopic data with literature values, and the stereochemistry at the asymmetric carbon was determined by the modified Mosher's method. Among them, compound 2 displayed potent cytotoxic activity against human small cell lung cancer (NCI-H187) cells with an IC50 value of 6.66 μM, 2.3-fold higher activity than that of the reference anticancer drug ellipticine. In addition, compound 2 was also evaluated against non-cancerous Vero cells and showed a high selectivity index of 8.89, 59-fold greater than that of ellipticine. These findings suggest that compound 2 should be further developed as a potential lead molecule for anticancer drug development. Normal and keloid fibroblasts were examined using X-band (9.3 GHz) electron paramagnetic resonance spectroscopy. The effect of genistein on the concentration of free radicals in both normal dermal and keloid fibroblasts after ultraviolet irradiation was investigated. The highest concentration of free radicals was seen in keloid fibroblasts, with normal fibroblasts containing a lower concentration. The concentration of free radicals in both normal and keloid fibroblasts was altered in a concentration-dependent manner by the presence of genistein. The change in intracellular free radical concentration after ultraviolet irradiation of both normal and keloid fibroblasts is also discussed. 
The antioxidant properties of genistein were tested using its 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical-scavenging activity as a model, and the effect of ultraviolet irradiation on its interaction with free radicals was examined. The electron paramagnetic resonance spectra of DPPH showed quenching by genistein. The interaction of genistein with DPPH free radicals in the absence of ultraviolet irradiation was slow, but it was much faster under ultraviolet irradiation. Ultraviolet irradiation thus enhanced the free radical-scavenging activity of genistein. In this study, a series of novel substituted pyrazoles containing indole and thiazole motifs were synthesized and evaluated for their antihyperglycemic activity against alpha-amylase and alpha-glucosidase enzymes. Among them, 2-(5-(1H-indol-3-yl)-3-phenyl-1H-pyrazol-1-yl)-4-(4-bromophenyl)thiazole (3f) showed the best antihyperglycemic activity (IC50 = 236.1 μg/mL) in comparison with the standard drug acarbose (IC50 = 171.8 μg/mL). Through reverse screening, the glucocorticoid receptor (1NHZ) was identified as the best target, and molecular docking studies were performed to understand the interactions of the molecules with the active sites of 1NHZ. The docking study was then conducted against the enzymes used in the in vitro studies, alpha-amylase (4X9Y) and alpha-glucosidase (2QMJ). A molecular dynamics simulation experiment was conducted to analyze the behavior of the docked complex, and the analysis confirmed its stability in terms of energy and hydrogen bonds. The in vitro and in silico evaluations thus generate interesting insight toward exploring heteroaryl-substituted pyrazoles as potent therapeutic antidiabetic agents. A series of rhodanine-3-carboxyalkanoic acid derivatives possessing a 4'-(N,N-dialkylamino or diphenylamino)benzylidene moiety as a substituent at the C-5 position were synthesised and screened for antibacterial activity. 
All the rhodanine derivatives showed bacteriostatic or bactericidal activity against the reference Gram-positive bacterial strains, but lacked activity against the reference Gram-negative bacterial strains and yeast strains. A mild and eco-friendly method has been developed for the synthesis of a series of 1,3-diaminopropan-2-ols 8a-n. The epoxide of epichlorohydrin undergoes ring-opening with amines using MgSO4 or mixed metal oxide catalysts under mild and neutral conditions to afford the corresponding beta-amino alcohols in excellent yields. Preliminary evaluation of the relaxant activity of 8b-n was carried out on rat tracheal rings contracted by 1 μM carbachol. Most of the tested compounds exhibited significant relaxant effects in a concentration-dependent manner. Compound 8n was found to be the most active, being twofold more potent than theophylline (positive control). This compound has potential for development as an anti-asthma drug. Cancer is a multifactorial disease with a network of genes causing genetic alterations. Sophisticated techniques in molecular biology have revealed different cancer pathways, but their mechanistic basis is still shrouded. Tumor necrosis factor and the TNF-related apoptosis-inducing ligand receptor (DR5) have emerged as potential drug targets for cancer therapy. Among natural products, the basidiomycete fungus Ganoderma lucidum and its constituents are endowed with a plethora of activities modulating signaling in cancer. Ganoderic acid, a triterpene with a lanosteroidal skeleton, plays an inextricable role in modulating signaling cascades in various mitogenic pathways. In the present study, receptor-based molecular docking was performed to study the dynamic behavior of the docked complexes and the molecular interactions of ganoderic acid and its isoforms with tumor necrosis factor and its receptor (DR5). 
The top-scoring compounds were compared with already documented natural inhibitors of tumor necrosis factor and DR5 (curcumin, catechin, bupropion, and pentoxifylline) for their binding affinity and for absorption, distribution, metabolism, excretion, and toxicity properties. Ganoderic acid A interacted more promisingly than the other isoforms, with a GScore of -9.858 kcal/mol, lipophilic EvdW of -1.7, H-bond score of -0.9, and Glide emodel of -40.5, with involvement of the Tyr 151, Leu 120, and Gln 149 residues in binding to tumor necrosis factor. In docking with DR5, ganoderic acid A exhibited a GScore of -8.7, H-bond score of -2.9, and Glide emodel of -30.0, with hydrogen bonding involving the Met99, Arg101, Pro97, and Glu98 residues. The already documented natural inhibitors exhibited lower binding energies and weaker docking parameters, giving ganoderic acid A an edge with respect to tumor necrosis factor and DR5. Ganoderic acid A efficiently inhibited the proliferation, viability, intracellular reactive oxygen species, and messenger RNA expression of tumor necrosis factor and DR5 in breast cancer cell lines. Pediatric patients undergoing tracheostomy placement are often medically fragile with multiple comorbidities. The complexity of these patients, partnered with the risks of a newly placed tracheostomy, necessitates a clear understanding of patient management and clinical competence. At our institution, a quality improvement initiative was formed with a focus on increasing the safety of these patients by developing a postoperative care guideline. The Pediatric Emergency Services Network (PESN) was developed to provide ongoing continuing education on pediatric guidelines and pediatric emergency care to rural and nonpediatric hospitals, physicians, nurses, and emergency personnel. A survey was developed and given to participants attending PESN educational events to determine the perceived benefit and application to practice of the PESN outreach program. 
Overall, 91% of participants surveyed agreed that PESN educational events were beneficial to their clinical practice, provided them with new knowledge, and made them more knowledgeable about pediatric emergency care. Education and outreach programs can address health care workers' educational needs. Despite the increasing prevalence of traumatic brain injury (TBI) in children, most injuries in children are mild in severity. Even mild injuries can result in long-term or chronic effects not apparent until the child ages, resulting in increased economic burden and overall lifetime costs related to injury. Early recognition of TBI is essential for ongoing evaluation and management of acute symptoms and reduction of chronic health effects. Providing early interventions to manage acute and post-concussive symptoms and reducing health disparities in children with mild TBI can minimize adverse events that affect health-related quality of life for the injured child and their family and increase overall population health. Pediatric codes outside the ICU are associated with increased morbidity and mortality. This qualitative research highlights results from confidential interviews with 10 pediatric nurses with experience caring for children who required rapid response, code response, or transfer to intensive care. Detailed examination of nurses' experiences revealed local factors that facilitate and inhibit timely transfer of critical patients. Nurses identified themes including the impact of nurse assertiveness, providers' lack of understanding of nursing, team communication, and other hospital cultural barriers. Complex regional pain syndrome (CRPS) is a life-altering and debilitating chronic pain condition. The authors present a case study of a female patient who received high-dose ketamine for the management of her CRPS. 
The innovation of the treatment lies not only in the pharmacologic management of her pain but also in the fact that she was the first patient admitted to our pediatric intensive care unit solely for pain control. The primary component of the pharmacotherapy strategy was an escalating-dose ketamine infusion via patient-controlled analgesia, delivered under therapy guidelines approved by the pharmacy and therapeutics committee. The expertise of advanced practice nurses blended exquisitely to ensure patient- and family-centered care and the coordination of care across the illness trajectory. The patient experienced positive outcomes. Pressure injury prevention is required in all health care environments. Respiratory technology includes invasive and noninvasive positive pressure ventilation methods of support and life-saving equipment. Pressure injury can occur from tracheostomy tubes and their securement devices, or from noninvasive positive pressure ventilation interfaces or their head gear. Methods instituted to decrease hospital-acquired pressure injury related to noninvasive positive pressure ventilation and tracheostomy securement devices are discussed. Inadequate treatment of pain for children in the emergency department is a persistent problem. Health care professionals are bound by ethical principles to provide adequate pain management; in children, this may be challenging owing to cognitive and developmental differences, lack of knowledge regarding best practices, and other barriers. Studies have concluded that immediate assessment, treatment, and reassessment of pain after an intervention are essential. Self-report and behavioral scales are available. Appropriate management includes pharmacologic and non-pharmacologic interventions. Specific diagnoses (eg, abdominal pain or traumatic injuries) have been well studied, and guidance is available to maximize efforts in managing the associated pain. 
Today's health care environment emphasizes patient outcomes, and financial incentives and penalties have placed a high priority on elimination of health care-associated infections (HAIs). The use of standardized care bundles is evidence-based; however, implementation of these bundles alone has not proven effective in eliminating HAIs. Actively learning from HAI events through apparent and systemic cause analysis identifies new barriers to success and opportunities for improvement in further reducing HAIs. The effective use of apparent and systemic cause analysis requires a standardized review, followed by implementation of appropriate steps to remove newly identified barriers. Patient- and family-centered care is endorsed by leading health care organizations. To incorporate the family in interdisciplinary rounds in the pediatric intensive care unit, it is necessary to prepare the family to be an integral member of the child's health care team. When the family is part of the health care team, interdisciplinary rounds ensure that the family understands the rounding process and is an integral part of the discussion. An evidence-based protocol to provide understanding and support to families related to interdisciplinary rounds has a significant impact on satisfaction, trust, and patient outcomes. Capnography, or end-tidal carbon dioxide (EtCO2) monitoring, has a variety of uses in the pediatric intensive care setting. The ability to continuously measure exhaled carbon dioxide can provide vital information about airway, breathing, and circulation in critically ill pediatric patients. Capnography has diagnosis-specific applications for pediatric patients with congenital heart disease, reactive airway disease, neurologic emergencies, and metabolic derangement. This modality allows for noninvasive monitoring and has become the standard of care. 
This article reviews the basic principles and clinical applications of EtCO2 monitoring in the pediatric intensive care unit. At a 72-bed pediatric facility, a multidisciplinary team approach was used to prepare for the expansion of services for patients requiring spinal fusion. This preparation included emergency response requiring massive transfusion, necessitating a Massive Transfusion Protocol (MTP) process. Such instances are low volume/high risk, making it difficult for staff to gain and maintain proficiency with the equipment and processes related to the MTP in a secure environment. The purpose of this article is to highlight the preparation and education put into place before receiving the first pediatric patient for spinal fusion. This study aimed to examine nanoparticle formation from redispersion of binary and ternary solid dispersions. The binary systems were composed of various ratios of glibenclamide (GBM) and polyvinylpyrrolidone K30 (PVP-K30), whereas a constant amount (2.5% w/w) of a surfactant, sodium lauryl sulfate (SLS) or Gelucire 44/14 (GLC), was added to create the ternary systems. GBM nanoparticles were collected after the systems were dispersed in water for 15 min. The obtained nanoparticles were characterized for size distribution, crystallinity, thermal behavior, molecular structure, and dissolution properties. The results indicated that GBM nanoparticles could be formed when the drug content of the systems was lower than 30% w/w in the binary systems and in the ternary systems containing SLS. The particle size ranged from 200 to 500 nm in diameter with a narrow size distribution, and increased with increasing drug content in the systems. The obtained nanoparticles were spherical and amorphous. Furthermore, because of the amorphous form and reduced particle size, the dissolution of the generated nanoparticles was markedly improved compared with the GBM powder. 
In contrast, all the ternary solid dispersions prepared with GLC anomalously provided crystalline particles with sizes over 5 μm and irregular shapes. Interestingly, this was irrespective of the drug content in the systems. These results indicated the ability of GLC to destabilize the polymer network surrounding the particles during particle precipitation. Therefore, this study suggested that the drug content and the quantity and type of surfactant incorporated in solid dispersions drastically affect the physicochemical properties of the precipitated particles. The objective of this study was to develop tanshinol sustained-release pellets (TS-SRPs) for the treatment of angina. Considering the poor intestinal absorption of TS, sodium caprate (SC) was used as an absorption enhancer to improve bioavailability. Single-pass intestinal perfusion in rats demonstrated that the permeability of TS was remarkably enhanced when the weight ratio of TS to SC was 1:3. The pellet cores were then prepared with TS, SC, and MCC at a weight ratio of 1:3:16 via extrusion-spheronization, followed by coating with Eudragit® RS30D/RL30D dispersion (9:1, w/w). In vitro release studies revealed that the release method and rotation rate had no significant effects on drug release from the optimized TS-SC-SRPs, whereas the dissolution medium did. The release behavior was characterized by a non-Fickian diffusion mechanism. The pellets possessed a dispersion-layered spherical structure and were stable during three months of storage at 40 °C/75% RH. Compared with TS immediate-release pellets, the AUC(0-24) in healthy rabbits was increased 1.97-fold, with prolonged MRT (p < .05). Pharmacodynamic studies in rabbits with angina showed that the optimized TS-SC-SRPs had a steady and improved efficacy with a synchronous drug concentration-efficacy relationship. 
Consequently, the preparation of sustained-release pellets with an absorption enhancer provides a potential strategy to prolong release and enhance efficacy for hydrophilic drugs with poor intestinal absorption. Objectives: Dry powder formulations are extensively used to improve the stability of antibodies, and spray drying is one of the important methods for protein drying. This study investigated the effects of trehalose, hydroxypropyl beta-cyclodextrin (HPBCD), and beta-cyclodextrin (BCD) on the stability and particle properties of spray-dried IgG. Methods: A D-optimal design was employed for experimental design and for analysis and optimization of the variables. The size and aerodynamic behavior of particles were determined using laser light scattering and a glass twin impinger, respectively. In addition, the stability, ratio of beta sheets, and morphology of the antibody were analyzed using size exclusion chromatography, IR spectroscopy, and electron microscopy, respectively. Results: Particle properties and antibody stability were significantly improved in the presence of HPBCD. In addition, particle aerodynamic behavior, in terms of fine-particle fraction (FPF), was enhanced up to 52.23%. Furthermore, the antibody was better preserved not only during spray drying but also during long-term storage. In contrast, application of BCD resulted in the formation of larger particles. Although trehalose led to inappropriate aerodynamic properties, it efficiently decreased antibody aggregation. Conclusion: HPBCD is an efficient excipient for the development of inhalable protein formulations. Optimal particle properties and antibody stability were obtained with a proper combination of cyclodextrins and simple sugars such as trehalose. 
Objective: The aim of this study was to evaluate the use of PEG/glycerides of different HLB, oleoyl macrogol-6-glycerides (Labrafil® M 1944 CS) and caprylocaproyl macrogol-8-glycerides (Labrasol®), compared with Labrafac lipophile® as a PEG-free glyceride, in the preparation of nanostructured lipid carriers (NLCs). The PEG/glycerides are suggested to perform a dual function: as the oily component and as the PEG-containing substrate required for producing PEGylated carriers without physical or chemical synthesis. Methods: The lipid nanocarriers were loaded with simvastatin (SV) as a promising anticancer drug. An optimization study of the NLC fabrication variables was first conducted. The effect of lyophilization was investigated using cryoprotectants of various types and concentrations. The prepared NLCs were characterized in terms of particle size (PS), size distribution (PDI), zeta potential (ZP), drug entrapment, in vitro drug release, morphology, and drug-excipient interactions. The influence of the PEG/glycerides on the cytotoxicity of SV was evaluated on MCF-7 breast cancer cells, in addition to the cellular uptake of fluorescent blank NLCs. Results: Altering the oil type had a significant impact on PS, ZP, and drug release. Both sucrose and trehalose showed the lowest increase in PS and PDI of the reconstituted lyophilized NLCs. The in vitro cytotoxicity and cellular uptake studies indicated that SV showed the highest antitumor effect on MCF-7 cancer cells when loaded into Labrasol® NLCs, which also demonstrated high cellular uptake. Conclusion: The study confirms the applicability of PEG/glycerides in the development of NLCs. Encapsulating SV in Labrasol®-containing NLCs could enhance the antitumor effect of the drug. 
The bioavailability of the anthelminthic flubendazole was remarkably enhanced in comparison with the pure crystalline drug by developing completely amorphous electrospun nanofibres with a matrix consisting of hydroxypropyl-beta-cyclodextrin and polyvinylpyrrolidone. The formulations thus produced can potentially be active against the macrofilariae parasites causing tropical diseases such as river blindness and elephantiasis, which together affect more than a hundred million people worldwide. The bioavailability enhancement was based on considerably improved dissolution: the release of a 40 mg dose could be achieved within 15 min. Accordingly, administration of the nanofibrous system ensured an increased plasma concentration profile in rats, in contrast to the practically non-absorbable crystalline flubendazole. Furthermore, easy-to-grind fibers could be developed, which enabled compression of easily administrable immediate-release tablets. Objective: Artesunate (ART) has proven potential antiproliferative activity, but its instability and poor aqueous solubility limit its application as an anticancer drug. The present study was undertaken to develop coaxial electrospraying as a novel technique for fabricating nanoscale drug delivery systems of ART as core-shell nanostructures. Methods: The core-shell nanoparticles (NPs) were fabricated by coaxial electrospraying, and the formation mechanisms of the NPs were examined. The physical solid state and drug-polymer interactions of the NPs were characterized by X-ray powder diffraction (XRPD) and Fourier transform infrared (FTIR) spectroscopy. The effects of the materials and the electrospraying process on the particle size and surface morphology of the NPs were investigated by scanning electron microscopy (SEM). Drug release from the NPs was determined in vitro by a dialysis method. 
Results: The ART-loaded poly(lactic-co-glycolic acid) (PLGA)/chitosan (CS) NPs exhibited a mean particle size of 303 ± 93 nm and relatively high entrapment efficiency (80.5%). The release pattern showed an initial rapid release within two hours followed by very slow extended release, and approached the Korsmeyer-Peppas model. Conclusions: The present results suggest that core-shell NPs containing PLGA and CS have potential as carriers in anticancer drug therapy with ART. This study was oriented toward disintegration profiling of diclofenac sodium (DS) immediate-release (IR) tablets and development of its relationship with medium permeability k_perm based on the Kozeny-Carman equation. Batches (L1-L9) of DS IR tablets with different porosities and specific surface areas were prepared at different compression forces and evaluated by porosity measurement, in vitro dissolution, and particle-size analysis of the disintegrated mass. The k_perm was calculated from the porosities and specific surface areas, and disintegration profiles were predicted from the dissolution profiles of the IR tablets by the stripping/residual method. The disintegration profiles were subjected to exponential regression to find the respective disintegration equations and rate constants k_d. Batches L1 and L2 showed the fastest disintegration rates, as evident from their bi-exponential equations, while the remaining batches L3-L9 exhibited first-order or mono-exponential disintegration kinetics. The 95% confidence interval (CI95%) revealed significant differences between the k_d values of the different batches except L4 and L6. Similar results were also observed for the dissolution profiles of the IR tablets by the similarity (f2) test. The final relationship between k_d and k_perm was found to be hyperbolic, signifying the initial effect of k_perm on the disintegration rate. The results showed that disintegration profiling is possible because a relationship exists between k_d and k_perm. 
The latter, being related to porosity and specific surface area, can be determined by nondestructive tests. Background: Free radical scavengers and antioxidants, with the main focus on enhanced targeting to the skin layers, can provide protection against skin ageing. Objective: The aim of the present study was to prepare a nanoethosomal formulation of gamma-oryzanol (GO), a water-insoluble antioxidant, for its dermal delivery to prevent skin aging. Methods: The nanoethosomal formulation was prepared by a modified ethanol injection method and characterized by laser light scattering, scanning electron microscopy (SEM) and X-ray diffraction (XRD) techniques. The effects of formulation parameters on nanoparticle size, encapsulation efficiency percent (EE%) and loading capacity percent (LC%) were investigated. The antioxidant activity of the GO-loaded formulation was investigated in vitro using normal African green monkey kidney fibroblast cells (Vero). The effect of control and GO-loaded nanoethosomal formulations on the superoxide dismutase (SOD) and malondialdehyde (MDA) content of rat skin was also probed. Furthermore, the effect of GO-loaded nanoethosomes on skin wrinkle improvement was studied by dermoscopic and histological examination on healthy humans and UV-irradiated rats, respectively. Results: The optimized nanoethosomal formulation showed promising characteristics, including a narrow size distribution (polydispersity index 0.17 +/- 0.02), a mean diameter of 98.9 +/- 0.05 nm, an EE% of 97.12 +/- 3.62%, an LC% of 13.87 +/- 1.36% and a zeta potential of -15.1 +/- 0.9 mV. The XRD results confirmed uniform drug dispersion in the nanoethosome structure. In vitro and in vivo antioxidant studies confirmed the superior antioxidant effect of the GO-loaded nanoethosomal formulation compared with the control groups (blank nanoethosomes and GO suspension). Conclusions: Nanoethosomes were a promising carrier for dermal delivery of GO and consequently had a superior anti-aging effect.
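Two of the studies above characterize kinetics by regression: the ART core-shell nanoparticle release data were fitted to the Korsmeyer-Peppas model, and the DS tablet disintegration profiles to exponential equations. As an illustration only (the function, the data, and the parameter values below are synthetic, not taken from any of these papers), a Korsmeyer-Peppas fit M_t/M_inf = k*t^n can be obtained by linear regression on the log-transformed data over the early-release portion:

```python
import math

def fit_korsmeyer_peppas(times, fractions):
    """Fit M_t/M_inf = k * t**n by linear regression on logs.

    Only the early-release points (fraction <= 0.6) are used, the range
    over which the model is conventionally applied. Returns (k, n).
    """
    pts = [(math.log(t), math.log(f)) for t, f in zip(times, fractions)
           if t > 0 and 0 < f <= 0.6]
    m = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)  # slope = release exponent n
    k = math.exp((sy - n * sx) / m)                # intercept = log(k)
    return k, n

# Synthetic release data generated with k = 0.2, n = 0.45 (Fickian-like release)
times = [0.25, 0.5, 1.0, 2.0]          # hours
fracs = [0.2 * t ** 0.45 for t in times]
k, n = fit_korsmeyer_peppas(times, fracs)
```

On noise-free data the regression recovers the generating parameters exactly; with real dissolution data the exponent n is then inspected to classify the release mechanism.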
Cationic liposomes are potential nanocarriers for delivering drugs to solid tumors, with proven efficiency in targeting tumor tissues. However, their major limitation is charge-related instability and blood toxicity upon intravenous injection. In order to overcome these problems while maintaining tumor targeting potency, we modified cationic liposomes with low molecular weight heparin (LMWH) to obtain a series of liposomes with different surface charges. Both in vitro and in vivo studies, including serum stability, blood hemolysis, cellular uptake, cytotoxicity and in vivo biodistribution, were performed. The results indicated that the cationic liposome with a surface charge of 5 mV (denoted LLip-C) had serum stability and mild hemocytolysis similar to those of the anionic liposome (LLip-D), but better cellular uptake owing to the electrostatic interaction between cationic liposomes and cell membranes. Furthermore, we prepared curcumin-loaded liposomes to investigate therapeutic efficiency. Curcumin-loaded LLip-C (Curcumin-LLip-C) was more inclined to localize in the cytoplasm and nuclei than curcumin-loaded LLip-D (Curcumin-LLip-D). In vitro, Curcumin-LLip-C also exhibited higher inhibition of tumor cells than Curcumin-LLip-D. These results confirmed that a slightly positive surface charge of nanocarriers can achieve a balance between good antitumor efficacy and mild adverse effects. Objectives: A new improved mometasone furoate (Elocon (TM)) cream with an emulsification system that produces a stable emulsion has been developed.
In order to register the product in various markets, it was essential to ensure the cream was topically well tolerated and that it was bioequivalent to the reference product. Methods: Phase I clinical studies were performed to assess the local safety and tolerability upon multiple dosing of this new cream, as well as to assess single-dose bioequivalence relative to the marketed product. Bioequivalence was assessed using a vasoconstrictive assay (VCA) after a dose-duration pilot study was completed with the marketed Elocon cream. Key findings: The new mometasone cream and its vehicle were nonirritating in healthy subjects during 21-day patch application (MCII <0.025). The positive control was moderately irritating in the same study. The pivotal VCA study enrolled 162 subjects, with 105 detectors included in the analysis of bioequivalence. In the 105 detectors, the ratio (x100%) of AUEC values at ED50 for test vs. standard (90% CI) was 112.91% (105.55, 120.87), within the bioequivalence criteria of (80, 125). Conclusions: These studies supported the registration of the reformulated mometasone cream in various markets. Novel solid dispersions of oleanolic acid-polyvinylpolypyrrolidone (OLA-PVPP SDs) were designed and prepared to improve the apparent solubility of the drug, as well as the stability, fluidity and compressibility of the SDs. The disintegrable OLA-PVPP SDs were then evaluated both in vitro and in vivo. DSC, XRD, IR and SEM analyses proved the formation of the OLA-PVPP SD and its amorphous state. The results of the fluidity study, moisture absorption test and stability test showed that an OLA-PVPP SD with good fluidity and qualified stability was successfully obtained. Meanwhile, an excellent dissolution rate was achieved in vitro: the dissolution test showed that approximately 50-75% of OLA was dissolved from the SDs within the first 10 min, about 10-15 times that of free OLA.
The in vivo study indicated that the formation of the solid dispersion could largely improve the absorption of OLA, resulting in a much shorter T-max (p < .05) and higher C-max (p < .01) than those of the free drug. The AUC(0-infinity) of OLA-PVPP SDs (1:6) was 155.4 +/- 37.24 h.ng/mL, compared with 103.11 +/- 26.69 h.ng/mL and 94.92 +/- 13.05 h.ng/mL for the OLA-PVPP physical mixture (1:6) and free OLA, respectively. These results proved that PVPP could be a promising and industrially feasible carrier for the manufacture of solid dispersions. The aim of the present investigation was to enhance the oral bioavailability of olmesartan medoxomil (OM) by improving its solubility and dissolution rate through preparation of a nanosuspension (OM-NS), using the Box-Behnken design. Four factors were evaluated at three levels. The independent variables were concentration of drug (X1), concentration of surfactant (X2), concentration of polymer (X3) and number of homogenization cycles (X4). Based on preliminary studies, particle size (Y1), zeta potential (ZP) (Y2) and % drug release at 5 min (Y3) were chosen as dependent responses. OM-NS was prepared by a high-pressure homogenization method. The size, PDI, ZP, assay, in vitro release and morphology of OM-NS were characterized. Further, the pharmacokinetic (PK) behavior of OM-NS was evaluated in male Wistar rats. The statistically optimized OM-NS formulation exhibited a mean particle size of 492 nm, a ZP of -27.9 mV and 99.29% release in 5 min. OM-NS showed a more than four-fold increase in solubility over pure OM. DSC and XRD analyses indicated that the drug incorporated into OM-NS was in amorphous form. The morphology of OM-NS was found by scanning electron microscopy to be nearly spherical with high dispersity. The PK results showed that the OM lyophilized nanosuspension (NS) exhibited improved PK properties compared with a coarse powder suspension and a marketed tablet powder suspension (TS).
The oral bioavailability of the lyophilized NS was increased 2.45- and 2.25-fold compared with the marketed TS and the coarse powder suspension, respectively. The results of this study lead to the conclusion that the NS approach was effective in preparing OM formulations with enhanced dissolution and improved oral bioavailability. Background: Dioscin has shown cytotoxicity against cancer cells, but its poor solubility and stability have limited its clinical application. In this study, we designed mixed micelles composed of TPGS and Soluplus (R) copolymers entrapping the poorly soluble anticancer drug dioscin. Method: In order to improve the aqueous solubility and bioactivity of dioscin, TPGS/Soluplus (R) mixed micelles with an optimal ratio were prepared using a thin-film hydration method, and their physicochemical properties were characterized. Cellular cytotoxicity and uptake of the dioscin-loaded TPGS/Soluplus (R) mixed micelles were studied in MCF-7 breast cancer cells and A2780s ovarian cancer cells. The pharmacokinetics of free dioscin and dioscin-loaded TPGS/Soluplus (R) mixed micelles was studied in vivo in male Sprague-Dawley rats via a single intravenous injection in the tail vein. Results: The average size of the optimized mixed micelles was 67.15 nm, with 92.59% drug encapsulation efficiency and 4.63% drug loading efficiency. The in vitro release profile showed that the mixed micelles presented sustained release behavior compared with an anhydrous ethanol solution of dioscin. In vitro cytotoxicity assays were conducted on human cancer cell lines, including A2780s ovarian cancer cells and MCF-7 breast cancer cells. The mixed micelles exhibited better antitumor activity than free dioscin against all cell lines, which may benefit from the significant increase in the cellular uptake of dioscin from mixed micelles compared with free dioscin.
The pharmacokinetic study showed that the mixed micelle formulation achieved a 1.3 times longer mean residence time (MRT) in circulation and a 2.16 times larger area under the plasma concentration-time curve (AUC) than the free dioscin solution. Conclusion: Our results suggest that the dioscin-loaded mixed micelles developed in this study might be a potential nano drug-delivery system for cancer chemotherapy. Purpose: Zaleplon (ZL) is a hypnotic drug prescribed for the management of insomnia and convulsions. The oral bioavailability of ZL is low (approximately 30%) owing to poor water solubility and hepatic first-pass metabolism. The cornerstone of this investigation was to develop and optimize solid lipid nanoparticles (SLNs) of ZL with the aid of a Box-Behnken design (BBD) to improve the oral bioavailability. Methods: A design space with three formulation variables at three levels was evaluated in the BBD. Amount of lipid (A1), amount of surfactant (A2) and concentration of co-surfactant (%) (A3) were selected as independent variables, whereas particle size (B1), entrapment efficiency (B2) and zeta potential (ZP, B3) were the responses. ZL-SLNs were prepared by hot homogenization with ultrasonication and evaluated for these responses to obtain an optimized formulation. The morphology of the nanoparticles was observed under SEM. DSC and XRD studies were performed to examine the native crystalline behavior of the drug in the SLN formulations. Further, in vivo studies were performed in Wistar rats. Results: The optimized formulation, with 132.89 mg of lipid, 106.7 mg of surfactant and 0.2% w/v of co-surfactant, resulted in nanoparticles with a size of 219.9 +/- 3.7 nm, a surface charge of 25.66 +/- 2.83 mV and an entrapment efficiency of 86.83 +/- 2.65%. SEM studies confirmed the spherical shape of the SLN formulations. The DSC and XRD studies revealed the transformation of the crystalline drug to amorphous form in the SLN formulation.
In vivo studies in male Wistar rats demonstrated an improvement in the oral bioavailability of ZL from SLNs over a control ZL suspension. Conclusions: The enhancement in the oral bioavailability of ZL from SLNs, developed with the aid of BBD, demonstrated the suitability of lipid-based nanoparticles as a potential carrier for improving the oral delivery of this poorly soluble drug. Transnational academic mobility is often characterized in relation to terms such as 'brain drain', 'brain gain', or 'brain circulation' - terms that isolate researchers' minds from their bodies, while saying nothing about their political identities as foreign nationals. In this paper, I explore the possibilities of a more nomadic 'political ontology', where the body is 'multifunctional and complex, a transformer of flows and energies, affects, desires and imaginings' (p. 25). In this sense, academic mobility is not only the outcome of national innovation and economic competitiveness strategies, but also sets the conditions for epistemic and ontological change at the level of the individual. In this paper, I explore a personal account of the nomadic political ontology of academic mobility to exemplify the interrelationships between nationalism, academic belonging and transnationalism. My experiences as a transnational subject affect the stability and scope of my work as a policy-oriented researcher who studies the academic profession and the internationalization of higher education. My positionality in relation to my research focus is likely not unique to the field of higher education studies or educational research more broadly, which permits a wider applicability of this exploration beyond personal narrative and a particular national context. This personal reflection, guided by nomadic theory and post-structural possibilities, offers a viewpoint of the academic profession beyond the standard mobility discourse.
Space, time and movement have particular meanings and significance for Australian prisoners attempting higher education while incarcerated. In a sense, the prison is 'another world' or 'country' with its own spatial and temporal arrangements and constraints for incarcerated university students. The contemporary digital university typically presupposes a level of mobility and access to mobile communication technologies that most Australian prisoners cannot access. This article examines the immobility of incarcerated students and their attempts to complete tertiary and pre-tertiary distance education courses without direct internet access. Drawing on critical mobilities theory, this article also explores attempts to address this digital disconnection of incarcerated students and where such interventions have been frustrated by movement issues within the prison. Prison focus group data suggest the use of modified digital learning technologies in prisons needs to be informed by a critical approach to the institutional processes and practices of this unique and challenging learning environment. This article also highlights the limitations and contradictions of painful immobilisation as a core strategy of Australia's modern, expanding penal state, which encourages rehabilitation through education while effectively cutting prisoners off from the wider digital world. Over the past 25 years charter school policies have spread through the United States at a rapid pace. However, despite this rapid growth, these policies have spread unevenly across the country, with important variations in how charter school systems function in each state. Drawing on case studies in Michigan and Oregon, this article argues that mobile education policies are best conceptualized as made up of both mobile and immobile elements that continually shape and reshape those policies.
This article contributes to a growing literature on policy mobilities by proposing that affect be considered in analyses of the movements and transformations of policy over time and space. In particular, collective affective conditions, the role of affect in the infrastructures and actors of policy apparatuses, and the mediating influences of affective bodily encounters are discussed in relation to why and how policies move. The article suggests that policy mobilities research could be strengthened by further examining how affect is inherent in familiar considerations of policy actors and networks, and tools and infrastructures such as policy documents, meetings, and data, and their contributions to policy flows. In addition, encompassing affective atmospheres and structures of feeling, as well as affect in the specific relationships between people and with place, are indicated as important for the study of policy mobility and immobility, including in shaping policy uptake and resistance. Examples from educational research are used to elaborate these considerations of affect for policy mobilities and to suggest possible topical and methodological implications for critical policy research. As universities have succumbed to market discourses, they have adopted advertising strategies. It is not uncommon to see advertisements for them displayed in such mobile spaces as railway stations and alongside highways. Whilst it is true that such environments have always sought to take advantage of populations in transit, the fact that higher education institutions have turned to them as promotional sites reflects the fact that the 'transit' demographic now includes large numbers of young people and high school students. In this paper, a sample of higher education advertisements found in Sydney's transit spaces is analysed along with the 'rationale' provided by the advertising companies responsible for their design.
It is argued that their existence reflects the fact that universities compete against one another for students and need to develop a persuasive 'brand'. Thus, in line with neo-liberalist constructions of subjectivity, they individualise the educational experience and translate that experience into an economic asset, as a value-adding process. It is of note, then, that much of the imagery and copy of the advertising 'visualises' education as a journey and underpins the fact that mobility is an inescapable predicate of quotidian life. The argument of this paper is that new methodologies associated with the emerging field of 'policy mobilities' can be applied, and are in fact required, to examine and research the networked and relational, or 'topological', nature of globalised education policy, which cuts across the new spaces of policymaking and new modes of global educational governance. In this paper, we examine the methodological issues pertaining to the study of the movement of policy. Informed by contemporary methodological thinking around social network analysis and the ethnographic notion of 'following the policy', we discuss the limitations of these approaches in adequately addressing presence in policy network analysis, and the problem of representing the speed and intensity of policy mobility, even while they attempt to solve the problem of relationality and territoriality. We conclude that the methodologies of policy mobility are inexorably intertwined with the (constantly) changing phenomena under examination, and hence require what Lury and Wakeford describe as 'inventive methods'. Higher education is understood as essential to enabling social mobility. Research and policy have centred on access to university, but recently attention has turned to the journey of social mobility itself - and its costs. Long-distance or 'extreme' social mobility journeys particularly require analysis.
This paper examines journeys of first-in-family university students in the especially high-status degree of medicine, through interviews with 21 students at an Australian medical school. Three themes are discussed: (1) the roots of participants' social mobility journeys; (2) how sociocultural difference is experienced and negotiated within medical school; and (3) how participants think about their professional identities and futures. Students described getting to medical school 'the hard way', and emphasised the different backgrounds and attitudes of themselves and their wealthier peers. Many felt like 'imposters', using self-deprecating language to highlight their 'lack of fit' in the privileged world of medicine. However, such language also reflected resistance to middle-class norms and served to create solidarity with community of origin, and, importantly, patients. Rather than narratives of loss, students' stories reflect a tactical refinement of self and incorporation of certain middle-class attributes, alongside an appreciation of the worth their 'difference' brings to their new destination, the medical profession. General aviation and small unmanned aircraft systems are less redundant, may be less thoroughly tested, and are flown at lower cruise altitudes than commercial aviation counterparts. These factors result in a higher probability of a forced or emergency landing scenario. Currently, general aviation relies on the pilot to select a landing site and plan a trajectory, even though workload in an emergency is typically high, and decisions must be made rapidly. Although sensors can provide local real-time information, awareness of more distant or occluded regions requires database and/or offboard data sources. This paper considers different data sources and how to process these data to inform an emergency landing planner regarding risks posed to property, people on the ground, and the aircraft itself.
Detailed terrain data are used for selection of candidate emergency landing sites. Mobile phone activity is evaluated as a means of real-time occupancy estimation. Occupancy estimates are combined with population census data to estimate emergency landing risk to people on the ground. Openly available databases are identified and mined as part of an emergency landing planning case study. The European Coordination Centre for Accident and Incident Reporting Systems (ECCAIRS) develops an information system for reporting aviation occurrences on the European scale. The system makes use of various taxonomies, such as the taxonomy of event types or a taxonomy of descriptive factors. However, the ECCAIRS data model and associated taxonomies are complex and difficult to understand, which reduces the interpretability of the records. In this paper, the problems ECCAIRS users face during occurrence reporting are discussed, as well as subsequent searches in reported occurrences. Next, it is shown how proper conceptual modeling with ontological foundations could leverage the quality of occurrence categorization, and thus better exploitability of the ECCAIRS system. The ontological model is demonstrated on the Aviation Vocabulary Explorer, a new prototypical tool for exploring ECCAIRS. This paper presents the autonomous micro aerial vehicle pilot, a new autopilot platform weighing 6.25 g and measuring 11.3 cm², specifically designed for use on micro/miniature aerial vehicle mobile sensing platforms. An overview of the hardware, firmware, ground station, and validation testing used to demonstrate this autopilot as a viable research instrument for atmospheric thermodynamic sensing on micro/miniature aerial vehicles is presented.
The autonomous micro aerial vehicle pilot incorporates a 16-bit, 140 MHz processor, global positioning system, dual radios, inertial measurement unit, pressure sensor, humidity sensor, and temperature sensor. Through these components, the autonomous micro aerial vehicle pilot is capable of full-state feedback, vehicle state estimation, localization, and wireless networking. Notable features of this autopilot are its dual-radio configuration, providing redundancy and adaptability in communication, and its detachable sensor breakout design, which allows increased flexibility in sensor placement on a vehicle. Full-state feedback of the autopilot platform was validated through a series of bench tests. These include a unique technique for dynamic inertial measurement unit validation performed using the present group's model positioning system, as well as comparison of estimated pressure values with known values at multiple heights and of global positioning system values with a known path. Systemwide validation was performed through flight tests on a micro/miniature aerial vehicle airframe. Policymakers are implementing reforms with the assumption that students do better when attending high-achieving schools. In this article, we use longitudinal data from Chicago Public Schools to test that assumption. We find that the effects of attending a higher performing school depend on the school's performance level. At elite public schools with admission criteria, there are no academic benefits (test scores are not better, grades are lower), but students report better environments. In contrast, forgoing a very low-performing school for a nonselective school with high test scores and graduation rates improves a range of academic and nonacademic outcomes. We find evidence of enrollment increases in both Syracuse and Buffalo following the announcement of a place-based scholarship program, Say Yes to Education.
While the Syracuse increases were accompanied by enrollment declines in surrounding suburban districts, the Buffalo increases coincided with declines in private school enrollments. Buffalo and Rochester also saw enrollment increases relative to trends following the announcement of Say Yes in Syracuse, suggesting that part of the increase in Syracuse might be attributable to factors other than Say Yes. We also find evidence of increases in home prices in Syracuse after the program's announcement, as well as decreases in the surrounding suburbs. We do not find evidence of similar housing price changes in Buffalo. One critical factor in policy implementation is how teachers interpret policy. Previous research largely overlooks how the broader culture shapes teachers' interpretations. In the current research, we explore how teachers' interpretations of instructional reforms are associated with the logics of broad societal institutions. Our longitudinal mixed-methods study of 117 teachers at three urban public schools demonstrates that teachers' interpretations are rooted in market accountability logics, professional bureaucracy logics, and communal sentiment logics. Teachers' logics partially depend on their school and community contexts. The most substantive differences in teachers' logics result from individual attributes, namely, race/ethnicity. One implication is that effective policy implementation depends on formulation and framing that address the multiple and potentially competing logics that motivate teachers' responses to reform. Many states require prospective principals to pass a licensure exam to obtain an administrative license, but we know little about the potential effects of principal licensure exams on the pool of available principals or whether scores predict later job performance. We investigate the most commonly used exam, the School Leaders Licensure Assessment (SLLA), using 10 years of data on Tennessee test takers. 
We uncover substantial differences in passage rates by test-taker characteristics. In particular, non-Whites are 12 percentage points less likely than otherwise similar White test takers to attain the required licensure score. Although candidates with higher scores are more likely to be hired as principals, we find little evidence that SLLA scores predict measures of principal job performance, including supervisors' evaluation ratings or teachers' assessments of school leadership from a statewide survey. Our results raise questions about whether conditioning administrative licensure on SLLA passage is consistent with principal workforce diversity goals. As state and federal policymakers continue to adopt more centralized policies, it is increasingly important to understand how policies, particularly those designed to enhance education for underserved students, are implemented locally. We employ a mixed-methods sequential explanatory design to investigate the implementation of one policy, that which guides the process of reclassifying English learners (ELs) as English proficient in Texas. First, we use event history analysis to determine whether the likelihood of being reclassified differs across the state for similar ELs. Second, we utilize interview and observation data from eight schools to unpack how practitioners implement reclassification policy. We find differences in the hazard rate of reclassification across the state, which is linked to practitioners' understanding of the policy. The Every Student Succeeds Act (ESSA) requires states to identify and turn around struggling schools, with federal school improvement money required to fund evidence-based policies. Most research on turnarounds has focused on individual schools, whereas studies of district-wide turnarounds have come from relatively exceptional settings and interventions. 
We study a district-wide turnaround of a type that may become more common under ESSA: an accountability-driven state takeover of Massachusetts's Lawrence Public Schools (LPS). A differences-in-differences framework comparing LPS to demographically similar districts not subject to state takeover shows that the turnaround's first 2 years produced sizable achievement gains in math and modest gains in reading. We also find no evidence that the turnaround resulted in slippage on non-test-score outcomes, and suggestive evidence of positive effects on grade progression among high school students. Intensive small-group instruction over vacation breaks may have led to particularly large achievement gains for participating students. This article examines the impacts of the Michigan Merit Curriculum (MMC), a statewide college-preparatory curriculum that applies to the high school graduating class of 2011 and later. Our analyses suggest that the higher expectations embodied in the MMC had only a slight impact on student outcomes. Looking at student performance on the ACT, the only clear evidence of a change in academic performance comes in science. Our best estimates indicate that ACT science scores improved by 0.2 points (or roughly 0.04 SD) as a result of the MMC. Our estimates for high school completion are sensitive to the choice of specification, though some evidence suggests that the MMC reduced graduation for the least prepared students. Research increasingly points to the importance of parental engagement in children's education. Yet, little research has investigated whether prompting parents to be more involved in college processes improves student outcomes. We investigate experimentally whether providing both students and their parents with personalized outreach about tasks students need to complete to enroll in college leads to improved college enrollment outcomes relative to providing outreach to students only.
We utilize text messaging to provide information and advising to students and parents. Across treatment arms, the text outreach increased on-time college enrollment by a statistically significant 3.1 percentage points. Texting both parents and students, however, did not increase the efficacy of the outreach. We situate this result in the broader parental engagement literature. This study intends to fill some gaps in knowledge about online communities and their influence on consumers' purchase decisions. A review of the extant literature on online brand communities reveals that most prior studies have focused on identifying the factors associated with joining and participating in online communities. However, few studies have examined the implications of participation in online brand communities for consumer behaviors, such as brand liking and intention to purchase the brand. In addition, these few studies have focused either on the influence of operational elements of the online community on purchase intention or on the influence of community members' characteristics on purchase intention. Therefore, this study aims to provide an integrated framework for the operational and user-characteristic antecedents associated with consumers' participation in online brand communities and their effect on purchase decisions. Structural equation modeling was used to test the conceptual model. Data were collected via a survey of 282 members of the Facebook pages of Egyptian telecommunication companies. The findings provide insights into how these antecedents should be managed to enhance participation in virtual telecommunication communities. This research explores the roles of various interaction behaviors of service frontliners in activating customer participation and creating customer value in the context of health care service.
Based on the data of 285 paired patient-physician cases of serious chronic diseases, the analysis revealed that individuated, relational, and empowered interactions expressed by a service frontliner play a critical role in activating customer participation, leading to a higher level of perceived value, while ethical interaction has a direct-only impact on perceived value. These results imply that frontliner interaction can be further broken into participation-activating interaction and value-enhancing interaction, both of which eventually lead to the improvement of customer value. This study aims to examine the moderating effects of customer personal characteristics on the satisfaction-loyalty link in order to overcome potential response bias and common-method variance in the link, by using both real-life purchasing-behavior data and survey data in a cross-method panel design. Two separate data collection procedures dealt with survey and customer relationship management (CRM) data. A total of 391 members of a restaurant loyalty program participated in the survey. Additional data were gathered from the restaurant's CRM system. Data were analyzed using SEM and multi-group analysis. This study confirmed the nonlinear relationship between customer satisfaction and brand loyalty due to the significant moderating effects of customers' personal characteristics on subsequent stages of the link. The primary purpose of this study was to investigate how top manager attributes account for the implementation of risk-averse strategy by applying a conceptual framework based on upper echelons theory. We selected franchising as a representative risk-averse strategy based on resource scarcity, agency, and risk-sharing theories. We chose the top management team (TMT) as a proxy for the upper echelon to examine the theoretical argument. The study period was from 2000 to 2013, and 29 restaurant companies were included in the research.
Related data were derived from EXECUCOMP, COMPUSTAT, Annual 10-K, and publicly accessible resources (e.g., LinkedIn and Business Week). Feasible generalized least squares and random-effect regression models were used to analyze the data. The results suggested that the formal education levels of top managers negatively affected franchising implementation, whereas the tenure of TMT members positively influenced restaurant franchising. After service failure situations, firms often carry out transactional activities to achieve customer recovery (CR), using corrective actions to restore the exchange (e.g., economic and social compensations). Furthermore, during the service recovery process, firms can encourage activities of co-creation (CC) to prevent similar future failures. This paper discusses the importance of CC and service recovery process communication (RPC), in which customers are informed of the adoption of solutions to address the cause of the failure, so as to avoid the same problem happening again. Experimental studies investigate the impact, individually and together, of CR, CC, and RPC on satisfaction, repurchase intentions, and word of mouth. The results indicate that CC and RPC improve customers' satisfaction, repurchase intentions, and word of mouth. Firms that want to maximize the return on their efforts to prevent service failures should encourage CC, develop solutions to prevent future failure recurrence, and implement strategies of RPC. Firms must decide how to promote CC and which media to use for RPC. Applying resource-based theory and signaling theory, we argue that firm and employee reputations affect consumer adoption of advice offered by professional service providers, and these effects are contingent on contextual variables. Our study on brokerage reports in Singapore supports our arguments.
We show that reliance on firm (employee) reputation when adopting advice is higher (lower) if the evaluation of an entity is an initial rather than a repeated one. Also, reliance on employee reputation increases with a stronger recommendation or when the entity has a business relationship with the advice-giving firm. These findings have implications for advice-giving firms and policy makers. This paper examines the influence of four variables on online tourism customer satisfaction: website image perceptions, online routine, online knowledge, and customer innovativeness, and their simultaneous effects. The analysis gauges the moderating role of three socio-demographic characteristics: gender, age group, and educational background. A sample of 3188 regular online consumers of the Portuguese leader in the tourism sector was analyzed using structural equation modeling. Results show that website image, routine, and knowledge significantly influence e-customer satisfaction. Only gender moderates the impact of website knowledge on e-satisfaction. These results enable a better understanding of customer specificities, suggesting practical actions for addressing their real needs and expectations. The aim of this paper is to analyse the phenomenon of crowdfunding and determine whether it can be considered a service ecosystem, where the context frames innovation through value co-creation. A qualitative, multiple case study approach is used to analyse three platforms and six initiatives in the Spanish arts sector. The findings reveal that crowdfunding can be considered an ecosystem where value-in-context frames seven types of value co-creation, offering a contribution both to ecosystem theory and to the field of co-creation. This study developed a procedure to determine the quality priorities of the internet protocol television (IPTV) service. First, a set of key elements of IPTV service quality was developed based on a literature review and a focus group interview.
Second, the analytic hierarchy process and the Kano model were applied to identify the requirements of experts and customers, respectively. The experts measured the importance and difficulty of management, whereas the customers measured the satisfaction level and importance of each quality element. Third, quality priorities were calculated through the entropy principle and scenario-based analysis. The proposed procedure is illustrated with a case study of a telecommunications company in Korea. The measurements of complex modulus G* of asphalt and aged Trinidad-Lake-Asphalt-modified (TLA-modified) asphalt were conducted through dynamic shear rheological tests. This parameter was used to model the aging kinetics of both types of asphalt to evaluate and predict the rheological properties at different aging times and temperatures. By detailed analysis, it was observed that the aging process of both types of asphalt follows first-order kinetics. Moreover, it was observed that the TLA-modified asphalt possesses a higher activation energy than the pure asphalt. It was also found that the chemical reaction rate of the TLA-modified asphalt was lower than that of the ordinary asphalt, implying that the former has a more stable nature and TLA could improve the anti-aging properties of asphalt. The chemical reaction rate of TLA-modified asphalt after pressure aging (PAV) was 1.2 times that after rotary thin-film oven testing (RTFOT), indicating that, at the same aging temperature, 1 h of PAV aging is equivalent to 1.2 h of RTFOT aging. In this work, we demonstrate the application of a Metal-Organic Chemical Vapor Deposition (MOCVD) technique for deposition of palladium electrodes on the surface of metal phthalocyanine films. It is expected that these hybrid multifunctional materials can be effective chemical sensors for hydrogen detection.
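The "entropy principle" invoked above for computing quality priorities is commonly implemented as Shannon-entropy weighting over a decision matrix. The sketch below is a generic version under that assumption, not the paper's exact procedure:

```python
import math

def entropy_weights(matrix):
    """Shannon-entropy criterion weights for an m x n decision matrix.

    Criteria whose scores vary more across the m alternatives carry
    more information and therefore receive larger weights.
    """
    m = len(matrix)
    n = len(matrix[0])
    diversification = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        # Normalized entropy of column j; 0*log(0) is treated as 0.
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        diversification.append(1.0 - e)
    s = sum(diversification)
    return [d / s for d in diversification]

# Two criteria over two alternatives: the first criterion does not
# discriminate at all, so it should receive zero weight.
w = entropy_weights([[1.0, 2.0], [1.0, 4.0]])
```

A criterion with identical scores across all alternatives has maximal entropy and thus zero weight, which is the behavior that makes this scheme useful for deriving priorities.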
Two palladium beta-diketonate derivatives, palladium hexafluoroacetylacetonate Pd(hfac)(2) and the novel Pd(NMe(2)i-acac)(2), were synthesized and characterized. Pd(NMe(2)i-acac)(2) was shown to transfer to the gaseous phase with decomposition. Pd(hfac)(2) was used as a precursor in MOCVD processes. The structure and morphology of Pd layers deposited both on Si(100) substrate and on the surface of metal phthalocyanine films were studied by means of XRD and scanning electron microscopy. The influence of the deposition of Pd as a top layer on the phase composition of metal phthalocyanine films was also discussed. Diamond powder was coated with a SiC layer by a rotary chemical vapour deposition (RCVD) technique. The SiC layer thickness varied from 15 to 120 nm, depending on the diamond particle size and coating time. The SiC-coated diamond powder was consolidated with 35 wt.% SiO2 by spark plasma sintering (SPS) at 1773-1973 K for 300 s under 100 MPa pressure. The effects of SiC coating, diamond particle size (2 mu m and 7 mu m), and sintering temperature on the densification behaviour were investigated. The diamond/SiO2 composite using SiC-coated diamond powder exhibited higher density than those using uncoated diamond powder. The highest density and Vickers hardness were 99% and 34 GPa, respectively. In this paper, porous carbon fibers (PCFs) with pores of 1-10 mu m in diameter, designated PCFs-L, and with pores of 0.1-1 mu m diameter, designated as PCFs-S, were prepared by pre-oxidation and carbonization of poly(acrylonitrile)/poly(methyl methacrylate) (PAN/PMMA = 30/30) blend fibers. Then, composites containing 2-6 wt% of PCFs-L or PCFs-S as microwave absorbents were fabricated. The complex permittivity epsilon (epsilon = epsilon' - j epsilon '') and complex permeability mu (mu = mu' - j mu '') of the composites were measured using an HP-8722ES network analyzer. The reflection loss of the prepared composites was calculated based on a model for a single-layer plane-wave absorber.
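The single-layer plane-wave absorber model referred to above is usually the transmission-line formula, in which the input impedance of a metal-backed layer determines the reflection loss. A sketch with hypothetical material constants (not the measured PCF data):

```python
import cmath
import math

C = 3e8  # speed of light, m/s

def reflection_loss_db(eps_r, mu_r, freq_hz, thickness_m):
    """Reflection loss of a metal-backed single-layer absorber.

    Z_in = sqrt(mu/eps) * tanh(j * 2*pi*f*d/c * sqrt(mu*eps)),
    RL(dB) = 20*log10(|(Z_in - 1)/(Z_in + 1)|),
    with impedances normalized to free space.
    """
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * freq_hz * thickness_m / C
        * cmath.sqrt(mu_r * eps_r))
    gamma = (z_in - 1) / (z_in + 1)
    return 20 * math.log10(abs(gamma))

# Hypothetical values: eps = 5 - 1j (eps' - j*eps'' convention, as in
# the measurements above), non-magnetic layer, 2 mm thick, at 10 GHz.
rl = reflection_loss_db(5 - 1j, 1 + 0j, 10e9, 0.002)
```

For a passive lossy layer the reflection loss is negative in dB, and more negative values mean stronger absorption, which is how the minima quoted for the PCF composites should be read.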
It was found that the composites filled with PCFs-S exhibited much better microwave absorption than those filled with PCFs-L. The composites containing 6 wt% of PCFs-S showed the lowest reflection loss of -32 dB at 9.7 GHz, whereas the PCFs-L filled composites gave the minimum reflection loss of -24 dB at 10.7 GHz. The bandwidth with a reflection loss below -10 dB covered almost the entire X band with PCFs-S, whereas only a bandwidth of 1.3 GHz was covered with PCFs-L. It is postulated that the stronger interference of the multi-reflected microwaves resulting from PCFs-S could be responsible for the additional microwave absorption besides the dielectric-type absorption. In other words, both the solid carbon and the pores in PCFs could contribute to the absorption of microwaves. Gold nanoparticles (AuNPs) were successfully synthesized using an eco-friendly method in a single-step reaction with red algae (Laurencia papillosa). In this research, we investigated the factors that affect the characteristics of AuNPs, such as the size, shape, surface profile and surface chemistry. The synthesized AuNPs were characterized using UV-Visible spectroscopy, X-ray diffraction (XRD), Transmission Electron Microscope (TEM), Field Emission Scanning Electron microscope (FESEM), and Atomic Force Microscope (AFM) measurements. Computational analysis of the size distribution, surface profile, area and value of the different AuNPs synthesized were carried out using freeware ImageJ. The study revealed that the color of the HAuCl4 solutions changed from yellow to ruby red, indicating that metallic gold nanoparticles were synthesized. TEM images of the synthesized AuNPs showed different shapes and sizes ranging from 3.5 to 53 nm at lambda(max) = 586 nm in different concentrations (Graphical Abstract). The resulting shapes were spherical and triangular crystalline gold nanoparticles with varied oxidation states of Au (0) and Au (+1).
One of our future goals in Plant Virology is the use of these gold nanoparticles as a carrier in biological control of plant virus diseases using viral satellite RNA as a biological control agent against severe plant virus diseases (United States Patent No. US 8,138,390 B2). The functional groups in the algal extract responsible for the synthesis of the AuNPs were identified to be NH2+ and OH- groups. The optimum Au/algal-extract ratio required for producing small AuNPs was found to be 1:2. The algal water extract has a dual function in this synthesis: it acts as both reducing and stabilizing agent. AuNPs have many biological, industrial and medical applications, which include treatments against plant virus and cancer diseases. Anatase TiO2 nanoparticles doped with different concentrations of cobalt (0, 1, 2, and 3 mol%) were synthesized by a simple sol-gel method at room temperature. The prepared nanoparticles were calcined at 500 degrees C to obtain the nanocrystalline anatase phase. The crystallite sizes of the doped and undoped TiO2 nanoparticles were estimated with XRD analysis. XRD and XPS analyses revealed the substitution of a few Ti4+ sites by Co2+ ions. The band gap energies of doped and undoped TiO2 nanoparticles were determined from UV-Vis diffuse reflectance spectra. The band gap energy value of Co-doped TiO2 nanoparticles was observed in the range of 3.2-2.8 eV. The optical absorption edge of doped TiO2 nanoparticles is shifted towards longer wavelengths with increasing cobalt content in the TiO2 host lattice. The non-spherical morphology of the nanoparticles was confirmed by TEM analysis. The photocatalytic activity of the nanoparticles was evaluated by the degradation of aqueous methylene blue (MB) solution. Manufacturing electronic components at the nanoscale suffers from severe restrictions and is impossible in most cases.
Phenacenes, with chemical formula C4n+2H2n+4, are a family of organic molecules that are receiving a high level of attention in molecular electronics and nanoscale science. However, examination of their electronic and optical properties is highly expensive, particularly when the number of rings exceeds 6. The present study is an attempt to propose a model that can predict the electronic properties of the family of phenacenes using the Topological Indices Method (TIM). Topological indices are real numbers that have been introduced in studies of molecular graphs in chemistry and are given in terms of parameters such as the degrees of vertices, the distances between vertices, etc. These indices indicate some properties of the molecule. One of the topological indices is the Reciprocal Randic index (RR), which is here examined and calculated for the family of phenacenes. In addition, by calculating the gap energy and electron affinity energy for some members of this family and comparing them with data reported in the literature, a model is proposed for prediction of some electronic and optical properties. The strengthening of titanium (Ti) is usually achieved by alloying it with various metal elements. In this work, we investigated a new route for the fabrication of Ti-based materials with high strength and good plasticity. Graphene produced by in situ thermal reduction from graphite oxide (GO) was used as reinforcement for enhancing the mechanical properties of the Ti matrix. A combination of ball milling, ultrasonication and freeze-drying was utilized to fabricate 0.5, 1.0 and 2.0 vol.% GO/Ti composite powders. The powders were subsequently consolidated by plasma activated sintering at 1200 degrees C for 3 min with a heating rate of 200 degrees C/min. The microstructure, mechanical properties, electrical conductivity and corrosion resistance of the prepared graphene/Ti composite materials were studied.
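The Reciprocal Randic index discussed above is defined edge-wise on a molecular graph as RR(G) = sum over edges uv of sqrt(deg(u) * deg(v)). A minimal sketch of that definition, using the hydrogen-suppressed benzene ring as a toy graph rather than a phenacene:

```python
import math
from collections import defaultdict

def reciprocal_randic(edges):
    """Reciprocal Randic index: sum of sqrt(deg(u) * deg(v)) over all
    edges (u, v) of a simple undirected graph given as an edge list."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(math.sqrt(deg[u] * deg[v]) for u, v in edges)

# Hydrogen-suppressed benzene: a 6-cycle, every vertex of degree 2,
# so RR = 6 * sqrt(2 * 2) = 12.
benzene = [(i, (i + 1) % 6) for i in range(6)]
rr = reciprocal_randic(benzene)
```

For a phenacene, one would supply the edge list of its fused-ring carbon skeleton in the same way; the index then grows predictably with the number of rings, which is what makes it useful as a regressor for electronic properties.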
On the basis of computed tomography images, the three-dimensional reconstruction software MIMICS is used to reconstruct a three-dimensional model of the human upper respiratory system, including the nasal cavity, pharynx, larynx, and part of the main trachea. In this research, the mesh processing of the model is conducted with ICEM CFD (Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics), and the software FLUENT is used to simulate the flow field of cyclic breathing. The flow field properties at four moments (0.2 s in the inspiratory acceleration phase, 0.5 s in the inspiratory peak phase, 1.2 s in the expiratory acceleration phase, and 1.5 s in the expiratory peak phase) are studied. The characteristics of the flow field contours in typical parts, such as the nasal cavity, pharynx and larynx, in the different states of normal inspiration and expiration are investigated. The research provides a reference for further understanding the forward and reverse processes of fine particulate matter deposition. The results help in understanding the pathogenesis of illnesses caused by particulates depositing in the nasal cavity, pharynx and larynx. It also demonstrates the feasibility of treating diseases through the deposition of drug aerosols. The main reason limiting long-term clinical application of neodymium-iron-boron magnets in the dental field is their marked tendency to corrode in mildly aggressive chloride-containing media such as saliva solution. It therefore becomes necessary to overcome the lack of corrosion resistance of Nd-Fe-B magnets with new wear-resistant encapsulating materials or surface coatings. With this aim, in this work a multi-layered organic-inorganic structure able to supply specific functional properties to the whole assembly (i.e., anticorrosion resistance, wear resistance, biocompatibility and durability) was proposed.
The influence on the magnetic force of a new hybrid coating consisting of 0, 1, 2, or 3 silane layers on nickel-plated magnets was tested. Furthermore, the degradation of the samples was evaluated by analysis of data collected by electrochemical impedance spectroscopy (EIS) during aging in synthetic Fusayama saliva solution at pH 5.5, along with morphological analysis by 3D digital microscopy. The experimental data confirm that the presence of the coupling agent does not affect the magnetic force of Nd-Fe-B magnets and that the new coating provides a barrier effect in a biological environment without affecting the biocompatibility of the system. Thus the limiting aspect preventing the use of Nd-Fe-B magnets for dental applications can be avoided by using silane agents as a surface coating. Stable, non-covalently functionalized DNA/graphene assemblies are successfully fabricated via a facile sonication approach. The obtained assemblies are characterized by transmission electron microscopy, atomic force microscopy and Raman spectroscopy. Experimental results indicate that graphene aggregates are separated into few-layer structures by DNA segments and exhibit excellent dispersibility in aqueous solution due to the introduction of DNA. The strong interaction between the backbones of DNA and the surface of graphene is attributed to pi-pi interaction or hydrogen bonding. Moreover, cyclic voltammograms show that the DNA/graphene assemblies exhibit good electrochemical response, especially with 0.5 wt% DNA, with the maximum current approaching 2.2 mA. The better electrochemical performance is ascribed to the synergetic effect of the excellent electronic properties of graphene and its excellent dispersibility in aqueous solution due to the efficient bonding of DNA segments on the surface of graphene. The stable DNA/graphene assemblies hold potential for future applications in biosensing and biomedical engineering.
This study investigated a magnetic separation guideline for a drum-type magnetic separator and the feasibility of a process for upgrading natural low-grade manganese ore by removing iron oxides from the manganese ore. In the study, samples assaying 44.5 wt% of Mn, 15.5 wt% of SiO2 and 1.76 wt% of Fe; and 36.7 wt% of Mn, 15.3 wt% of CaO and 4.34 wt% of Fe were subjected to a beneficiation process in order to enrich the ores in terms of manganese by using a drum-type magnetic separator manufactured for the study. The results showed that magnetic separation for manganese ore containing magnetic manganese minerals like MnSiO3 and Mn7SiO12 is not effective. However, low-grade manganese ore containing high silica could be upgraded to 50.3 wt% with 90.8% recovery at a 0.11 Tesla magnetic field strength and 20 rpm drum speed, and low-grade manganese ore containing high calcium oxide resulted in 41.4 wt% with 87.2% recovery under the same conditions. Highly flowable strain hardening fiber reinforced concrete (HF-SHFRC) has good workability in the fresh state, and exhibits the tensile strain-hardening and multiple cracking characteristics of high performance fiber reinforced cementitious composites (HPFRCC) in the hardened state. A successful mixture of HF-SHFRC can be easily manufactured and delivered by ready-mix trucks for casting on job sites. However, current mix proportions of HF-SHFRC are still developed with an unsystematic trial-and-error procedure by modifying selected self-consolidating concrete (SCC) mixtures. This paper presents a comprehensive performance-based HF-SHFRC mix design methodology for target slump flow and strength based on the dense-packing concept. The minimum fiber volume fraction needed to achieve strain hardening is considered in this design procedure. Unlike traditional HPFRCC, coarse aggregate is provided in HF-SHFRC.
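The grade and recovery figures quoted for the manganese upgrading are linked by a simple two-product mass balance, in which recovery equals mass yield times concentrate grade divided by feed grade. An illustrative check under that standard assumption:

```python
def mass_yield(feed_grade, conc_grade, recovery):
    """Two-product mass balance: recovery = yield * conc_grade / feed_grade,
    so yield = recovery * feed_grade / conc_grade.

    Grades are in wt%; recovery and the returned yield are fractions.
    """
    return recovery * feed_grade / conc_grade

# Using the high-silica ore figures reported above: 44.5 wt% Mn feed
# upgraded to a 50.3 wt% Mn concentrate at 90.8% recovery implies that
# roughly 80% of the feed mass reports to the concentrate.
y = mass_yield(44.5, 50.3, 0.908)
```

This kind of back-calculation is a quick consistency check on reported beneficiation results: yields above 1 would indicate inconsistent grade/recovery pairs.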
Slump flow tests, compressive tests and direct tensile tests were carried out to investigate the validity of this mix design for different strength demands (30, 40, 50, 60 MPa). The test results show good agreement with performance targets in terms of flowability, strength and tensile strain hardening characteristics. Stannic oxide (SnO2) nanoplate-assembled hierarchical microstructures were synthesized by a facile hydrothermal method, and their structural, optical, and photocatalytic properties were studied in detail. The as-prepared SnO powders were calcined at 500 degrees C to create the oxidized SnO2 ones. These powder samples were analyzed by scanning electron microscopy, transmission electron microscopy, Fourier transform infrared spectroscopy, X-ray diffraction, and UV/Vis absorption measurements. The newly synthesized SnO2 powder samples exhibited a pure tetragonal phase with P42/mnm (136) space group, and they were used as a photocatalytic material under UV light irradiation. The photocatalytic activity of the SnO2 nanoplate-assembled samples was investigated with a photodegradable dye, i.e., rhodamine B (RhB) solution, by varying the irradiation time. Photodegradation rates of approximately 20% and 51.6% were obtained without and with the SnO2 photocatalyst, respectively, in the RhB solution for an irradiation time of 7 h. High-strain-rate and quasi-static compressive tests were carried out on a highly porous shape-memory alloy. Porous bulk specimens with a three-dimensional network structure were recently fabricated by vacuum sintering of rapidly solidified 50Ti-49.68Ni-0.32Mo (at.%) alloy fibers. An open porosity of 75% was achieved. A split Hopkinson pressure bar (SHPB) experiment was used to apply high-strain-rate compression to the bulk specimens. The dynamic result did not show a significant difference from the static test result. Young's modulus of the porous material was about 100 MPa, which is of the same order of magnitude as bone.
Photoacoustic imaging is the latest promising diagnostic tool with many advantages, such as high spatial resolution, deep penetration depth, and use of non-ionizing radiation. Tumor therapy using visual guidance of photoacoustic imaging has been recently highlighted for personalized and effective tumor therapy. The goal of this study was to develop a nano-medicine tool for simultaneous photoacoustic imaging-based diagnosis and chemotherapy. Here, indocyanine green and paclitaxel doubly-loaded human serum albumin nanoparticles (IPHSA-NPs) were developed as photoacoustic imaging-based theragnostic nano-medicines. The IPHSA-NPs were on average 188.14 +/- 42.82 nm in diameter. The IPHSA-NPs were taken up by cancer cells efficiently, resulting in effective anti-cancer activity (50% less cell viability with 200 nM of paclitaxel for 6 h compared to control). The IPHSA-NPs showed sufficient photoacoustic intensity in the near-infrared region at a wavelength of 800 nm. The photoacoustic intensity of IPHSA-NPs was 1.5 times stronger than that of hemoglobin, an endogenous photoacoustic contrast molecule. In addition, both hypodermically and intravenously administered IPHSA-NPs provided sufficient photoacoustic contrast signal intensities in vivo over 24 h. Finally, intravenously administered IPHSA-NPs successfully accumulated in the tumor region, resulting in high photoacoustic contrast intensity. In conclusion, IPHSA-NPs are capable of photoacoustic-based cancer diagnosis and chemotherapy. In this study, the effects of inductively coupled plasma (ICP) power added to the DC magnetron sputtering of indium gallium zinc oxide (IGZO) using Ar/O-2 on the plasma characteristics and IGZO thin film characteristics were investigated. The addition of ICP power decreased the magnetron voltage and increased the magnetron current due to the increased plasma density near the magnetron surface.
The addition of ICP increased the deposition rate but also increased the surface roughness of the deposited IGZO thin films. When the electrical characteristics of the deposited IGZO thin films were measured, the increase of ICP power to 300 W not only increased the carrier concentration from 1.87 x 10(19) to 2.59 x 10(19) cm(-3) but also increased the carrier mobility from 14 to 16.7 cm(2)/Vs, possibly due to decreased defects (decreased defect scattering) in the film, even with the increased impurity scattering caused by the increased carrier concentration. The further increase of ICP power to 500 W slightly decreased the carrier mobility while slightly increasing the carrier concentration, due to both the increased impurity scattering by the increased carrier concentration and the increased surface scattering caused by the increased surface roughness. The optical transmittance did not vary significantly with the ICP power and was higher than 80% over the visible wavelength range, and the structure of IGZO deposited at room temperature remained amorphous even with ICP power up to 500 W. In this paper, we have prepared different compositions and thicknesses of In-doped ZnO thin films using the spin-coating process. To determine the optimum electrical conductance and transmittance, 0-2.5 mol% In contents were doped onto ZnO thin films to realize transparent conducting-oxide applications. 1.5 mol% In-doped ZnO thin films showed the highest electrical conductance among the specimens considered, and displayed good optical transmittance properties. From these results, different thicknesses of 1.5 mol% In-doped ZnO thin films were analyzed for application to transparent conducting oxides. As the thickness of 1.5 mol% In-doped ZnO thin films increased, the optical transmittance was degraded, while the sheet resistance improved, and vice versa. In addition, as the thickness of the In-doped ZnO thin films increased, nanorods developed.
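The carrier concentration and mobility reported above for the IGZO films imply a conductivity through the Drude relation sigma = q * n * mu. The value derived below is an illustration computed from the quoted figures, not a number reported in the study:

```python
Q_E = 1.602e-19  # elementary charge, C

def conductivity_s_per_cm(n_per_cm3, mobility_cm2_per_vs):
    """Drude-type conductivity sigma = q * n * mu.

    With n in cm^-3 and mu in cm^2/Vs the result comes out directly
    in S/cm (the cm units cancel consistently).
    """
    return Q_E * n_per_cm3 * mobility_cm2_per_vs

# Values quoted above for the 300 W ICP film:
# n = 2.59e19 cm^-3, mu = 16.7 cm^2/Vs.
sigma = conductivity_s_per_cm(2.59e19, 16.7)
```

Since both n and mu rose at 300 W, the implied conductivity increase is larger than either factor alone, which is the practical payoff of the ICP assistance described above.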
The development of these nanorods significantly improved the electrical conductivities. Theoretical investigations into the photophysical properties of the series of [Ir(C<^>N)(x)(N<^>N)(3-x)]((3-x)+) {x = 0, (1); x = 1, (2); x = 2, (3); and x = 3, (4)} have been carried out by means of DFT/TDDFT calculations. Computational results show that an increase of the number of C<^>N ligands in complexes 1-4 is significantly beneficial for enhancing the spin-orbit coupling effect and decreasing the singlet-triplet splitting energy, thus leading to enhanced radiative rate (k(r)) and phosphorescence quantum yield (Phi(PL)). Based on the investigation results for the first series, a second series of complexes [Ir(C<^>N)(2)LX](n+), whose structures stemmed from modifications of complexes 3 and 4 with good phosphorescence performance, has been designed and calculated. A theoretical examination of the structure-emission efficiency relationship is highlighted. Essentially, both sigma-coordination and pi-conjugation bonds between the Ir(III) center and the ligands are factors controlling Phi(PL). To be an efficient phosphorescence emitter, a six-coordinated octahedral Ir(III) complex theoretically requires two strongly electron-donating anionic C<^>N ligands. As for the choice of the third ligand, anionic- and neutral-type ligands, which can not only serve as excellent sigma donors but also form good pi-conjugation planes with the Ir(III) center, are suitable candidates. This work opens the way to a combined orbital bonding mode and phosphorescence efficiency strategy for designing new iridium phosphorescence emitters with high Phi(PL). CaCu3Ti4O12 ceramic has a crystal structure of perovskite type, and it exhibits high relative dielectric permittivity and moderate loss tangent. We investigate the dependence of its abnormally high relative dielectric permittivity, of the order of 10(3)-10(4), on the sintering conditions.
Due to its relatively high dielectric permittivity, this material can be employed in various kinds of applications that include capacitors and energy storage elements. Therefore, it is very important to optimize the processing conditions to increase the relative dielectric permittivity. In this research, we analyzed and reported the dependence of the electrical and structural properties of CaCu3Ti4O12 on the sintering temperature employed for processing the ceramic. Erbium oxide (Er2O3) films have been grown by O-2 annealing on Si(100) in a furnace. The impact of Si surface passivation on the O-2 growth during annealing was investigated. We further studied the variation in surface composition of Er2O3 with a variety of furnace annealing temperatures and amounts of O-2 gas. Prolonged annealing of Er2O3 thin films resulted in the formation of a thick layer of Er2O3 at the surface, while shorter annealing times produced a thin film of Er metal. The thick Er2O3 layer was generated from a sputter-deposited Er layer, which was oxidized by heating at 200 to 400 degrees C in O-2 gas (20 sccm). Stable Er2O3 thin film characteristics were achieved at an annealing temperature of 400 degrees C. For Er thin films, the extent of reduction varies with the thermal history of the samples, and prolonged annealing produced a more reproducible surface. Er2O3 thin films grown from an Er metal layer showed diffusion of lattice oxygen, which influenced the characteristics of the thin films. Li2ZnO2 micro-films were synthesized using a sol-gel method. A magnetic field was found to increase the photocurrent in Li2ZnO2. At B = 0.44 T, we achieved 5737% magneto-photocurrent tunability in Li2ZnO2 at a photon energy of 1.90 eV. A quantitative model in terms of spin-state transitions was used to analyze the photocurrent response in the magnetic field. In this study, we analyzed newly designed ceramic-polymer composite multilayered piezoelectric energy harvesters.
For comparison, two different types of piezoelectric energy harvesters were designed and fabricated by employing conventional ceramic and tape-casting processes. 1-3 storied disc-type and multilayered-type piezoelectric energy harvesters were compared and analyzed by considering their crystalline structure, polarization electric field, output voltage, and output power. In addition, we have investigated the optimized output powers of the disc-type and multilayered-type piezoelectric energy harvesters by employing impedance matching. Ordered weighted average (OWA) operators with their weighting vectors are very important in many applications. We show that directly taking Minkowski distances (including Manhattan distance and Euclidean distance) as the distances for any two OWA operators is not reasonable. In this study, we propose standard distance measures for any two OWA operators and then propose a standard metric space for the set of all n-dimensional OWA operators. We analyze and discuss some properties of the introduced OWA metric and further propose a metric space of Choquet integrals represented by the underlying fuzzy measures. Some applications of OWA distances in decision making are also presented in this study. This article modifies the evaluation of the probability of a fuzzy event based on a classical probability space, introduced by Yager (Inform. Sci. 1979;18:113-122). The presented theoretical results will be illustrated with some propositions and remarks. It is shown that the modified definition of the fuzzy probability of fuzzy events has some desirable properties from a mathematical point of view. Then, the proposed fuzzy probability will be compared to similar proposed methods. The proposed fuzzy probability will be illustrated with some numerical examples. In real life, multicriteria decision making (MCDM) problems sometimes must inevitably be dealt with under the cognitive limitations of human minds.
However, few existing models can directly solve MCDM problems of this kind. Thus, to address the issue, this paper proposes a novel approach, which can: (i) handle the cognitive limitations in MCDM problems by distinguishing the case of complete criteria (i.e., there are no hidden cognitive factors that can cause decisions to deviate from rationality) from the case of incomplete criteria (i.e., there are some hidden cognitive factors that can cause decisions to deviate from rationality); (ii) differentiate incomplete and complete relative rankings of the groups of decision alternatives (DAs) over a criterion; and (iii) handle the imprecise and uncertain evaluation of criterion weights as well as the ambiguous evaluations of the groups of DAs regarding a given criterion. Hence, we give a measure of the influence of cognitive limitations and two methods to reduce that influence when a decision needs to be more rational. Moreover, we illustrate our approach by solving a real-life problem of estate investment. Finally, we give some experimental results on the reduction of the required number of knowledge judgments in our method compared with previous methods. Given a multicriteria decision-making problem, an obvious question emerges: Which method should be used to solve it? Although some efforts have been made, the question remains open. The aim of this contribution is to compare a set of multicriteria decision-making methods sharing three features: the same fuzzy information as input data, the need for a data normalization procedure, and quite similar information processing. We analyze the rankings produced by fuzzy MULTIMOORA, fuzzy TOPSIS (with two normalizations), fuzzy VIKOR, and fuzzy WASPAS with different parameterizations, over 1200 randomly generated decision problems.
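The normalize-weight-rank pipeline shared by the compared methods can be sketched with a crisp (non-fuzzy) TOPSIS on one randomly generated problem. This is a simplified stand-in, not any of the fuzzy variants from the study; the function name and the particular weights are hypothetical, and all criteria are treated as benefit criteria.

```python
import random

def topsis_scores(matrix, weights):
    # Crisp TOPSIS sketch: vector-normalize each criterion column, apply
    # weights, then score each alternative by its relative closeness to
    # the ideal solution. All criteria treated as benefit criteria.
    n_alt, n_crit = len(matrix), len(weights)
    norms = [sum(matrix[i][j] ** 2 for i in range(n_alt)) ** 0.5 for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)] for i in range(n_alt)]
    ideal = [max(col) for col in zip(*v)]
    anti_ideal = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = sum((a - b) ** 2 for a, b in zip(row, ideal)) ** 0.5
        d_neg = sum((a - b) ** 2 for a, b in zip(row, anti_ideal)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# One random 5-alternative, 4-criterion problem, echoing the study's setup
# of comparing method rankings over many randomly generated problems.
random.seed(42)
matrix = [[random.uniform(1, 10) for _ in range(4)] for _ in range(5)]
weights = [0.4, 0.3, 0.2, 0.1]
scores = topsis_scores(matrix, weights)
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
print(ranking)
```

Repeating this over many random matrices and comparing the resulting rankings across methods (and normalizations) is the kind of experiment the abstract describes.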
The results clearly show their similarities and differences, the impact of the parameter settings, and how the methods can be clustered, thus providing some guidelines for their selection and usage. In this paper, we develop the novel intuitionistic fuzzy induced ordered weighted Euclidean distance (NIFIOWED) operator based on the generalized intuitionistic fuzzy distance for multiple attribute group decision making problems. The NIFIOWED operator's attribute values take the form of Atanassov's intuitionistic fuzzy numbers (A-IFNs), and the principal component x of an A-IFN is taken into account first. The prime properties of the NIFIOWED operator are investigated. Finally, a new method and a numerical example are provided to illustrate the applicability and practicality of the NIFIOWED operator. International comparisons are never easy and they are not perfect. But PISA shows what is possible in education, and it helps countries to see themselves in the mirror of student performance and educational possibilities in other countries. This article summarises key policy insights from PISA. It highlights how excellence and improving equity need not be conflicting policy objectives, but that they tend to be jointly achieved only when deliberate policies are in place that match resources with needs and when stratification and grade repetition are contained. The article also shows how a number of countries have been able to raise learning outcomes and moderate the impact of social background in the last decade and highlights some of the policies and practices that characterise these countries. The author describes the results of PISA, including those of 2015 and Japan's reaction, as well as their impact. Highly ranked in PISA, Japan has always tried to improve its education system. The promotion of reading comprehension remains an important issue, and low interest in and motivation to learn subjects are crucial problems.
The author discusses these questions and reform policies from a Japanese point of view. He explains the latest reform plan, which will be implemented in 2020. The Japanese peculiarity in education is referred to in the conclusion. This article describes and discusses what happens when knowledge for policy generated within PISA is received by its target audience: what have the Portuguese policy actors been doing with PISA data and analysis when they consider, express and justify their choices? Drawing on previous and current studies, using interview materials and formal and informal policy documents, as well as texts published in the written press, the article analyses two main phenomena related to the reception of PISA and how this has evolved between 2001 and 2012 in Portugal: the consolidation of PISA's credibility as a source for policy processes and texts; and the emergence of new actors and modes of intervention in the production of knowledge for national policy, drawing on PISA. Finally, it presents an analysis of the reception of PISA 2015 in the Portuguese media, focusing on the interventions by political actors in the Portuguese daily and weekly written press. Two main elements emerge from our content analysis as the main common elements of that reception: the consecration of PISA's credibility; and the practices of qualification and disqualification of educational policies and perspectives. The article concludes by emphasising the regulatory role of PISA in Portuguese policy processes and the relevant contribution played by the politics of reception in legitimising this role. The impact of the PISA study on Polish education policy has been significant, but probably different from any other country. Poland has not experienced the so-called 'PISA shock', but its education system has been benefiting considerably from PISA.
For experts and policy makers, it has been a useful and reliable instrument that has made it possible to measure the effects of consecutive reforms of the school education system. Moreover, PISA and other international studies have influenced the perception of education policy in Poland. The latter has shifted from an ideology-driven, centralised policy to an evidence-informed policy, developed with the involvement of multiple stakeholders, although this has mostly affected the thinking of experts and policy makers rather than the general public. The new government (in power from 2015), following public opinion polls, has reversed most of the previous education reforms, eliminating the lower secondary schools introduced in 1999. As the field of education has become a highly internationalised policy field in the last decade, international organisations such as the OECD play an ever more decisive role in the dissemination of knowledge, monitoring of outcomes, and research in education policy. Although the OECD lacks any binding governance instruments to coerce states or to provide material incentives, it has successively expanded its competences in this field. The OECD advanced its status as an expert organisation in the field of education mainly by designing and conducting the international comparative PISA study. With PISA, the OECD was able to greatly influence national education systems. Basically, states were faced with external advice based on sound empirical data that challenged existing domestic policies, politics, and ideas. One prominent case for the impact of PISA is Germany. PISA was a decisive watershed in German education policy-making. Almost instantly after the PISA results were publicised in late 2001, a comprehensive education reform agenda was put forward in Germany.
The experienced reform dynamic was highly surprising because the traditional German education system and politics were characterised by deep-rooted historical legacies, many involved stakeholders at different levels, and reform-hampering institutions. Hence, a backlog of grand education reforms had built up in Germany since the 1970s. The external pressure exerted by PISA completely changed that situation. PISA, which was launched by the OECD, is one of the most significant and successful initiatives on which education systems have recently collectively embarked. However, although it is a well-coordinated international programme, its reception differs according to country. There is therefore a need to analyse specific national circumstances in order to gain a deeper understanding of the undertaking as a whole. This article specifically considers Spain's participation in PISA and focuses on a number of aspects: a) the expectations created when it joined the programme, in parallel with the implementation of its own national education evaluation system; b) the impact PISA has had, both in the media and in political and discursive spheres; and c) the technical and scientific debates generated in Spanish academic media. Finally, it is argued that, in the last few years, PISA has met with a certain disenchantment among specialists and in public opinion because of its limitations as a ranking tool, the difficulty in explaining its findings, and its inability to prescribe education policies that are suitable for very different contexts. Using the PISA 2015 releases in Norway and England, this article explores how PISA has been presented in the media and how the policy level has responded to the results. England will be used as an example for comparison.
The article presents early media responses from the 20 most circulated daily newspapers in the two countries and discusses them in relation both to the national PISA reports in Norway and England and to the international report of the OECD. The media responses are further interpreted in light of previous research in both countries, with a particular focus upon Norway, where previous Ministers of Education have been interviewed about assessment policy and education reforms. The international comparative studies on students' outcomes have initiated analyses that have had a growing influence on national and sub-national education policies in industrialised and developing countries. It is particularly the case of the OECD's Programme for International Student Assessment (PISA), which started in 2000 and has organised surveys every 3 years, so that the 2015 survey was the sixth. Its influence has been particularly important for several reasons: 1) it assesses the basic competences in reading literacy, maths and science of 15-year-old students, i.e. around the end of compulsory education in many countries; 2) the assessment is based on a reliable methodology and the tests are complemented by qualitative surveys and studies; 3) and the results lead to recommendations and are amplified by the media in most countries. However, it is not easy to evaluate the real impact of PISA because of the existence of other international studies such as IEA's TIMSS and, particularly in Europe, the influence of the recommendations and benchmarks of the EU, which has been growing steadily in the last 25 years. Our analysis of the impact of PISA and EU policy focuses on the evolution of education policy in France, but also studies its evolution in a few other European countries.
Finally, we underline the limits of the influence of PISA and international standards in driving education systems towards convergence, given the importance of their specific historical and cultural contexts. Empirical evidence suggests that educational attainment nurtures people's social outcomes and promotes active participation in society and stability. However, it is unclear to what extent other types of human capital also correlate with social outcomes. Hence, we explored the opportunity offered by the PIAAC survey through its provision of information on educational attainment, observed individual key skills proficiency, and participation in adult education and training (adult lifelong learning). We therefore studied the association between these human capital variables and social outcomes, more specifically interpersonal trust and participation in volunteering activities. Results revealed that these social outcomes were affected not only by the formal qualification obtained, captured by the education variable, but also throughout the life-cycle. Indeed, education and training undertaken during adult life have a significant impact, especially on volunteering. The fact that skill proficiency also plays a significant role is extremely relevant, as skills are more likely to change over the life-cycle, either in a positive or negative way. Whilst the formal education received is constant after exiting the educational system, skills reflect competences more accurately: first, because those with the same level of education may have different skill levels because of differences in the quality of education or ability; second, because skills can vary over time. For example, they may increase with work experience or informal education, or decrease as a result of depreciation and ageing.
These findings suggest that social outcomes are prone to be affected by many factors other than formal education, suggesting that policy makers can implement recommendations even after formal education has been completed. An analysis of the phenomenon of combining work and study amongst university students is made using data obtained from surveys of graduates carried out four years after they finished their degrees. First, the article reviews the evolution of the phenomenon over the last ten years, taking into account the Catalan University Quality Assurance Agency (AQU) labour market insertion surveys for 2005, 2008, 2011 and 2014. Second, the 2008 and 2014 waves are compared to analyse the impact of the economic crisis. In this case, how combining work and study affects academic results and labour market insertion is studied, in addition to whether or not differences occur according to the family's educational background. Random stratified two-stage sampling is used to obtain the results; descriptive and ANOVA analyses with different factors are performed. The evolution shows how the number of students who combine work and study has increased, especially among those whose parents have little education. Furthermore, this means that lower marks are obtained and that there is a greater degree of inequality in labour market insertion, depending on the educational background of the family of origin. In general, the relationship between the different variables shows how combining work and study has negative effects on marks but positive effects on labour market insertion, especially if the work experience whilst at university is related to the studies. This paper develops grounded theory on how receiving respect at work enables individuals to engage in positive identity transformation and the resulting personal and work-related outcomes.
A company that employs inmates at a state prison to perform professional business-to-business marketing services provided a unique context for data collection. Our data indicate that inmates experienced respect in two distinct ways, generalized and particularized, which initiated an identity decoupling process that allowed them to distinguish between their inmate identity and their desired future selves and to construct transitional identities that facilitated positive change. The social context of the organization provided opportunities for personal and social identities to be claimed, respected, and granted, producing social validation and enabling individuals to feel secure in their transitional identities. We find that security in personal identities produces primarily performance-related outcomes, whereas security in the company identity produces primarily well-being-related outcomes. Further, these two types of security together foster an integration of seemingly incompatible identities ("identity holism") as employees progress toward becoming their desired selves. Our work suggests that organizations can play a generative role in improving the lives of their members through respect-based processes. In this paper, we study the way that nascent occupations constructing an occupational mandate invoke not only skills and expertise or a new technology to distinguish themselves from other occupations, but also their values. We studied service design, an emerging occupation whose practitioners aim to understand customers and help organizations develop new or improved services and customer experiences, translate those into feasible solutions, and implement them. Practitioners enacted their values in their daily work activities through a set of material practices, such as shadowing customers or front-line staff, conducting interviews in the service context, or creating "journey maps" of a service user's experience.
The role of values in the construction of an occupational mandate is particularly salient for occupations such as service design, which cannot solely rely on skills and technical expertise as sources of differentiation. We show how service designers differentiated themselves from other competing occupations by highlighting how their values make their work practices unique. Both values and work practices, what service designers call their ethos, were essential to enable service designers to define the proper conduct and modes of thinking characteristic of their occupational mandate. In an analysis of data on employment in the 48 contiguous United States from 1978 to 2008, we examine the connection between organizational demography and rising income inequality at the state level. Drawing on research on social comparisons and firm boundaries, we argue that large firms are susceptible to their employees making social comparisons about wages and that firms undertake strategies, such as wage compression, to help ameliorate their damaging effects. We argue that wage compression affects the distribution of wages throughout the broader labor market and that, consequently, state levels of income inequality will increase as fewer individuals in a state are employed by large firms. We hypothesize that the negative relationship between large-firm employment and income inequality will weaken when large employers are more racially diverse and their workers are dispersed across a greater number of establishments. Our results show that as the number of workers in a state employed by large firms declines, income inequality in that state increases. When these firms are more racially diverse, however, the negative relationship between large-firm employment and income inequality weakens. These results point to the importance of considering how corporate demography influences the dispersion of wages in a labor market. 
In this paper, we examine when members of underrepresented groups choose to support each other, using the context of the funding of female founders via donation-based crowdfunding. Building on theories of choice homophily, we develop the concept of activist choice homophily, in which the basis of attraction between two individuals is not merely similarity between them, but rather perceptions of shared structural barriers stemming from a common social identity based on group membership. We differentiate activist choice homophily from homophily based on the similarity between individuals ("interpersonal choice homophily"), as well as from "induced homophily," which reflects the likelihood that those in a particular social category will affiliate and form networks. Using lab experiments and field data, we show that activist choice homophily provides an explanation for why women are more likely to succeed at crowdfunding than men and why women are most successful in industries in which they are least represented. Using two longitudinal panel datasets of Chinese manufacturing firms, we assess whether state ownership benefits or impedes firms' innovation. We show that state ownership in an emerging economy enables a firm to obtain crucial R&D resources but makes the firm less efficient in using those resources to generate innovation, and we find that minority state ownership is an optimal structure for innovation development in this context. Moreover, the inefficiency of state ownership in transforming R&D input into innovation output decreases when industrial competition is high, as well as for start-up firms. Our findings integrate the efficiency logic (agency theory), which views state ownership as detrimental to innovation, and institutional logic, which notes that governments in emerging economies have critical influences on regulatory policies and control over scarce resources.
We discuss the implications of these findings for research on state ownership and firm innovation in emerging economies. External support by a local coordinating agency facilitates the work of school-to-school networks. This study provides an innovative theoretical framework to analyse how support provided by local education offices for school-to-school networks is perceived by the participating teachers. Based on a quantitative survey and qualitative interview data from a networking project in eight German districts, we argue that, in order to enable networks to work independently on innovative reforms, local coordinating agencies should focus on autonomy support, such as training on network management, and on support aimed at establishing significance, i.e. through vision and goal setting. There is not a great amount of empirical research on leadership of educational organisations, and especially of schools, that stresses the importance of being sensitive to context. This paper seeks to highlight the challenge presented by this situation. First, context is defined. Second, attention is drawn to what can be learned in the area of school leadership from the emphasis in other bodies of scholarship within the broad field of education studies on paying attention to matters of context. Third, an overview is provided of some of the key considerations arising out of the small body of work undertaken in the field by those researchers who have focussed on the broad range of issues that can arise for school leaders in distinctive contexts. Finally, five key and interconnected propositions to guide practice with regard to leadership in diverse contexts that have been generated from an analysis of the latter body of work are presented. In doing so, it is recognised that these are tentative in the absence of a much larger corpus of work.
Overall, we hope that, along with providing intellectual sustenance, each of the four areas considered will also stimulate discussion on areas for further research. The University of Birmingham was planned, advanced and established with both national and German models of a university in mind. Civic reasons for the planning of the University need to be viewed within a broader motivational context. Even with a strong sense of civic place, the University was conceived as a modern university with multiple founding visions. The set-up goals shifted as the size and complexity of the University increased, and early ideas of social mission were either restricted or largely absent in practice. The paper examines the nature of the original institutional commitment to the 'civic' dimension of the University between 1900 and 1914 and highlights the many tensions that emerged between the growing academic standing of the University and its continued enthusiasm for the City and regional links. This paper uses Bernstein's concept of grammar to illuminate aspects of educational research. The relationship between internal and external languages of description in the production of disciplinary knowledge is examined. This leads to a reflection on the various factors, both internal and external to the discipline of educational studies, that foster and undermine forms of research knowledge. This article draws upon recent theorising of the 'becoming topological' of space, specifically, how new social spaces are constituted through relations rather than physical locations, to explore how standardised data, and specifically test data, have influenced teachers' work and learning. We outline the varied ways in which teacher practices at a primary school in Queensland, Australia, were actively constituted through processes of 'tracking data' and keeping data 'on-track', and how teachers were simultaneously being disciplined, or 'tracked', by these very same data.
Our analyses suggest that what appear to be more 'technical' activities and tasks of 'using' data are, in fact, actively constituted modes of governance, enabled through and deployed by ongoing practices of comparison and topological respatialisation. Educational activity among adults is not only a key factor of social development but also one of the most important priorities of public policies. Although large sums have been earmarked and numerous actions undertaken to encourage adult learning, many people remain educationally passive, a particularly acute problem in Poland. We point to the main cultural and economic determinants of educational passivity: the family environment, education, low earnings and job. Based on several years of research conducted within the Study of Human Capital in Poland project, we conclude that the Matthew effect is visible in the field of adult learning: better-educated people increase their educational capital, moving further away from those with a lower level of education. Comparison of the levels of self-evaluation of competences between educationally active and non-active adults of various levels of education indicates that the highest increase in evaluation occurs among less educated people. Owing to this group's very low level of education, however, the extent to which this potential for educational activity is realised is very low. The outcome of this is that the opportunities created by adult educational activity are not exploited to reduce social differences; instead, it sometimes reinforces these differences. Background: Studies have shown that counseling about risk factor-related lifestyle habits can produce significantly beneficial changes in the lifestyle habits of patients with stroke. However, it is not sufficient only to provide a patient with appropriate information; the quality of lifestyle counseling is also essential.
The aim of the study was to investigate the effects of a lifestyle counseling intervention on lifestyle counseling quality in patients with stroke and transient ischemic attack. Methods: A posttest control group design was used. Patients with stroke and transient ischemic attack (n = 98), divided into intervention and control groups, completed the Counseling Quality Questionnaire after receiving lifestyle counseling at the hospital (January 2010 to October 2011). Data were analyzed with analysis of variance. Results: The patients rated lifestyle counseling quality quite high in terms of all sum variables except patient centeredness. Counseling quality, except for counseling resources, was rated significantly better by the intervention group. Conclusions: Lifestyle counseling quality at the hospital can be enhanced by a counseling intervention. More attention to factors that promote patient centeredness of counseling is required, because patient centeredness has repeatedly been recognized as the weakest aspect of counseling by both patients with stroke and other patient groups. The American Association of Neuroscience Nurses (AANN) has worked toward meeting the challenges and addressing the key messages from the 2010 Institute of Medicine report on the future of nursing. In 2012, AANN developed an article summarizing how the association has addressed key issues. Since that time, new recommendations have been made to advance nursing, and AANN has updated its strategic plan. The AANN assessed organizational progress on these initiatives in a 2017 white paper. This process included a review of plans since the initial report and a proposal of further efforts the organization can make in shaping the future of neuroscience nursing. The purpose of this manuscript is to provide an overview of the AANN white paper. Background: A diagnosis of stroke is a life-changing event.
Effective discharge teaching after a stroke is crucial for recovery, but the overload of information can be overwhelming for patients and caregivers. Purpose: The purpose of this study was to examine differences in discharge readiness and postdischarge coping in patients admitted for stroke after the use of individualized postdischarge information/education provided via a technology package (including patient online portal access and e-mail/secure messaging) compared with current standard discharge teaching methods (verbal/written instructions). Methods: This study used a descriptive comparative design to evaluate the difference between the nonintervention group A and the intervention group B. Patients in group B received additional discharge information via secure e-mail messaging at postdischarge days 2, 6, and 10. Two validated tools, the Readiness for Hospital Discharge Form and the Post-Discharge Coping Difficulty Scale, were used. Results: One hundred patients were recruited for the study, but the final number of complete data sets collected was 86: 42 in group A and 44 in group B. There was no statistically significant difference between the groups in discharge readiness. There was a significant difference in coping scores between the 2 groups, with the technology group exhibiting higher coping. Conclusions: New technology affords new options to improve discharge readiness and contribute to positive patient coping after stroke. The researchers hope that this study will contribute to the growing body of evidence showing success using aspects of technology to enhance discharge teaching and follow-up after discharge. Focal seizures are divided into simple and dyscognitive, with the latter resulting in the alteration of consciousness. In the ictal and postictal stages, patients may present with confusion, delirium, and psychosis, posing a safety risk to themselves and others.
This article presents 3 case studies where patients have been admitted for visual and electroencephalographic monitoring. Seizure activity is provoked for the diagnosis and development of a management plan. These cases illustrate the unique nursing implications when caring for patients experiencing focal dyscognitive seizures, highlighting the unique circumstances for the neuroscience nurse regarding risk management, safe administration of radioactive isotopes, detection of subtle seizure manifestation, and use of family as experts in patient-centered care. Through a deliberate onset of seizures, neuroscience nurses are placed in nontypical nursing situations, thus managing risk in unpredictable conditions and displaying advanced and distinctive nursing skills. The neuroscience intermediate unit is a 23-bed unit that was initially staffed with a nurse-to-patient ratio of 1:4 to 1:5. In time, the unit's capacity to care for a growing number of progressively more acute patients fell short of the desired goals, affecting nurse satisfaction. The clinical nurses desired a lower nurse-patient ratio. The purpose of this project was to justify a staffing increase through a return on investment and increased quality metrics. Methods: This initiative used mixed methodology to determine the ideal staffing for a neuroscience intermediate unit. The quantitative section focused on a review of the acuity of the patients. The qualitative section was based on descriptive interviews with University Healthcare Consortium nurse managers from similar units. Design: The study reviewed the acuity of 9,832 patient days to determine the accurate acuity of neuroscience intermediate unit patients. Nurse managers at 12 University Healthcare Consortium hospitals and 8 units at the Medical University of South Carolina were contacted to compare staffing levels. Discussion: The increase in nurse staffing contributed to an increase in many quality metrics.
There was an 80% decrease in controllable nurse turnover and a 75% reduction in falls with injury after the nurse-patient ratio was lowered. These 2 metrics established a return on investment for the staffing increase. In addition, the staffing satisfaction question on the Press Ganey employee engagement survey increased from 2.44 in 2013 to 3.72 in 2015 in response to the advocacy of the bedside nurses. Introduction: Hyperosmolar therapy with hypertonic saline (HTS) is a cornerstone in the management of intracranial hypertension and hyponatremia in the neurological intensive care unit. Theoretical safety concerns remain for infiltration, thrombophlebitis, tissue ischemia, and venous thrombosis associated with continuous 3% HTS administered via peripheral intravenous (pIV) catheters. It is common practice at many institutions to allow only central venous catheter infusion of 3% HTS. Methods: Hospital policy was changed to allow the administration of 3% HTS via 16- to 20-gauge pIVs to a maximum infusion rate of 50 mL/h in patients without central venous access. We prospectively monitored patients who received peripheral 3% HTS as part of a quality improvement project. We documented gauge, location, maximum infusion rate, and total hours of administration. Patients were assessed for infiltration, erythema, swelling, phlebitis, thrombosis, and line infection. Results: There were 28 subjects across 34 peripheral lines monitored. Overall, subjects received 3% HTS for a duration between 1 and 124 hours with infusion rates of 30 to 50 mL/h. The rate of complications observed was 10.7% among all subjects. Documented complications included infiltration (n = 2), with an incidence of 6%, and thrombophlebitis (n = 1), with an incidence of 3%. Conclusions: There has long been concern among healthcare providers, including nursing staff, regarding pIV administration of prolonged 3% HTS infusion therapy.
Our study indicates that peripheral administration of 3% HTS carries a low risk of minor complications, none of which were limb- or life-threatening. Although central venous infusion may reduce the risk of these minor complications, it may increase the risk of more serious complications such as large vessel thrombosis, bloodstream infection, pneumothorax, and arterial injury. The concern regarding the risks of pIV administration of 3% HTS may be overstated and unfounded. Efforts to widen participation in higher education for disadvantaged and under-represented groups are common to many countries. In England, higher education institutions are required by government to invest in 'outreach' activities designed to encourage such groups. There is increasing policy and research interest around the effectiveness of these activities and how this might be evaluated. This paper reports the results of a project designed to explore concepts of 'success' and 'impact' with two generations of practitioner-managers working in this field, including extended telephone interviews with ten practitioners active in the mid-2000s and online questionnaires from 57 engaged in the mid-2010s. The paper concludes that the drive to 'measure the measurable' may be undermining successful activities, while unhelpful inter-institution competition has replaced the co-operative ethos and wider social justice aims that dominated ten years ago. An approach to state funding of universities that has attracted some attention in the literature is the funding of universities indirectly through state-funded student vouchers; vouchers would replace, or complement, direct budgetary allocations to individual universities.
Proponents of this form of demand-side funding for higher education see it as a means of incorporating market mechanisms into public subsidies for universities; a central motivation is to promote both student choice and university competition, which, in turn, is expected to stimulate the efficiency and quality of the university system as a whole. It is unclear how well such a system would perform, because its adoption in practice has been rare and only minimal analyses of the few existing schemes have been conducted. A voucher system for universities was introduced in the Republic of Georgia two decades ago. Exceptionally, a comprehensive administrative database containing individual information on all students enrolled in the higher education system was available. This forms the basis for the review of the scheme's detailed workings, and of the lessons to be learned from it, which constitutes the focus of this paper. Research suggests that higher education changes affect the work and identity of academics. The consequent challenges often mentioned include too little time, work overload, and limited autonomy among academic staff. Space is rarely mentioned, especially in the context of open distance learning. This article reports on the findings from a study that was conducted in an open distance learning institution to understand how institutional space is experienced by academics as they construct their identity. Institutional policy was examined, semi-structured interviews were conducted with academics, and observations were made. The findings suggest that the imagined institutional space sometimes differs from the lived space because of academics' differing preferences for space. It is recommended that the institution find common ground between academics' spatial needs and the institution's imagined space as provided for through institutional policy.
Collaboration between international development NGOs and Africa's national knowledge institutes, particularly universities, is receiving increasing support from global policy-makers and donor agencies. In spite of this, little is reported about the practice of such collaboration. This paper helps to fill that gap. It shares findings from a research project by a consortium of four international NGOs exploring the potential for collaboration with knowledge institutes in Burundi, DR Congo, Liberia, South Sudan, Sudan, and Uganda. The findings are based on analysis of interviews with NGO managers in these countries and on subsequent interviews, conducted by these NGO managers, of staff in national knowledge institutes. The views of the NGO managers regarding collaboration lean towards scepticism, in keeping with the limited literature on the matter. However, after interviewing staff in the knowledge institutes, the NGO managers did find potential for collaboration based on personal relations and on meeting both parties' more immediate interests. In both higher education and other policy sectors, agencies have become a popular instrument adopted by governments to regulate the behavior of universities from a distance. This paper addresses this apparently common trend by proposing a typology of these agencies that assumes that evaluation agencies' autonomy depends not only on legal powers but also on the government's capacity to behave as a principal and to design, over time, coherent systemic governance modes. This typology is assessed through a comparative analysis of the roles and functions of evaluative agencies within the field of higher education in the UK, France, and Italy. Knight has proposed a common transnational education framework for use within and among countries. How this framework may be applied in particular contexts, such as those of host countries like China, remains unclear.
The purpose of this article is to examine the literature in order to ascertain the extent of the framework's utility in China in terms of application and research. The investigation highlights two areas for consideration by researchers and data collectors who may use the framework. The article concludes that understanding the peculiarities of transnational education in host countries plays a critical role in addressing the challenges associated with the development and application of a common framework and, ultimately, a robust international protocol for data collection on transnational education. The purpose of this article is to provide an overview of mental health issues and counseling services on college campuses in the USA. The findings from several national surveys are reviewed to estimate the prevalence of anxiety and depression, suicide and suicidal ideation, and violence among college students. Common prevention and treatment programs are then described, with particular attention to innovative campus-wide programs. Student outcomes research is examined to determine whether receiving counseling services is associated with academic performance and the likelihood of graduation. The article concludes with a set of recommended practices to improve the effectiveness of counseling services on campus. Although mobile devices have been upgraded into considerably more powerful terminals, their light weight still imposes intrinsic limitations on computation capability, storage capacity, and battery lifetime. With the ability to release and augment the limited resources of mobile devices, mobile cloud computing has drawn significant research attention by allowing computations to be offloaded and executed on remote, resourceful infrastructure.
Nevertheless, circumstances such as mobility, latency, application execution overload, and mobile device state can each affect the offloading decision, which might dictate local execution for some tasks and remote execution for others. In this article we present a novel system model for computation offloading that goes beyond existing work with a smart, centralized, selective, and optimized approach. The proposition consists of (1) a hotspot selection mechanism to minimize the overhead of the offloading evaluation process without jeopardizing the discovery of the optimal processing environment for tasks, (2) a multi-objective optimization model that considers adaptable metrics crucial for minimizing device resource usage and augmenting its performance, and (3) a tailored centralized decision maker that uses a genetic algorithm to intelligently find the optimal distribution of tasks. The scalability, overhead, and performance of the proposed hotspot selection mechanism, and hence its effect on the decision maker and task dissemination, are evaluated. The results show its ability to notably reduce the evaluation cost, while the decision maker was able in turn to maintain an optimal dissemination of tasks. The model is also evaluated, and the experiments demonstrate its advantage over existing models, with execution speedup and significant reductions in CPU usage, memory consumption, and energy loss. (C) 2017 Elsevier Ltd. All rights reserved. The entity reconciliation (ER) problem has aroused much interest as a research topic in today's Big Data era, full of big and open heterogeneous data sources. This problem arises when relevant information on a topic needs to be obtained using methods based on: (i) identifying records that represent the same real-world entity, and (ii) identifying those records that are similar but do not correspond to the same real-world entity.
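The genetic decision maker in the offloading model above can be sketched as a simple genetic algorithm over task-distribution bitstrings. Everything below (the costs, population size, and operators) is a hypothetical illustration of the general technique, not the authors' implementation:

```python
import random

# Hypothetical per-task costs of executing locally (on the device) vs. remotely.
LOCAL_COST  = [4.0, 1.0, 6.0, 2.5, 5.0, 3.0, 7.0, 1.5]
REMOTE_COST = [2.0, 3.0, 2.5, 2.0, 1.0, 4.0, 3.0, 2.0]

def fitness(chrom):
    """Total cost of a task distribution (0 = local, 1 = remote); lower is better."""
    return sum(REMOTE_COST[i] if g else LOCAL_COST[i] for i, g in enumerate(chrom))

def ga_distribute(pop_size=20, generations=60, mut_rate=0.1, seed=42):
    rng = random.Random(seed)
    n = len(LOCAL_COST)
    # Seed the population with the two trivial distributions plus random ones.
    pop = [[0] * n, [1] * n]
    pop += [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size - 2)]
    for _ in range(generations):
        pop.sort(key=fitness)
        next_pop = pop[:2]                      # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)      # select parents from the top half
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n):                  # bit-flip mutation
                if rng.random() < mut_rate:
                    child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=fitness)

best = ga_distribute()
```

Because the all-local and all-remote distributions are seeded into the population and elitism preserves the best individuals, the returned distribution is never worse than either trivial strategy.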
ER is an operational intelligence process, whereby organizations can unify different and heterogeneous data sources in order to relate possible matches of non-obvious entities. Beyond the complexity introduced by the heterogeneity of data sources, the large number of records and the differences among languages, for instance, must also be considered. This paper describes a Systematic Mapping Study (SMS) of journal articles, conferences, and workshops published from 2010 to 2017 that address the problem described above, first seeking to understand the state of the art and then identifying gaps in current research. Eleven digital libraries were analyzed following a systematic, semiautomatic, and rigorous process that resulted in 61 primary studies. They represent a great variety of intelligent proposals that aim to solve ER. The conclusion obtained is that most of the research focuses on the operational phase as opposed to the design phase, and most studies have been tested on real-world data sources, many of which are heterogeneous, but only a few apply to industry. There is a clear trend towards research techniques based on clustering/blocking and graphs, although the level of automation of the proposals is hardly ever mentioned in the research work. (C) 2017 Elsevier Ltd. All rights reserved. Accurate car positioning on the Earth's surface is a requirement for many state-of-the-art automotive applications, but current low-cost Global Navigation Satellite System (GNSS) receivers can suffer from poor precision and transient unavailability in urban areas. In this article, a real-time data fusion system for absolute and relative positioning data is proposed with the aim of increasing car positioning precision. To achieve this goal, a system based on the Extended Kalman Filter (EKF) was employed to fuse absolute positioning data coming from a low-cost GNSS receiver with data coming from four wheel speed sensors, a lateral acceleration sensor, and a steering wheel angle sensor.
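The EKF-based fusion just described can be illustrated with a minimal linear Kalman filter (the EKF reduces to this form when the motion and measurement models are linear). The constant-velocity model, the scalar position measurement, and all noise parameters below are assumptions for illustration, not the paper's vehicle model:

```python
def kf_step(x, P, z, dt=0.1, q=0.5, r=1.0):
    """One predict/update cycle of a 2-state (position, velocity) Kalman filter
    with a scalar position measurement z; q is process noise, r measurement noise."""
    p, v = x
    # Predict: constant-velocity motion model F = [[1, dt], [0, 1]].
    p_pred, v_pred = p + dt * v, v
    P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    P01 = P[0][1] + dt * P[1][1]
    P10 = P[1][0] + dt * P[1][1]
    P11 = P[1][1] + q
    # Update with the position measurement (H = [1, 0]).
    S = P00 + r                      # innovation covariance
    K0, K1 = P00 / S, P10 / S        # Kalman gains
    y = z - p_pred                   # innovation
    x_new = (p_pred + K0 * y, v_pred + K1 * y)
    P_new = [[(1 - K0) * P00, (1 - K0) * P01],
             [P10 - K1 * P00, P11 - K1 * P01]]
    return x_new, P_new

# Track a vehicle moving at 5 m/s using (noise-free, for illustration) position fixes.
x, P = (0.0, 0.0), [[10.0, 0.0], [0.0, 10.0]]
for k in range(1, 1001):
    x, P = kf_step(x, P, z=5.0 * k * 0.1)
```

Even though only position is measured, the filter recovers the velocity through the cross-covariance terms, which is the same mechanism that lets an EKF blend GNSS fixes with wheel-speed and steering information.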
The bicycle kinematic model and the Ackermann steering geometry were employed to particularize the EKF. The proposed system was evaluated through experimental tests. The results showed precision improvements of up to 50% in terms of the Root Mean Square Error (RMSE), 50% in terms of the 95th percentile of the distance error distribution, and 75% in terms of the maximum distance error, with respect to using a stand-alone, low-cost GNSS receiver. These results suggest that the proposed data fusion system for cars can significantly reduce the positioning error relative to that of a low-cost GNSS receiver. The best precision improvements of the system are expected in urban areas, where tall buildings hinder the effectiveness of GNSS systems. The main contribution of this work is the proposal of a novel system that enables accurate car positioning during short GNSS signal outages. This advance could be integrated into larger expert and intelligent systems such as autonomous cars, helping to make self-driving easier and safer. (C) 2017 Elsevier Ltd. All rights reserved. Memory networks show promising context understanding and reasoning capabilities in Textual Question Answering (Textual QA). We improve on previous dynamic memory networks for Textual QA by processing inputs to simultaneously extract global and hierarchical salient features. We then use them to construct multiple feature sets at each reasoning step. Experiments were conducted on a public Textual Question Answering dataset (the Facebook bAbI dataset) in two ways: with and without supervision from labels of supporting facts. Compared to previous works such as Dynamic Memory Networks, our models show better accuracy and stability. (C) 2017 Elsevier Ltd. All rights reserved. In smart cities, an intelligent traffic surveillance system plays a crucial role in reducing traffic jams and air pollution, thus improving the quality of life.
An intelligent traffic surveillance system should be able to detect and track multiple vehicles in real time using only limited resources. Conventional tracking methods usually run at a high video-sampling rate, assuming that the same vehicles in successive frames are similar and move only slightly. However, in cost-effective embedded surveillance systems (e.g., a distributed wireless network of smart cameras), video frame rates are typically low because of limited system resources. Therefore, conventional tracking methods perform poorly in embedded surveillance systems because of the discontinuity of the moving vehicles in the captured recordings. In this study, we present a fast and light algorithm, suitable for an embedded real-time visual surveillance system, to effectively detect and track multiple moving vehicles whose appearance and/or position changes abruptly at a low frame rate. For effective tracking at low frame rates, we propose a new matching criterion based on greedy data association using appearance and position similarities between detections and trackers. To manage abrupt appearance changes, manifold learning is used to calculate appearance similarity. To manage abrupt changes in motion, the next probable centroid area of the tracker is predicted using trajectory information. The position similarity is then calculated based on the predicted next position and progress direction of the tracker. The proposed method demonstrates efficient tracking performance during rapid feature changes and is tested on an embedded platform (an ARM- with DSP-based system). (C) 2017 Elsevier Ltd. All rights reserved. Fuzzy Nearest Neighbor Classification (FuzzyNNC) has been successfully used as a tool for supervised classification problems. It has significantly increased classification accuracy by considering the uncertainty associated with the class labels of the training patterns.
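The greedy data association criterion described in the tracking work above, combining appearance and position similarity between detections and trackers, can be sketched minimally. The feature vectors, weights, and normalization below are hypothetical placeholders (the paper uses manifold learning for appearance and trajectory prediction for position):

```python
import math

def similarity(det, trk, w_app=0.5, w_pos=0.5, max_dist=100.0):
    """Combined appearance/position similarity between a detection and a tracker."""
    # Appearance term: cosine similarity of (hypothetical) feature vectors.
    dot = sum(a * b for a, b in zip(det["feat"], trk["feat"]))
    na = math.sqrt(sum(a * a for a in det["feat"]))
    nb = math.sqrt(sum(b * b for b in trk["feat"]))
    app = dot / (na * nb) if na and nb else 0.0
    # Position term: distance to the tracker's predicted next centroid, normalized.
    d = math.dist(det["pos"], trk["pred_pos"])  # math.dist needs Python 3.8+
    pos = max(0.0, 1.0 - d / max_dist)
    return w_app * app + w_pos * pos

def greedy_associate(dets, trks, min_sim=0.3):
    """Greedily match the highest-scoring (detection, tracker) pairs first."""
    pairs = sorted(((similarity(d, t), i, j)
                    for i, d in enumerate(dets)
                    for j, t in enumerate(trks)), reverse=True)
    matches, used_d, used_t = [], set(), set()
    for s, i, j in pairs:
        if s >= min_sim and i not in used_d and j not in used_t:
            matches.append((i, j))
            used_d.add(i)
            used_t.add(j)
    return matches

dets = [{"feat": [1.0, 0.0], "pos": (10.0, 10.0)},
        {"feat": [0.0, 1.0], "pos": (90.0, 90.0)}]
trks = [{"feat": [1.0, 0.0], "pred_pos": (12.0, 11.0)},
        {"feat": [0.0, 1.0], "pred_pos": (85.0, 92.0)}]
result = greedy_associate(dets, trks)
```

Greedy association is O(nm log nm) in the number of detection-tracker pairs, which is why it suits resource-limited embedded platforms better than optimal assignment solvers.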
Nevertheless, FuzzyNNC's limited methods fail to efficiently handle the imprecision in feature measurement and the uncertainty induced by the choice of the distance measure and the number of neighbors in the decision rule. In this paper, we propose a new method called Fuzzy Analogy-based Classification (FABC) to tackle these FuzzyNNC limitations. In this work, we exploit fuzzy linguistic modeling and approximate reasoning techniques in order to endow FABC with intelligent capabilities, such as imprecision tolerance, optimization, adaptability, and trade-off. Hence, our approach is composed of two main steps. Firstly, we describe the domain features using fuzzy linguistic variables. Secondly, we define the classification process using two intelligent aggregation operators. The first one allows the optimization of the similarity evaluation, by defining the adequate features to be considered. The second one integrates a trade-off strategy within the decision rule, by using a global voting approach with a compensation property. The integration of such mechanisms increases classification accuracy and makes the FuzzyNNC approach more useful for classification problems where imprecision and uncertainty are unavoidable. The proposed FABC is validated on well-known datasets representing various classification difficulties and compared to many extensions of the FuzzyNNC approach. The results obtained show that our proposed FABC method can be adapted to different classification problems and improves classification accuracy. Thus, FABC achieves the best rank value against the comparison methods at a high significance level. Moreover, we conclude that our optimized similarity and global voting rule are more robust in handling uncertainty in the classification process than those used by the comparison methods. (C) 2017 Elsevier Ltd. All rights reserved.
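For context, the classical fuzzy k-NN decision rule that FuzzyNNC-style methods build on (Keller et al.'s formulation) can be sketched as follows; the toy training data and the fuzzifier m = 2 are illustrative assumptions, and this is the baseline rule, not the FABC method itself:

```python
def fuzzy_knn(x, train, k=3, m=2.0):
    """Classical fuzzy k-NN: class memberships of the k nearest neighbors are
    weighted by inverse distance raised to 2/(m-1). `train` is a list of
    (point, {class: membership}) pairs."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    neighbors = sorted(train, key=lambda t: dist(x, t[0]))[:k]
    classes = {c for _, mem in train for c in mem}
    scores = {}
    for c in classes:
        num = den = 0.0
        for pt, mem in neighbors:
            w = 1.0 / (dist(x, pt) ** (2.0 / (m - 1)) + 1e-9)  # avoid div by zero
            num += mem.get(c, 0.0) * w
            den += w
        scores[c] = num / den
    return max(scores, key=scores.get), scores

train = [((0.0, 0.0), {"A": 1.0}), ((1.0, 0.0), {"A": 1.0}), ((0.0, 1.0), {"A": 1.0}),
         ((10.0, 10.0), {"B": 1.0}), ((9.0, 10.0), {"B": 1.0}), ((10.0, 9.0), {"B": 1.0})]
label, scores = fuzzy_knn((0.5, 0.5), train)
```

The returned score vector is itself a fuzzy class membership, which is the quantity that extensions such as FABC refine with linguistic modeling and aggregation operators.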
To avoid the complexity and time consumption of traditional statistical and mathematical programming, intelligent techniques have gained great attention in different financial research areas, especially in the optimization of banking decisions. However, choosing optimal bank lending decisions that maximize bank profit in a credit crunch environment is still a big challenge. To that end, this paper proposes an intelligent model based on the Genetic Algorithm (GA) to organize bank lending decisions in a highly competitive environment with a credit crunch constraint (GAMCC). GAMCC provides a framework for optimizing bank objectives when constructing the loan portfolio, by maximizing bank profit and minimizing the probability of bank default in a search for a dynamic lending decision. Compared to state-of-the-art methods, GAMCC is a better intelligent tool that enables banks to reduce loan screening time by 12%-50%. Moreover, it greatly increases bank profit, by 3.9%-8.1%. (C) 2017 Elsevier Ltd. All rights reserved. Supervised text classification methods are efficient when they can learn from reasonably sized labeled sets. On the other hand, when only a small set of labeled documents is available, semi-supervised methods become more appropriate. These methods are based on comparing distributions between labeled and unlabeled instances; therefore, it is important to focus on the representation and its discrimination abilities. In this paper we present the ST LDA method for text classification in a semi-supervised manner with representations based on topic models. The proposed method comprises a semi-supervised text classification algorithm based on self-training and a model which determines parameter settings for any new document collection.
Self-training is used to enlarge the small initial labeled set with the help of information from unlabeled data. We investigate how topic-based representation affects prediction accuracy by applying the NBMN and SVM classification algorithms to the enlarged labeled set and then comparing the results with those of the same methods on a typical TF-IDF representation. We also compare ST LDA with supervised classification methods and other well-known semi-supervised methods. Experiments were conducted on 11 very small initial labeled sets sampled from six publicly available document collections. The results show that our ST LDA method, when used in combination with NBMN, performed significantly better in terms of classification accuracy than other comparable methods and variations. In this manner, the ST LDA method proved to be a competitive classification method for different text collections when only a small set of labeled instances is available. As such, the proposed ST LDA method may well help to improve text classification tasks, which are essential in many advanced expert and intelligent systems, especially in the case of a scarcity of labeled texts. (C) 2017 Elsevier Ltd. All rights reserved. Creditors such as banks frequently use expert systems to support their decisions when issuing loans, and credit assessment has been an important area of application of machine learning techniques for decades. In practice, banks are often required to provide the rationale behind their decisions in addition to being able to predict the performance of companies when assessing corporate applicants for loans. One solution is to use Data Envelopment Analysis (DEA) to evaluate multiple decision-making units (DMUs, or companies), which are ranked according to the best practice in their industrial sector. A linear programming algorithm is employed to calculate corporate efficiency as a measure to distinguish healthy companies from those in financial distress.
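The self-training loop underlying ST LDA, enlarging a small labeled set with confidently pseudo-labeled unlabeled documents and then retraining, can be sketched generically. A nearest-centroid classifier and a margin-based confidence stand in for the paper's NBMN/SVM classifiers over LDA topic representations; all data below is a toy illustration:

```python
def centroid_fit(X, y):
    """Fit a nearest-centroid classifier (a stand-in for NBMN/SVM in this sketch)."""
    cents = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        cents[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def centroid_predict(cents, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    scores = {l: dist(x, c) for l, c in cents.items()}
    best = min(scores, key=scores.get)
    d = sorted(scores.values())
    margin = d[1] - d[0] if len(d) > 1 else d[0]   # crude confidence estimate
    return best, margin

def self_train(X_lab, y_lab, X_unl, rounds=5, conf=1.0):
    """Iteratively promote confidently pseudo-labeled points to the labeled set."""
    X, y, pool = list(X_lab), list(y_lab), list(X_unl)
    for _ in range(rounds):
        cents = centroid_fit(X, y)
        keep = []
        for x in pool:
            label, margin = centroid_predict(cents, x)
            if margin >= conf:
                X.append(x)        # confident: add with its pseudo-label
                y.append(label)
            else:
                keep.append(x)     # not confident: try again next round
        if len(keep) == len(pool):
            break                  # nothing was promoted; stop
        pool = keep
    return centroid_fit(X, y)

model = self_train([(0.0, 0.0), (10.0, 10.0)], ["a", "b"],
                   [(1.0, 1.0), (2.0, 2.0), (9.0, 9.0), (8.0, 8.0)])
```

After self-training, the centroids have absorbed the pseudo-labeled points, so the final model generalizes beyond the two original labeled examples.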
This paper extends the cross-sectional DEA models to time-varying Malmquist DEA, since dynamic predictive models allow one to incorporate changes over time. This decision-support system can adjust the efficiency frontier intelligently over time and make robust predictions. Results based on a sample of 742 Chinese listed companies observed over 10 years suggest that Malmquist DEA offers insights into the competitive position of a company in addition to accurate financial distress predictions based on the DEA efficiency measures. (C) 2017 Elsevier Ltd. All rights reserved. In many previous methods for facial age simulation, the shape and appearance features of Active Appearance Models (AAM) are widely used to model the global facial characteristics. However, they cannot sufficiently represent facial details such as spots, scars, fine wrinkles, and skin blemishes, because many of these are removed from the AAM features during the dimension reduction process. Therefore, previous methods are not suitable for real-world applications, such as forensics, face recognition, and entertainment production systems, which require more accurate and realistic age simulation. To overcome this limitation, this paper proposes an automatic age simulation method based on a synergetic combination of the residual image, local features, and AAM global features. The residual image, which is the difference between a facial image and its reconstruction from the AAM features, contains facial details of the input image that are not included in the AAM features. Representation of facial details in the age simulation process is achieved by generating a target-age-weighted residual image and adding it to the facial image synthesized from the AAM features. Further, facial details such as wrinkles and skin blemishes that have not yet appeared at the current age but normally appear as aging proceeds are supplemented and represented by local features that show locally different aging characteristics.
The experimental results show that the proposed method simulates a face more accurately and realistically than previous methods, thereby confirming that it is more suitable for real-world applications. (C) 2017 Elsevier Ltd. All rights reserved. Community Question Answering (cQA) services encourage their members to ask any kind of question, which can later receive multiple answers from fellow community members. The research objective of this paper is understanding, and particularly automatically detecting, what motivates community members to ask questions of their unknown peers. To do so, we first crawled a set of cQA questions from Yahoo! Answers, each of which was afterwards manually labelled according to its multiple motivations. Thus, one of the innovative aspects of our work is exploring a wide variety of multi-label classification strategies for the automatic recognition of concurrent motivations behind cQA questions. In order to build effective models, high-dimensional feature spaces were constructed on top of assorted linguistic features, thereby discovering some linguistic traits that characterize some of these combinations. Overall, our experiments reveal that multi-label classification frameworks hold real promise for this task. More precisely, our best configuration finished with a Hamming Score of 0.71. In terms of features, our outcomes unveil that the concurrence of motivations is likely to be signalled by the complexity of the writing and the distribution of entity mentions across the entire question, so both question titles and bodies are required to recognize their confluence. (C) 2017 Elsevier Ltd. All rights reserved. Aspect extraction is one of the fundamental steps in analyzing the characteristics of opinions, feelings, and emotions expressed in textual data provided for a certain topic.
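The multi-label setup and the Hamming Score metric reported in the cQA study above can be sketched with a binary-relevance baseline, in which one independent detector is trained per label. The keyword rules and label names below are hypothetical illustrations, not the study's feature set:

```python
def hamming_score(true_sets, pred_sets):
    """Mean over instances of |Y ∩ Z| / |Y ∪ Z| (the multi-label accuracy above)."""
    total = 0.0
    for y, z in zip(true_sets, pred_sets):
        union = y | z
        total += len(y & z) / len(union) if union else 1.0
    return total / len(true_sets)

def binary_relevance_predict(question, keyword_rules):
    """Binary relevance: one independent (here, keyword-based) detector per label."""
    return {label for label, words in keyword_rules.items()
            if any(w in question.lower() for w in words)}

# Hypothetical motivation labels and trigger keywords.
RULES = {
    "advice":  ["should i", "recommend"],
    "factual": ["what is", "how many"],
    "opinion": ["do you think", "best"],
}

questions = ["What is the best laptop? Do you think I should buy now?",
             "Should I upgrade?"]
true_labels = [{"factual", "opinion"}, {"advice", "opinion"}]
preds = [binary_relevance_predict(q, RULES) for q in questions]
score = hamming_score(true_labels, preds)
```

Here the first question is labeled perfectly (score 1.0) and the second only partially (1/2), giving an overall Hamming Score of 0.75; real classifiers replace the keyword detectors, but the metric is computed identically.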
Current aspect extraction techniques are mostly based on topic models; however, employing only topic models causes incoherent aspects to be generated. Therefore, this paper aims to discover more precise aspects by incorporating co-occurrence relations as prior domain knowledge into the Latent Dirichlet Allocation (LDA) topic model. In the proposed method, first, the preliminary aspects are generated based on LDA. Then, in an iterative manner, the prior knowledge is extracted automatically from co-occurrence relations and similar aspects of relevant topics. Finally, the extracted knowledge is incorporated into the LDA model. The iterations improve the quality of the extracted aspects. The competence of the proposed model, ELDA, for the aspect extraction task is evaluated through experiments on two datasets in the English and Persian languages. The experimental results indicate that ELDA not only outperforms the state-of-the-art alternatives in terms of topic coherence and precision, but also has no particular dependency on the written language and can be applied to all languages with reasonable accuracy. Thus, ELDA can impact natural language processing applications, particularly in languages with limited linguistic resources. (C) 2017 Elsevier Ltd. All rights reserved. To the best of our knowledge, there is no method for finding the product of unrestricted LR-type intuitionistic fuzzy numbers (IFNs) or the optimal solutions of LR-type intuitionistic fuzzy linear programming problems (IFLPPs) in which some or all of the decision variables are unrestricted. Therefore, it is necessary to find the product of unrestricted LR-type IFNs as well as the optimal solutions of such programming problems. In this paper, the product of unrestricted LR-type IFNs based on the alpha-cut, beta-cut, and (alpha, beta)-cut is proposed.
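The knowledge-injection idea in the aspect extraction work above, deriving co-occurrence relations from the corpus and feeding them back into the topic model's priors, can be sketched minimally. Using document-level PMI and a multiplicative prior boost is an illustrative simplification, not ELDA's actual update rule:

```python
from collections import Counter
from itertools import combinations
import math

def cooccurrence_pmi(docs):
    """Pointwise mutual information of word pairs co-occurring within a document."""
    word_cnt, pair_cnt, n = Counter(), Counter(), len(docs)
    for doc in docs:
        words = set(doc.split())
        word_cnt.update(words)
        pair_cnt.update(frozenset(p) for p in combinations(sorted(words), 2))
    pmi = {}
    for pair, c in pair_cnt.items():
        a, b = tuple(pair)
        pmi[pair] = math.log((c * n) / (word_cnt[a] * word_cnt[b]))
    return pmi

def boost_priors(base_beta, seed_word, pmi, factor=2.0, threshold=0.0):
    """Raise the topic-word prior of words whose PMI with a seed aspect word
    exceeds a threshold -- a minimal stand-in for the knowledge-injection step."""
    boosted = dict(base_beta)
    for pair, score in pmi.items():
        if seed_word in pair and score > threshold:
            other = next(w for w in pair if w != seed_word)
            boosted[other] = base_beta.get(other, 0.01) * factor
    return boosted

docs = ["battery life good", "battery life short", "screen bright"]
pmi = cooccurrence_pmi(docs)
boosted = boost_priors({"life": 0.01, "battery": 0.01}, "battery", pmi)
```

Words that reliably co-occur with a seed aspect word (here, "life" with "battery") receive a larger asymmetric Dirichlet prior, nudging the next LDA pass toward grouping them into the same aspect.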
Then, with the help of the proposed product, a new method is proposed to find the optimal solutions of unrestricted LR-type IFLPPs. Finally, an illustrative example is given to support the proposed method and to investigate the applicability of existing approaches in the literature. (C) 2017 Published by Elsevier Ltd. Bioinformatics is the use of computer technology to solve, manage, and analyze biological problems. With the rapid evolution of computing, the volume of biological data has increased significantly. These large amounts of data have increased the need to analyze them in reasonable space and time. DNA sequences contain the basic information of species, and pattern matching between different species is an important and challenging issue. Generalized string matching algorithms and some specialized DNA pattern matching algorithms exist in the literature. There is still a need to develop fast and space-efficient pattern matching algorithms that take advantage of new hardware developments. In this paper, we present a novel DNA sequence pattern matching algorithm called EPMA. The proposed algorithm utilizes fixed-length 2-bit binary encoding, segmentation, and multi-threading. The idea is to find the pattern with multiple searcher agents concurrently. The proposed algorithm is validated with comparative experimental results. The results show that the new algorithm is a good candidate for DNA sequence pattern matching applications. The algorithm effectively utilizes modern hardware and will help researchers in sequence alignment, short-read error correction, phylogenetic inference, and related tasks. Furthermore, the proposed method can be extended to generalized string matching and its applications. (C) 2017 Elsevier Ltd. All rights reserved. With a random price-dependent demand, this paper investigates the capital-constrained retailer's integrated purchase timing, quantity, and financing decisions for seasonal products.
Results show that at both purchase moments (i.e., early purchasing at the beginning of the lead time and late purchasing at the beginning of the selling season), there always exists a critical value: when the retailer's internal capital level is less than the critical value, it will borrow from the bank to purchase a larger quantity; otherwise, it will not borrow and will simply use up its internal capital for purchasing. The capital-constrained retailer can get an "information bonus" from late purchasing only when its internal capital level is relatively low, so it needs to trade off the "conditional information bonus" against the "inevitable cost loss" brought by late purchasing and then make an optimal purchase timing decision. A specific multi-parameter-based method is highlighted to solve the timing decision problem. Based on the above findings, this paper designs a simple intelligent purchasing decision support system for small retailers. The proposed system integrates the two main functions of purchasing and financing to help small retailers, especially those with limited working capital, make more scientific and intelligent purchasing decisions. (C) 2017 Elsevier Ltd. All rights reserved. Expert and intelligent systems are being developed to control many technological systems, including mobile robots. However, the PID (Proportional-Integral-Derivative) controller is a fast low-level control strategy widely used in many control engineering tasks. Classic control theory has contributed different tuning methods for obtaining the gains of PID controllers for specific operating conditions. Nevertheless, when the system is not fully known and the operating conditions are variable and not known in advance, classical techniques are not entirely suitable for PID tuning. To overcome these drawbacks, many adaptive approaches have arisen, mainly from the field of artificial intelligence. In this work, we propose an incremental Q-learning strategy for adaptive PID control.
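The core of a Q-learning approach to gain tuning can be sketched with a tabular toy problem. The states (coarse error bands), actions (raise/keep/lower a gain), and reward below are all hypothetical illustrations of the general technique, not the incremental strategy or the temporal memory of the paper:

```python
import random

def train_q(episodes=2000, alpha=0.3, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a toy gain-tuning task: states are coarse error
    bands, actions adjust a gain, and the reward favours the corrective action."""
    rng = random.Random(seed)
    states, actions = range(3), ("lower", "keep", "raise")  # 0: low, 1: ok, 2: high error
    ideal = {0: "raise", 1: "keep", 2: "lower"}             # hand-made toy dynamics
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(list(states))
        if rng.random() < eps:                              # epsilon-greedy exploration
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        r = 1.0 if a == ideal[s] else -1.0                  # deterministic toy reward
        s2 = 1                                              # corrective moves reach "ok"
        # Standard Q-learning temporal-difference update.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    return Q

Q = train_q()
policy = {s: max(("lower", "keep", "raise"), key=lambda a: Q[(s, a)]) for s in range(3)}
```

After training, the greedy policy selects the corrective action in every error band; on a robot, the reward would instead come from the measured control error, and the state space would be refined incrementally.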
In order to improve learning efficiency, we introduce a temporal memory into the learning process. While the memory remains invariant, a non-uniform specialization process is carried out, generating new limited learning subspaces. An implementation on a real mobile robot demonstrates the applicability of the proposed approach for the real-time simultaneous tuning of multiple adaptive PID controllers for a real system operating under variable conditions in a real environment. (C) 2017 Elsevier Ltd. All rights reserved. Advanced manufacturing systems are becoming increasingly complex, subject to constant change driven by fluctuating market demands, new technology insertion, and random disruption events. While information about production processes has become increasingly transparent, detailed, and real-time, its utilization for real-time manufacturing analysis and decision-making has lagged behind, largely due to the limitations of traditional methodologies for production system analysis and a lack of real-time manufacturing process modeling approaches and real-time performance identification methods. In this paper, a novel data-driven stochastic manufacturing system model is proposed to describe production dynamics, and a systematic method is developed to identify the causes of permanent production loss in both deterministic and stochastic scenarios. The proposed methods integrate available sensor data with knowledge of the production system's physical properties. Such methods can be transferred to a computer for system self-diagnosis/prognosis, to provide users with a deeper understanding of the underlying relationships between system status and performance, and to facilitate real-time production control and decision making. This effort is a step towards smart manufacturing with real-time system performance identification, in pursuit of improved system responsiveness and efficiency. (C) 2017 Elsevier Ltd.
All rights reserved. In this paper, an efficient approach for segmenting individual characters from scanned documents typed on old typewriters is proposed. The approach is primarily intended for processing machine-typed documents, but can be used for machine-printed documents as well. The proposed character segmentation approach uses a modified projection-profile technique, based on a sliding window, to obtain information about the document image structure. This is followed by histogram processing to determine the spaces between lines, words, and characters in the document image. The decision-making logic used in the character segmentation process is described and represents an integral aspect of the proposed technique. Besides the character segmentation approach, an ultra-fast architecture for geometrical image transformations, used for image rotation in the skew-correction process, is presented, and its fast implementation using pointer arithmetic and a highly optimized low-level machine routine is provided. The proposed character segmentation approach is semi-automatic and uses threshold values to control the segmentation process. Results for segmentation accuracy show that the proposed approach outperforms state-of-the-art approaches in most cases. In terms of time complexity, the new technique is also faster than state-of-the-art approaches and can process even very large document images in less than one second, which makes it suitable for real-time tasks. Finally, the performance of the proposed approach is demonstrated visually using original documents authored by Nikola Tesla. (C) 2017 Elsevier Ltd. All rights reserved. This work focuses on exploiting writer-dependent parameters for online signature verification.
Writer-dependent parameters, namely features, decision threshold, and feature dimension, are exploited for effective verification. For each writer, a subset of the original set of features is selected using different filter-based feature selection criteria. This is in contrast to writer-independent approaches, which work on a common set of features for all writers. Once features for each writer are selected, they are represented in the form of an interval-valued symbolic feature vector. The number of features and the decision threshold to be used for each writer during verification are decided based on the equal error rate (EER) estimated using only the signatures considered for training the system. To demonstrate the effectiveness of the proposed approach, extensive experiments are conducted on both the MCYT (DB1) and MCYT (DB2) benchmark online signature datasets, consisting of signatures of 100 and 330 individuals respectively, using the 100 available global parametric features. (C) 2017 Elsevier Ltd. All rights reserved. We introduce a formal model of explanatory dialogue called EDS. We extend this model by including argumentation capacities to facilitate knowledge acquisition in inconsistent knowledge bases. To prove the relevance of this model, we provide the DALEK (DiALectical Explanation in Knowledge-bases) framework, which implements it. We show the usefulness of the framework on a real-world application in the domain of Durum Wheat sustainability improvement within the ANR (French National Agency) funded Dur-Dur project. A preliminary pilot evaluation of the framework with agronomy experts gives a promising indication of the impact of explanation dialogues on the improvement of the knowledge content. (C) 2017 Elsevier Ltd. All rights reserved. Recently, researchers have observed that a major problem in mining event logs is discovering a simple, sound, and complete process model.
Since the mining techniques can only reproduce the behaviour recorded in the log, the fitness of the reproduced model is a function of event log completeness. In this paper, a Fuzzy-Genetic Mining model based on Bayesian Scoring Functions (FGM-BSF), which we call a probabilistic approach, was developed to tackle problems emanating from incomplete event logs. The main motivation for using genetic mining for process discovery is to benefit from the global search performed by the algorithm. The incompleteness in processes involves uncertainty and is tackled using the probabilistic nature of the scoring functions in a Bayesian network based on fuzzy logic value prediction. The global search performed by the genetic approach is well suited to dealing with a population that has both good and bad individuals. Hence, the proposed approach helps build a robust fitness function for the genetic algorithm through high-lift traces representing only good individuals that are not detected by a mining model without an intelligent system. The implementation of our approach was carried out on the Java platform with MySQL for event log parsing and preprocessing, while the actual discovery was done in ProM. The results showed that the proposed approach achieved a fitness of 0.98 when compared with existing schemes. (C) 2017 Elsevier Ltd. All rights reserved. The problem of ranking Decision Making Units (DMUs) in Data Envelopment Analysis (DEA) has been widely studied in the literature. Some of the proposed approaches use cooperative game theory as a tool to perform the ranking. In this paper, we use the Shapley value of two different cooperative games in which the players are the efficient DMUs and the characteristic function represents the increase in the discriminant power of DEA contributed by each efficient DMU. The idea is that if certain efficient DMUs are not included in the modified reference sample, then the efficiency scores of some inefficient DMUs will be higher.
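The Shapley-value ranking idea can be made concrete on a toy game. Here the three "efficient DMUs" and the characteristic function values are invented for illustration; in the paper the characteristic function would come from re-running the DEA efficiency computations with coalitions of efficient DMUs removed.

```python
import math
from itertools import permutations

def shapley(players, v):
    """Exact Shapley value: average each player's marginal contribution
    over all orderings (tractable here because the toy game is tiny)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = math.factorial(len(players))
    return {p: x / n_orders for p, x in phi.items()}

# Hypothetical characteristic function: v(S) is the total increase in the
# inefficient DMUs' efficiency scores when the efficient DMUs in S are
# removed from the reference sample (numbers invented for illustration).
PAYOFF = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.10, frozenset({"B"}): 0.05, frozenset({"C"}): 0.02,
    frozenset({"A", "B"}): 0.18, frozenset({"A", "C"}): 0.13,
    frozenset({"B", "C"}): 0.08, frozenset({"A", "B", "C"}): 0.22,
}

phi = shapley(["A", "B", "C"], PAYOFF.__getitem__)
# Efficiency axiom: the values sum to v(grand coalition).
assert abs(sum(phi.values()) - 0.22) < 1e-12
```

The resulting values rank DMU "A" highest, matching the intuition that the DMU whose removal raises the inefficient units' scores the most contributes the most discriminant power.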
The characteristic function represents, therefore, the change in the efficiency scores of the inefficient DMUs that occurs when a given coalition of efficient units is dropped from the sample. Alternatively, the characteristic function of the cooperative game can be defined as the change in the efficiency scores of the inefficient DMUs that occurs when a given coalition of efficient DMUs are the only efficient DMUs included in the sample. Since the two proposed cooperative games are dual games, their corresponding Shapley values coincide and thus lead to the same ranking. The more an efficient DMU impacts the shape of the efficient frontier, the higher the increase in the efficiency scores of the inefficient DMUs its removal brings about and, hence, the higher its contribution to the overall discriminant power of the method. The proposed approach is illustrated on a number of datasets from the literature and compared with existing methods. (C) 2017 Published by Elsevier Ltd. Parkinson's disease (PD) is the world's second most common progressive neurodegenerative disease. This disease is characterized by a combination of various non-motor symptoms (e.g., depression, olfactory disturbance, and sleep disturbance) and motor symptoms (e.g., bradykinesia, tremor, rigidity); therefore, the diagnosis and treatment of PD are usually complex. Some machine learning techniques automate PD diagnosis and predict clinical scores. These techniques are promising for assisting in assessing the stage of pathology and in predicting PD progression. However, existing PD research mainly focuses on single-function models (i.e., only classification or only prediction) using one modality, which limits performance. In this work, we propose a novel feature selection framework based on multi-modal neuroimaging data for joint PD detection and clinical score prediction.
Specifically, a unique objective function is designed to capture discriminative features, which are used to train a support vector regression (SVR) model for predicting clinical scores (e.g., sleep scores and olfactory scores) and a support vector classification (SVC) model for class label identification. We evaluate our method on a dataset of 208 subjects, comprising 56 normal controls (NC), 123 PD patients, and 29 subjects with scans without evidence of dopamine deficit (SWEDD), via a 10-fold cross-validation strategy. Our experimental results demonstrate that multi-modal data effectively improve performance in disease status identification and clinical score prediction compared to a single modality. Comparative analysis reveals that the proposed method outperforms state-of-the-art methods. (C) 2017 Elsevier Ltd. All rights reserved. We present a novel learning system for human demographic estimation in which the ethnicity, gender, and age attributes are estimated from facial images. The proposed approach consists of three main stages: 1) face alignment and preprocessing; 2) constructing a Pyramid Multi-Level face representation from which local features are extracted from the blocks of the whole pyramid; and 3) feeding the obtained features to a hierarchical estimator having three layers. Because ethnicity is by far the easiest attribute to estimate, the adopted hierarchy is as follows. The first layer predicts the ethnicity of the input face. Based on that prediction, the second layer estimates the gender using the corresponding gender classifier. Based on the predicted ethnicity and gender, the age is finally estimated using the corresponding regressor. Experiments are conducted on five public databases (MORPH II, PAL, IoG, LFW and FERET) and two additional challenge databases (Apparent Age; Smile and Gender) of the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop. These experiments show stable and good results.
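The three-layer hierarchy described above (ethnicity first, then gender conditioned on ethnicity, then age conditioned on both) amounts to a conditional dispatch over trained predictors. A minimal sketch, with stub objects standing in for the trained classifiers and regressors:

```python
# Sketch of the hierarchical estimator: each layer's prediction selects
# which model the next layer uses. The Stub class and all outputs below
# are placeholders for trained models, invented for illustration.

class Stub:
    def __init__(self, output):
        self.output = output
    def predict(self, features):
        return self.output

ethnicity_clf = Stub("asian")                        # layer 1
gender_clfs = {"asian": Stub("female")}              # layer 2: one per ethnicity
age_regressors = {("asian", "female"): Stub(27.0)}   # layer 3: one per (ethnicity, gender)

def estimate(features):
    eth = ethnicity_clf.predict(features)
    gen = gender_clfs[eth].predict(features)
    age = age_regressors[(eth, gen)].predict(features)
    return eth, gen, age

assert estimate([0.1, 0.2]) == ("asian", "female", 27.0)
```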
We present many comparisons against state-of-the-art methods. We also provide a study of cross-database evaluation. We quantitatively measure the performance drop in age estimation and in gender classification when the ethnicity attribute is misclassified. (C) 2017 Elsevier Ltd. All rights reserved. Increasingly, digital communication is routed among wireless, mobile computers over ad-hoc, unsecured communication channels. In this paper, we design two stochastic search algorithms (a greedy heuristic and an evolutionary algorithm) that automatically search for strong insider attack methods against a given ad-hoc, delay-tolerant communication protocol, and thus expose its weaknesses. To assess their performance, we apply the two algorithms to two simulated, large-scale mobile scenarios (of different route morphology) with 200 nodes having free range of movement. We investigate a choice of two standard attack strategies (dropping messages and flooding the network) and four delay-tolerant routing protocols: First Contact, Epidemic, Spray and Wait, and MaxProp. We find dramatic drops in performance: replicative protocols (Epidemic, Spray and Wait, MaxProp), formerly deemed resilient, are compromised to different degrees (delivery rates between 24% and 87%), while a forwarding protocol (First Contact) is shown to drop delivery rates to under 5% - in all cases by well-crafted attack strategies and with an attacker group of size less than 10% of the total network size. Overall, we show that the two proposed methods combined constitute an effective means to discover (at design time) and raise awareness about the weaknesses and strengths of existing ad-hoc, delay-tolerant communication protocols against potential malicious cyber-attacks. (C) 2017 Elsevier Ltd. All rights reserved. Multi-class sentiment classification has extensive applications, yet studies on this issue are still relatively scarce.
In this paper, a framework for multi-class sentiment classification is proposed, which includes two parts: 1) selecting important text features using a feature selection algorithm, and 2) training a multi-class sentiment classifier using a machine learning algorithm. Experiments are then conducted to compare the performance of four popular feature selection algorithms (document frequency, CHI statistics, information gain, and gain ratio) and five popular machine learning algorithms (decision tree, naive Bayes, support vector machine, radial basis function neural network, and K-nearest neighbor) in multi-class sentiment classification. The experiments are conducted on three public datasets comprising twelve data subsets, and 10-fold cross-validation is used to obtain the classification accuracy for each combination of feature selection algorithm, machine learning algorithm, feature set size, and data subset. Based on the obtained 3600 classification accuracies (4 feature selection algorithms x 5 machine learning algorithms x 15 feature set sizes x 12 data subsets), the average classification accuracy of each algorithm is calculated, and the Wilcoxon test is used to verify the existence of significant differences between algorithms in multi-class sentiment classification. The results show that, in terms of classification accuracy, gain ratio performs best among the four feature selection algorithms and support vector machine performs best among the five machine learning algorithms. Similar comparisons are also conducted in terms of execution time. The obtained results should be valuable for further improving existing multi-class sentiment classifiers and developing new ones. (C) 2017 Elsevier Ltd. All rights reserved.
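Of the feature selection algorithms compared above, gain ratio performed best; it is a feature's information gain normalized by the feature's own entropy (its split information). A self-contained sketch for discrete features, on an invented toy dataset:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a sequence of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def gain_ratio(feature_values, labels):
    """Information gain of a discrete feature divided by its split
    information (the entropy of the feature itself)."""
    n = len(labels)
    groups = {}
    for v, y in zip(feature_values, labels):
        groups.setdefault(v, []).append(y)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - conditional
    split_info = entropy(feature_values)
    return gain / split_info if split_info > 0 else 0.0

# Toy dataset: feature f1 separates the classes perfectly, f2 is noise.
labels = ["pos", "pos", "neg", "neg"]
f1 = [1, 1, 0, 0]
f2 = [1, 0, 1, 0]
assert gain_ratio(f1, labels) > gain_ratio(f2, labels)
```

Ranking all features by this score and keeping the top-k reproduces the selection step of the framework; the normalization by split information is what distinguishes gain ratio from plain information gain.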
This study investigates stock market index prediction, an interesting and important research topic in investment and its applications, since effective trading strategies can yield higher profits and returns at a lower risk. To realize accurate prediction, various methods have been tried, among which machine learning methods have drawn attention and been developed. In this paper, we propose a hybrid framework of a feature-weighted support vector machine (SVM) and feature-weighted K-nearest neighbor to effectively predict stock market indices. We first establish a detailed theory of the feature-weighted SVM for data classification, assigning different weights to different features according to their importance for classification. Then, to obtain the weights, we estimate the importance of each feature by computing its information gain. Lastly, we use feature-weighted K-nearest neighbor to predict future stock market indices by computing the k weighted nearest neighbors from the historical dataset. Experimental results on two well-known Chinese stock market indices, the Shanghai and Shenzhen stock exchange indices, are presented to test the performance of the established model. The proposed model achieves better prediction capability for the Shanghai Stock Exchange Composite Index and the Shenzhen Stock Exchange Component Index in the short, medium, and long term. The proposed algorithm can also be adapted to the prediction of other stock market indices. (C) 2017 Elsevier Ltd. All rights reserved. This paper is in response to an invitation to give a presentation on the origins and subsequent development of the Jameson-Schmidt-Turkel scheme.
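Returning to the stock-index framework above, its feature-weighted K-nearest-neighbor step can be sketched as follows. In the paper the feature weights come from information gain; here they are simply given, and the toy data are invented:

```python
import math

def weighted_knn_predict(train_X, train_y, x, weights, k=3):
    """Predict a value as the average target of the k nearest neighbors
    under a feature-weighted Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum(w * (ai - bi) ** 2
                             for w, ai, bi in zip(weights, a, b)))
    nearest = sorted(range(len(train_X)), key=lambda i: dist(train_X[i], x))[:k]
    return sum(train_y[i] for i in nearest) / k

# Toy history: the first feature is informative, the second is noise and
# is therefore down-weighted (the paper derives weights from information gain).
X = [[1.0, 9.0], [1.1, 0.0], [5.0, 9.1], [5.2, 0.2]]
y = [10.0, 10.0, 50.0, 50.0]
pred = weighted_knn_predict(X, y, [1.05, 5.0], weights=[1.0, 0.01], k=2)
assert abs(pred - 10.0) < 1e-9
```

With uniform weights the noisy second feature would pull in the wrong neighbors; the weighting keeps the prediction driven by the informative feature.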
After a description of the historical background and initial development of the scheme, the paper discusses three main developments: first, the development of convergence acceleration methods, including residual averaging and multigrid; second, the extension to unstructured grids; and third, the reformulation of the Jameson-Schmidt-Turkel scheme as a total-variation-diminishing scheme and its relationship to symmetric total-variation-diminishing schemes. The paper concludes with a brief review of applications to unsteady and viscous flows. The present work reports on the flow physics of turbulent supersonic flow over a backward-facing step at Mach 2 using large-eddy simulation, in which the dynamic Smagorinsky model is used for subgrid-scale modeling, whereas proper orthogonal decomposition is invoked to identify the coherent structures present in the flow. The mean data obtained through the computations are in good agreement with the experimental measurements, whereas the isosurfaces of the Q-criterion at different time instants show the complex flow structures. The presence of a counter-rotating vortex pair in the shear layer, along with the complex shock-wave/boundary-layer interaction leading to separation of the boundary layer, is also evident from the contours of both Q and the modulus of vorticity. Further, the proper orthogonal decomposition analysis reveals the presence of coherent structures, where the first and second modes confirm the vortical structures near the step as well as along the shear layer in the downstream region, whereas the second, third, and fourth modes confirm the presence of vortices along the shear layer due to the Kelvin-Helmholtz instability. Moreover, proper orthogonal decomposition and frequency analyses are extended to different planes to extract the detailed flow features.
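The proper orthogonal decomposition used above to extract coherent structures is, computationally, an SVD of the mean-subtracted snapshot matrix: the left singular vectors are the spatial POD modes and the squared singular values rank their energy content. A minimal sketch on synthetic snapshots (the two planted "structures" and the noise level are invented):

```python
import numpy as np

# Minimal POD via the SVD: columns of the snapshot matrix are flow
# snapshots; the data below are synthetic, for illustration only.
rng = np.random.default_rng(0)
n_points, n_snaps = 200, 40
x = np.linspace(0.0, 2.0 * np.pi, n_points)
t = np.linspace(0.0, 1.0, n_snaps)

# Two coherent "structures" with distinct spatial shapes, plus noise.
snapshots = (np.outer(np.sin(x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(2 * x), np.sin(4 * np.pi * t))
             + 0.01 * rng.standard_normal((n_points, n_snaps)))

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = s**2 / np.sum(s**2)   # fractional energy captured by each mode
# The two planted structures dominate the decomposition.
assert energy[0] + energy[1] > 0.95
```

Columns of `U` play the role of the first, second, third, ... POD modes discussed in the abstract, and `Vt` carries their temporal coefficients.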
The boundary-layer transition on a generic hypersonic forebody at Mach 6 downstream of a sonic wall injection is investigated by means of implicit large-eddy simulation, Fourier analysis, and dynamic mode decomposition of numerical data. An academic Mach 4.2 flat-plate configuration with boundary-layer edge conditions matching those of the forebody is considered. Two empirical correlations are proposed to predict the penetration of the underexpanded tripping jet into the boundary layer (that is, the Mach disk height) as a function of the pressure ratio. Different transition mechanisms are observed downstream of the injection port, depending on the jet penetration. The dynamic mode decomposition reveals the spatial structures, temporal frequencies, and growth rates of three-dimensional instability modes. These modes, arising from the jet counter-rotating vortices, are either varicose or sinusoidal, with the latter being more efficient than the former for tripping the boundary layer at low injection pressure. Recent experiments in the Boeing/U.S. Air Force Office of Scientific Research Mach 6 quiet tunnel at Purdue University show similar transition patterns, but also some discrepancies. A more representative configuration including crossflow effects, together with more resolved simulations, is needed for a quantitative comparison. An unstructured flow solver has been equipped with transition prediction techniques based on different streamline-based approaches, applying the e^N method for the estimation of the points of transition onset. For some configurations, the integration paths for the N-factor integration can be approximated using lines parallel to the direction of the oncoming flow. If arbitrary three-dimensional geometries are to be computed, the local flow direction can be taken into account.
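The e^N method referred to above integrates disturbance amplification rates along a streamline and flags transition onset where the accumulated N-factor crosses a calibrated threshold (N around 9 is a commonly quoted value for low-disturbance environments). A sketch with an invented growth-rate distribution:

```python
import numpy as np

# Sketch of the e^N transition-onset criterion: the growth-rate curve
# sigma(x) below is synthetic; in practice it comes from linear
# stability analysis of the laminar boundary layer.
x = np.linspace(0.0, 1.0, 500)                 # arc length along the streamline
sigma = np.clip(40.0 * (x - 0.2), 0.0, None)   # amplification rate (made up)

# Cumulative trapezoidal integral: N(x) = integral of sigma along the path.
N = np.concatenate(([0.0], np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x))))

N_CRIT = 9.0                                   # calibrated critical N-factor
onset = x[np.argmax(N >= N_CRIT)] if N.max() >= N_CRIT else None
```

With this synthetic curve the N-factor reaches the threshold partway along the plate, so `onset` marks the predicted transition location; if the integral never reaches `N_CRIT`, the flow is predicted to stay laminar over the computed path.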
The calculation of the laminar boundary-layer data can be carried out either by applying a suitable laminar boundary-layer method or by direct determination from the field solution of the unstructured computational fluid dynamics code. The development and characteristics of the two streamline-based transition prediction techniques, their elements, and their properties are described. The focus is on the latest achievements in the development of the two approaches and on their application to various configurations, most of them of industrial relevance. In addition, the specific advantages and shortcomings of the different approaches are discussed. Finally, a number of challenges for the future development and extension of the transition prediction techniques, against the background of a multidisciplinary simulation environment and the application of computational fluid dynamics solvers to cruise and high-lift aircraft configurations, are specified. This paper presents a three-dimensional numerical study of the bubble generation process in a micro T-junction, performed with the commercial computational fluid dynamics solver ANSYS Fluent, version 15.0.7. Numerical results on the bubble generation frequency, bubble velocity, volume void fraction, bubble volume, and characteristic bubble lengths are compared with experimental data. Additionally, a new simple fitting for the bubble generation frequency, based upon previously reported experimental works, is proposed. A comprehensive review of aerofoil shape parameterization methods that can be used for aerodynamic shape optimization is presented. Seven parameterization methods are considered for a range of design variables: class-shape transformations; B-splines; Hicks-Henne bump functions; a radial basis function domain element approach; Bezier surfaces; a singular-value decomposition modal extraction method; and the parameterized sections method.
Because of the large range of variables involved, the most effective way to implement each method is first investigated. Their performances are then analyzed by considering the geometric shape recovery of over 2000 aerofoils using a range of design variables, testing the efficiency of design space coverage with respect to a given tolerance. It is shown that, for all the methods, between 20 and 25 design variables are needed to cover the full design space to within a geometric tolerance, with the singular-value decomposition method doing this most efficiently. A set of transonic aerofoil case studies is also presented, with the geometric error and the convergence of the resulting aerodynamic properties explored. These results show a strong relationship between geometric error and aerodynamic convergence and demonstrate that between 38 and 66 design variables may be needed to ensure aerodynamic convergence to within one drag and one lift count. The wake behind a NACA0012 wing at incidence with cut-in sinusoidal trailing edges is experimentally investigated. A wing model with interchangeable trailing edges is used to study their impact on the wake properties. Both vertical and spanwise hot-wire traverses are performed at different downstream positions to obtain the downstream evolution of statistical properties and to perform spectral analysis. Stereoscopic particle image velocimetry is used to study the flow structure in a spanwise/cross-stream plane. Spanwise inhomogeneity of the velocity deficit and of the wake width is observed and explained by the presence of a spanwise/cross-stream flow induced by the cut-in modifications. Spectral analysis shows a decrease of shedding intensity with a shorter trailing-edge wavelength, with a reduction of up to 57% when compared to a straight blunt wing. Blunt sinusoidal trailing edges exhibit a reduction of spanwise correlation compared to a blunt straight edge.
A sharp cut-in design is also studied, which exhibits a more broadband shedding spike at a lower frequency. A novel approach capable of characterizing different types of ice accretion was developed by leveraging the significant discrepancies in the attenuation characteristics of ultrasonic waves traveling in different ice types (i.e., rime or glaze). While the theory of acoustic attenuation in a pulse-echo configuration for a multilayer structure was formulated, a feasibility study was also performed to demonstrate the ultrasonic-attenuation-based technique on two different ice samples representative of typical rime and glaze ice accretion seen over airframe surfaces. Significant differences were found in the ultrasonic attenuation characteristics between the two compared ice samples over the frequency range of 5-15 MHz. The attenuation coefficients of the rime-like ice sample were found to be much greater than those of the glaze-like ice sample at any given ultrasonic frequency. While the values of the ultrasonic attenuation coefficients increase with ultrasonic frequency for both ice samples, the attenuation coefficients of the rime-like ice were found to have a much greater slope (i.e., dα/df) and a wider scattering range than those of the glaze-like ice. Such information could be used to develop ice-type-specific de-icing systems to reduce de-icing operational costs for aircraft operation in cold climates. A numerical investigation was carried out to assess the effectiveness of plasma-based flow control for a swept-wing configuration. A single dielectric barrier discharge plasma actuator was used to delay transition generated by an excrescence near the leading edge. Large-eddy simulations were performed for the configuration, which had a wing airfoil section representative of modern reconnaissance air vehicles and maintained an appreciable region of laminar flow at design conditions.
High-fidelity numerical solutions to the Navier-Stokes equations were generated for wing sweep angles of 30 and 45 deg and compared with the previously obtained unswept case. Control was generally more difficult to attain for nonzero sweep angles, and transition could not be delayed for the 45 deg case unless a novel segmented swept actuator configuration was employed. Features of the computed flowfields are elucidated, and the effects of plasma actuation are quantified. It was found, for a sweep angle of 30 deg, that the wing lift-to-drag ratio could be increased by 9% by applying plasma control, which is comparable to the unswept situation. With a sweep angle of 45 deg, although plasma control could delay transition, the lift-to-drag ratio was not favorably altered. A novel concept, the plasma Gurney flap, is proposed, consisting of two dielectric barrier discharge plasma actuators attached to the airfoil pressure surface near the trailing edge. The upstream and downstream plasma actuators can be actuated individually, and it is found that the most significant control effect takes place when only the downstream plasma actuator is turned on. This optimal configuration is applied to a NACA 0012 airfoil and to an unmanned aerial vehicle without and with flap deflection. The proposed plasma Gurney flap is found to have a lift-increment effect and control mechanism similar to those of the conventional Gurney flap. A large-scale counterclockwise recirculation region forms over the airfoil pressure surface near the trailing edge with plasma control, acting as a virtual aeroshaping effect. Consequently, both the suction over the upper surface and the pressure over the lower surface are increased. Thus, the plasma Gurney flap can increase the lift coefficient before the stall angle, reduce the drag coefficient, and increase the lift-to-drag ratio at small angles of attack.
In addition, the plasma Gurney flap offers advantages such as active control capability and small device drag, making it a very attractive lift-enhancement technique for aeronautical applications. The third part of a research program that investigates the possibility of using low-temperature ionized air (plasma) for adaptive optics in an airborne laser directed-energy system is presented. It involves a novel optical measurement method consisting of a dual-wavelength Michelson interferometer that includes a microactuated mirror to allow heterodyne operation. Heterodyne interferometry is an important element in extracting the individual effects of electrons and heavy particles on the optical properties of the plasma. The plasma is generated in a hollow glass cylinder that is sealed at both ends by optical glass. It uses an ac voltage source whose amplitude is modulated in order to produce sideband phase modulations for heterodyne analysis. A robust demodulation scheme is developed to extract the modulated interferometric phase shifts and reveal the temporal evolution of the electron and heavy-particle densities within the plasma. The contributions of the electron and heavy-particle densities to the plasma optical path difference, which is a measure of optical control, are calculated. Based on a standard aero-optic wavelength of 1 μm, for the conditions examined in the experiment with air, the combined contributions yield an optical path difference of -300 nm, which is two times larger than nominal optical path difference values in aero-optic applications. An oxidation model for carbon surfaces has been developed in which the gas-surface reaction mechanisms and corresponding rate parameters are based solely on observations from recent molecular beam experiments. In the experiments, a high-energy molecular beam containing atomic and molecular oxygen (93% atoms and 7% molecules) was directed at a high-temperature carbon surface.
The measurements revealed that carbon monoxide (CO) is the dominant reaction product and that its formation requires a high surface coverage of oxygen atoms. As the temperature of the carbon sample was increased during the experiment, the surface coverage decreased and the production of CO diminished. Most importantly, the measured time-of-flight distributions of surface reaction products indicated that CO and carbon dioxide (CO2) are predominantly formed through thermal reaction mechanisms as opposed to direct abstraction mechanisms. These observations have enabled the formulation of a finite-rate oxidation model that includes surface-coverage dependence, similar to existing finite-rate models used in computational fluid dynamics simulations. However, each reaction mechanism and all rate parameters of the new model are determined individually from the molecular beam data. The new model is compared to existing models using both zero-dimensional gas-surface simulations and computational fluid dynamics simulations of hypersonic flow over an ablating surface. The new model predicts overall mass loss rates similar to those of existing models; however, the individual species production rates are completely different. The most notable difference is that the new model (based on molecular beam data) predicts CO as the oxidation reaction product with virtually no CO2 production, whereas existing models predict the exact opposite trend. Turbulent boundary layers, which are observed on almost all aerospace vehicles and in natural phenomena, radiate acoustic waves. A closed-form mathematical model is proposed to predict the intensity and coherence of acoustic radiation from turbulent boundary layers. For this purpose, the Navier-Stokes equations are rearranged and solved using a cross-spectral acoustic analogy. The arguments of the model are the spatial two-point cross-correlations of the turbulent statistics and the mean flow of the turbulent boundary layer.
These arguments are modeled using relations whose coefficients are calibrated with numerical and measurement data drawn from a wide range of sources. Models for the turbulent statistics are proposed for the zero-pressure-gradient turbulent boundary layer over a wide range of ambient Mach numbers. Predictions of acoustic intensity, spatial coherence, and the model arguments are validated with numerical and measurement data. Predicted sound-pressure levels agree well with numerical results. The variation of the near-field, midfield, and far-field decay of acoustic intensity is investigated. The decay of spatial coherence is demonstrated to be a reflection of the turbulent statistics within the boundary layer. Finally, an analysis of the model equation shows that it is consistent with canonical theory. The tonal self-noise emission of a vehicle side mirror and the associated flowfield are investigated experimentally. The relevant surface on the model includes a region of geometry-induced laminar flow separation close to the trailing edge. In the separated shear layer, high-amplitude instability modes at the acoustic mode frequencies are identified. The distinctive pattern of tonal noise emission combined with the shear-layer instability modes is characteristic of a self-excited aeroacoustic feedback loop known from investigations of airfoil tonal self-noise emission. The resonance frequencies of the loop are derived from a simplified transfer model and adapted to the present conditions. The existence of the feedback mechanism on the side mirror is finally demonstrated by applying the resonance condition, with the calculated modes in very good agreement with the measured modes. Even though the feedback mechanism allows for the coexistence of several modes, a special aspect of the side mirror tonal self-noise emission is that the acoustic modes can alternate in time.
The selection of a particular mode is successfully triggered by excitation with an external acoustic disturbance at or near the corresponding mode frequency, which emphasizes the sensitivity of the mode selection toward external disturbances. The efficient global optimization approach has often been used to reduce the computational cost in the optimization of complex engineering systems. This algorithm can, however, remain expensive for large-scale problems because each simulation uses the full numerical model. A novel optimization approach for such problems is proposed in this paper, in which the numerical model solves partial differential equations involving the resolution of a large system of equations, such as by the finite element method. This method is based on the combination of the efficient global optimization approach and reduced-basis modeling. The novel idea is to use inexpensive, sufficiently accurate reduced-basis solutions to significantly reduce the number of full system resolutions. Two applications of the proposed surrogate-based optimization approach are presented: an application to the problem of stiffness maximization of laminated plates and an application to the problem of identification of orthotropic elastic constants from full-field displacement measurements based on a tensile test on a plate with a hole. Compared with the plain efficient global optimization algorithm, a significant reduction in computational cost was achieved using the proposed efficient reduced-basis global optimization. Substructuring methods have been widely used in structural dynamics to divide large, complicated finite element models into smaller substructures. For linear systems, many methods have been developed to reduce the subcomponents down to a low-order set of equations using a special set of component modes, and these are then assembled to approximate the dynamics of a large-scale model. 
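The efficient global optimization (EGO) strategy discussed in the reduced-basis optimization abstract above alternates between fitting a surrogate to the evaluated designs and sampling where expected improvement is largest. The following is a minimal illustrative sketch, not the authors' implementation: the RBF kernel, one-dimensional design space, grid search over the acquisition function, and all parameter values are assumptions.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf_kernel(A, B, length=0.3):
    # Squared-exponential covariance between two sets of 1-D points.
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Zero-mean Gaussian-process posterior mean/std at query points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, y_best):
    # EI for minimization: E[max(y_best - Y, 0)] with Y ~ N(mu, sigma^2).
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (y_best - mu) * Phi + sigma * phi

def ego_minimize(f, n_init=4, n_iter=10, seed=0):
    # Plain EGO loop on [0, 1]: fit GP, evaluate the EI maximizer, repeat.
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, n_init)
    y = np.array([f(xi) for xi in X])
    grid = np.linspace(0.0, 1.0, 401)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

In a reduced-basis variant, the call to `f` inside the loop would mostly be answered by cheap reduced-basis solves, with full system resolutions reserved for points the surrogate cannot certify; that division of labor is the cost saving the abstract describes.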
In this paper, a substructuring approach is developed for coupling geometrically nonlinear structures, where each subcomponent is drastically reduced to a low-order set of nonlinear equations using a truncated set of fixed-interface and characteristic constraint modes. The method used to extract the coefficients of the nonlinear reduced-order model is nonintrusive, in that it does not require any modification to the commercial finite element code but computes the reduced-order model from the results of several nonlinear static analyses. The nonlinear reduced-order models are then assembled to approximate the nonlinear differential equations of the global assembly. The method is demonstrated on the coupling of two geometrically nonlinear plates with simple supports at all edges. The plates are joined at a continuous interface through the rotational degrees of freedom, and the nonlinear normal modes of the assembled equations are computed to validate the models. The proposed substructuring approach reduces a 12,861-degree-of-freedom model down to only 23 degrees of freedom while still accurately reproducing the nonlinear normal modes. The current drive for increased efficiency in aeronautic structures such as aircraft, wind-turbine blades, and helicopter blades often leads to weight reduction. A consequence of this tendency can be increased flexibility, which in turn can lead to unfavorable aeroelastic phenomena involving large-amplitude oscillations and nonlinear effects such as geometric hardening and stall flutter. Vibration mitigation is one of the approaches currently under study for avoiding these phenomena. In the present work, passive vibration mitigation is applied to a nonlinear experimental aeroelastic system by means of a linear tuned vibration absorber. The aeroelastic apparatus is a pitch and flap wing that features a continuously hardening restoring torque in pitch and a linear restoring torque in flap. 
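The fixed-interface reduction underlying the substructuring abstract above (in its linear, Craig-Bampton form) can be sketched on a toy spring-mass chain. This is a generic sketch under assumed settings, not the authors' code: the chain model, unit masses and stiffnesses, mode count, and helper names are all illustrative.

```python
import numpy as np

def chain_matrices(n):
    # Stiffness/mass of a chain of n unit masses, unit springs, grounded ends.
    K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M = np.eye(n)
    return K, M

def craig_bampton(K, M, boundary, n_modes):
    # Fixed-interface (Craig-Bampton) reduction: keep n_modes interior
    # normal modes plus one static constraint mode per boundary DOF.
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    # Static constraint modes: interior response to unit boundary motion.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes (M_ii = I here, so a standard eigenproblem).
    _, Phi = np.linalg.eigh(Kii)
    Phi = Phi[:, :n_modes]
    # Transformation x = T q, with q = [modal coordinates; boundary DOFs].
    T = np.zeros((n, n_modes + len(b)))
    T[i, :n_modes] = Phi
    T[i, n_modes:] = Psi
    T[b, n_modes:] = np.eye(len(b))
    return T.T @ K @ T, T.T @ M @ T

def natural_frequencies(K, M):
    # Generalized eigenproblem K v = w^2 M v via Cholesky of M.
    L = np.linalg.cholesky(M)
    A = np.linalg.solve(L, np.linalg.solve(L, K).T)
    return np.sqrt(np.clip(np.linalg.eigvalsh(A), 0.0, None))
```

Reduced models built this way for each subcomponent are what get assembled at the shared boundary DOFs; the paper's contribution is extending this pattern to geometrically nonlinear reduced equations extracted nonintrusively from static analyses.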
Extensive analysis of the system with and without absorber at precritical and postcritical airspeeds showed an improvement in flutter speed of around 36%, a suppression of a jump due to stall flutter, and a reduction in limit-cycle oscillation amplitude. Mathematical modeling of the experimental system is used to demonstrate that optimal flutter delay is achieved when two of the system modes flutter at the same flight condition. Nevertheless, even this optimal absorber quickly loses effectiveness as it is detuned. The wind-tunnel measurements showed that the tested absorbers lost effectiveness much more slowly than the mathematical predictions indicated. A numerical model was developed for nonlinear vibro-acoustic analysis of a skin-core debonded sandwich plate immersed in an infinite fluid and subjected to time-varying loads. The structural model of the plate was formulated using a modified variational principle in conjunction with a multilevel partitioning procedure. The nonlinear contact constraints on the detached interfaces of the plate were imposed by using Nitsche's method. This led to a stable contact solution that does not depend on the penalty parameter as strongly as in the traditional penalty approach. A time-domain Burton-Miller boundary integral formulation was employed to couple with the structural model of the plate to determine the acoustic field, which avoids spurious instabilities in long-time simulations. Comparisons of the present results were made with solutions obtained from finite element analyses. The results show that both the structural and acoustic responses of a sandwich plate can be significantly affected by the presence of a debonding. For a debonded sandwich plate under harmonic excitation forces, the vibration and acoustic responses of the plate consist of not only the fundamental excitation frequencies, but also subharmonic and superharmonic components and combination harmonics of the excitation frequencies. 
The effect of the debonding size on the nonlinear vibration and acoustic-radiation responses of sandwich plates was investigated. The nonlinear transient behavior of the delaminated composite curved shell panel under different kinds of mechanical loading is investigated in this analysis. The delaminated shell panel model is developed mathematically using two higher-order midplane theories in conjunction with the Green-Lagrange type of geometrically nonlinear strains, including all the nonlinear higher-order terms. Further, the desired nonlinear responses are computed numerically with the help of a unique computer code developed in the MATLAB environment. The nonlinear numerical responses are computed using Newmark's time integration scheme together with the direct iterative method in conjunction with finite-element steps. Further, the convergence behavior of the present numerical results is checked. In addition, the validity of the present numerical responses is shown by comparing the results with those of the available published literature. Finally, the role of the size, location, and position of delamination as well as the effects of different design parameters (curvature ratio, aspect ratio, modular ratio, shell configuration, and constraint condition) on the nonlinear transient responses are computed through a wide variety of numerical examples and discussed in detail. Background: The Institute of Medicine (IOM) and the Quality and Safety Education for Nurses (QSEN) project have identified six nursing competencies and supported their integration into undergraduate and graduate nursing curricula nationwide. But integration of those competencies into clinical practice has been limited, and evidence for the progression of competency proficiency within clinical advancement programs is scant. 
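The delaminated-panel study above computes its transient responses with Newmark time integration combined with direct (fixed-point) iteration on the nonlinear terms. A minimal single-degree-of-freedom sketch of that scheme follows; a Duffing oscillator stands in for the shell model, and the function name and all parameter values are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def newmark_duffing(m, c, k, k3, force, x0, v0, h, n_steps,
                    beta=0.25, gamma=0.5, tol=1e-10, max_iter=50):
    # Newmark average-acceleration scheme for m*x'' + c*x' + k*x + k3*x^3 = f(t),
    # with direct (fixed-point) iteration on the cubic term at each step.
    x = np.zeros(n_steps + 1)
    v = np.zeros(n_steps + 1)
    a = np.zeros(n_steps + 1)
    x[0], v[0] = x0, v0
    a[0] = (force(0.0) - c * v0 - k * x0 - k3 * x0 ** 3) / m
    keff = m / (beta * h ** 2) + gamma * c / (beta * h) + k
    for n in range(n_steps):
        t1 = (n + 1) * h
        # Linear part of the effective load (standard Newmark history terms).
        rhs0 = (force(t1)
                + m * (x[n] / (beta * h ** 2) + v[n] / (beta * h)
                       + (0.5 / beta - 1.0) * a[n])
                + c * (gamma * x[n] / (beta * h)
                       + (gamma / beta - 1.0) * v[n]
                       + h * (0.5 * gamma / beta - 1.0) * a[n]))
        xn1 = x[n]  # initial iterate for the direct iteration
        for _ in range(max_iter):
            x_new = (rhs0 - k3 * xn1 ** 3) / keff  # lag the nonlinear force
            if abs(x_new - xn1) < tol:
                xn1 = x_new
                break
            xn1 = x_new
        x[n + 1] = xn1
        a[n + 1] = ((x[n + 1] - x[n]) / (beta * h ** 2)
                    - v[n] / (beta * h) - (0.5 / beta - 1.0) * a[n])
        v[n + 1] = v[n] + h * ((1.0 - gamma) * a[n] + gamma * a[n + 1])
    return x, v, a
```

For small time steps the effective stiffness `keff` dominates the lagged cubic force, so the fixed-point iteration contracts quickly; the finite element version in the abstract applies the same update with matrices in place of scalars.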
Using an evidence-based approach and building on the competencies identified by the IOM and QSEN, a team of experts at an academic health system developed eight competency domains and 186 related knowledge, skills, and attitudes (KSAs) for professional nursing practice. Purpose: The aim of our study was to validate the eight identified competencies and 186 related KSAs and determine their developmental progression within a clinical advancement program. Methods: Using the Delphi technique, nursing leadership validated the newly identified competency domains and KSAs as essential to practice. Clinical experts from 13 Magnet-designated hospitals with clinical advancement programs then participated in Delphi rounds aimed at reaching consensus on the developmental progression of the 186 KSAs through four levels of clinical advancement. Results: Two Delphi rounds resulted in consensus by the expert participants. All eight competency domains were determined to be essential at all four levels of clinical practice. At the novice level of practice, the experts identified a greater number of KSAs in the domains of safety and patient- and family-centered care. At more advanced practice levels, the experts identified a greater number of KSAs in the domains of professionalism, teamwork, technology and informatics, and continuous quality improvement. Conclusion: Incorporating the eight competency domains and the 186 KSAs into a framework for clinical advancement programs will likely result in more clearly defined role expectations; enhance accountability; and elevate and promote nursing practice, thereby improving clinical outcomes and quality of care. With their emphasis on quality and safety, the eight competency domains also offer a framework for enhancing position descriptions, performance evaluations, clinical recognition, initial and ongoing competency assessment programs, and orientation and residency programs. 
Venous thromboembolism (VTE) is a leading cause of death and disability worldwide. Each year, more than 10 million cases of VTE are diagnosed; studies suggest there are as many as 900,000 cases per year in the United States. The condition is estimated to cost the U.S. health care system between $7 billion and $10 billion annually. In February 2016, the American College of Chest Physicians released the 10th edition of the Antithrombotic Therapy for VTE Disease: CHEST Guideline and Expert Panel Report. After providing an overview of VTE pathophysiology, risk factors, signs, symptoms, and key clinical assessments, this article details recommendations from the new guideline, which incorporates the most up-to-date treatment options for patients with VTE. The authors highlight key changes from the 2012 guideline, particularly those related to nursing practice, patient education, care coordination, patient adherence, medication costs, follow-up appointments, and diagnostic testing. This article assesses the association between the Title I School Improvement Grant (SIG) program's personnel replacement policy and teacher employment patterns within an urban school district. Hannan and Freeman's population ecology model allowed the authors to consider schools within districts as individual organizations nested within a larger organization. The data are drawn from employment records of 2,470 teachers who worked in 19 high schools in a single school district from 2006 to 2011. The personnel replacement policy of the Title I SIG program appears to have reinforced, and in some cases intensified, existing patterns of teacher selection, retention, and migration. Academic self-efficacy reflects an adolescent's level of confidence or belief that she or he can successfully accomplish educational assignments and tasks; this self-belief is also argued to be a fundamental factor in educational progress and success. 
Little is known, however, about the academic self-efficacy that the children of immigrants have, which is particularly relevant today in the midst of the current social, political, and economic debate over the influence of immigration in U.S. public schools. Segmented assimilation theory guides this study's understanding of the children of immigrants' academic self-efficacy. Analyses, which draw on data from the Educational Longitudinal Study of 2002 and employ multilevel modeling, reveal important findings. Most notably, the association between academic self-efficacy and assimilation is moderated by gender, race, and ethnicity. This article also discusses the importance of understanding the schooling of the children of immigrants in the educational system. Illinois education policymakers have adopted the completion agenda that emphasizes increasing postsecondary credential attainment. Meeting completion agenda goals necessitates addressing the achievement gap. To aid in developing policy to support improved completion, this study analyzes a comprehensive statewide dataset of the 2003 Illinois high school graduating class attending 4-year institutions using Cox regression survival analysis. Study findings indicate that African American (hazard ratio 0.768) and Hispanic students (0.746) were significantly less likely to complete a baccalaureate degree within 7 years of graduating from high school when compared with their White peers. Furthermore, significance held regardless of income level. Several factors significantly related to improved likelihood of baccalaureate completion were identified, including high school composite American College Testing (ACT) score, dual credit and advanced placement (AP) course taking, type of curriculum, ACT English and mathematics scores, and completing the ACT core curriculum. Analysis was conducted by race and income to compare the differences in significance across these groups. 
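The baccalaureate-completion study above relies on Cox regression survival analysis. As a minimal sketch of what such a fit does, the following estimates a one-covariate Cox proportional-hazards coefficient by Newton-Raphson on the Breslow partial likelihood; the toy data and function name are illustrative assumptions, not the study's dataset or code.

```python
import math

def cox_newton(times, events, x, n_iter=25, tol=1e-10):
    # One-covariate Cox proportional-hazards fit (Breslow handling of ties)
    # via Newton-Raphson on the partial log-likelihood.
    beta = 0.0
    n = len(times)
    for _ in range(n_iter):
        grad, hess = 0.0, 0.0
        for i in range(n):
            if not events[i]:
                continue  # censored rows enter only through risk sets
            # Risk set: everyone still under observation at times[i].
            risk = [j for j in range(n) if times[j] >= times[i]]
            w = [math.exp(beta * x[j]) for j in risk]
            sw = sum(w)
            m1 = sum(wj * x[j] for wj, j in zip(w, risk)) / sw
            m2 = sum(wj * x[j] ** 2 for wj, j in zip(w, risk)) / sw
            grad += x[i] - m1        # score contribution of this event
            hess += m2 - m1 ** 2     # information: variance of x on risk set
        if hess <= 0.0:
            break
        step = grad / hess
        beta += step
        if abs(step) < tol:
            break
    return beta  # estimated log hazard ratio; exp(beta) is the hazard ratio
```

Note that Cox regression reports hazard ratios, `exp(beta)`: a value below 1 for a group indicator, as in the study's completion estimates, means a lower instantaneous rate of the event relative to the reference group.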
The sectarian structure of the Lebanese political system has contributed to periods of sectarian violence and wars over the past four decades. This article highlights the origin of sectarianism in Lebanon and discusses how public and religious schools in the country have reinforced sectarian divisions in the Lebanese society. This is a conceptual article showing that the existing poor educational policies and approaches have de-emphasized national identity and permitted the establishment of religiously segregated schools leading to the growth of sectarian divisions among the Lebanese communities. Better educational approaches are thus necessary for the creation of responsible and socially aware citizens, as well as a culture of tolerance within the country. The article proposes educational reforms, such as the greater implementation of citizenship education, the diversification of school communities, and the promotion of interaction among students from different religious backgrounds as an effective strategy that can build social cohesion and reduce future sectarian violence in Lebanon. As Lebanon is highly susceptible to regional and internal political crises, a long-term educational strategy must be developed to protect children from future hazards of sectarian hatred and violence. Effective interprofessional collaborative practice is critical to maximizing patient safety and providing quality patient care; incorporating these strategies into the curriculum is an important step toward implementation. This study assessed whether TeamSTEPPS training using simulation could improve student knowledge of TeamSTEPPS principles, self-efficacy toward interprofessional collaborative practice, and team performance. Students (N = 201) demonstrated significant improvement in all of the targeted measurements. 
This article presents an evidence-based approach to integrate concepts of civility, professionalism, and ethical practice into nursing curricula to prepare students to foster healthy work environments and ensure safe patient care. The author provides evidence to support this approach and includes suggestions for new student orientation, strategies for the first day of class, exemplars for incorporating active learning strategies to enhance student engagement, an emphasis on positive faculty role modeling, and suggestions for curricular integration. The effect of educational interventions on the transition experiences of new graduates of prelicensure programs is unclear. This study investigated the effect of curriculum revision on transition to practice of nursing graduates. The nursing curriculum can have a positive influence on professional and job satisfaction at 3 months postgraduation, but the practice environment becomes the dominant force after that. Graduates who demonstrated poorer transition to practice at 3 months were more likely to leave their first positions by 12 months. Lateral violence among nurses persists as a pervasive problem in health care, contributing to detrimental individual and organizational consequences. Nurse educators can prepare students to respond effectively to lateral violence before they graduate and enter the workplace, where it is likely to be encountered. Simulation provides an effective platform for delivering this type of student-centered education. This article presents step-by-step guidelines for educators to integrate lateral violence response training into simulations in prelicensure nursing education. Many nursing educators have considered the implementation of a concept-based curriculum, with active, conceptual teaching and learning strategies, which offers a way to respond to the overwhelming content saturation in many nursing curricula. 
However, barriers abound, including faculty concerns about loss of control, changing faculty role and identity, and fear of failure. This article clarifies these legitimate barriers and offers practical strategies for success in curriculum change. Adjunct faculty are being used more frequently to meet the instructional and practice experience needs of growing nursing program cohorts. While most adjunct faculty tend to have clinical expertise, many lack formal training in online instruction. This article describes how faculty used technology to develop and implement a faculty support site to provide ongoing orientation and encourage informal mentoring relationships for online adjunct faculty. Doctor of nursing practice (DNP) faculty advisers help students navigate academic challenges, professional development, and leadership opportunities while earning a DNP degree. Student needs during DNP education are unique from other programs and require careful advising to address common challenges. This article links student needs with advising competencies and presents strategies for faculty development and support. The purpose of this article was to describe an international partnership to establish and study simulation in India. A pilot study was performed to determine interrater reliability among faculty new to simulation when evaluating nursing student competency performance. Interrater reliability was below the ideal agreement level. Findings in this study underscore the need to obtain baseline interrater reliability data before integrating competency evaluation into a simulation program. Being knowledgeable about end-of-life care can help nurses overcome barriers to managing chronicity in terminally ill patients. The purpose of this causal-comparative research study was to examine the influence of a palliative care elective course on 74 senior nursing students' knowledge and attitudes toward providing end-of-life care. 
This study compared the differences between 2 groups of students, with 1 group receiving end-of-life care instruction based on the principles of the End-of-Life Nursing Education Consortium as an elective course. Despite program completion, not all graduates are successful on the National Council Licensure Examination for Registered Nurses (NCLEX-RN). Contemplative practices such as meditation and guided imagery were added to an NCLEX-RN preparatory course. The difference between self-efficacy scores at the beginning and end of the course was statistically significant. Students reported that the contemplative activities were beneficial, and they would use these activities again in the future. As companies continue to utilize co-production (customer participation in product or service creation) strategies with consumers, academic researchers have expanded their study of issues related to co-production. However, research has been scant on the issue of control in such situations. The underlying belief in increasing customer participation and involvement is that it increases customers' perceived control, thereby enhancing their experience and outcomes; this belief creates the necessity for further examination of control in co-production environments. This study examines consumers' affective responses to differing levels of three types of control (cognitive, behavioral, and decisional) in low and high co-production conditions. Using two experimental contexts and one survey study, the results show that increasing cognitive control increases affect when co-production is low. Behavioral control can negatively or positively influence affect depending on specific situational contexts and perceptions of customization in low co-production conditions. Lastly, decisional control is found to be an important positive contributor to affect regardless of co-production level. Theoretical and practical implications are discussed. 
The country-of-origin (COO) of products has been shown to affect consumer choice, especially in situations where the origin has a stereotypical association with particular products and depending on certain consumer traits (e.g., national identity, consumer ethnocentrism). However, little is known about how these phenomena are related. Two controlled experiments conducted in two different countries and product categories reveal that product ethnicity moderates the impact of national identity but not of consumer ethnocentrism. National identity is found to influence consumer preference only if the foreign product ethnicity is higher but not lower than that of comparable domestic products. Furthermore, while consumers with a low national identity are positively affected by a high product ethnicity of foreign products, this effect vanishes with increasing levels of national identity. This research has implications for academics and practitioners alike, as it examines important boundary conditions of country-of-origin effects that had so far gone unexamined. Age impacts the brands a consumer knows, i.e., the "awareness set," which critically determines brand consideration and choice. Brands are in between common nouns and proper names, but previous psychology research offers contradictory results on the impact of age on knowledge of common nouns versus proper names. Our empirical study on radio stations shows that the direct effect of age on awareness sets is marked by a turning point in consumers' early 60s, with two contrasted patterns. For long-established brands, age has a direct positive impact up to the turning point but no significant direct impact afterward. For recent brands, there is no direct impact of age before that point but a strongly negative direct impact afterward. Age also has indirect effects through several mediators. 
We examine whether the unethical actions of marketplace brands (e.g., the Volkswagen emissions scandal) hurt the ethical perceptions of competing brands (e.g., Ford, BMW). Across two studies, we find evidence for this unethical spillover effect and show that it can negatively affect consumers' liking and purchase intentions for a competing brand. The results show that the spillover effect (1) only occurs for similar competitors and (2) is moderated by construal level (CL). Specifically, the spillover effect is more likely to occur when consumers focus on the finer details of the unethical brand's transgression (i.e., low CL) but not when they focus on the bigger picture of the transgression (i.e., high CL). Thus, while it is intuitively appealing to assume that brands may benefit from a competitor's foible, this research indicates that competitors may be hurt by a similar brand's wrongdoing. The image of a brand provides a key driver of brand equity. To build and control a strong brand image, though, brand managers require a valid procedure to measure it. This article empirically compares the predictive validity of two measurement techniques to assess brand image: first, a brand-anchored discrete choice experiment (BDCE), which is based on a brand-anchored conjoint approach in which brands serve as the levels for any attribute; this approach was originally introduced as a rating-based approach by Louviere and Johnson (Journal of Retailing, 66, 359-382, 1990) and further extended to a BDCE by Eckert et al. (International Journal of Research in Marketing, 29, 256-264, 2012). Second, a direct attribute rating (DAR) approach, which is commonly used for commercial applications of brand image measurement. An empirical study using a representative sample of the German beer market shows that the BDCE yields significantly higher levels of predictive validity (i.e., higher correlations with the actual market shares of the brands under investigation) than the widely used DAR method. 
Prior research has investigated a number of drivers of consumers' perceived product attractiveness, such as a product's shape and color. The context in which a product is presented has so far been largely neglected in examinations of consumers' aesthetic appraisal of products. Drawing on social cognition theory, this research investigates how the attractiveness of the visual context (e.g., websites, advertisements) influences consumers' perceptions of product attractiveness and product quality for familiar versus unfamiliar products. Results of two experimental studies show that consumers perceive unfamiliar products as more attractive and, consequently, of higher quality when products are placed in an attractive context than when they are placed in an unattractive context. No differences in consumers' perceived product attractiveness and perceived product quality exist for familiar products. The findings extend our theoretical knowledge of product aesthetics and provide managers with insights into the effective communication of their offerings' attractiveness. Ambiguity aversion suggests that consumers prefer risky options over ambiguous ones. In this study, the authors propose that consumer-brand relationship types influence consumers' ambiguity aversion. Specifically, compared with consumers in exchange relationships with the focal brand, consumers in communal relationships are more likely to trust the focal brand and thereby be less averse to ambiguity. These proposals were tested in two experiments. In experiment 1, participants in communal relationships showed less ambiguity aversion than those in exchange relationships. In experiment 2, participants in communal relationships had higher perceived trust in the focal brand than the participants in exchange relationships, and they showed less dislike for tensile promotions. Experiment 2 also tested for and confirmed the mediating effect of perceived trust. 
This study concludes with a discussion of the theoretical contributions and practical implications of the results. This paper examines to what extent, if any, natural environmental factors affect consumer purchase decisions regarding "green" products. We collect and combine several unique datasets to study the impact of air pollution on consumers' choices of passenger vehicles in China. Exploiting cross-city variation, we find that air pollution levels negatively affect the sales of fuel-inefficient cars on average. This relationship, though, is U-shaped over the observed air pollution levels, in that fuel-inefficient car purchases rise with air pollution beyond some threshold. Furthermore, a city's income level is a significant factor in this non-monotonic relationship, in the sense that consumers in higher-income cities are less likely to exhibit this reversal. All these results are consistent with the literature's theoretical predictions of hope. The rich findings of our study yield important implications for both marketers and policy makers. Despite increased interest in nontraditional marketing activities such as sponsorship, the ability of brand marketers to quantify the return on investment from such approaches is a continued challenge. Given their proprietary nature, investigations of sponsorship costs are particularly sparse. Therefore, this study utilizes a dataset of more than 700 sponsorships undertaken by competitors in the financial services industry to investigate the influence of a variety of factors on costs. Results indicate that costs are not simply reflective of firm size, and costs of title and naming rights sponsorships are significantly higher. Evidence of agency conflicts is found in increased costs for sponsorships of events, organizations, and venues residing within the marketers' home market. 
Sponsorships of sport organizations are significantly more costly than those of arts, entertainment, and nonprofit organizations, presenting a challenge for marketers seeking to engage today's consumer via sport sponsorships in an increasingly competitive environment. Product-related cues, such as brand or price, can influence consumers' taste perception. Going beyond this observation, we examine the extent to which a stimulus-extrinsic factor, such as the format of the measurement tool on which consumers describe attributes of a taste sample, influences concurrent taste perception, and in turn, later taste recognition, overall product evaluation, and willingness to pay (WTP). The results of two experiments show that rating scale format (i) influences consumers' concurrent impression of a taste sample, (ii) systematically biases later identification of the sample in a taste recognition test, and (iii) affects overall product evaluation and WTP. However, scale format (iv) does not influence ratings and downstream judgments when consumers are highly knowledgeable in the product domain. These findings demonstrate that the experience of taste is fleeting and not well represented in memory, and that like other subjective experiences, taste needs to be reconstructed based on accessible cues. This study compares the effects of governance mechanisms on relationship continuity in the aftermath of exchange interruptions in interfirm relationships. We propose that cooperative relationships can be renewed by matching governance mechanisms (formalization or socialization) to specific types of exchange interruptions (opportunism and misunderstanding). Using data collected from two types of senior managers in 304 buyer firms in China (a total of 608 senior managers), we found that the effects of formalization and socialization on relationship continuity are contingent on exchange interruption type. 
Socialization is more effective than formalization in renewing relationships when the level of opportunism is high, while formalization works better than socialization when the level of misunderstanding is high. Based on our findings, we encourage firms to diagnose exchange interruption types and then choose a proper governance structure. We explore females' reactions to sexually themed advertising with regard to a key personality variable, sexual self-schema (SSS). In extant research, SSS has largely had a positive impact on females' reactions to sexual advertisements. We further explore this dynamic by considering the role of female SSS in attitudes and purchase intent for products with brand positions that differ with regard to fit with sexual themes. Informed by our study and extant literature, we also offer areas for further SSS-based advertising research, particularly in another unexplored area: the role of SSS in identification and resultant attitude formation in sexual, but less explicit, advertising. Nursing associate professors frequently are confronted with increasing responsibilities and fewer resources. These challenges commonly contribute to declines in job satisfaction and may result in departure from academe. This article addresses these challenges by providing answers to four common wicked questions experienced by nursing associate professors: (a) How do I decline a request from a supervisor to take on additional responsibilities while continuing to support the mission of the school and advance my own scholarly productivity? (b) How do I handle the workload of multiple doctoral students with a variety of content areas that are different from my own and maintain my own level of productivity? (c) How do I handle expectations for more service, and leadership for the school, university, and professional organizations, yet the teaching and research responsibilities have not changed or have increased? 
and (d) What are some additional tips for being a more productive nursing associate professor? The symptoms of an illness that requires chemotherapy and the corresponding effects of such treatment exacerbate the pain and discomfort that patients typically experience. Listening to music may help patients cope with chemotherapy symptoms, thereby contributing to their physical ease and well-being. Seventy patients who were receiving treatment at the outpatient chemotherapy unit were invited to participate in this work. During chemotherapy sessions and the week after the sessions, the patients listened to music with headphones. The reduction in the occurrence of chemotherapy symptoms such as pain, tiredness, nausea, depression, anxiety, drowsiness, lack of appetite, not feeling well, and shortness of breath in the intervention group was statistically significant after listening to music (p < .05). Improvements in total general comfort, as well as physical, psychospiritual, and sociocultural comfort, were also statistically significant (p < .05). These findings indicate that listening to music effectively reduces the severity of chemotherapy symptoms and enhances the comfort of patients receiving the treatment. Whenever individuals adapt their jobs to bring them closer to their personal preferences, they are performing job crafting. This study aims to investigate whether the way nurses perceive their leaders' behaviors is related to job crafting development. To do so, a quantitative analysis was conducted among a group of 325 Portuguese nurses. Results indicate that the perception of an empowering leader was strongly related to an increase in challenges in the work environment and to the development of stronger relations with direct managers and co-workers, which are two job crafting dimensions. The perception of a directive leader was positively related to the avoidance of performing hindering tasks.
Strengthening relations with direct managers and co-workers was found to be particularly related to leadership perception. Theoretical and practical implications are discussed. Cultural diversity in health care settings can threaten the well-being of patients, their families, and health care providers. This psychometric study evaluated the construct validity of the recently developed four-factor, 43-item Critical Cultural Competence Scale (CCCS) which was designed to overcome the conceptual limitations of previously developed scales. The study was conducted in Canada with a random sample of 170 registered nurses. Comparisons with the Cultural Competence Assessment instrument, Scale of Ethnocultural Empathy, and Cultural Intelligence Scale provided mixed evidence of convergent validity. Modest correlations were found between the total scale scores suggesting that the CCCS is measuring a more comprehensive and conceptually distinct construct. Stronger correlations were found between the more conceptually similar subscales. Evidence for discriminant validity was also mixed. Results support use of the CCCS to measure health care providers' perceptions of their critical cultural competence though ongoing evaluation is warranted. This research adds to the growing literature on what draws consumers to ethical brands. Findings from three studies demonstrate that guilt motivates consumers to connect with ethical brands, especially those consumers with high levels of moral identity importance (MII). Specifically, Study 1 finds that consumers report stronger self-brand connections (SBCs) with an ethical brand when they feel guilty (vs. control). Study 2 finds that guilt particularly motivates consumers with high MII to report stronger SBCs with an ethical (vs. unethical) brand. In turn, these strong connections lead to increased intentions to purchase the ethical brand. 
Finally, Study 3 finds evidence for the proposed motivation-based process explanation by showing that high MII consumers' propensity to connect with ethical brands when feeling guilty (vs. control) is attenuated when these consumers are first given the opportunity to donate to a charitable cause to alleviate their guilt. Overall, the findings suggest that ethical brands can foster strong connections with and elicit higher purchase intentions from consumers seeking ways to alleviate their guilt. (C) 2017 Wiley Periodicals, Inc. While there is substantial evidence regarding the role of generalized self-esteem and identity deficits as potential antecedents of materialism, the exact nature of the domains from which such self-esteem deficits (that breed materialism) emanate has remained unexplored. Moreover, scant research attention has been paid to intrinsically oriented contingent self-esteem and how it relates to materialism. The present study investigated contingent self-esteem in extrinsic domains as an antecedent of materialism. It was shown that extrinsic and intrinsic forms of contingent self-esteem relate differently to materialism, such that intrinsically contingent self-esteem is incompatible with materialistic attitudes. Study 1 (N = 231 Singaporean adults) furnished cross-sectional evidence that extrinsically oriented contingent self-esteem positively predicts materialism. Study 2 (N = 206 undergraduates from a public university in Singapore) found that intrinsically oriented contingent self-esteem is negatively related to materialism. Study 3 (N = 105 Singaporean undergraduates) showed that experimental induction of extrinsic and intrinsic contingent self-esteem leads to higher and lower materialism among participants, respectively.
The findings advance understanding of the self-esteem-materialism link by showing how the domain-specific view of self-esteem has the potential to promote or discourage materialism based on whether self-esteem is anchored to external or internal domains. Recommendations for intervention researchers and practitioners are proposed. (C) 2017 Wiley Periodicals, Inc. It is commonly known in the positive psychology literature that people who want to increase their happiness ought to engage in so-called happiness-enhancing activities. Building on this stream of research, work that emphasizes the duality of happiness (affect vs. meaning) is introduced in order to propose a new conceptualization of happiness activities. The new conceptualization distinguishes between self- and other-focused happiness activities, and argues for the importance of other-focused activities over self-focused ones. Results from a six-week-long study show that other-focused happiness activities consistently outperformed self-focused ones in terms of raising participants' levels of happiness. Although self-focused happiness activities also increased happiness over time relative to participants' baseline levels, other-focused happiness activities consistently outperformed these increases. (C) 2017 Wiley Periodicals, Inc. Since the existing measures to prevent ambush marketing are largely ineffective, sponsors can use countercommunications, a public response to an ambushing attempt that aims to strengthen their own brand, relative to the ambusher. This research examines consumer responses to three types of counterambush marketing ads: humorous complaining, naming and shaming, and consumer education. Three experimental studies using both real and fictitious brands as well as different event settings indicate that a humorous counterad (vs. naming and shaming and consumer education counterads) results in more favorable consumer evaluations of the countermessage.
The studies also show that perceptions of the advertising tactic's appropriateness mediate these effects and that a humorous counterad is only advantageous when consumers hold positive (vs. negative) attitudes toward the practice of ambush marketing. In addition, comparing the three types of counterads with a common sponsorship leveraging ad suggests that a humorous counterad and simply ignoring the ambusher produce equal perceptions of tactical appropriateness and similar positive indirect effects on consumer attitudes toward the ad. The studies thus provide implications for how sponsors can respond to ambushers. (C) 2017 Wiley Periodicals, Inc. With a sample of Australian at-risk gamblers, this research examines the impact of gender and individual differences in experiential avoidance (EA; cognitive and emotional suppression) on the processing of fear appeals. Study 1, through thematic analysis, explores fear appeal perceptions among at-risk gamblers. The results indicate that relevant threats, such as social and psychological threats, should be integrated into fear-inducing advertising stimuli. Study 2 uses multigroup comparisons in structural equation modeling (SEM) to test the robustness of the revised protection motivation model (RPMM) in predicting the effectiveness of fear appeals to induce help-seeking intentions in at-risk gamblers. This research examines the boundary conditions of the RPMM through the moderating roles of gender and EA. The results provide evidence that fear partially mediates the impact of perceived susceptibility (PS) on help-seeking intentions in low experiential avoiders, whereas high experiential avoiders resist fear elicitation. Furthermore, evoked fear does not lead to help-seeking intentions in male at-risk gamblers. In female at-risk gamblers, while fear prompts help-seeking intentions, PS (i.e., probability of harm) does not translate to behavioral intentions via fear.
For both genders and low and high experiential avoiders, cognitive appraisals of PS significantly and positively impact help-seeking intentions. This research demonstrates the unique roles of gender and EA on fear appeal effectiveness in at-risk gamblers. (C) 2017 Wiley Periodicals, Inc. This paper empirically analyzes how the use of vertical price restraints has impacted retail prices in the market for e-books. In 2010, five of the six largest publishers simultaneously adopted the agency model of book sales, allowing them to directly set retail prices. This led the Department of Justice to file suit against the publishers in 2012, the settlement of which prevents the publishers from interfering with retailers' ability to set e-book prices. Using a unique dataset of daily e-book prices for a large sample of books across major online retailers, we exploit cross-publisher variation in the timing of the return to the wholesale model to estimate its effect on retail prices. We find that e-book prices for titles that were previously sold using the agency model decreased by 18 percent at Amazon and 8 percent at Barnes & Noble. Our results are robust to different specifications, placebo tests, and synthetic control groups. Our findings illustrate a case where upstream firms prefer to set higher retail prices than retailers and help to clarify conflicting theoretical predictions on agency versus wholesale models. Price discrimination policies vary widely across companies. Some firms offer new customers the lowest price; others give preferential prices to their past customers. We contribute to the literature on price discrimination in behavior-based pricing by exploring how customers' social price comparisons, i.e., comparing one's price to that received by similar peers, impact the optimal structure of price discrimination. 
Social price comparisons have a negative (positive) impact on customers' transaction utility if the price charged to past customers is higher (lower) than a new customer's price. Using an analytical model with vertically differentiated firms, we show that a firm with relatively large market share will reward its past customers with relatively low prices when social price comparisons have a sufficiently large impact on utility. Furthermore, we find that social price comparisons lead to a relaxation of the price competition for new customers. Thus, both firms can earn higher profits when such comparisons are made than when they are absent. We also examine how other factors, such as horizontal competition and strategic customers, interact with social price comparison concerns to impact pricing strategies. Finally, we show how pricing behavior differs when price comparisons are based on historic reference prices rather than on peers' prices. Many retailers offer price-matching guarantees (PMGs) whereby they promise their customers that any lower price offered by competition for an identical product will be matched. Suppliers sometimes also offer PMGs to consumers in their direct channels. However, the extant literature on PMGs focuses on retailers and is silent on the role of upper stream chain members. We contribute to the literature by identifying the implications of PMGs in a dual distribution channel in which a supplier reaches consumers via a direct channel in addition to the retail channel. We show that the presence of PMGs in a dual channel hinges on supplier's strategic ability, or lack thereof, to adjust its wholesale price in relation to the guarantee. Specifically, a PMG fails to prevail at equilibrium when the supplier is capable of strategically adjusting its wholesale price - but may prevail at equilibrium otherwise. 
The main reason is that the supplier can manage the competition between the retail channel and the direct channel through its wholesale price decision, and offering a PMG limits this ability. On the other hand, offering a PMG can be a beneficial strategy for the supplier when the supplier cannot adjust its wholesale price; for instance in a retail dominant chain where the retailer dictates the transfer price. In a retail dominant chain, if the direct and retail channels are perceived to be similar in quality and service offerings, then both channel members benefit from offering a PMG because it softens the intensity of price competition. On the other hand, when the two channels are sufficiently differentiated in quality and service offerings, then retail managers should be cautious and avoid offering the guarantee if their channel is in a superior position in terms of perceived quality. This study investigates the economic impact of the financial regulations that aimed to control the housing market in Korea during the reign of late President Ro's Administration, which had diligently fought against the then speculative bubble in the Korean real-estate market. We test for the validity of the general prediction that the financial regulations in the form of the loan-to-value (LTV) and debt-to-income (DTI) restrictions would have adverse impacts on the value of the firms operating in the mortgage-lending industry. In this event study, we select two critical days as event dates and check whether the stock prices of the financial firms react negatively to the announcements of the regulations. Overall, the initial imposition of the DTI restrictions (i.e., the first event) adversely affects those banks that possess a relatively large number of mortgage loans in their asset portfolio. By contrast, banks that hold a small number of mortgage loans appear to benefit from the risk-reducing effect of the DTI regulation. 
Subsequently, the reinforcement of the LTV and DTI rules (i.e., the second event) has negative impacts on the banks with large mortgage loans. The degree of this adverse effect is greater in the second event than in the first event (i.e., the DTI restrictions). The reinforced regulations also unfavorably affect the savings banks with large mortgage loans but to a lesser degree compared with their counterparts in the banks. Meanwhile, the reinforcement of the financial regulations has negligible impacts on the banks and the savings banks with smaller mortgage loans. This paper assesses the classification performance of the Z-Score model in predicting bankruptcy and other types of firm distress, with the goal of examining the model's usefulness for all parties, especially banks that operate internationally and need to assess the failure risk of firms. We analyze the performance of the Z-Score model for firms from 31 European and three non-European countries using different modifications of the original model. This study is the first to offer such a comprehensive international analysis. Except for the United States and China, the firms in the sample are primarily private, and include non-financial companies across all industrial sectors. We use the original Z-Score model developed by Altman, Corporate Financial Distress: A Complete Guide to Predicting, Avoiding, and Dealing with Bankruptcy (1983) for private and public manufacturing and non-manufacturing firms. While there is some evidence that Z-Score models of bankruptcy prediction have been outperformed by competing market-based or hazard models, in other studies, Z-Score models perform very well. Without a comprehensive international comparison, however, the results of competing models are difficult to generalize. 
This study offers evidence that the general Z-Score model works reasonably well for most countries (the prediction accuracy is approximately 0.75) and classification accuracy can be improved further (above 0.90) by using country-specific estimation that incorporates additional variables. Small businesses are the backbone of the economy in many countries. In Europe, for example, small companies represent more than 90 per cent of all companies (e.g., Lukacs). Although these companies represent such an important portion of the economy, few studies have examined their voluntary disclosure decisions. Because small companies have certain unique characteristics compared with their larger counterparts, the general applicability of past voluntary disclosure studies to small companies is questionable. Drawing on agency and proprietary cost theory, this study investigates whether ownership, competition, and accountant factors influence the decision to disclose financially sensitive information on a voluntary basis. Our results (using an e-mail questionnaire to small private companies in Belgium, n=1,068) indicate that nearly 40 per cent of the responding companies are not aware of their disclosure behavior. For companies that are aware of their disclosure behavior, the logistic regression analysis demonstrates that factors relating to the separation of ownership and control, namely the type of ownership and number of shareholders, are among the most important determinants in the voluntary disclosure decision of small private companies. Companies with at least one legal entity as an owner are less likely to disclose, while companies with more shareholders are more likely to disclose. We also provide evidence that perceived competition and the default setting of the accounting software used have a significant influence on the voluntary disclosure behavior.
Banking regulators and market participants learn from price signals in the stock market (e.g., Flannery, Journal of Money, Credit and Banking 30: 273, 1998). Therefore, the system becomes more secure and developed as stock prices become more informative about banks' financial conditions. Using a sample that includes major banks from 35 countries, this study investigates how accounting regulations affect bank stock valuation and volatility. The evidence suggests that bank stocks have higher valuation and lower volatility in countries that strictly regulate the quality of external audits and financial statement transparency. This study presents a comprehensive picture of the effects of bank accounting regulations on the stock market. Can the U.S. hang on to its factories? Taylor Manufacturing is a case study in why the news is not all bad. Brad Cordova had deeply personal reasons to start a company to combat distracted driving and tap into the new market for data-driven insurance. After inventing and licensing "cushion technology" for decades, Tony and Terry Pearce launched a startup to make and sell mattresses. In just a year, Purple has surpassed $50 million in revenue, with plans to crack open a very crowded field. Ted Stanley made a fortune on knickknacks, then promised it to medical research on mental illness. His son is bird-dogging that commitment. And, yes, it's personal. Modern portfolio theory is the foundation of global money management, but a pair of mathematicians in Boston have revamped it with market beating success. WHILE WALL STREET WASN'T LOOKING, ACCOUNTANT BRUCE FLATT BECAME A BILLIONAIRE BY ASSEMBLING ONE OF THE WORLD'S LARGEST PORTFOLIOS OF OFFICE BUILDINGS, POWER PLANTS AND INFRASTRUCTURE PROJECTS-AND MAKING BROOKFIELD ASSET MANAGEMENT THE SAFEST GROWTH STOCK ON THE PLANET. 
Two well-funded startups are battling to build a smartphone-based alternative to Craigslist, the 22-year-old online dinosaur that's been a cash machine for founder and newly minted billionaire Craig Newmark. Purpose: We provide an overview of the health of neonates, infants, and children around the world. Issues in maximizing neonatal health are examined using the Sustainable Development Goals developed by the United Nations as a framework. Recommendations: Interventions that can help optimize neonatal, infant, and child health in the future are reviewed, including increasing preventative healthcare (immunizations, malaria prevention, exclusive breastfeeding for the first 6 months of life), enhancing point-of-care interventions (including umbilical cord care, antenatal corticosteroids if preterm birth is anticipated, and antibiotic therapy), enhancing nutritional interventions (to decrease diarrheal diseases and decrease wasting, stunting, and underweight), and building systems capacity. Clinical Implications: In an increasingly global world where wars, climate change, civil unrest, and economic uncertainty all influence health, it is important that nurses understand global health problems common for neonates, infants, and children and current recommendations to enhance their health. Background: Immunizations are one of the most important health interventions of the 20th century, yet people in many areas of the world do not receive adequate immunizations. Approximately 3 million people worldwide die every year from vaccine-preventable diseases; about half of these deaths are young children and infants. Global travel is more common; diseases that were once localized now can be found in communities around the world. Problem: Multiple barriers to immunizations have been identified. Healthcare access, cost, and perceptions of safety and trust in healthcare are factors that have depressed global immunization rates. 
Interventions: Several global organizations have focused on addressing these barriers as part of their efforts to increase immunization rates. The Bill and Melinda Gates Foundation, The World Health Organization, and the United Nations Children's Emergency Fund each have a part of their organization that is concentrated on immunizations. Clinical Implications: Maternal child nurses worldwide can assist in increasing immunization rates. Nurses can participate in outreach programs to ease the burden of patients and families in accessing immunizations. Nurses can work with local and global organizations to make immunizations more affordable. Nurses can improve trust and knowledge about immunizations in their local communities. Nurses are a powerful influence in the struggle to increase immunization rates, which is a vital aspect of global health promotion and disease prevention. Purpose: The purpose of this ethnographic study was to describe the meaning of childbirth for Tongan women. Study Design and Methods: In this qualitative descriptive study, 38 Tongan women, 18 from Tonga and 20 from the United States, who had given birth in the past year were invited to share their perceptions of childbirth. Themes were generated collaboratively by the research team. Findings: The overarching theme was honoring motherhood; other themes include using strength to facilitate an unmedicated vaginal birth, describing the spiritual dimensions of birth, adhering to cultural practices associated with childbearing, and the influence of the concept of respect on childbearing. Implications for Clinical Practice: Understanding the value Tongan women and their families place on motherhood can help nurses to give culturally sensitive nursing care. Tongan beliefs and cultural practices should be respected. Nurses should assess women's personal preferences for their care and advocate for them as needed. Sensitivity to stoicism is important, especially on pain control and patient education. 
Nurses should be aware of Tongan values regarding modesty and respect, and provide an appropriate care environment. A culturally competent nurse understands the importance of sociocultural influences on women's health beliefs and behaviors and generates appropriate interventions. Purpose: The purpose of our study was to explore how foreign-born Chinese women living in California engage in various traditional and American birth practices. Study Design and Methods: A descriptive qualitative study was conducted using a grounded theory approach. Chinese women from Mainland China, Hong Kong, and Taiwan who had childbirth experiences in the United States were purposively sampled. Semistructured interviews were conducted with 13 women, with follow-up interviews with 5 women. Interview data were analyzed using grounded theory according to the method of Strauss and Corbin. Results: There are many traditional practices for pregnancy and childbirth. Women investigated the traditions through various means, and built their own perspective on each tradition by integrating an evaluation of the Chinese perspective and an evaluation of the American perspective. Women considered several factors in the process of evaluating the Chinese and American perspectives to reach their own integrated perspective on each tradition. These factors included whether or not the tradition made sense to them, how the traditional practice affected their comfort, nature of available options, attitudes of female elders, previous experiences of their peers and themselves, and outcomes of temporary trials of traditional or nontraditional practices. Clinical Implications: Healthcare providers should respect women's diverse perspectives on traditional practices and encourage flexible arrangements. Including the elder generation in health education may be useful in helping women manage conflicts and to support their decisions. 
Background: Nitrous oxide has a long history of use and has been well documented in the literature as a safe, effective, and inexpensive option for pain management in labor in other countries, but it is underused in the United States. Local Problem: Pain relief options for laboring women in rural community hospitals with a small perinatal service are limited due to lack of availability of in-house anesthesia coverage. Method: This quality improvement project involved development and implementation of a nurse-driven, self-administered, demand-flow nitrous oxide program as an option for pain relief for laboring women in a rural community hospital. Intervention: Women's Services registered nurses developed the project using an interdisciplinary team approach based on an extensive literature review and consultation with experts across the country. The hospital is part of a large healthcare system; approval was sought and obtained by the system as part of the project. Cost analysis and patient satisfaction data were evaluated. Outcomes were monitored. Results: Approximately one half of the patients who have given birth at the hospital since initiation of the project have used nitrous oxide during labor. The majority of women who participated in a survey after birth found it helpful during mild-to-moderate labor pain. No adverse effects have been noted in either the mother or the baby following nitrous oxide use. Clinical Implications: Initiation and management of nitrous oxide by registered nurses is a safe and cost-effective option for labor pain. It may be especially beneficial in hospitals that do not have 24/7 in-house anesthesia coverage. Purpose: To describe and explore patterns of postpartum sleep, fatigue, and depressive symptoms in low-income urban women. Study Design and Methods: In this descriptive, exploratory, nonexperimental study, participants were recruited from an inpatient postpartum unit.
Subjective measures were completed by 132 participants across five time points. Objective sleep/wake patterns were measured by 72-hour wrist actigraphy at 4 and 8 weeks. Mean sample age was 25 years; participants were high school educated, with an average of 3.1 children. Over half the sample reported an annual income less than 50% of the federal poverty level. Results: Objectively, total nighttime sleep was 5.5 hours (week 4) and 5.4 hours (week 8). Subjectively, 85% met criteria for "poor sleep quality" at week 4, and nearly half were persistently and severely fatigued through 8 weeks postpartum. Clinical Implications: The majority (65%) of women in this study met the definition of "short sleep duration," defined as sleeping <= 6 hours per night. Adverse effects of this short sleep on physical and mental health as well as safety and functioning, especially within the context of poverty, may be profound. There is an urgent need for further research on sleep in low-income underrepresented women to identify interventions that can improve sleep and fatigue as well as discern the implications of sleep deprivation on the safety and physical and mental health of this population. Purpose: The purpose of this study was to evaluate breastfeeding practices of teen mothers in a pre- and postnatal education and support program. Study Design and Methods: We studied breastfeeding practices of primarily Hispanic and non-Hispanic White teen mothers who participated in the Teen Outreach Pregnancy Services (TOPS) program, which promoted breastfeeding through prenatal programming and postpartum support. Analyses identified the most common reasons participants had not breastfed and, for those who initiated breastfeeding, the most common reasons they stopped. Results: Participants (n = 314) reported on whether and for how long they breastfed. Nearly all participants reported initiating breastfeeding, but few breastfed to 6 months.
For the most part, reasons they reported stopping breastfeeding paralleled those previously reported for adult mothers across the first several months of motherhood. Clinical Implications: We found that teen mothers can initiate breastfeeding at high rates. Results highlight areas in which teen mothers' knowledge and skills can be supported to promote breastfeeding duration, including pain management and better recognizing infant cues. Our findings expand limited previous research investigating reasons that teen mothers who initiate breastfeeding stop before 6 months. We investigate whether unpleasant environmental conditions affect stock market participants' responses to information events. We draw from psychology research to develop a new prediction that weather-induced negative moods reduce market participants' activity levels. Exploiting geographic variation in equity analysts' locations, we find compelling evidence that analysts experiencing unpleasant weather are slower or less likely to respond to an earnings announcement relative to analysts responding to the same announcement but experiencing pleasant weather. Price association tests find evidence consistent with reduced activity due to weather-induced moods delaying equilibrium price adjustments following earnings announcements. We also use our analyst-based research design to re-examine an existing prediction that unpleasant weather induces investor pessimism, and find evidence of both analyst pessimism and reduced activity in the presence of unpleasant weather. Together, our study provides new evidence that both extends and reaffirms findings of a relation between unpleasant weather and market activities, and contributes to the broader psychology and economics literature on the impact of weather-induced mood on labor productivity.
It is conventionally perceived in the literature that weak analysts are likely to underweight their private information and strategically bias their announcements in the direction of the public beliefs to avoid scenarios where their private information turns out to be wrong, whereas strong analysts tend to adopt an opposite strategy of overweighting their private information and shifting their announcements away from the public beliefs in an attempt to stand out from the crowd. Analyzing a reporting game between two financial analysts, who are compensated based on their relative forecast accuracy, we demonstrate that it could be the other way around. An investigation of the equilibrium in our game suggests that, contrary to the common perception, analysts who benefit from information advantage may strategically choose to understate their exclusive private information and bias their announcements toward the public beliefs, while exhibiting the opposite behavior of overstating their private information when they estimate that their peers are likely to be equally informed. We investigate whether Public Company Accounting Oversight Board (PCAOB) inspections affect the quality of internal control audits. Our research design improves on prior studies by exploiting both cross-sectional and time-series variation in the content of PCAOB inspection reports, while also controlling for audit firm and year fixed effects, effectively achieving a difference-in-differences research design. We find that when PCAOB inspectors report higher rates of deficiencies in internal control audits, auditors respond by increasing the issuance of adverse internal control opinions. We also find that auditors issue more adverse internal control opinions to clients with concurrent misstatements, who thus genuinely warrant adverse opinions. 
We further find that higher inspection deficiency rates lead to higher audit fees, consistent with PCAOB inspections prompting auditors to undertake costly remediation efforts. Taken together, our results are consistent with PCAOB inspections improving the quality of internal control audits by prompting auditors to remediate deficiencies in their audits of internal controls. We investigate whether the levels of social capital in U.S. counties, as captured by strength of civic norms and density of social networks in the counties, are systematically related to tax avoidance activities of corporations with headquarters located in the counties. We find strong negative associations between social capital and corporate tax avoidance, as captured by effective tax rates and book-tax differences. These results are incremental to the effects of local religiosity and firm culture toward socially irresponsible activities. They are robust to using organ donation as an alternative social capital proxy and to fixed-effects regressions. They extend to aggressive tax avoidance practices. Additionally, we provide corroborating evidence using firms with headquarters relocation that changes the exposure to social capital. We conclude that social capital surrounding corporate headquarters provides environmental influences constraining corporate tax avoidance. Using 113 staggered changes in corporate income tax rates across U.S. states, we provide evidence on how taxes affect corporate risk-taking decisions. Higher taxes reduce expected profits more for risky projects than for safe ones, as the government shares in a firm's upside but not in its downside. Consistent with this prediction, we find that risk taking is sensitive to taxes, albeit asymmetrically: the average firm reduces risk in response to a tax increase (primarily by changing its operating cycle and reducing R&D risk) but does not respond to a tax cut. 
We trace the asymmetry back to constraints on risk taking imposed by creditors. Finally, tax loss-offset rules moderate firms' sensitivity to taxes by allowing firms to partly share downside risk with the government. I test whether the anticipation of earnings news stimulates acquisition of customer information and mitigates returns to the customer-supplier anomaly documented by Cohen and Frazzini (Economic Links and Predictable Returns. The Journal of Finance 63 (2008): 1977-2011). I find that attention to a firm's publicly disclosed customers increases shortly before the firm announces earnings, and that customer stock returns predict supplier stock returns shortly before, but not after, the supplier's earnings announcement. I further find some evidence that these predictable returns are increasing in the level of customer information acquisition. These results are unique to anticipated disclosure events and suggest that anticipation of supplier earnings announcements resolves investor limited attention to customer information and accelerates price discovery of customer news. Although there is substantial research on the effect of emotions on educational outcomes in the classroom, relatively little is known about how emotion affects learning in informal science contexts. We examined the role of emotion in the context of an informal science learning experience by utilizing a path model to investigate the relationships among emotional arousal, valence, attention, environmental values and learning outcomes. Sixty undergraduate and graduate students participated in one of two treatments consisting of watching an exciting or neutral nature documentary video, reading an associated narrative, and taking a post-test. Our findings suggested that higher emotional arousal, less pleasant feelings about the content, and stronger environmental values led to greater short-term learning outcomes. 
We discuss our findings in relation to current understandings of emotion and learning in informal science education settings. Teachers' ability to reflect critically on classroom situations in relation to their own actions constitutes an important prerequisite for improving teaching performance and professional behavior. This study investigated electroencephalogram activity in the alpha band during a reflection task before and after training in a sample of preservice teachers. Reflection was associated with stable brain activity patterns over two separate time points of assessment. Increased alpha power during reflection on pedagogically relevant scenarios was found at occipital sites, especially in participants with higher reflection competence. This is consistent with prior studies that showed increased alpha power during mental imagery, among other tasks. Training of reflection competence, administered in a subsample of participants, was accompanied by increased reflection performance, while no intervention-related effects were found at the neurophysiological level. The findings of this study provide initial evidence of increased alpha power at occipital sites as a neurophysiological indicator of reflection on pedagogically relevant educational scenarios. There has been increasing evidence in recent years of the need to adapt intervention programs to the specific needs of children with attention deficit hyperactivity disorder (ADHD). The main goal of this research work was to study the efficacy of an educational intervention program to improve attention and reflexivity in school children with ADHD in order to verify the improvements in symptoms associated with ADHD such as aggressivity, social isolation, anxiety, and attention deficit. The sample comprised 26 primary school children ranging from 7 to 10 years of age with ADHD. 
Symptoms of children with ADHD were evaluated by applying the Escalas Magallanes Screening Scale for Attention Deficits and Other Developmental Problems in Children (EMA-DDA) at two time points (pre and post). The results show a statistically significant reduction in symptoms on the aggressivity and social isolation scales measured with the EMA-DDA after applying the intervention program. These data support the potential value of an intervention program for working with ADHD children. An inquiry-based intervention has been found to have a positive effect on burnout and mental well-being parameters among teachers. The aim of the current study was to qualitatively evaluate the effect of the inquiry-based stress reduction (IBSR) meditation technique on the participants. Semi-structured interviews were conducted before and after the IBSR intervention and were analyzed using the interpretative phenomenological analysis method. Before the intervention, the teachers described emotional overload attributed to two main causes: (1) multiple stressful interactions with students, parents, colleagues, and the educational system, and (2) the ideological load of their profession, namely trying to fulfill high expectations of performance and to manifest educational values. Following the intervention, the teachers described a sense of centeredness and a greater ability to accept reality. They reported improvements in setting boundaries, thought flexibility, and self-awareness. These improvements assisted them in coping with the complex and dynamic nature of their profession. These positive effects suggest that IBSR is an effective technique in reducing teachers' burnout and promoting mental well-being. Cognitive skills are associated with academic performance, but little is known about how to improve these skills in the classroom. Here, we present the results of a pilot study in which teachers were trained to engage students in cognitive skill practice through playing games. 
Fifth-grade students at an experimental charter school were randomly assigned to receive cognitive skill instruction (CSI) or instruction in geography and typing. Students in the CSI group improved significantly more than the control group on a composite measure of cognitive skills. CSI was more effective for students with lower standardized test scores. Although there was no group effect on test scores, cognitive improvement correlated positively with test score improvement only in the CSI group. Beyond showing that cognitive skills can be improved in the classroom, this study provides lessons for the future of CSI, including changes to professional development and challenges for scalability. R.W. Home was appointed the first and, up to 2016, the only Professor of History and Philosophy of Science (HPS) at the University of Melbourne. A pioneering researcher in the history of Australian science, Rod believes in both the importance and universality of scientific knowledge, which has led him to focus on the international dimensions of Australian science, and on a widespread dissemination of its history. This background has shaped five major projects Rod has overseen or fostered: the Australasian Studies in History and Philosophy of Science (a monograph series), Historical Records of Australian Science (a journal), the Australian Science Archives Project (now a cultural informatics research centre), the Australian Encyclopedia of Science (a web resource), and the Correspondence of Ferdinand von Mueller Project (an archive, series of publications and a forthcoming web resource). In this review, we outline the development of these projects (all still active), and reflect on their success in collecting, producing and communicating the history of science in Australia. 
The proportion of funds received by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) from sources other than Treasury, referred to as external earnings, has been used by the Australian government as an indicator of CSIRO's engagement with industry and contribution to the economy. Two periods of decline in external earnings in the 1940s and the 1980s were followed by enquiries into the organisation's purpose and operation, amendments to CSIRO's enabling legislation and introduction of measures to improve industry engagement. After 1988 these measures included a 30% external earnings target. External earnings subsequently rose from 24% of total revenue in 1988/89 to average 36% over the period to 2014/15, peaking at 51% in 2011. Following a review in 2002, the target was removed due to its unintended consequences, which included encouraging competition with private industry, placing emphasis on earning capacity over public good, and acting as a disincentive to innovation and collaboration. In the early 1890s, J. H. Maiden considered that kino exudates might be a useful character to consider alongside structural features in grouping Eucalyptus species. He found that although some types of kino supported morphological analyses, most did not and they were not as useful as he had hoped. His commitment to the concept of variability caused him to oppose his former colleagues R. T. Baker and H. G. Smith in their use of essential oils as an over-riding characteristic in Eucalyptus taxonomy and higher classification. When their work was rebutted in the late 1920s by the discovery that differences in the oils included intra-specific polymorphisms, the concept of phyto-chemical classification went into abeyance, and Maiden's insightful, but cautious, pioneering work was also forgotten. 
Henry Parkes' intervention to placate the Sabbatarian movement and prevent British astronomer Richard Proctor from delivering an astronomical lecture on Sunday 5 September 1880 created a major controversy in the Australian colonies. Controversy had been central to much of Proctor's success, and in this case drew on a long-standing connection between astronomy and religion. An examination of the Proctor-Parkes incident shows how popular science works in culture by drawing on and sustaining the analogical connections between scientific ideas and broader cultural concerns. This paper explores Australia's responses to questions about 'the environment', particularly in the period from the 1960s-80s, showing how they were informed in varying amounts by international science, by the emerging aesthetics of the idea of the environment and by social movements, including one later known as environmentalism. The rise of 'integrated science', particularly Big Science and international collaborations in science, modelling and the information technology revolution all shaped the interdisciplinary expertise that frames the environment still. It is, however, very rare to find an individual like Max Day, whose biography enables a re-examination of the way thinking about the environment shaped strategic national thinking, public science and popular concerns including national parks management across the second half of the twentieth century. The argument that the second law of thermodynamics contradicts the theory of evolution has recently been revived by anti-evolutionists. In its basic form, the argument asserts that whereas evolution implies that there has been an increase in biological complexity over time, the second law, a fundamental principle of physics, shows this to be impossible. Scientists have responded primarily by noting that the second law does not rule out increases in complexity in open systems, and since the Earth receives energy from the Sun, it is an open system. 
This reply is correct as far as it goes, and it adequately rebuts the crudest versions of the second law argument. However, it is insufficient against more sophisticated versions, and it leaves many relevant aspects of thermodynamics unexplained. We shall consider the history of the argument, explain the nuances various anti-evolution writers have brought to it, and offer thorough explanations for why the argument is fallacious. We shall emphasize in particular that the second law is best viewed as a mathematical statement. Since anti-evolutionists never make use of the mathematical structure of thermodynamics, invocations of the second law never contribute anything substantive to their discourse. Seventy-two Internet documents promoting creationism, intelligent design (I.D.), or evolution were selected for analysis. The primary goal of each of the 72 documents was to present arguments for creationism, I.D., or evolution. We first identified all arguments in these documents. Each argument was then coded in terms of both argument type (appeal to authority, appeal to empirical evidence, appeal to reason, etc.) and argument topic (age of earth, mechanism of descent with modification, etc.). We then provided a quantitative summary of each argument type and topic for each of the three positions. Three clear patterns were revealed by the data. First, websites promoting evolution were characterized by a narrow focus on appeals to empirical evidence, whereas websites promoting creationism and I.D. were quite heterogeneous with regard to argument type. Second, websites promoting evolution relied primarily on a small number of empirical examples (e.g., fossils, biogeography, homology, etc.), while websites promoting creationism and I.D. used a far greater range of arguments. Finally, websites promoting evolution were narrowly focused on the topic of descent with modification. 
In contrast, websites promoting creationism tackled a broad range of topics, while websites promoting I.D. were narrowly focused on the issue of the existence of God. The current study provides a quantitative summary of a systematic content analysis of argument type and topic across a large number of frequently accessed websites dealing with origins. The analysis we have used may prove fruitful in identifying and understanding argumentation trends in scientific writing and pseudo-scientific writing. The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic, "models of" versus "models for," that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms. This article presents a qualitative study, descriptive-interpretive in profile, of the effectiveness in learning about the nature of science (NOS) of an activity relating to the historical controversy between Pasteur and Liebig on fermentation. The activity was implemented during a course for pre-service secondary science teachers (PSSTs) specializing in physics and chemistry. The approach was explicit and reflective. 
Three research questions were posed: (1) What conceptions of NOS do the PSSTs show after a first reflective reading of the historical controversy? (2) What role is played by the PSSTs' whole-class critical discussion of their first reflections on the aspects of NOS dealt with in the controversy? (3) What changes are there in the PSSTs' conceptions of NOS after concluding the activity? The data for analysis were extracted from the PSSTs' group reports submitted at the end of the activity and the audio-recorded information from the whole-class discussion. A rubric was prepared to assess these data by a process of inter-rater analysis. The results showed overall improvement in understanding the aspects of NOS involved, with the improvement being significant in some cases (e.g., the conception of scientific theory) and moderate in others (e.g., differences in scientific interpretations of the same phenomenon). This reveals that the activity has educational utility for educating PSSTs in NOS issues. The article concludes with an indication of some educational implications of the experience. Critical thinking skills are often assessed via student beliefs in non-scientific ways of thinking (e.g., pseudoscience). Courses aimed at reducing such beliefs have been studied in the STEM fields, with the most successful focusing on skeptical thinking. However, critical thinking is not unique to the sciences; it is crucial in the humanities and to historical thinking and analysis. We investigated the effects of a history course on epistemically unwarranted beliefs in two class sections. Beliefs were measured pre- and post-semester. Beliefs declined for history students compared to a control class and the effect was strongest for the honors section. This study provides evidence that a humanities education engenders critical thinking. 
Further, there may be individual differences in ability or preparedness in developing such skills, suggesting different foci for critical thinking coursework. Keeping classroom animals is a common practice in many classrooms. Their value for learning is often seen narrowly as the potential to involve children in learning biological science. They also provide opportunities for increased empathy, as well as socio-emotional development. Realization of their potential for enhancing primary children's learning can be affected by many factors. This paper focuses on teachers' perceptions of classroom animals, drawing on accounts and reflections provided by 19 participants located in an Australian primary school where each classroom kept an animal. This study aims to progress the conversation about classroom animals, the learning opportunities that they afford, and the issues they present. Phenomenographic analysis of data resulted in five categories of teachers' perceptions of the affordances and constraints of keeping classroom animals. Fundamental knowledge of natural history is lacking in many western societies, as demonstrated by its absence in school science curricula. And yet, to meet local and global challenges such as environmental degradation, biodiversity loss and climate change, we need to better understand the living and non-living parts of the natural world. Many have argued passionately for an increased understanding of natural history; others have developed successful pedagogical programmes for applying knowledge of natural history in environmental initiatives. In joining wider calls, we choose here to focus on the educational value afforded by understanding the epistemological bases of natural history and its particular forms of reasoning. We also briefly discuss the ways in which an education in natural history provides the foundation for environmental and social justice efforts that directly affect the lives of young people and their communities. 
We end by highlighting the ease with which natural history may be incorporated in learning opportunities both in and outside of the classroom. The purpose of this study is to interpret and qualitatively characterise the content in some research articles and evaluate cases of possible difference in meanings of the gene concept used. Using a reformulation of Hirst's criteria of forms of knowledge, articles from five different sub-disciplines in biology (transmission genetics, molecular biology, genomics, developmental biology and population genetics) were characterised according to knowledge project, methods used and conceptual contexts. Depending on knowledge project, the gene may be used as a location of recombination, a target of regulatory proteins, a carrier of regulatory sequences, a cause in organ formation or a basis for a genetic map. Methods used range from catching wild birds and dissecting beetle larvae to growing yeast cells in 94 small wells as well as mapping of recombinants, doing statistical calculations, immunoblotting analysis of protein levels, analysis of gene expression with PCR, immunostaining of embryos and automated constructions of multi-locus linkage maps. The associated conceptual contexts centered on concepts such as meiosis and chromosome, DNA and regulation, cell fitness and production, development and organ formation, conservation and evolution. These contextual differences lead to certain content leaps in relation to different conceptual schemes. The analysis of the various uses of the gene concept shows how differences in methodologies and questions entail a concept that escapes single definitions and "drifts around" in meaning. These findings make it important to ask how science might use concepts as tools of specific inquiries and to discuss possible consequences for biology education. The use of mobile devices is increasing rapidly as a potential tool for science teaching. 
In this study, five educators (three middle school teachers and two museum educators) used a mobile application that supported the development of a driving question. Previous studies have noted that teachers make little effort to connect learning experiences between classrooms and museums, and few studies have focused on creating connections between teachers and museum educators. In this study, teachers and museum educators created an investigation together by designing a driving question in conjunction with the research group before field trips. During field trips, students collected their own data using iPods or iPads to take pictures or record videos of the exhibits. When students returned to the school, they used the museum data with their peers as they tried to answer the driving question. After completing the field trips, the five educators were interviewed to investigate their experiences with designing driving questions and using mobile devices. Besides supporting students in data collection during the field trip, using mobile devices helped teachers bring the museum back into the classroom. Designing the driving question helped museum educators and teachers plan the field trip collaboratively. Interventions in out-of-school settings have been shown in previous studies to effectively increase students' science knowledge and motivation, with mixed results on whether they are more effective than teaching at school. In this study, we compared an out-of-school setting in a reptile and amphibian zoo (Landau, Germany) with a sequence of classroom teaching and a control group without teaching on the topic. We compared learning at school (School) and out-of-school learning (Reptilium), which were tested in a randomized field setting with a focus on knowledge and motivation. Sixty-five elementary students participated in either the School group, Reptilium group or control group. 
We measured knowledge on the topics of reptiles and amphibians with a newly developed two-factorial test, calibrated with item response theory, before the intervention, immediately afterwards (posttest) and 2 weeks later (follow-up). Motivation was measured immediately after the intervention. Data analyses were performed using SPSS and Mplus. We conclude that both interventions appeared clearly superior to the control group and that the out-of-school setting in the Reptilium was more effective than the school-only program. Concerning motivation, perception of choice was higher in the Reptilium than in the School group. There were gender-by-treatment interaction effects for knowledge in the posttest and follow-up, for perceived competence and for pressure/tension. Concerning knowledge, boys performed better than girls in the School group, but this gender gap was not significant in the Reptilium group. Boys perceived themselves as more competent in the School group while girls reported less pressure/tension in the Reptilium group. In conclusion, encountering living animals in a formal zoo learning arrangement is encouraged in primary school since it supports self-determination (free choice), leads to higher achievement and closes gender disparities in achievement. Despite growing recognition of the importance of visual representations to science education, previous research has given attention mostly to verbal modalities of evolution instruction. Visual aspects of classroom learning of evolution are yet to be systematically examined by science educators. The present study attends to this issue by exploring the types of evolutionary imagery deployed by secondary students. 
Our visual design analysis revealed that students resorted to two broad categories of images when visually communicating evolution: spatial metaphors (images that provided a spatio-temporal account of human evolution as a metaphorical "walk" across time and space) and symbolic representations ("icons of evolution" such as personal portraits of Charles Darwin that simply evoked evolutionary theory rather than metaphorically conveying its conceptual contents). It is argued that students need opportunities to collaboratively critique evolutionary imagery and to extend their visual perception of evolution beyond dominant images. The purpose of this research was to explore children's understandings of everyday, synthetic and scientific concepts to enable a description of how abstract, verbally taught material relates to previous experience-based knowledge and the consistency of understanding about cloud formation. This study examined the conceptual understandings of cloud formation and rain in kindergarten (age 5-7), second (age 8-9) and fourth (age 10-11) grade children, who were questioned using a structured interview technique. In order to assess consistency in children's answers, three different types of clouds were introduced (a cirrus cloud, a cumulus cloud, and a rain cloud). Our results indicate that children in different age groups gave a similarly high number of synthetic answers, which suggests the need for teachers to understand the formation process of different misconceptions to better support the learning process. Even children in kindergarten may have conceptions that represent different elements of scientific understanding, and misconceptions cannot be considered age-specific. Synthetic understanding was also shown to be more consistent (not depending on cloud type), suggesting that gaining scientific understanding requires a reorganisation of existing concepts, which is time-consuming. 
Our results also show that the appearance of the cloud influences children's answers more in kindergarten, where they mostly related rain cloud formation to water. An ability to create abstract connections between different concepts should also be supported at school as a part of learning new scientific information in order to better understand weather-related processes. The paper is set in a socio-material framing to offer a way of conceptualising actions and interactions of children in preschool involved in the understanding of scientific concepts. A model of early science education about the physical phenomenon of shadow formation is implemented in group work in a French context. The research involved 44 children (13 females and 31 males) aged 5-6 years. The research design was organised in three video recording steps: pre-test, teaching session and post-test. We focus on the analysis of nine teaching sessions to investigate children's 'understanding' of shadow formation. A descriptive and qualitative approach was used. In particular, we identified three main categories (the children's interaction with the tools, the embodied dimension, and the verbal dimension), each with respective indicators, to perform the analysis. The results suggest that all the categories explored influence each other: material, human and social dimensions all contribute to the children's understanding of shadow formation. We also identified some elements that could improve the teaching session on shadow formation. Finally, the research provides insights into how to improve science activities in preschool with the aim of supporting early understanding of physical phenomena. This research investigates the professional life histories of upper elementary science teachers who were identified as effective both within the classroom and in the outdoor learning environment (OLE). 
The narratives of five teachers, collected through semistructured and open-ended interviews, provided the data for the study. Professional life histories were constructed for each teacher participant, and an analysis of the teacher narratives identified the themes of teacher development across the voices of the participants. Narrative reasoning was used to unify those themes into a hypothetical professional life history as reported in this manuscript. This research has implications for stakeholders in the preparation of pre-service teachers as well as in the development of in-service teachers. Future research should investigate the early induction years of new teachers, the impacts of including the OLE in pre-service teacher instruction, and teacher experiences with professional development related to efforts to include the OLE in formal education. The research reported here investigated the everyday and scientific repertoires of children involved in semi-structured, Piagetian interviews carried out to check their understanding of dynamic astronomical concepts like daytime and night-time. It focused on the switching taking place between embedded and disembedded thinking; on the imagery which subjects referred to in their verbal dialogue and their descriptions of drawings and play-dough models of the Earth, Sun and Moon; and it examined the prevalence and character of animism and figurative speech in children's thinking. Five hundred and thirty-nine children (aged 3-18) from Wairarapa in New Zealand (171 boys and 185 girls) and Changchun in China (99 boys and 84 girls) took part in the study. Modified ordinal scales for the relevant concept categories were used to classify children's responses, and data from each age group (with numbers balanced as closely as practicable by culture and gender) were analysed with Kolmogorov-Smirnov two-sample tests (at an alpha level of 0.05). 
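An analysis of the kind described above, comparing the distributions of ordinal concept scores between two groups with a two-sample Kolmogorov-Smirnov test at an alpha level of 0.05, can be sketched as follows. The data here are synthetic and purely illustrative, not the study's.

```python
# Illustrative sketch (synthetic data): comparing ordinal concept-score
# distributions of two groups with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical ordinal scores (0-5) for two hypothetical groups.
group_a = rng.integers(0, 6, size=180)
group_b = rng.integers(1, 6, size=175)  # slightly shifted distribution

stat, p_value = ks_2samp(group_a, group_b)
alpha = 0.05
print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the two score distributions differ.")
else:
    print("Fail to reject H0 at alpha = 0.05.")
```

Note that the KS test assumes continuous distributions; with tied ordinal data the p-value is only approximate, which is one reason such results are usually read alongside qualitative evidence.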
Although, in general, there was consistency of dynamic concepts within and across media and their associated modalities in keeping with the theory of conceptual coherence (see Blown and Bryce 2010; Bryce and Blown 2016), there were several cases of inter-modal and intra-modal switching in both cultures. Qualitative data from the interview protocols revealed how children switch between everyday and scientific language (in both directions) and use imagery in response to questioning. The research indicates that children's grasp of scientific ideas in this field may ordinarily be under-estimated if one only goes by formal scientific expression and vocabulary. This longitudinal study examined the role of metaconceptual awareness in the change and the durability of preservice teachers' conceptual understandings over the course of several months. Sixteen preservice early childhood teachers participated in the study. Semi-structured interviews were conducted to reveal the participants' conceptual understandings of lunar phases (pre, post, and delayed-post) and level of metaconceptual awareness (delayed-post only). Based on the change and stability in participants' conceptual understandings from pre to post and from post to delayed-post interviews, participants' conceptual understandings were assigned into three groups that described the profile of their long-term conceptual understandings: "decay or stability", "continuous growth", and "growth and stability". The results indicated that participants in the "continuous growth" and "growth and stability" groups had significantly higher metaconceptual awareness scores than participants in the "decay or stability" group. The results provided evidence that metaconceptual awareness plays a more decisive role in the restructuring of conceptual understandings than the durability of conceptual understandings. Agency is a construct facilitating our examination of when and how young people extend their own learning across contexts. 
However, little is known about the role played by adolescent learners' sense of agency. This paper reports two cases of students' agentively employing and developing science literacy practices-one in Singapore and the other in the USA. The paper illustrates how these two adolescent learners in different ways creatively accessed, navigated and integrated in-school and out-of-school discourses to support and nurture their learning of physics. Data were gleaned from students' work and interviews with students participating in a physics curricular programme in which they made linkages between their chosen out-of-school texts and several physics concepts learnt in school. The students' agentive moves were identified by means of situational mapping, which involved a relational analysis of the students' chosen artefacts and discourses across time and space. This relational analysis enabled us to address questions of student agency-how it can be effected, realised, construed and examined. It highlights possible ways to intervene in these networked relations to facilitate adolescents' agentive moves in their learning endeavours. Adding pictures to a text is very common in today's education and might be especially beneficial for elementary school children, whose abilities to read and understand pure text have not yet been fully developed. Our study examined whether adding pictures supports learning of a biology text in fourth grade and whether the text modality (spoken or written) plays a role. Results indicate that overall, pictures enhanced learning but that the text should be spoken rather than written. These results are in line with instructional design principles derived from common multimedia learning theories. In addition, for elementary school children, it might be advisable to read texts out to the children. Reading by themselves and looking at pictures might overload children's cognitive capacities and especially their visual channel. 
In this case, text and pictures would not be integrated into one coherent mental model, and effective learning would not take place. In this theoretical paper, we present a framework for conceptualizing proof in terms of mathematical values, as well as the norms that uphold those values. In particular, proofs adhere to the values of establishing a priori truth, employing decontextualized reasoning, increasing mathematical understanding, and maintaining consistent standards for acceptable reasoning across domains. We further argue that students' acceptance of these values may be integral to their apprenticeship into proving practice; students who do not perceive or accept these values will likely have difficulty adhering to the norms that uphold them and hence will find proof confusing and problematic. We discuss the implications of mathematical values and norms with respect to proof for investigating mathematical practice, conducting research in mathematics education, and teaching proof in mathematics classrooms. This paper finds its origins in a multidisciplinary research group's efforts to assemble a review of research in order to better appreciate how "spatial reasoning" is understood and investigated across academic disciplines. We first collaborated to create a historical map of the development of spatial reasoning across key disciplines over the last century. The map informed the structure of our citation search and oriented an examination of connection across disciplines. Next, we undertook a network analysis that was based on highly cited articles in a broad range of domains. Several connection gaps-that is, apparent blockages, one-way flows, and other limitations on communications among disciplines-were identified in our network analysis, and it was apparent that these connection gaps may be frustrating efforts to understand the conceptual complexity and the educational significance of spatial reasoning. 
While these gaps occur between the academic disciplines that we evaluated, we selected a few examples for closer analysis. To illustrate how this lack of flow can limit development of the field of mathematics education, we selected cases where it is evident that researchers in mathematics education are not incorporating the important work of mathematicians, psychologists, and neuroscientists-and vice versa. Ultimately, we argue, a more pronounced emphasis on transdisciplinary (versus multidisciplinary or interdisciplinary) research might be timely, and perhaps even necessary, in the evolution of educational research. Studies on identity in general and mathematical identity in particular have gained much interest over the last decades. However, although measurements have been proven to be potent tools in many scientific fields, a lack of consensus on ontological, epistemological, and methodological issues has complicated measurements of mathematical identities. Specifically, most studies conceptualise mathematical identity as something multidimensional and situated, which obviously complicates measurement, since these aspects violate basic requirements of measurement. However, most concepts that are measured in scientific work are both multidimensional and situated, even in physics. In effect, these concepts are being conceptualised as sufficiently uni-dimensional and invariant for measures to be meaningful. We assert that if the same judgements were to be made regarding mathematical identity, that is, whether identity can be measured with one instrument alone, whether one needs multiple instruments, or whether measurement is meaningless, it would be necessary to know how much of the multidimensionality can be captured by one measure and how situated mathematical identity is. Accordingly, this paper proposes a theoretical perspective on mathematical identity that is consistent with basic requirements of measurement. 
Moreover, characteristics of students' mathematical identities are presented and the problem of "situatedness" is discussed. Recent research suggests that children in elementary grades have some facility with variables and variable notation in ways that warrant closer attention. We report here on an empirically developed progression in first-grade children's thinking about these concepts in functional relationships. Using learning trajectories research as a framework for the study, we developed and implemented an instructional sequence designed to foster children's understanding of functional relationships. Findings suggest that young children can learn to think in sophisticated ways about variable quantities and variable notation. This challenges assumptions that young children are not "ready" for a study of such concepts and raises the question of whether difficulties adolescents exhibit might be ameliorated by an earlier introduction to these ideas. The study reported in this paper concerns the tensions and conflicts that teachers experience as they enact a new set of reform-oriented curricular materials in their classrooms. Our focus is on the interactions developed in two groups of teachers in two schools for a period of a school year. We use Activity Theory to study emerging contradictions and we elaborate on the construct of dialectical opposition to understand the nature of these contradictions and their potential for teacher learning. We provide evidence that discussions about contradictions and their dialectical character in the two groups support teachers in engaging differently in mathematics teaching and learning and carry potential for shifts in the practices that evolve in their classrooms. Our study empirically addresses, in the context of mathematics teaching, the philosophical claim about the role of contradictions as a driving force for any dynamic system. 
Entrepreneurship, and individuals' predisposition toward entrepreneurial activities in particular, i.e. Individual Entrepreneurial Orientation (IEO), has been gaining increasing relevance in academia and management practice alike. Understanding IEO is a critical element not only for its promotion, but for better and more informed managerial and investor decision making as well. As such, this study proposes a new framework for understanding and measuring IEO based on the integrated use of cognitive mapping and the interactive multiple-criteria decision making (TODIM) method. We present the steps for building such a framework, as well as a practical application of these steps. The results are promising: the methodology applied allowed a large number of determinants of IEO and their relationships to be mapped; and, subsequently, ranked and weighted for the creation of an IEO measurement tool. The implications of the resulting framework for theory and practice, its limitations and possibilities for further research are also discussed. The concept of entrepreneurship embedded in the backdrop of business has been increasingly applied to the context of addressing social problems and sustainability challenges. Known as 'social entrepreneurship', the topic has garnered heightened attention from researchers in recent years. As a nascent stream of research, social entrepreneurship is still in the early stages of development. Recent evidence suggests a growing body of scholarly research in this field; however, its conceptualisation remains obscure as it is predominantly dictated by definitional arguments. Consequently, the literature is still anecdotal in trying to unveil different dimensions of social entrepreneurship and its potential benefits that might help to battle sustainability challenges. To bridge the existing gap in social entrepreneurship research, this study adopts an inductive content analysis approach. 
Accordingly, a sizeable number of prior studies were extracted from five major databases from 1991 to date. Findings from the prior studies were synthesised in a systematic manner to draw valid conclusions. Based on the findings drawn from prior literature the study also proposes a conceptual framework and prompts further empirical research. The implications of the study are two-fold: academic and practical. The academic implication is primarily to contribute to the relatively uncultivated area of social entrepreneurship literature. The practical implications of the study are potentially instrumental for social entrepreneurs and policy-makers who are involved in social wealth creation. Moreover, the practical implication of the study is deemed to be very significant given the rising impetus of sustainability issues, where it is believed that entrepreneurs can play a vital role in this regard. Firms' technological distinctive competencies (TDCs) help CEOs to confront their reality based on technological knowledge to achieve and exploit competitive advantage by encouraging the different dimensions of corporate entrepreneurship (innovation, new business venturing, proactiveness and self-renewal). The main purpose of this paper is thus to highlight how companies that strive to improve technological competencies within the firm achieve higher organizational performance through different components of corporate entrepreneurship and their interrelationships. This study seeks to fill this research gap by analyzing theoretically and empirically how TDCs enhance innovation, new business venturing and proactiveness and their interrelationships to achieve self-renewal and thus improve firms' organizational performance. The methodology used is LISREL analysis. We test the model with data from 201 Spanish organizations. 
Our research contributes theoretical and empirical arguments on the value of TDCs to the organization, arguments that are especially important because organizations sometimes fail to achieve sustainable competitive advantage due to their limited understanding of the relationships between these strategic variables. This study discusses how the perceived entrepreneurial orientation (EO) of individuals, and in particular that of the work group leader, impacts the perceived EO of the work group, which in turn improves the work group's performance. Using a sample of 356 individual employees of five companies operating globally in diverse industries, the results of our structural equation model and multiple linear regression analyses indicate that both the group members' and the work group leader's EO decisively encourage the EO of the work group, with a significantly positive effect on its performance. Furthermore, while work group heterogeneity shows a small but significant positive influence, a shared vision in the work group is found to have a clearly positive impact on work group EO. Accordingly, this study has crucial practical implications for work groups and their leaders aspiring to ameliorate work group performance through EO-increasing measures with reference to the specific individual and group characteristics. From a theoretical perspective, this research contributes to the academic debate on the EO concept at individual and work group levels. Few studies have investigated the factors that enhance entry mode choice in the context of international new ventures (INVs). In this paper, we hypothesize that the characteristics of INVs' products or services can explain their preference for equity entry modes. We also hypothesize that the inter-firm networks in which INVs are embedded play a deciding role in their choice of non-equity entry modes. When INVs are in inter-firm networks in which activities are developed to manage them, non-equity entry modes are preferred. 
We have adopted an effectuation approach to study the influence of different inter-firm network management activities on entry mode choice. In short, we have studied the effect of developing inter-firm network knowledge exchange, coordination, adaptation, conflict resolution and resource sharing management activities. In this paper we attempt to contribute to international entrepreneurship studies by reconciling the most widely accepted approaches to entry mode choice (Transaction Cost Economics, Organizational Capabilities-based and Network perspective) and international new ventures. Our findings show that the technological complexity of INVs' products/services explains their preference for equity entry modes. Additionally, the development of network management activities among the networked firms determines the INVs' preference for non-equity entry modes. Our results outline a decision model that differs from the ones derived from previous perspectives, highlighting the role of different characteristics of international new ventures. We used a high-quality cross-sectional data set covering a diverse set of 29 transition countries to estimate the effect of education on the probability of being self-employed, using standard probit models and an instrumental-variable biprobit model that addresses endogeneity. Our findings suggest a negative effect of university education on the propensity of being self-employed. This finding remains the same for the single-stage model (i.e. standard probit) and the instrumental variable model (i.e. biprobit). We found strong endogeneity in the estimation of the education effect on the propensity of being self-employed; ignoring it renders the estimates biased. Regression models that do not address endogeneity tend to underestimate the negative effect of education on the probability of being self-employed in transition countries. 
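The single-stage model described above can be illustrated with a minimal probit sketch: a maximum-likelihood regression of a self-employment indicator on a university-education dummy. All data and coefficient values here are synthetic assumptions (chosen so the education effect is negative, as in the findings); the instrumental-variable biprobit step, which would require a second equation for education, is not shown.

```python
# Minimal probit sketch on synthetic data (hypothetical values throughout):
# self-employment regressed on a university-education dummy via MLE.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
educ = rng.integers(0, 2, size=n)            # 1 = university education
X = np.column_stack([np.ones(n), educ])
beta_true = np.array([-0.5, -0.4])           # assumed negative education effect
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def neg_log_likelihood(beta):
    # Probit likelihood: P(y=1 | x) = Phi(x @ beta).
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

result = minimize(neg_log_likelihood, np.zeros(2), method="BFGS")
print("estimated coefficients:", result.x)   # education coefficient is negative
```

Because the synthetic education dummy is randomly assigned, this sketch has no endogeneity by construction; with real data, the abstract's point is precisely that such a single-equation fit would be biased.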
Researchers should use alternative approaches to reduce endogeneity, such as instrumental variables and longitudinal analysis. Entering a newly liberalized market is a great challenge for companies as the environment is new and untested. On the one hand, to have success in these markets, firms must have a plan of action before resources are committed. Entrepreneurial orientation (EO) is associated with the successful exploration of resources and the creation of new niches, as supported by Resource Theory. On the other hand, a natural bond between EO and marketing is found in Value Creation Theory. So, to maximize firm success in newly liberalized markets (such as Cuba), firms must be able to objectively gauge their own entrepreneurial orientation while adopting a marketing approach. Within this framework, the present paper attempts to effectively measure the entrepreneurial orientation of US firms that have an interest in entering the Cuban market. A final sample of 81 US firms was obtained. The sample was then split into two groups (high and low entrepreneurial orientation) and compared regarding their marketing strategies (H1) and their levels of success (H2). Our results confirmed both hypotheses. To survive and grow, new ventures must establish initial legitimacy, and subsequently diffuse this legitimacy through a given population. While the notion of initial legitimacy has received substantial attention in the recent literature, diffusion has not. This work endeavors to outline the legitimacy diffusion process by drawing parallels with the field of epidemiology. Ultimately, to effectively diffuse legitimacy (and grow), a firm must gain positive judgments of appropriateness from members of a given network. Importantly, as with diseases, the characteristics of the network are critical to the diffusion process. 
A relatively dense network is posited to invoke a normative evaluation process by its members, and can be difficult for new ventures to access, but subsequent diffusion of new venture legitimacy can be rapid. A less dense network, on the other hand, is posited to invoke a pragmatic evaluation process by its members, and is likely easier for new ventures to access initially, but may result in lower levels of new venture legitimacy diffusion in the long run. Theoretical and practical implications are discussed. Our investigation uses structuration theory to explore the emergence of a microfranchise whose aim is to raise the income of smallholder farmers in Kenya by enabling an increase in productivity. This longitudinal, real-time qualitative study tracks the key actions taken in developing the venture, beginning in the conception phase of startup and continuing through to the initial stage of operations. In doing so, it focuses on how agency and structure reciprocally influence the resulting social enterprise. The findings indicate that agency is not exclusive to the founders. Rather, it was distributed among the micro-franchisor's stakeholders, significantly shaping the nature and scope of the enterprise. While franchising, generally, is not noted to provide autonomy and independence to franchisees, we find the opposite in this emerging market context. Implications are discussed. Drawing on previous literature proposing that there exists a positive relationship between family involvement and firm performance, this study refines the explanatory role of market learning in explaining the relationship between family involvement and firm performance to be conditional on firm age and environmental turbulence. The data from 344 small and medium-sized enterprises show that family involvement is positively related to market exploitation, while family involvement is negatively related to market exploration as family firms age. 
Also, we provide empirical evidence that family involvement is positively related to firm performance in turbulent environments through market exploration irrespective of the firm's age. Conversely, family involvement is positively related to firm performance through market exploitation in less turbulent environments irrespective of firm age. This study provides empirical evidence that market exploration and exploitation may be the capabilities that glue family involvement to firm performance. Incubators are a prominent way to support technology-based start-ups. Yet, it remains unclear to what extent these incubators enhance start-up performance, nor is it known through which mechanisms this would occur. In this paper we test two mechanisms to explain the relationship between incubation and the amount of investments raised by early-stage start-ups as a performance measure. The 'hit maker' mechanism refers to beneficial effects of the direct transfer of resources and organizational or business knowledge from the incubator to the start-up. The 'network broker' mechanism refers to the benefits that start-ups enjoy from being connected to external funding sources through the incubator's networks. We test which of these mechanisms contribute to the performance of early-stage start-ups. Our data comes from a unique survey of 935 entrepreneurs with early-stage technology-based start-ups in Western Europe and North America. We find that incubators have a positive effect on (1) the amount of funding that start-ups attract and (2) the ability of start-ups to attract funding from formal investors and banks. Moreover, our results provide evidence for the network broker mechanism, but not for the hit maker mechanism. This manuscript provides a foundation for future research on strategic entrepreneurship (SE) through the application of nine prominent organizational theories. 
Specifically, we apply a "theoretical toolbox" approach to examine SE through the lens of general systems, institutional, organizational ecology, strategic choice, upper echelons, real options, agency, network, and social identity theories. We consider the implications for SE offered by these perspectives and, using insights from each organizational theory, reflect on the nature of the challenges faced by firms when attempting the simultaneous exploration and exploitation required by SE. More broadly, the application of diverse theoretical perspectives is encouraged individually and in combination within future SE research. This paper analyzes the effect of the individual perceptions of social capital and culture on entrepreneurial aspirations before and after the economic crisis in Western Europe. Following the approach of the Theory of Planned Behavior (Ajzen in Organizational Behavior and Human Decision Processes, 50, 179-211, 1991), we advance the analysis of the effect of the perception of subjective norms on entrepreneurial intentions. We studied the Total Early-Stage Entrepreneurial Activity (TEA) of twelve countries in 2006 and 2010. The results reveal that the perception of having social networks is significant for the TEA, and it increases after the economic crisis. However, the cultural factors do not have a significant impact, except the one related to the perception of social equality. The results obtained through the double perspective of this analysis (individual's social capital vs cultural factor of individualistic perspective) offer a certain dilemma when we try to understand entrepreneurial intention through the individual's perception of subjective norms, following Ajzen's model. The more individualistic a person is, the lower the weight of his or her social capital. However, the more a person has access to social networks, the greater his or her entrepreneurial intention will be. 
This result opens future lines of research focused on understanding the value of the individual's social capital for different countries and groups of entrepreneurs. A one-dimensional initial-boundary-value problem for a delay pseudo-parabolic equation is considered. To solve this problem numerically, we construct a higher-order difference method and obtain an error estimate for its solution. Based on the method of energy estimates, the fully discrete scheme is shown to be convergent of order four in space and order two in time. A numerical example is presented. (C) 2017 Elsevier B.V. All rights reserved. Systems of reaction-diffusion equations are commonly used in biological models of food chains. The populations and their complicated interactions present numerous challenges in theory and in numerical approximation. In particular, self-diffusion is a nonlinear term that models overcrowding of a particular species. The nonlinearity complicates attempts to construct efficient and accurate numerical approximations of the underlying systems of equations. In this paper, a new nonlinear splitting algorithm is designed for a partial differential equation that incorporates self-diffusion. We present a general model that incorporates self-diffusion and develop a numerical approximation. The numerical analysis of the approximation provides criteria for stability and convergence. Numerical examples are used to illustrate the theoretical results. We derive a unified representation for all types of generalized inverses related to the {1}-inverse. Based on this representation, we propose a unified Gauss-Jordan elimination procedure for the computation of all types of generalized inverses related to the {1}-inverse. 
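The unified Gauss-Jordan procedure itself is the paper's contribution and is not reproduced here; as context, the sketch below only checks the standard defining properties that any Moore-Penrose routine must satisfy, the four Penrose conditions, using NumPy's SVD-based pinv. A {1}-inverse, by contrast, is required to satisfy only the first condition.

```python
# Not the paper's method: a check of the four Penrose conditions that
# uniquely characterise the Moore-Penrose inverse X = A^+.
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5                      # an m x n matrix with m < n
A = rng.standard_normal((m, n))
A_pinv = np.linalg.pinv(A)       # Moore-Penrose inverse via SVD

assert np.allclose(A @ A_pinv @ A, A)                # (1) A X A = A
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)      # (2) X A X = X
assert np.allclose((A @ A_pinv).T, A @ A_pinv)       # (3) (A X)^T = A X
assert np.allclose((A_pinv @ A).T, A_pinv @ A)       # (4) (X A)^T = X A
print("all four Penrose conditions hold")
```

A condition check of this kind is a useful correctness test for any alternative computation route, such as an elimination-based procedure.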
Complexity analysis indicates that when applied to compute the Moore-Penrose inverse, our method is more efficient than the existing Gauss-Jordan elimination methods in the literature for a large class of problems. Finally, numerical experiments show that our method for the Moore-Penrose inverse has good efficiency and accuracy; in particular, for computing the Moore-Penrose inverse of m x n matrices with m < n, our method gives the best performance in practice. Little work has been done on the error estimates of the homotopy analysis method. For general 2nth-order linear and nonlinear differential equations with Lidstone boundary conditions, we obtain sharp upper bounds for the absolute error of the approximations given by the homotopy analysis method. To achieve this goal, the existence and uniqueness of solutions to these differential equations are considered. Numerical results confirm the theoretical predictions. The matched interface and boundary (MIB) method has a proven ability to deliver second-order accuracy in handling elliptic interface problems with arbitrarily complex interface geometries. However, its collocation formulation requires relatively high solution regularity. The finite volume method (FVM) has its merit in dealing with conservation law problems, and its integral formulation works well with relatively low solution regularity. We propose an MIB-FVM to take advantage of both MIB and FVM for solving elliptic interface problems. We construct the proposed method on Cartesian meshes with vertex-centered control volumes. A large number of numerical experiments are designed to validate the present method in both two-dimensional (2D) and three-dimensional (3D) domains. It is found that the proposed MIB-FVM achieves second-order convergence for elliptic interface problems with complex interface geometries in both L-infinity and L-2 norms.
Global solvers are an attractive class of iterative solvers for solving linear systems with multiple right-hand sides. In this paper, first, a new global method for solving general linear systems with several right-hand sides is presented. This method is the global version of the LSMR algorithm presented by Fong and Saunders (2011). Then, some theoretical properties of the new method are discussed. Finally, numerical experiments from real applications are used to confirm the effectiveness of the proposed method. A Schwarz-type preconditioner is formulated for a class of parallel adaptive finite elements where the local meshes cover the whole domain. With this preconditioner, the convergence rate of the conjugate gradient method is shown to depend only on the ratio of the second largest and smallest eigenvalues of the preconditioned system. These eigenvalues can be bounded independently of the mesh sizes and the number of subdomains, which proves the proposed preconditioner is optimal. Numerical results are provided to support the theoretical findings. In this paper, we introduce an affine scaling cubic regularization algorithm for solving optimization problems without available derivatives, subject to bound constraints, employing a polynomial interpolation approach to handle the unavailable derivatives of the original objective function. We first define an affine scaling cubic model of the approximate objective function, which is obtained by the polynomial interpolation approach with an affine scaling method. At each iteration, a candidate search direction is determined by solving the affine scaling cubic regularization subproblem, and the new iterate is strictly feasible by way of an interior backtracking technique. The global convergence and local superlinear convergence of the proposed algorithm are established under some mild conditions. 
Preliminary numerical results are reported to show the effectiveness of the proposed algorithm. (C) 2017 Elsevier B.V. All rights reserved. This paper proposes, analyzes and tests high-order algebraic splitting methods for magnetohydrodynamic (MHD) flows. The main idea is to apply, at each time step, Yosida-type algebraic splitting to a block saddle point problem that arises from a particular incremental formulation of MHD. By doing so, we dramatically reduce the complexity of the nonsymmetric block Schur complement by decoupling it into two Stokes-type Schur complements, each of which is symmetric positive definite and is the same at each time step. We prove the splitting is O(Δt³) accurate and, if used together with (block-)pressure correction, is fourth-order accurate. A full analysis of the solver is given, both as a linear algebraic approximation and in a finite element context that uses the natural spatial norms. Numerical tests are given to illustrate the theory and show the effectiveness of the method. (C) 2017 Elsevier B.V. All rights reserved. Consider a discrete-time risk model in which the insurer is allowed to invest its wealth into a risk-free or a risky portfolio under a certain regulation. Then the insurer is said to be exposed to a stochastic economic environment that contains two kinds of risks, the insurance risk and the financial risk. Within period i, the net insurance loss is denoted by X_i and the stochastic discount factor from time i to zero is denoted by θ_i. For any integer n, assume that X_1, ..., X_n form a sequence of pairwise asymptotically independent but not necessarily identically distributed real-valued random variables with distributions F_1, ..., F_n, respectively; θ_1, θ_2, ..., θ_n are another sequence of arbitrarily dependent positive random variables; and the two sequences are mutually independent. 
Under the assumption that the average distribution n^(-1) Σ_(i=1)^n F_i is dominatedly varying tailed, and under some moment conditions on θ_i, i = 1, ..., n, we derive a weakly equivalent formula for the finite-time ruin probability. We demonstrate our obtained results through a crude Monte Carlo simulation with asymptotics. (C) 2017 Elsevier B.V. All rights reserved. In this paper, we propose a mixed Generalized Multiscale Finite Element Method (GMsFEM) for solving nonlinear Forchheimer flow in highly heterogeneous porous media. We consider the two-term law form of the Forchheimer equation in the case of slightly compressible single-phase flows. We write the resulting system in terms of a degenerate nonlinear flow equation for pressure, where the nonlinearity depends on the pressure gradient. The proposed approach constructs multiscale basis functions for the velocity field following the mixed GMsFEM developed in Chung et al. (2015). To reduce the computational cost resulting from solving the nonlinear system, we combine the GMsFEM with the Discrete Empirical Interpolation Method (DEIM) to compute the nonlinear coefficients in some selected degrees of freedom in each coarse domain. In addition, a global reduction method such as Proper Orthogonal Decomposition (POD) is used to construct the online space to be used for different inputs or initial conditions. We present numerical and theoretical results to show that, in addition to speeding up the simulation, we can achieve good accuracy with a few basis functions per coarse edge. Moreover, we present an adaptive method for basis enrichment of the offline space based on an error indicator depending on the local residual norm. We use this enrichment method for the multiscale basis functions at some fixed time levels. Our numerical experiments show that these additional multiscale basis functions will reduce the current error if we start with a sufficient number of initial offline basis functions. (C) 2017 Elsevier B.V. 
All rights reserved. In this paper, the insurer's surplus process moving within upper and lower levels is analyzed. To this end, a truncated type of Gerber-Shiu function is proposed by further incorporating the minimum and the maximum surplus before ruin into the existing ones (e.g. Gerber and Shiu (1998), Cheung et al. (2010a)). A key component in our analysis of this proposed Gerber-Shiu function is the so-called transition kernel. Explicit expressions of the transition kernel under two different risk models are obtained. These two models are both generalizations of the classical Poisson risk model: (i) the first model provides flexibility in the net premium rate, which is dependent on the surplus (such as a linear or step function); and (ii) the second model assumes that claims arrive according to a Markovian arrival process (MAP). Finally, we discuss some applications of the truncated Gerber-Shiu function with numerical examples under various scenarios. (C) 2017 Elsevier B.V. All rights reserved. The Trefftz method is a truly meshless boundary-type method, because the trial solutions automatically satisfy the governing equation. In order to stably solve high-dimensional backward wave problems and one-dimensional inverse source problems, we develop a multiple-scale polynomial Trefftz method (MSPTM), whose scales are determined a priori by the collocation points. The MSPTM can retrieve the missing initial data and an unknown time-varying wave source. The present method can also be extended to solve higher-dimensional wave equations over long times through the introduction of a director in the two-dimensional polynomial Trefftz bases. Several numerical examples reveal that the MSPTM is efficient and stable for solving severely ill-posed inverse problems of wave equations under large noise. (C) 2017 Elsevier B.V. All rights reserved. 
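The finite-time ruin probability studied in the discrete-time risk model above can be checked by crude Monte Carlo, as in that abstract's simulation study. A minimal sketch, assuming Pareto-type net insurance losses and lognormal per-period discount factors; both distributional choices (and all parameters) are illustrative, not taken from the paper:

```python
import numpy as np

def finite_time_ruin_prob(x0=200.0, n=10, trials=200_000, seed=0):
    """Crude Monte Carlo estimate of the finite-time ruin probability
    P(max_{1<=k<=n} sum_{i<=k} theta_i * X_i > x0) for initial surplus x0."""
    rng = np.random.default_rng(seed)
    # X_i: net insurance losses, heavy-tailed (illustrative Pareto-type choice)
    X = rng.pareto(2.0, size=(trials, n)) * 10.0 - 5.0
    # theta_i: stochastic discount factor from time i back to 0, built as a
    # running product of per-period lognormal factors (an assumption)
    theta = np.cumprod(rng.lognormal(-0.05, 0.1, size=(trials, n)), axis=1)
    discounted = np.cumsum(theta * X, axis=1)    # discounted aggregate losses
    return (discounted.max(axis=1) > x0).mean()  # fraction of ruined paths

print(finite_time_ruin_prob())
```

Such an estimate is what the asymptotic (weakly equivalent) formula would be compared against; heavy tails mean many trials are needed for a stable estimate.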
A new phase-fitted Runge-Kutta pair of orders 8(7), which is a modification of a well-known explicit Runge-Kutta pair for the integration of periodic initial value problems, is presented. Numerical experiments show the efficiency of the new pair in a wide range of oscillatory problems. (C) 2017 Elsevier B.V. All rights reserved. The weighted essentially non-oscillatory (WENO) method is a high-order numerical method that can handle discontinuous problems efficiently. The hybrid WENO method adopts the WENO reconstruction in the region where a jump discontinuity exists, while another high-order reconstruction is adopted in the smooth region. Thus, for the hybrid WENO method, the jump location should be identified before the reconstruction is chosen. Various edge detection methods have been developed, mostly focusing on finding edges as accurately as possible by utilizing as many stencils as possible. However, if the reconstruction in the smooth area is obtained with a fixed finite number of cells, it suffices to examine whether the considered stencils need to adopt the WENO reconstruction, instead of finding edges in a global manner. In this note, we compare the multi-resolution analysis with the local monotone polynomial method for the fifth-order hybrid WENO method. The monotone method uses only the cell information within the given stencil to decide whether to adopt the WENO reconstruction or not. In this sense, the local method is optimal. We provide a detailed numerical study and show that the monotone polynomial method is efficient and accurate. (C) 2017 Elsevier B.V. All rights reserved. Based on k-record values, inference is considered for the parameters of the Kumaraswamy distribution. The maximum likelihood estimates and alternative point estimates based on proposed pivotal quantities are provided for the unknown model parameters, and series of exact confidence intervals and exact confidence regions are constructed as well. 
For the confidence intervals and regions, the corresponding smallest-size confidence sets and their optimization procedures are presented based on minimizing the Lagrangian functions under a given significance level. Two illustrative examples based on real and simulated data are provided to assess the performance of the proposed results. (C) 2017 Elsevier B.V. All rights reserved. We discuss accelerating the convergence of multipoint iterative methods for solving scalar equations, using a particular type of rational interpolant. Both derivative-free and Newton-type methods are investigated simultaneously. As a conclusion, a theorem of König's type for multipoint iterations is stated. A new optimal multipoint family of methods based on rational interpolation is constructed. The iteration uses n function evaluations per cycle and O(j) operations in the jth step of a single iteration to obtain order of convergence 2^(n-1). Several equivalent forms of the obtained iterates and development techniques are presented. (C) 2017 Elsevier B.V. All rights reserved. In 2011, Petković, Rančić and Milošević (Petković et al., 2011) introduced and studied a new fourth-order iterative method for finding all zeros of a polynomial simultaneously. They obtained a semilocal convergence theorem for their method with computationally verifiable initial conditions, which is of practical importance. In this paper, we provide new local as well as semilocal convergence results for this method over an algebraically closed normed field. Our semilocal results improve and complement the result of Petković, Rančić and Milošević in several directions. The main advantages of the new semilocal results are: weaker sufficient convergence conditions, computationally verifiable a posteriori error estimates, and computationally verifiable sufficient conditions for all zeros of a polynomial to be simple. Furthermore, several numerical examples are provided to show some practical applications of our semilocal results. 
(C) 2017 Elsevier B.V. All rights reserved. In this paper, we extend the single-step pseudo-spectral method for second-kind Volterra integral equations with vanishing variable delays to a multistep pseudo-spectral method. We also analyze the convergence of the hp-version of the multistep pseudo-spectral method in the L^2-norm, and show that the scheme enjoys high-order accuracy and can be implemented in a stable and efficient manner. In addition, it is very suitable for long-time calculations and large step sizes. Numerical results show good agreement with the theoretical analysis. (C) 2017 Elsevier B.V. All rights reserved. Error estimates for approximations of solutions of Laplace's equation with Dirichlet, Robin or Neumann boundary value conditions are described. The solutions are represented by orthogonal series using the harmonic Steklov eigenfunctions. Error bounds for partial sums involving the lowest eigenfunctions are found. When the region is a rectangle, explicit formulae for the Steklov eigenfunctions and eigenvalues are known. These were used to find approximations for problems with known explicit solutions. Results about the accuracy of these solutions, as a function of the number of eigenfunctions used, are given. (C) 2017 Elsevier B.V. All rights reserved. For almost a century, stable Lévy random variables have been considered as statistical models for stock market data. Due to the difficulty associated with the evaluation of their probability distribution function, practical applications have been limited by the availability of accurate computational routines. In the present paper, the exact expression for the reliability of two stable Lévy random variables is analytically obtained in terms of the H-function. An approximate expression, in terms of simpler functions, is also derived in order to make the application of the results easier. Computational codes are provided to aid the evaluation of the formulas derived. 
Finally, the applicability of the new expressions is illustrated by modelling stock return data. (C) 2017 Elsevier B.V. All rights reserved. We first derive an asymptotic expansion and a high-accuracy combination formula for the derivatives, in the pointwise sense, of the isoparametric bilinear finite volume element scheme by employing the energy-embedded method on some non-uniform grids. Furthermore, we prove that the approximate derivatives are convergent of order two. Numerical examples confirm the theoretical results. (C) 2016 Elsevier B.V. All rights reserved. Fractional calculus is used to model various phenomena in nature today. The aim of this paper is to propose the shifted Legendre spectral collocation method to solve stochastic fractional integro-differential equations (SFIDEs). In this approach, we consider P-panel M-point Newton-Cotes rules with M fixed for estimating Itô integrals. The main characteristic of the presented method is that it reduces SFIDEs to a system of algebraic equations, which we can solve by Newton's method. Furthermore, the convergence analysis of the approach is considered. The method is computationally attractive, and numerical examples confirm the validity and efficiency of the proposed method. (C) 2017 Elsevier B.V. All rights reserved. A first-order linear fully discrete scheme is studied for the incompressible time-dependent Navier-Stokes equations in three-dimensional domains. This scheme is based on an incremental pressure projection method and decouples each component of the velocity and the pressure, solving, in each time step, a linear convection-diffusion problem for each component of the velocity and a Poisson-Neumann problem for the pressure. 
Using an inf-sup stable and continuous finite-element approach of order O(h) in space, unconditional optimal error estimates of order O(k + h) are deduced for the velocity and pressure (without imposing constraints on the mesh size h and the time step k). Finally, numerical experiments are performed to validate the theoretical analysis and to compare the studied scheme with other current first-order segregated schemes. (C) 2017 Elsevier B.V. All rights reserved. A singularly perturbed parabolic equation of convection-diffusion type is examined. Initially, the solution approximates a concentrated source. This causes an interior layer to form within the domain for all future times. Using a suitable transformation, a layer-adapted mesh is constructed to track the movement of the centre of the interior layer. A parameter-uniform numerical method is then defined by combining the backward Euler method and a simple upwinded finite difference operator with this layer-adapted mesh. Numerical results are presented to illustrate the theoretical error bounds established. (C) 2017 Elsevier B.V. All rights reserved. In this paper, we present a new multiscale model reduction technique for Stokes flows in heterogeneous perforated domains. The challenge in the numerical simulation of this problem lies in the fact that the solution contains many multiscale features and requires a very fine mesh to resolve all details. In order to compute the solutions efficiently, some model reduction is necessary. To obtain a reduced model, we apply the generalized multiscale finite element approach, a framework allowing systematic construction of reduced models. Based on this general framework, we first construct a local snapshot space, which contains many possible multiscale features of the solution. Using the snapshot space and a local spectral problem, we identify dominant modes in the snapshot space and use them as the multiscale basis functions. 
Our basis functions are constructed locally with non-overlapping supports, which enhances the sparsity of the resulting linear system. In order to enforce mass conservation, we propose a hybridized technique and use a Lagrange multiplier to achieve mass conservation. We mathematically analyze the stability and the convergence of the proposed method. In addition, we present some numerical examples to show the performance of the scheme. We show that, with a few basis functions per coarse region, one can obtain a solution with excellent accuracy. (C) 2017 Elsevier B.V. All rights reserved. This paper studies estimation for a class of partially linear models with longitudinal data. By combining quadratic inference functions with QR decomposition techniques, we propose a new estimation method for the parametric and nonparametric components. The resulting estimators for the parametric and nonparametric components do not affect each other, which makes the method easy to apply in practice. Under some mild conditions, we establish asymptotic properties of the resulting estimators. Simulation studies are undertaken to assess the finite sample performance of the proposed estimation procedure. (C) 2017 Elsevier B.V. All rights reserved. A unified framework is established for the a posteriori error analysis of nonconforming finite element approximations to convection-diffusion problems. Under certain conditions, the theory assures the semi-robustness of residual error estimates in the usual energy norm and their robustness in a modified norm, and applies to several nonconforming finite elements, such as the Crouzeix-Raviart triangular element, the nonconforming rotated (NR) parallelogram element of Rannacher and Turek, the constrained NR parallelogram element, etc. 
Based on the general error decomposition in different norms, we show that the key ingredients of error estimation are the existence of a bounded linear operator Π : V_h^c → V_h^nc with some elementary properties, and the estimation of the consistency error related to the particular numerical scheme. Numerical results are presented to illustrate the practical behavior of the error estimator and check the theoretical predictions. (C) 2017 Elsevier B.V. All rights reserved. We present a new higher-order weak approximation with Malliavin weights for multidimensional stochastic differential equations, extending the method in Takahashi and Yamada (2016). The estimate of the global error of the discretization is based on a sharp small-time expansion using a Malliavin calculus approach. We give explicit Malliavin weights for second-order discretization as polynomials of Brownian motions. The effectiveness is illustrated through an example in option pricing. (C) 2017 Elsevier B.V. All rights reserved. A novel conservative level set method is introduced for the approximation of two-phase incompressible fluid flows. The method builds on recent conservative level set approaches and utilizes an entropy production to construct balanced artificial diffusion and artificial anti-diffusion. The method is self-tuning, maximum principle preserving, suitable for unstructured meshes, and neither re-initialization of the level set function nor reconstruction of the interface is needed for long-time simulation. Computational results in one, two and three dimensions are presented for finite element and finite volume implementations of the method. (C) 2017 Elsevier B.V. All rights reserved. In this paper we study the center problem for Abel polynomial differential equations of the second kind. Computing the focal values and using modular arithmetic and Gröbner bases, we find the center conditions for such systems of lower degrees. (C) 2017 Elsevier B.V. All rights reserved. 
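The weak approximation discussed above targets quantities of the form E[f(X_T)] for an SDE. As a point of reference for such schemes, here is a first-order (weak) Euler-Maruyama sketch for a scalar geometric Brownian motion, where E[X_T] = X_0 e^{μT} is known in closed form; the SDE and all parameters are illustrative choices, not the paper's multidimensional setting or its Malliavin-weighted scheme:

```python
import numpy as np

def euler_maruyama_mean(x0=1.0, mu=0.05, sigma=0.2, T=1.0,
                        steps=50, paths=200_000, seed=1):
    """Weak Euler-Maruyama approximation of E[X_T] for dX = mu*X dt + sigma*X dW."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = np.full(paths, x0)
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=paths)
        x = x + mu * x * dt + sigma * x * dw   # one explicit Euler step per path
    return x.mean()

est = euler_maruyama_mean()
exact = np.exp(0.05)   # E[X_1] = x0 * e^{mu*T} for geometric Brownian motion
print(est, exact)
```

Higher-order weak schemes of the kind the abstract describes aim to shrink the discretization bias of this baseline at a comparable cost per path.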
In this paper, by constructing a new weighted norm method and analysis technique, we establish conditions for the existence and uniqueness of positive monotone solutions to a class of nonlinear Schrödinger equations in planar exterior domains. Some examples are also given to demonstrate the application of our main results. (C) 2017 Published by Elsevier B.V. We construct an alternating splitting iteration scheme for solving and preconditioning a block two-by-two linear system arising from numerical discretizations of the bidomain equations. The convergence theory of this class of splitting iteration methods is established and some useful properties of the preconditioned matrix are analyzed. The potential of this approach is illustrated by numerical experiments. (C) 2017 Elsevier B.V. All rights reserved. In this paper, a new method, namely the Galerkin-Levin method, is proposed for evaluating highly oscillatory integrals. We construct an algorithm based on the Levin and Galerkin methods: using the Levin method, the given integral is transformed into a certain ordinary differential equation, and a Galerkin method is then applied to solve the obtained ODE. The efficiency of the approach is shown by applying the procedure to some prototype examples. (C) 2017 Elsevier B.V. All rights reserved. We consider a new differential-difference operator A on the real line and study the harmonic analysis associated with this operator. We then prove various mathematical aspects of the quantitative uncertainty principles, including the Donoho-Stark uncertainty principle and variants of Heisenberg's inequalities for the Hartley transform associated with the operator A. (C) 2017 Elsevier B.V. All rights reserved. In this paper we propose a new fast Fourier transform to recover a real non-negative signal x ∈ R_+^N from its discrete Fourier transform x̂ = F_N x ∈ C^N. 
If the signal x appears to have a short support, i.e., vanishes outside a support interval of length m < N, then the algorithm has an arithmetical complexity of only Θ(m log m log(N/m)) and requires Θ(m log(N/m)) Fourier samples for this computation. In contrast to other approaches, no a priori knowledge about sparsity or support bounds for the vector x is needed. The algorithm automatically recognizes and exploits a possible short support of the vector and falls back to a usual radix-2 FFT algorithm if x has (almost) full support. The numerical stability of the proposed algorithm is shown by numerical examples. (C) 2017 Elsevier B.V. All rights reserved. This paper gives an error analysis for the method of lines (MOL) using the generalized interpolating moving least squares (GIMLS) approximation. In this study, error bounds for time-dependent linear and nonlinear second-order differential equations in d dimensions are obtained when the GIMLS method is used for approximating the spatial variables. Also, the well-known Courant-Friedrichs-Lewy (CFL) condition is derived in both cases (linear and nonlinear equations). Finally, numerical examples are reported to confirm the ability of the proposed technique. (C) 2017 Elsevier B.V. All rights reserved. In recent years, wavelet decompositions have been widely used in the computational solution of Maxwell's curl equations to resolve complex problems effectively. In this paper, we review different types of wavelets that can be considered: the Cohen-Daubechies-Feauveau biorthogonal wavelets, the orthogonal Daubechies wavelets and the Deslauriers-Dubuc interpolating wavelets. We summarize the main features of these frameworks and outline possible future work. (C) 2017 Elsevier B.V. All rights reserved. 
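The short-support recovery problem described earlier can be illustrated with the dense O(N log N) route that the adaptive algorithm falls back to: given all Fourier samples, an inverse FFT recovers x, and the support interval can be read off. A minimal sketch (the paper's sublinear algorithm itself uses only Θ(m log(N/m)) samples; the signal below is an invented example):

```python
import numpy as np

N, m = 256, 8
x = np.zeros(N)
# Short support of length m starting at index 40 (illustrative placement)
x[40:40 + m] = np.random.default_rng(2).random(m) + 0.5

xhat = np.fft.fft(x)             # the given Fourier data
x_rec = np.fft.ifft(xhat).real   # dense fallback: full inverse FFT, O(N log N)

# Entries off the support are zero up to roundoff, so a tiny threshold suffices
support = np.flatnonzero(x_rec > 1e-9)
print(support.min(), support.max())   # support interval [40, 47]
```

The point of the sublinear algorithm is to reach the same conclusion from far fewer of the entries of x̂ when m << N.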
We investigated the influence of three different high-pass (HP) and low-pass (LP) filtering conditions and of a Gaussian (GNMF) and an inverse-Gaussian (IGNMF) non-negative matrix factorization algorithm on the extraction of muscle synergies from myoelectric signals during human walking and running. To evaluate the effects of signal recording and processing on the outcomes, we analyzed the intraday and interday computation reliability. Results show that the IGNMF achieved a significantly higher reconstruction quality and on average needed one fewer synergy to sufficiently reconstruct the original signals compared to the GNMF. For both factorizations, the HP with a cut-off frequency of 250 Hz significantly reduced the number of synergies. We identified the filter configuration of fourth order, HP 50 Hz and LP 20 Hz, as the most suitable to minimize the combination of fundamental synergies, providing a higher reliability across all filtering conditions even if HP 250 Hz is excluded. Defining a fundamental synergy as a single-peaked activation pattern, for walking and running we identified five and six fundamental synergies, respectively, using both algorithms. The variability in combined synergies produced by different filtering conditions and factorization methods on the same data set suggests caution when attributing a neurophysiological nature to the combined synergies. In recent years, simulation of human electroencephalogram (EEG) data has found an important role in the medical domain and in neuropsychology. In this paper, a novel approach to the simulation of two cross-correlated EEG signals is proposed. The proposed method is based on the principles of artificial neural networks (ANN). In contrast to existing EEG data simulators, the ANN-based approach relies solely on experimentally acquired EEG data. More precisely, measured EEG data were utilized to optimize the simulator, which consisted of two ANN models (each model responsible for the generation of one EEG sequence). 
In order to acquire the EEG recordings, the measurement campaign was carried out on a healthy awake adult with no cognitive, physical or mental load. For the evaluation of the proposed approach, a comprehensive quantitative and qualitative statistical analysis was performed considering the probability distribution, correlation properties and spectral characteristics of the generated EEG processes. The obtained results clearly indicated satisfactory agreement with the measurement data. Robot-assisted training provides an effective approach to neurological injury rehabilitation. To meet the challenge of hand rehabilitation after neurological injuries, this study presents an advanced myoelectric pattern recognition scheme for real-time intention-driven control of a hand exoskeleton. The developed scheme detects and recognizes the user's intention for six different hand motions using four channels of surface electromyography (EMG) signals acquired from the forearm and hand muscles, and then drives the exoskeleton to assist the user in accomplishing the intended motion. The system was tested with eight neurologically intact subjects and two individuals with spinal cord injury (SCI). The overall control accuracy was 98.1 ± 4.9% for the neurologically intact subjects and 90.0 ± 13.9% for the SCI subjects. The total lag of the system was approximately 250 ms, including data acquisition, transmission and processing. One SCI subject also participated in training sessions in his second and third visits. Both the control accuracy and efficiency tended to improve. These results show great potential for applying the advanced myoelectric pattern recognition control of the wearable robotic hand system toward improving hand function after neurological injuries. 
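Myoelectric pattern recognition of the kind described above typically windows the multichannel EMG, extracts time-domain features, and classifies them. A toy sketch with synthetic four-channel "EMG" noise bursts, mean-absolute-value features, and a nearest-centroid rule; the activation patterns, feature choice, and classifier are illustrative stand-ins, since the abstract does not specify the paper's classifier:

```python
import numpy as np

rng = np.random.default_rng(3)

def mav(window):
    # Mean absolute value: a standard time-domain EMG feature, per channel
    return np.mean(np.abs(window), axis=-1)

def make_windows(gains, n=100, length=200):
    # Synthetic 4-channel "EMG" windows: white noise scaled per channel
    return gains[:, None] * rng.standard_normal((n, 4, length))

X0 = make_windows(np.array([1.0, 0.2, 1.0, 0.2]))   # hypothetical motion 0 pattern
X1 = make_windows(np.array([0.2, 1.0, 0.2, 1.0]))   # hypothetical motion 1 pattern

F0, F1 = mav(X0), mav(X1)                            # features, shape (100, 4)
c0, c1 = F0[:50].mean(axis=0), F1[:50].mean(axis=0)  # class centroids (training half)

def predict(f):
    # Nearest-centroid decision rule
    return 0 if np.linalg.norm(f - c0) < np.linalg.norm(f - c1) else 1

test_feats = np.vstack([F0[50:], F1[50:]])
true_labels = np.array([0] * 50 + [1] * 50)
pred = np.array([predict(f) for f in test_feats])
acc = (pred == true_labels).mean()
print(acc)
```

A real system such as the one above would add more motion classes, richer features, and a stronger classifier, and would have to meet the reported latency budget of roughly 250 ms end to end.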
Vagus nerve stimulation (VNS) is a widely used neuromodulation technique that is currently used or being investigated as therapy for a wide array of human diseases such as epilepsy, depression, Alzheimer's disease, tinnitus, inflammatory diseases, pain, heart failure and many others. Here, we report a pronounced decrease in brain and core temperature during VNS in freely moving rats. Two hours of rapid-cycle VNS (7 s on/18 s off) decreased brain temperature by around 3 °C, while standard-cycle VNS (30 s on/300 s off) was associated with a decrease of around 1 °C. Rectal temperature similarly decreased by more than 3 °C during rapid-cycle VNS. The hypothermic effect triggered by VNS was further associated with a vasodilation response in the tail, which reflects an active heat release mechanism. Despite previous evidence indicating an important role of the locus coeruleus-noradrenergic system in the therapeutic effects of VNS, lesioning this system with the noradrenergic neurotoxin DSP-4 did not attenuate the hypothermic effect. Since body and brain temperature affect most physiological processes, this finding is of substantial importance for the interpretation of several previously published VNS studies and for the future direction of research in the field. Hallucinations are elusive phenomena that have been associated with psychotic behavior but have a high prevalence in the healthy population. Some generative mechanisms of Auditory Hallucinations (AH) have been proposed in the literature, but so far empirical evidence is scarce. The most widely accepted generative mechanism hypothesis nowadays involves the faulty working of a network of brain areas responsible for emotional control, audio and language processing, and the inhibition and self-attribution of signals in the auditory cortex. 
In this paper, we consider two methods to analyze resting-state fMRI (rs-fMRI) data, in order to measure effective connections between the brain regions involved in the AH generation process. These measures are the Dynamic Causal Modeling (DCM) cross-covariance function (CCF) coefficients and the partially directed coherence (PDC) coefficients derived from Granger Causality (GC) analysis. Effective connectivity measures are treated as classifier input features, and their significance is assessed by means of cross-validation classification accuracy in a wrapper feature selection approach. Experimental results using Support Vector Machine (SVM) classifiers on an rs-fMRI dataset of schizophrenia patients with and without a history of AH confirm that the main regions identified in the AH generative mechanism hypothesis have significant effective connection values, under both DCM and PDC evaluation. Objective: In this work, we introduce the Permutation Disalignment Index (PDI), a novel nonlinear, amplitude-independent, noise-robust metric of coupling strength between time series, with the aim of applying it to electroencephalographic (EEG) signals recorded longitudinally from Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) patients. The goal is to indirectly estimate the connectivity between cortical areas, through the quantification of the coupling strength between the corresponding EEG signals, in order to find a possible match with the disease's progression. Method: PDI is first defined and tested on simulated interacting dynamic systems. PDI is then applied to real EEG recorded from 8 amnestic MCI subjects and 7 AD patients, who were longitudinally evaluated at time T0 and 3 months later (time T1). At time T1, 5 out of 8 MCI patients were still diagnosed as MCI (stable MCI) whereas the remaining 3 exhibited a conversion from MCI to AD (prodromal AD). PDI was compared to Spectral Coherence and the Dissimilarity Index. 
Results: Within the limits of the analyzed dataset's size, both Coherence and PDI proved sensitive to the conversion from MCI to AD, but only PDI proved specific. In particular, the intrasubject variability study showed that the three patients who converted to AD exhibited a significantly (p < 0.001) increased PDI (reduced coupling strength) in the delta and theta bands. Although Coherence also significantly decreased in the three converted patients in the delta and theta bands, the same behavior was detectable in one stable MCI patient in the delta band, making Coherence non-specific. From the Dissimilarity Index point of view, the converted MCI patients showed no distinctive behavior. Conclusions: PDI significantly increased, in the delta and theta bands, specifically in the MCI subjects who converted to AD. The increase of PDI reflects a reduced coupling strength among the brain areas, which is consistent with the connectivity reduction expected with AD progression. There are a number of strategies employed by companies to limit price competition, including patenting. This article investigates patent licensing restrictions as a strategy to erode price competition, using mainly information gleaned from the 1960-1962 Kefauver Committee hearings. The article deals with the pharmaceutical industry, which is one of the few sectors in which patents are essential to the development and introduction of innovations. The current study adds to a body of literature that has yielded mixed results with respect to the role of patents in this industry. The main contribution of this research is that restrictive licensing clauses, specifically field-of-use restrictions, are found to be relevant in eroding price competition in the institutional market. However, in the retail ethical market, price competition was absent even when no field-of-use restrictions were included in licensing contracts, although product competition was relevant between patented drugs. 
In this article, I describe the partial forward integration of turn-of-the-twentieth-century Boston breweries. I argue that both brewers and saloonkeepers used the fluid market in capital lending as a lever of power. An analysis of the minutes of three breweries and their loan records, covering more than ten years, reveals that saloonkeepers were often delinquent in repaying their annual loans and brewery owners only infrequently threatened to call the loans. Using the structure-conduct-performance paradigm, I suggest that the particular conditions in Boston (a limited number of saloon licenses and a geographical position that precluded long-distance shipping of beer) gave the saloonkeepers much greater leverage in the so-called "tied system." Brewers used vertical restraints but, because of obligations to British owners, did not fully forward integrate by buying saloon property, as brewers did in the United Kingdom. This article examines why wine marketers struggled to build a mass market for American wine from the 1930s to the 1950s. Wine promoters worked to both surmount and accommodate existing preferences for spirits by casting wine both as a base for cocktails and as the budget-friendly alternative to them. Previously marked as either too highbrow or too lowbrow, wine gradually lost its foreignness as merchandisers learned to sell the glamour of wine without the demands of connoisseurship. Instead of setting their sights on urban sophisticates, wine promoters aimed for young married couples and budget-conscious new homeowners, the most recent entrants into the middle class. These populist marketing approaches, I contend, sowed the seeds of the table-wine "revolution" not in bohemian enclaves and gourmet dining societies but in middle-class suburbia, where wine found its way to the American dinner table via the cocktail glass, the casserole dish, and the backyard barbecue. 
This study widens the historical perspectives of how a firm coordinates its activities to simultaneously achieve financial and political ends while using regional efforts to enact a national strategy. It examines how AT&T organized Bell Telephone Securities (BTS), a transitional subsidiary during the period 1921-1935, to broaden ownership of corporate shares and to develop political and cultural identities with Bell among small investors, particularly in the South and West. Equally significant was BTS's maintenance of liquidity of the Bell shares in the stock market, particularly in support of periodic rights offerings and debt conversions that were primary channels for increasing corporate equity. The subsidiary was eventually disbanded when its defining financial policies became unsustainable because of the radical socioeconomic and regulatory changes brought on by the Great Depression, but by this time many of its original objectives had been realized. In this paper, we investigate the performance of distributed estimation schemes in a wireless sensor network in the presence of an eavesdropper. The sensors transmit observations to the fusion center (FC); these transmissions are simultaneously overheard by the eavesdropper. Both the FC and the eavesdropper reconstruct a minimum mean-squared error estimate of the physical quantity observed. We address the problem of transmit power allocation for system performance optimization subject to a total average power constraint on the sensor(s), and a security/secrecy constraint on the eavesdropper. We mainly focus on two scenarios: 1) a single sensor with multiple transmit antennas and 2) multiple sensors with each sensor having a single transmit antenna. For each scenario, given perfect channel state information (CSI) of the FC and full or partial CSI of the eavesdropper, we derive the transmission policies for short-term and long-term cases. 
For the long-term power allocation case, when the sensor is equipped with multiple antennas, we can achieve zero information leakage in the full CSI case, and dramatically enhance the system performance by deploying the artificial noise technique for the partial CSI case. Asymptotic expressions are derived for the long-term distortion at the FC as the number of sensors or the number of antennas becomes large. In addition, we also consider the multiple-sensor multiple-antenna scenario, and simulations show that, given the same total number of transmitting antennas, the performance of the multiple-antenna sensor network is superior to that of the multiple-sensor single-antenna network. Doppler distortion degrades the performance of the reiterative minimum mean-squared-error (RMMSE) adaptive pulse compression (APC) algorithm. This paper presents a new approach for robust RMMSE Doppler compensation through the use of covariance matrix tapers (RMMSE-CMT). RMMSE-CMT delivers performance comparable to the existing approaches such as Doppler-compensated APC (DC-APC). RMMSE-CMT is simple to implement and offers considerable computational savings over DC-APC. RMMSE-CMT also provides improved Doppler robustness in the fast-APC algorithm. Self-localization and formation control tasks are considered when each agent in a multiagent formation observes its neighbors but does not communicate. Each agent is restricted to a predefined motion type on a 2-D plane and collects bearing-only measurements over a time interval to localize neighboring agents. The localization process is used by a three-agent formation to achieve velocity consensus combined with formation shape control. Simulations are provided and noisy bearing measurements are investigated. The combination of celestial measurement and ground Doppler measurement has been successfully used for orbit determination in many deep-space exploration missions. 
However, one of the inherent drawbacks of ground tracking is the long communication delay. Using the Doppler measurement from the sun instead of the ground station is a feasible strategy for real-time navigation, but the accuracy of this Doppler measurement is affected by the frequency fluctuation of the solar spectrum. To solve this problem, a differential Doppler measurement-aided celestial navigation method is proposed in this paper, which uses the differential Doppler measurement to reduce the influence caused by instability of the solar spectrum frequency and improve the navigation accuracy. Simulations demonstrate the feasibility and effectiveness of this method. Currently, most Landsat satellites are deployed in low Earth orbit (LEO) to obtain high-resolution data of the Earth surface and atmosphere. However, the return channels of LEO satellites are intrinsically unstable and discontinuous, owing to the high orbital velocity, long revisit interval, and limited ranges of ground-based radar receivers. Space-based information networks, in which data can be delivered by the cooperative transmission of relay satellites, can greatly expand the spatial transport connection ranges of LEO satellites. However, relay satellites deployed in orbits of different altitudes exhibit distinct performance when participating in forwarding. In this paper, we consider the cooperative mechanism of relay satellites deployed in the geosynchronous orbit (GEO) and LEO according to their different transport performances and orbital characteristics. To take full advantage of the transmission resource of different kinds of cooperative relays, we propose a multiple access and bandwidth resource allocation strategy for the GEO relay, in which the relay can receive and transmit simultaneously according to channel characteristics of space-based systems. 
Moreover, a time-slot allocation strategy that is based on slotted time division multiple access is introduced for the system with LEO relays. Based on the queueing theoretic formulation, the stability of the proposed systems and protocols is analyzed and the maximum stable throughput region is derived as well, which provides guidance for the design of optimal system control. Simulation results exhibit multiple factors that affect the stable throughput and verify the theoretical analysis. In this paper, an approach for the design and analysis of coherent constant false alarm rate (CFAR) detectors in clutter and interference with a Kronecker covariance structure is described. In the two-dimensional example considered, the interference-plus-noise matrix X ∈ C^(N×L) is modeled by a doubly correlated, zero-mean multivariate complex Gaussian distribution described by two covariance matrices C and R that are unknown to the receiver. The concatenated columns of X have a structured covariance matrix Σ given by Σ = R* ⊗ C. In the approach described, an estimate of R is used to "prewhiten" and match filter all the rows of both the training data matrices and the test data matrix. The processing enables one to reduce the detection problem to a one-dimensional case that can be handled by any one of several adaptive detection algorithms. The proposed algorithm for the doubly correlated clutter is analyzed to show that the detection performance is determined by two statistically independent signal-to-interference-plus-noise loss factors, both of which have complex beta distributions. Sample results show that the proposed approach requires a number of training samples that is a multiple of N + L, while an adaptive detection algorithm that does not explicitly use the Kronecker constraint on the covariance structure requires a number of training samples that is a multiple of N × L for comparable detection performance. 
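The row-prewhitening step in the Kronecker-structured CFAR approach above can be illustrated with a small numerical sketch. This is not the paper's algorithm (which estimates R from training data and works with complex clutter); it is a real-valued toy with invented dimensions showing why whitening the rows with R^(-1/2) reduces the problem to a one-dimensional one governed by C alone:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, K = 4, 3, 2000  # illustrative dimensions and sample count (hypothetical)

# Ground-truth row/column covariances C (N x N) and R (L x L).
A = rng.standard_normal((N, N)); C = A @ A.T + N * np.eye(N)
B = rng.standard_normal((L, L)); R = B @ B.T + L * np.eye(L)

# Doubly correlated Gaussian data: vec(X) has covariance kron(R, C)
# (a real-valued stand-in for the complex clutter model).
Lc, Lr = np.linalg.cholesky(C), np.linalg.cholesky(R)
X = [Lc @ rng.standard_normal((N, L)) @ Lr.T for _ in range(K)]

# "Prewhiten" the rows with R^(-1/2): afterwards the columns are i.i.d.
# with covariance C, so any 1-D adaptive detector can take over.
W = np.linalg.inv(Lr).T
Xw = [x @ W for x in X]

# The empirical column covariance of the whitened data approaches C.
S = sum(xw @ xw.T for xw in Xw) / (K * L)
print(np.max(np.abs(S - C)) / np.max(np.abs(C)) < 0.1)
```

Here the known R stands in for the estimate used in the actual method; the point is only that the Kronecker factorization lets one matrix be removed row-wise before one-dimensional detection.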
This paper discusses how to solve a filtering problem for a class of continuous nonlinear time-varying systems via the Duncan-Mortensen-Zakai (DMZ) equation. In this paper, the original DMZ equation is changed into the Kolmogorov forward equation (KFE) by exponential transformations in each time interval, and then, under some assumptions, the KFE can be transformed into a time-varying Schrodinger equation, which can be solved explicitly. The novelty of this paper lies in how to transform the KFE into the Schrodinger equation. As a direct application, the results of the paper "Nonlinear filtering and time varying Schrodinger equation" are extended for time-varying Yau systems. This paper investigates the selection of different combinations of features at different multistatic radar nodes, depending on scenario parameters, such as aspect angle to the target and signal-to-noise ratio, and radar parameters, such as dwell time, polarization, and frequency band. Two sets of experimental data collected with the multistatic radar system NetRAD are analyzed for two separate problems, namely the classification of unarmed versus potentially armed multiple personnel, and the personnel recognition of individuals based on walking gait. The results show that the overall classification accuracy can be significantly improved by taking into account feature diversity at each radar node depending on the environmental parameters and target behavior, in comparison with the conventional approach of selecting the same features for all nodes. This paper details the design of the precoder of a MIMO radar spectrally coexistent with a MIMO cellular system. Spectrum sharing with zero or minimal interference is achieved by using, respectively, the conventional switched null space projection (SNSP) or the newly proposed switched small singular value space projection (SSSVSP). 
Loss in radar target localization capability due to precoding can be compensated to some extent by using SSSVSP instead of SNSP, but increasing the number of radar antenna elements is more effective. This paper describes the non-coherent target detection performance of an airborne surface surveillance radar in the presence of medium grazing angle sea clutter. In the absence of frequency agility, the temporal correlation of the sea clutter can be significant and, if it is not accounted for in the clutter model, the required signal-to-interference ratio for a given probability of detection will be incorrect by several decibels, resulting in overestimated performance. This paper describes a robust method for calculating the detection probability for both K and Pareto compound sea-clutter distributions. Empirical models of the amplitude distribution and the speckle correlation are used to determine the expected detection performance given different collection geometries and environmental conditions, with the output used to determine the minimum detectable target radar cross section in a detection scenario. Maintaining the attitude of a spacecraft precisely aligned to a given orientation is crucial for commercial and scientific space missions. The problem becomes challenging when on/off thrusters are employed instead of momentum exchange devices due to, e.g., wheel failures or power limitations. In this case, the attitude control system must enforce an oscillating motion about the setpoint, so as to minimize the switching frequency of the actuators, while guaranteeing a pre-defined pointing accuracy and rejecting the external disturbances. This paper develops a three-axis attitude control scheme for this problem, accounting for the limitations imposed by the thruster technology. 
The proposed technique is able to track both the period and the phase of periodic oscillations along the rotational axes, which is instrumental in minimizing the switching frequency in the presence of input coupling. Two simulation case studies of a geostationary mission and a low Earth orbit mission are reported, showing that the proposed controller can effectively deal with both constant and time-varying disturbance torques. The computation of measurement (or missed detection) to target association probabilities is an essential part of many tracking algorithms. This paper generalizes previous work to show the relationship between measurement assignment and the matrix permanent given the possibility of missed detections and clutter. A simple expression is given for the target-measurement association probabilities in terms of the matrix permanent, and efficient algorithms for computing the matrix permanent are summarized. When radar and communication systems are colocated and operating simultaneously in the same frequency band, interference can be addressed by designing the systems to share resources. This paper develops a framework for cooperative operation of single-input single-output bistatic radar and wireless communication systems. We adopt a low-complexity linear minimum mean square error (LMMSE) optimal pilot-symbol-aided modulation scheme, and show the achievability region of the joint radar-communication system designed for simultaneous operation in a wireless channel that is both frequency- and time-selective. Since the LMMSE estimate requires accurate knowledge of channel statistics, we give a lower bound on performance using least-squares estimation that operates purely on the received signal. Finally, to better match power levels between systems, we compare the performance using optimal training sequences with a suboptimal scheme using Barker sequences. Missing samples within synthetic aperture radar data result in image distortions. 
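For the permanent-based association probabilities discussed above, a compact way to evaluate the matrix permanent is Ryser's inclusion-exclusion formula, an O(2^n · n) algorithm; the brute-force sum over permutations is included only as a correctness check (a generic sketch, not the specific algorithms summarized in that paper):

```python
import numpy as np
from itertools import permutations

def permanent_ryser(A):
    """Matrix permanent via Ryser's inclusion-exclusion formula, O(2^n * n)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for mask in range(1, 1 << n):          # all nonempty column subsets
        cols = [j for j in range(n) if (mask >> j) & 1]
        total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

def permanent_naive(A):
    """Brute-force permanent (sum over all permutations), for verification."""
    n = len(A)
    return sum(np.prod([A[i][p[i]] for i in range(n)]) for p in permutations(range(n)))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent_ryser(A))  # 1*4 + 2*3 = 10.0
```

Unlike the determinant, the permanent has no sign alternation and no known polynomial-time algorithm, which is why efficient exact formulas such as Ryser's matter in association-probability computations.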
For coherent data products, such as coherent change detection and interferometric processing, the image distortion can be devastating to these second-order products, resulting in missed detections and inaccurate height maps. Previous approaches to repair the coherent data products focus on reconstructing the missing data samples. This paper demonstrates that reconstruction is not necessary to restore the quality of the coherent data products. In low signal-to-noise ratio or heavy clutter environments, target track initialization is a challenging task. The maximum likelihood probabilistic data association (ML-PDA) algorithm has been demonstrated to be effective in dealing with this issue. In practical scenarios, multiple signals from one target via different propagation paths can be detected in a scan. Signals from different propagation paths convey useful information and can improve track initialization performance. However, the conventional ML-PDA algorithm assumes that a target can generate at most one detection per scan. That is, it cannot handle multiple target-originated measurements per scan correctly, nor take full advantage of the additional information contained in those seemingly extraneous returns. In this paper, a multiple-detection ML-PDA (MD-ML-PDA) estimator is proposed to rectify this shortcoming. The proposed estimator exploits the additional information available in all measurements by considering the combinatorial events of association that are formed from MD patterns. It is capable of handling the possibility of multiple target-originated measurements per scan with less-than-unity detection probability for various paths in the presence of clutter. The proposed MD-ML-PDA estimator is applied to a simulated sonar target tracking scenario. The same algorithm can be used on other angle-only tracking problems as well. 
Results show that MD-ML-PDA can effectively handle multiple target-originated measurements and yield improved track initialization performance over the traditional single-detection ML-PDA. The Cramer-Rao lower bound for MD track initialization is also derived. This paper considers the detection of fluctuating targets via dynamic-programming-based track-before-detect (DP-TBD) in radar systems. Swerling targets of types 0, 1, and 3 are considered. DP-TBD usually integrates one of two scoring functions: the squared amplitude or the logarithm of the envelope likelihood ratio (LELR). Thus, only amplitude information is taken into account, despite the fact that the measurements are often complex valued. In this paper, the phase information is used in the integration process of DP-TBD to enhance radar detection performance. More precisely, the logarithm of the complex-measurement-based likelihood ratio (LCLR) is used, taking the place of the squared amplitude or the LELR. First, we derive the expressions for the LELR and the LCLR for the three Swerling types. Then, to reduce the complexity of computing the LELR and LCLR, we also propose efficient yet accurate approximations for the LELR and the LCLR. Simulations are used to assess the performance of different DP-TBD strategies. The design of sensor systems to detect an emitter at a random location with an unknown distribution is difficult because measurements are conditionally dependent and the hypothesis test is composite. This paper shows that, when sensors are at deterministic locations, these problems can be circumvented and a conservative design is achieved by adopting a least favorable distribution for the emitter location. An algorithm to achieve the design and the application to generalized likelihood ratio test detectors are presented. ESM systems need to use sophisticated signal processing, such as time-frequency representation techniques, for the interception of nonstationary LPI radar signals. 
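The dynamic-programming integration at the heart of the DP-TBD work above can be sketched in a stripped-down one-dimensional form: squared-amplitude scoring, at most one cell of target motion per scan. All scenario parameters here (grid size, path, target amplitude) are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# K scans over a 1-D grid of range cells; the target drifts <= 1 cell/scan.
K, cells = 10, 64
noise = rng.standard_normal((K, cells)) + 1j * rng.standard_normal((K, cells))
z = np.abs(noise)
path = 20 + np.arange(K) // 2            # hypothetical true target path
z[np.arange(K), path] += 3.0             # weak target amplitude added to noise

# Dynamic programming: integrate squared amplitude along feasible paths.
score = z[0] ** 2
back = np.zeros((K, cells), dtype=int)
for k in range(1, K):
    padded = np.pad(score, 1, constant_values=-np.inf)
    cand = np.stack([padded[:-2], padded[1:-1], padded[2:]])  # from left/stay/right
    choice = cand.argmax(axis=0)
    back[k] = np.arange(cells) + choice - 1
    score = cand.max(axis=0) + z[k] ** 2

# Declare a detection at the best-scoring end cell, then backtrack the path.
end = int(score.argmax())
track = [end]
for k in range(K - 1, 0, -1):
    track.append(int(back[k, track[-1]]))
track.reverse()
print(track)
```

Replacing the squared-amplitude score with an LELR or LCLR score changes only the per-cell increment, not the recursion; that substitution is exactly where the phase information enters in the approach described above.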
In this study, an adaptive filtering technique using an ambiguity-domain elliptical Gaussian kernel is proposed to increase the readability of pseudo-Wigner-Ville distribution-based representations for LPI waveform parameter extraction purposes. The complexity and information content of the outputs obtained by the proposed method are evaluated by objective criteria, such as ratio of norms, Renyi entropy, and Jubisa measure. The results quantify efficient filtering performance under severe SNR conditions with lower computational complexity. In frequency-modulation (FM)-based passive radar, the strong side peaks randomly appearing in the ambiguity function will generate false alarms in target detection. To mitigate this side peak interference, this paper starts with a detailed analysis of the structure and ambiguity function of the FM stereo signal. Then, it expounds the formation and characteristics of side peaks, together with a side peak identification method, which identifies and discards the false detections at the same range and Doppler caused by side peaks. The performance analysis, conducted using both simulated data and real recorded datasets, proves that the proposed method can eliminate the false targets caused by side peaks, thus improving the detection performance in FM-based passive radar. Many modern radar systems are overcoming the need for high-power transmitters by utilizing low peak-power, high duty-cycle waveforms, making noncooperative detection methods by traditional electronic surveillance a difficult task. This technological difficulty is driving a need for computationally tractable detection and characterization algorithms. 
Here, a practical method for detecting and fully characterizing an arbitrary number of low-power linear frequency modulated continuous wave (LFMCW) radar signals is achieved by dividing the time-domain signal into contiguous segments and treating each signal segment as a sum of harmonic components corrupted by noise with an unknown, time-varying power spectral density. This method is developed analytically and evaluated experimentally, revealing that the practicality of the method comes at the expense of a loss in estimation accuracy when compared to the Cramer-Rao lower bound. Experimental results indicate that the parameters of two simultaneous LFMCW signals can be estimated to within 10% of their true values with probability greater than 90% when input signal-to-noise ratios are -10 dB and above with a 25 MHz bandwidth receiver. Interference from the proliferation of wind turbines is becoming a problem for ground-based medium-to-high pulse repetition frequency (PRF) pulsed-Doppler air surveillance radars. This paper demonstrates that, by randomizing some parameters of the transmit waveform from pulse to pulse, a filter can be designed to suppress both the wind turbine interference and the ground clutter. Furthermore, a single coherent processing interval (CPI) is sufficient to make an unambiguous range measurement. Therefore, multiple CPIs are not needed for range disambiguation, as in staggered-PRF techniques. First, we consider a waveform with fixed PRF but diverse (random) initial phase applied to each transmit pulse. Second, we consider a waveform with diverse (random) PRF. The theoretical results are validated through simulations and analysis of experimental data. Clutter-plus-interference suppression and range disambiguation in a single CPI may be attractive to the Federal Aviation Administration and coastal radars. 
Noise radars possess several advantageous properties, including low probability of detection and identification, especially when working in continuous-wave mode. One of the main drawbacks of such radars is the occurrence of the masking effect, in which a weak target echo is masked by the sidelobes of a strong target echo. The sidelobes of the ambiguity function emerge at a level equal to the time-bandwidth product below the main peak and are spread across the entire range-Doppler plane. While most approaches to mitigate the masking effect focus on the processing of the received signal, it is possible to move the computational burden to the waveform design phase. This paper describes a filter-based method of creating noise-like waveforms that have very low sidelobes in the area of certain range and Doppler shifts. Both theoretical analysis and measurement results are presented and compared with the lattice filter method of masking effect cancellation. Adaptive algorithms having the capacity to perform real-time optimization in response to a dynamic environment will be necessary for the development of a cognitive radar technology. This paper details an amplifier-in-the-loop algorithm to synthesize radar waveforms which are optimized for desired ambiguity function characteristics in a monostatic radar system, with constraints on the peak-to-average power ratio and spectrum of the waveform. The algorithm uses alternating projections to search for a waveform whose ambiguity function best approximates the goal for a volume-based constraint in desired regions of the range-Doppler plane. Results are shown for both simulation and measurement cases, both with and without the effects of a nonlinear amplifier. This waveform synthesis method is expected to be useful in real-time adaptive radar systems. In this paper, a multisource tracking technique is proposed using a sparse large-aperture array of passive sensors of known geometry. 
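The sidelobe behavior behind the masking effect described above can be observed directly in a discrete ambiguity surface of a random-phase noise waveform: a single main peak at the origin, with a sidelobe floor well below it spread over the range-Doppler plane. This is a minimal sketch with invented parameters, not the filter-based design method of the paper:

```python
import numpy as np

def ambiguity(s, delays, dopplers):
    """Magnitude of the discrete (circular-lag) ambiguity function of s."""
    N = len(s)
    n = np.arange(N)
    A = np.empty((len(delays), len(dopplers)))
    for i, tau in enumerate(delays):
        lagged = np.roll(s, tau)                      # s[n - tau], circular
        for k, f in enumerate(dopplers):
            A[i, k] = abs(np.sum(s * np.conj(lagged) * np.exp(-2j * np.pi * f * n / N)))
    return A

# Continuous-wave noise waveform: unit modulus, random phase.
rng = np.random.default_rng(3)
N = 128
s = np.exp(2j * np.pi * rng.random(N))

delays, dopplers = range(-8, 9), range(-8, 9)
A = ambiguity(s, delays, dopplers)

peak = np.unravel_index(A.argmax(), A.shape)
print(peak == (8, 8))   # main peak at zero delay, zero Doppler
sidelobe = np.partition(A.ravel(), -2)[-2]
print(A.max() / sidelobe > 3)   # sidelobes well below the main peak of N = 128
```

The main peak equals N while the sidelobes scale roughly as the square root of N, which is the "time-bandwidth product below the main peak" behavior the abstract refers to.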
First, a novel spherical-spatiotemporal-state-space model is introduced incorporating target ranges, directions, and Doppler effects in conjunction with the array geometry. Subsequently, this array of sensors is integrated with an extended Kalman filter (EKF), defined as the arrayed EKF, to track the trajectory of multiple mobile sources. In addition, a recursive lower bound on the performance of the proposed tracking method is obtained based on the posterior Cramer-Rao bound. Computer simulation studies show that the proposed approach can track the locations of sources, as these move in space, with a very high accuracy. This paper presents a moving target focusing method that allows moving targets to be focused in complex synthetic aperture radar (SAR) images without raw data. The method is developed on the range migration algorithm, where focusing a moving target is an interpolation step in the wave domain. Simulated results are provided to illustrate the proposed method, whereas experimental results show its practicality. The method can be flexibly applied from a small area to the whole SAR scene. In this paper, we study the level set estimation of a spatial-temporally correlated random field by using a small number of spatially distributed sensors. The level sets of a random field are defined as regions where data values exceed a certain threshold. The identification of the boundaries of such sets is an important theoretical problem with a wide range of applications such as spectrum sensing, urban sensing, and environmental monitoring. We propose a new active sparse sensing and inference scheme, which can achieve rapid and accurate extraction of level sets in a large random field by using a small number of data samples strategically and sparsely selected from the field. A Gaussian process (GP) prior model is used to capture the spatial-temporal correlations inherent in the random field. 
It is first shown that the optimal level set estimation can be achieved by performing a GP regression with all data samples and then thresholding the regression results. We then investigate the active sparse sensing scheme, where a central controller dynamically selects a small number of sensing locations according to the information revealed from past measurements, with the objective of minimizing the expected level set estimation error probability. The expected estimation error probability is explicitly expressed as a function of the selected sensing locations, and the results are used to formulate the optimal sensing location selection problem as a combinatorial problem. Two low-complexity greedy algorithms are developed by using analytical upper bounds of the expected estimation error probability. Both simulation and experiment results demonstrate that the greedy algorithms can achieve significant performance gains over baseline passive sensing algorithms and the GP upper confidence bound level set estimation algorithm. This paper investigates the feasibility of the distance measuring equipment (DME)-based alternative position, navigation, and timing architecture using recent advances in DME/N (normal) signal processing techniques: an improved DME/N pulse waveform and a learning-based multipath mitigation algorithm. This paper evaluates the achievable DME range accuracy by using the advanced signal processing techniques, and provides an analysis of the required augmentation of ground DME stations in a selected test region in the contiguous United States. In this paper, we consider the problem of distributed target detection with subspace signal mismatch. Precisely, the echoes reflected by the distributed target all come from the same direction, and the signal steering vector is assumed to lie in a preassigned subspace. However, the actual signal steering vector does not completely belong to the presumed subspace, resulting in subspace signal mismatch. 
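The "GP regression, then threshold" level set estimator described above can be sketched in a toy one-dimensional setting. Everything here (RBF kernel, lengthscale, sinusoidal field, sensing locations) is an invented illustration of the principle, not the paper's active sensing scheme:

```python
import numpy as np

def rbf(a, b, ell=0.15):
    """Squared-exponential kernel between 1-D location sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(1)

grid = np.linspace(0.0, 1.0, 200)
field = np.sin(2 * np.pi * grid)       # true spatial field (toy example)
level = 0.0                            # threshold defining the level set

# Sparse noisy samples at a handful of sensing locations.
x = rng.uniform(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(15)

# GP posterior mean on the grid, then threshold it to estimate the level set.
K = rbf(x, x) + 0.05**2 * np.eye(len(x))
mean = rbf(grid, x) @ np.linalg.solve(K, y)
est = mean > level

accuracy = np.mean(est == (field > level))
print(f"grid accuracy: {accuracy:.2f}")
```

The active part of the scheme would then choose the next sensing location to shrink the expected labeling error; here the locations are simply drawn at random to keep the sketch short.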
We focus on the design of selective detectors, which have good capabilities of mismatched signal rejection. To this end, we add a fictitious signal under the null hypothesis, which is orthogonal to the nominal signal subspace in the whitened or quasi-whitened subspace. According to the generalized likelihood ratio test criterion, we devise two effective detectors. Compared with the existing ones, the proposed detectors exhibit improved selectivity capabilities for signal mismatch at the price of a slight performance loss in the case of no signal mismatch. Moreover, for the case of a point-like target, which is a special case of a distributed target, we derive analytical expressions for the probabilities of detection and false alarm. Simulation results illustrate the superiority of the proposed detectors and confirm our theoretical results. In this paper, improved signal processing techniques are developed for the analysis and classification of low probability of intercept (LPI) radar waveforms. The intercepted LPI radar signals are classified based on the type of pulse compression waveform. They are classified as linear frequency modulation, nonlinear frequency modulation, binary frequency shift keying, polyphase Barker, polyphase P1, P2, P3, P4, and Frank codes. The classification approach is based on the parameters measured from the preprocessed radar signal intercepted by an electronic support (ES) or electronic intelligence (ELINT) system. First, the signal embedded within the noise is estimated using the Wigner-Ville distribution to improve the signal-to-noise ratio (SNR). Next, features are extracted using the time-domain and frequency-domain techniques. Furthermore, parameters measured from the fractional Fourier transform are used for the classification. Such techniques are required in various systems such as ES, electronic attack, radar emitter identification, and multiple-input multiple-output (MIMO) radar applications. 
Extensive simulations are carried out with different LPI radar modulated waveforms corrupted by additive white Gaussian noise at SNRs down to -15 dB and impulse noise with 90% noise density. The proposed algorithm outperforms existing classification techniques and can be used in strategic environments. This paper examines the target detection problem in a multistatic passive radar with one noncooperative illuminator of opportunity (IO) and multiple distributed receivers. Specifically, we consider how to address the direct-path interference (DPI), which refers to the direct transmission from the IO to a receiver, to enhance passive detection performance. The DPI is in general much stronger (by many tens to even over a hundred dB) than the target echo. It is standard for a passive radar to apply some kind of interference cancellation by using, e.g., an adaptive array, to reduce the DPI. However, due to practical limitations of such techniques and the significant difference in strength between the DPI and target signal, the residual DPI after cancellation is often at a nonnegligible level. Unlike most existing passive detectors, which ignore such residual DPI, we explicitly consider its effect and develop two new detectors for the conditions when the noise level is known and, respectively, when it is unknown. Another distinction from existing solutions is that the proposed detectors exploit the correlation of the IO waveform for passive detection. The proposed detectors are developed within the generalized likelihood ratio test (GLRT) framework, which involves nonlinear estimation that is solved using the expectation-maximization algorithm. Numerical results are presented to illustrate the performance of the proposed methods and several well-known passive detectors. This paper proposes two methods to mitigate the zero bias of noncommensurate-sampling-based code-tracking loops to significantly improve the pseudonoise ranging accuracy. 
For the compensation-based method, a set of algorithms is developed to directly calculate the zero bias, which is then removed from the range measurement. For the compensation-free method, a special category of sampling ratios is selected such that the zero bias is self-cancelled. Simulations validate the effectiveness of the proposed methods. In a vision-aided autonomous system, it is crucial to have a consistent covariance matrix for the navigation solution. Overconfidence in the covariance could lead to significant deviation of the navigation solution and failures of autonomous missions, especially in a global positioning system-denied environment. The consistency of a map-based vision-aided navigation system is investigated in this paper. As has been shown in numerous previous works, the traditional extended Kalman filter (EKF) approach to navigation produces significantly inconsistent (overconfident) covariance estimates. Covariance intersection and adjusted EKF approaches can both help to resolve the overconfidence problem. We present both simulation-based and real-world results for each of these approaches and investigate the consistency of their solutions. This paper proposes a new scheme for improving the classification performance of inverse synthetic aperture radar (ISAR) images. The proposed scheme utilizes the trace transform to gather abundant information from an ISAR image, regardless of the spatial distribution of the target response in the ISAR image. In simulations using ISAR images with various spatial distributions, the proposed scheme substantially improved the classification performance compared with existing methods, which are highly vulnerable to spatial variation. In this paper, an effective synthetic aperture radar ground moving target indication (SAR-GMTI) method based on cross-track interferometry only is proposed, combining squint-looking geometry with a 2-D cruciform forward-looking array. 
First, for a multiple-channel SAR with an arbitrary array configuration and an arbitrary looking angle, the 2-D response of a ground moving target, including its phase, amplitude, and position, is derived in the complex image domain via the back-projection algorithm. Second, an interferometry phase sensitivity function is defined to evaluate the interferometry properties of an arbitrary baseline with a closed-form expression, from which the GMTI and altitude-estimating abilities of the conventional along-track and cross-track baselines can be verified. In addition, it is shown that the interferometry phases of the upright cross-track and straightforward along-track baselines are sensitive only to the target's height and velocity, respectively. Furthermore, the interferometry phase of the level cross-track baseline can be sensitive to both the target's radial velocity and its height when the squint-looking angle is large. Therefore, to resolve the coupled velocity and height phases of the moving target over real fluctuating terrain, a cruciform cross-track baseline with five receiving channels is further proposed for a squint-looking SAR, by which the height, radial velocity, and location of both fixed and moving targets can be estimated simultaneously. This means that SAR-GMTI can be effectively accomplished by cross-track interferometry alone, without an along-track baseline. Finally, results of numerical experiments are provided to demonstrate the effectiveness of the proposed method. This paper addresses the problem of reliability and makespan optimization of hardware task graphs in reconfigurable platforms by applying fault tolerance (FT) techniques to the running tasks based on exploration of the Pareto set of solutions. In the presented solution, in contrast to existing approaches in the literature, task graph scheduling, task parallelism, reconfiguration delay, and FT requirements are taken into account together. 
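Since the approach above revolves around exploring a Pareto set of reliability/makespan trade-offs, the core non-domination test is worth spelling out; the helper below is our own generic sketch (minimization in every objective), not the paper's algorithm:

```python
def pareto_front(points):
    """Return the non-dominated points, assuming every objective is minimized.
    A point p is dominated if some other point q is no worse in every
    objective and strictly better in at least one."""
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For the paper's setting the two objectives might be, e.g., makespan and the reciprocal of mean time to failure, so that both are minimized.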
This paper first presents a model for hardware task graphs, task prefetch and scheduling, the reconfigurable computer, and a fault model for reliability. Then, a mathematical model of an integer nonlinear multi-objective optimization problem is presented for improving the FT of hardware task graphs scheduled on partially reconfigurable platforms. Experimental results show the positive impact of choosing the FT techniques selected by the proposed solution, named the Pareto-based approach. In comparison to non-fault-tolerant designs or other state-of-the-art FT approaches, about 850% mean time to failure (MTTF) improvement is achieved without increasing makespan, and makespan is improved by 25% without degrading reliability. In addition, experiments in fault-varying environments have demonstrated that the presented approach outperforms existing state-of-the-art adaptive FT techniques in terms of both MTTF and makespan. An original 3-D radar imaging system is presented for radar cross section (RCS) analysis, i.e., to identify and characterize the radar backscattering components of an object. Based on a 3-D spherical experimental setup, in which the residual echo signal is more efficiently reduced in the useful zone, it is especially well adapted to low-RCS analysis. Due to a roll rotation, the electric field direction varies concentrically while the scattered data are collected. To overcome this issue, a specific 3-D radar imaging algorithm is developed. Based on fast regularized inversion, more precisely the minimum-norm least-squares solution, it determines, from a single-pass collection, three large 3-D scatterer maps at once, corresponding to HH, VV, and HV polarizations at emission and reception. The algorithm is applied successfully to real X-band datasets collected in the accurate 3-D spherical experimental layout, from a metallic cone with patches and an arrow shape. 
It is compared with the conventional 3-D polar format algorithm, in which the scatterer information is irretrievably mixed up. This paper presents an improved multiobjective particle swarm optimization algorithm for magnetometer calibration in spacecraft. The proposed algorithm combines scalar checking with a novel rotation-axis-fitting objective and avoids the requirement for perfectly aligned measurement axes. The improved approach is designed to solve for 12 calibration parameters based on knowledge of the magnetometer rotation axis direction. The performance of the novel algorithm is demonstrated with simulations and experimental data from the Aalto-1 nanosatellite. This paper deals with a symbiotic radar, defined as a passive radar that is an integral part of a communication network. The symbiotic radar is integrated with an IEEE 802.22 Wireless Regional Area Network and linked with the base station. It can work as a purely passive radar or, and this is the novelty of the system, can use the base station to suggest the best customer premises equipment to be scheduled for transmission to improve tracking performance. This paper defines a cognitive passive tracking algorithm that exploits the feedback information contained in the target state prediction to improve performance while preserving the communication capabilities of the complete network. In 1973, Goldstein introduced his log-t detector, which attained the constant false alarm rate property for a class of clutter models including the lognormal and Weibull distributions. This paper shows that under the assumption of Pareto-distributed clutter, false alarm regulation can also be attained. A modification of the log-t detector is shown to enhance its performance in terms of managing interference. A jump Markov system (JMS) multiple-model solution is presented for tracking highly dynamic targets using the Vo-Vo filter (also known as the generalized labeled multi-Bernoulli or GLMB filter). 
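Returning to Goldstein's log-t detector discussed above: the statistic is short enough to sketch (our notation). It normalizes the log of the cell under test by the sample mean and standard deviation of the logs of the reference cells; because the statistic is invariant to a common scaling of all cells, the false alarm rate does not depend on the clutter scale, which is the essence of its CFAR behaviour.

```python
import numpy as np

def log_t_statistic(cell, reference):
    """Goldstein's log-t detector statistic: compare against a fixed
    threshold to declare a detection. Scale invariance of the statistic
    underlies the constant false alarm rate property over scale families
    of clutter (lognormal, Weibull, and, per the paper above, Pareto)."""
    z = np.log(np.asarray(reference, dtype=float))
    return (np.log(cell) - z.mean()) / z.std(ddof=1)
```

The threshold itself would be set from the desired false alarm probability for the assumed clutter family.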
The closed-form solution is derived for the Gaussian mixture implementation of the multiple-model Vo-Vo filter. The performance of the proposed method is examined and compared with the state of the art in challenging scenarios involving numerous appearing and disappearing targets with randomly time-varying dynamics. The results demonstrate that in such applications, the proposed method can outperform the competing techniques in terms of tracking accuracy, and is highly robust to variations in application and filter parameters such as clutter rate and model transition probabilities in the JMS. This paper investigates the formation control problem of multiple unmanned aerial vehicles (UAVs) with limited communication in a known and realistic obstacle-laden environment. In order to deal with the limited communication constraints, the leader-follower strategy and the virtual leader strategy are integrated into an optimal control framework to formulate this formation control problem. This combined formation framework is achieved by integrating a redefined directed graph and a proposed information vector. For more practical applications, an obstacle/collision avoidance strategy is achieved by innovatively constructing a non-quadratic cost function using a virtual flow-field approach. The proposed optimal control laws, which derive from local rather than global information, are proved to guarantee the stability of the closed-loop system by an inverse optimal control approach. The simulation results demonstrate the effectiveness of the formation flight of multiple UAVs with limited communication in an obstacle-laden environment. Blade-disk-drum (BDD) assemblies are commonly used in aeroengine rotors. This paper investigates the vibration characteristics of the drum in the BDD assembly. In the analysis, a multilevel modeling method based on ANSYS software is proposed. 
Free vibrations of the single drum, the disk-drum assembly, and the BDD assembly are calculated and carefully examined to identify the special modal shapes and coupling effects. Then, according to the working environment, equivalent aerodynamic loads are imposed on the blades and the drum, respectively, to explore the vibration behavior of the drum in the BDD assembly. Results reveal that vibration of the drum, which may result in rub-impact and/or fatigue failure, can be induced both by coupled vibration resulting from the aerodynamic loads on the blades and drum, and by the drum-dominant mode triggered by the aerodynamic loads on the drum. A cable-driven parallel mechanism is a special kind of parallel robot in which traditional rigid links are replaced by actuated cables. This provides a new suspension method for wind tunnel tests, in which an aircraft model is driven by a number of parallel cables to achieve 6-DOF motion. The workspace of such a cable robot is limited by geometrical and unilateral force constraints, the investigation of which is important for applications requiring a large flight space. This paper focuses on the workspace analysis and verification of a redundantly constrained 6-DOF cable-driven parallel suspension system. Based on the system motion and dynamic equations, the geometrical interference (either intersection between two cables or between a cable and the aircraft) and cable tension restraint conditions are constructed and analyzed. The hyperplane vector projection strategy is used to solve for the aircraft's orientation and position workspace. Moreover, the software ADAMS is used to check the workspace, and experiments are performed on a prototype, which uses a camera to monitor the actual motion space. In addition, the system is designed with a built-in six-component balance to measure the aerodynamic force. 
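The unilateral force constraint mentioned above reflects the fact that cables can only pull. For a fully determined cable layout this can be checked directly; the sketch below (our own simplified planar illustration, not the paper's hyperplane projection strategy) solves the static balance and tests nonnegativity of the tensions:

```python
import numpy as np

def tensions_feasible(unit_dirs, external_wrench, t_min=0.0, tol=1e-9):
    """For a square cable-direction matrix, solve the static balance
    sum_i t_i * u_i + w = 0 and check the unilateral constraint
    t_i >= t_min (cables can pull but not push).
    Returns (feasible, tensions)."""
    A = np.column_stack(unit_dirs)                  # columns are cable directions
    t = np.linalg.solve(A, -np.asarray(external_wrench, float))
    return bool(np.all(t >= t_min - tol)), t
```

For a redundantly constrained system (more cables than degrees of freedom, as in the paper), the same feasibility question becomes a small linear program rather than a square solve.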
The simulation and test results show good consistency, which means that the restraint conditions and workspace solution strategy are valid and can provide guidance for the application of the cable-driven parallel suspension system in wind tunnel tests. With the development of space exploration, research on space robots is attracting increasing attention. However, most existing research on the dynamics and control of space robots concerns the planar problem and does not consider the effect of flexible panels on the dynamics of the system. In this article, dynamics modeling and active control of a 6-DOF space robot with flexible panels are investigated. A dynamic model of the system is established based on Jourdain's velocity variation principle and the single-direction recursive construction method. The computed torque control method is used to design a point-to-point active controller for the space robot. The validity of the dynamic model is verified through comparison with ADAMS software, and the effects of panel flexibility on system performance and active controller design are studied in detail. Simulation results indicate that the proposed model effectively describes the dynamics of the space robot; panel flexibility has a large influence on its dynamic behavior; and the designed controller can effectively drive the robot to a specified position while simultaneously suppressing the elastic vibration of the panels. The scale of unmanned aerial vehicle applications has grown significantly in the last few years, and current research hints at a move from single-vehicle applications to multivehicle systems. As the number of agents operating in the same environment grows, conflict detection and resolution becomes one of the most important functions of the autonomous system for ensuring the vehicles' safety throughout the completion of their missions. 
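A minimal pairwise conflict-detection test of the kind referred to above can be written as a closest-point-of-approach check under constant-velocity extrapolation (a standard textbook simplification; the function names and thresholds here are illustrative, not from the paper):

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach of two constant-velocity
    vehicles; the time is clamped to the future (t >= 0)."""
    dp, dv = np.asarray(p2, float) - p1, np.asarray(v2, float) - v1
    denom = dv @ dv
    t = 0.0 if denom == 0 else max(0.0, -(dp @ dv) / denom)
    return t, float(np.linalg.norm(dp + t * dv))

def in_conflict(p1, v1, p2, v2, min_sep, horizon):
    """Flag a conflict if the predicted miss distance within the look-ahead
    horizon falls below the required separation."""
    t, d = closest_approach(p1, v1, p2, v2)
    return t <= horizon and d < min_sep
```

A distributed scheme would run this check on each vehicle against its neighbours and trigger a deconfliction manoeuvre when the flag is raised.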
The work presented in this paper describes the implementation of the novel distributed reactive collision avoidance algorithm proposed in the literature, improved to fit a swarm of quadrotor helicopters. The original method has been extended to function in dense and crowded environments with relevant spatial obstacle constraints and deconfliction manoeuvres for a high number of vehicles. Additionally, the collision avoidance is modified to work in conjunction with a dynamic close formation flight scheme. The solution presented to the conflict detection and resolution problem is reactive and distributed, making it well suited for real-time applications. The final avoidance algorithm is evaluated on a series of crowded scenarios to assess its performance in close quarters. Based on the rigid-body slung-load hypothesis, a nonlinear dynamical model of the helicopter and slung-load system is presented. The introduction of the rigid-body slung load adds several extra degrees of freedom and constraints, raising the nonlinear equations of helicopter and slung-load motion from 9th order to 19th order. The nonlinear equations are linearized under the small perturbation hypothesis for stability analysis and are trimmed by the continuation method. First, the simulated trim results are compared with helicopter flight test data without a slung load and with calculated results from the literature for a helicopter with a mass-point slung load. Then, the differences among the trim states of the helicopter with a rigid-body slung load, with a mass-point slung load, and without a slung load are studied. The motion modes of the helicopter with a rigid-body slung load are calculated in different cases, and the effects of the additional slung load on helicopter stability are analyzed at the same time. 
Results show that the rigid-body slung load adds five new motion modes to the combined helicopter and slung-load system: the two associated with cable swinging motion are stable, while the others, associated with slung-load attitude motion, are unstable. At the same time, the introduction of the rigid-body slung load makes the short-period and roll modes of the helicopter unstable, which has a strong impact on the helicopter's flight quality. This paper presents a controllability study for a square solar sail that uses the wing tip displacement method for its attitude control. The goal is to determine whether this method of attitude control guarantees full control authority over the normal attitude maneuvers that would be expected during a typical solar sail mission. The controllability of a given state is determined by first linearizing the full nonlinear equations of motion of the craft about the chosen state and then applying the classical controllability test to the resulting linear model. This process is repeated enough times to adequately span the allowable states of the vehicle. Because of the nature of the expressions for the control torques, direct analytical linearization is not practical, and a fully numerical approach is inefficient because of the sheer volume of computations involved. A hybrid linearization method that judiciously combines analytical and numerical approaches was therefore adopted. The results show that the sailcraft is controllable throughout the tested region. A study of the influence of various actuator failure scenarios on controllability revealed that the wing tip displacement method of attitude control exhibits a very high degree of redundancy: the system's controllability only becomes seriously impaired if more than half of its actuators fail. 
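The classical controllability test applied at each linearization point in the sail study above is the Kalman rank condition; a minimal sketch follows (the sail dynamics themselves are not reproduced here):

```python
import numpy as np

def is_controllable(A, B, tol=None):
    """Kalman rank test: the linear pair (A, B) is controllable iff the
    controllability matrix [B, AB, ..., A^(n-1)B] has full row rank n."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    C = np.hstack(blocks)
    return bool(np.linalg.matrix_rank(C, tol=tol) == n)
```

In the study's workflow, A and B would come from the hybrid analytical/numerical linearization about each sampled state, and actuator failures would be modelled by deleting the corresponding columns of B.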
This paper presents the large angle attitude manoeuvre control design of a single-axis flexible spacecraft system that consists of a central rigid body and, as a flexible appendage, a cantilever beam with bonded piezoelectric sensor/actuator pairs. The proposed control strategy combines an attitude controller designed with the adaptive robust control technique and an active vibration controller designed with the positive position feedback control method. The desired angular position of the spacecraft is planned, and an adaptive robust attitude control approach based on a projection-type adaptation law is proposed to track the planned path and achieve precise attitude manoeuvre control. Meanwhile, the positive position feedback control method is applied to actively increase the damping of the flexible appendage and suppress the residual vibration induced by the manoeuvre. Improved transient and steady-state performance during and after a large angle attitude manoeuvre can both be achieved by integrating the technical merits of these control methods. Analytical and numerical results illustrate the effectiveness of this approach. Multi-field coupling problems are attracting increasing attention, mainly because of the higher requirements for load, efficiency, and reliability in aero-engine operation. This research takes an aero-engine compressor as its object; 3D flow-field and structural models are established. Exploiting cyclic symmetry, a single-sector model is selected as the calculation domain. The compressor flow field is simulated, accounting for the influence of upstream stator wakes, and the distribution of the unsteady aerodynamic load on the rotor blade is analyzed. Based on a Kriging model, transfer of the aerodynamic pressure and temperature loads from the flow field to the blade structure is achieved. The effects of centrifugal force, aerodynamic pressure, and temperature load on the compressor's vibration characteristics and structural strength are then discussed. 
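The Kriging-based load transfer described above interpolates pressures and temperatures from flow-field sample points onto the structural mesh. A bare-bones sketch of such an interpolator with a Gaussian covariance follows (the length scale, nugget, and function names are illustrative assumptions, not taken from the article):

```python
import numpy as np

def kriging_predict(X, y, Xq, length=1.0, nugget=1e-10):
    """Simple-kriging interpolation of scattered samples (X, y) at query
    points Xq. X and Xq have shape (n, d); a squared-exponential covariance
    with the given length scale is used, and `nugget` regularizes the
    covariance matrix."""
    X, Xq, y = np.atleast_2d(X), np.atleast_2d(Xq), np.asarray(y, float)

    def cov(A, B):  # squared-exponential covariance between two point sets
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length ** 2))

    K = cov(X, X) + nugget * np.eye(len(X))
    return cov(Xq, X) @ np.linalg.solve(K, y)
```

In a coupled CFD/FEM workflow, X would be the CFD surface nodes, y the sampled pressure or temperature, and Xq the structural nodes of the blade model.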
The results show that the dominant fluctuation frequencies of the aerodynamic load on the rotor blade occur mainly at multiples of the stator-rotor interaction frequency, especially at the fundamental (1×f0). The magnitude and pulsation amplitude on the pressure surface are far greater than those on the suction surface. Load transfer with the Kriging model achieves high precision and meets the requirements of multi-field coupling dynamic calculation. In the multi-field coupling interaction, the temperature load noticeably decreases the natural vibration frequencies, while centrifugal force is the main source of deformation and stress. Bending stress induced by the aerodynamic pressure and temperature loads can counteract part of the bending stress induced by centrifugal force; however, the temperature load increases the maximum displacement of the blade-disk system. The fifth Automated Transfer Vehicle was launched into orbit on 29 July 2014 on Ariane-5 flight VA 219 from Kourou, French Guiana. For the first time, the ascent of an Ariane rocket was independently tracked with a Global Navigation Satellite System (GNSS) receiver on this flight. The GNSS receiver experiment OCAM-G was mounted on the upper stage of the rocket. Its receivers tracked the trajectory of the Ariane-5 from lift-off until after separation of the Automated Transfer Vehicle. This article introduces the design of the experiment and presents an analysis of the data gathered during the flight with respect to GNSS tracking status, availability of the navigation solution, and navigation accuracy. In this study, a genetic optimization algorithm is applied to the design of environmentally friendly aircraft departure trajectories. The environmental optimization focuses primarily on noise abatement and local NOx emissions, while taking fuel burn into account as an economic criterion. 
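Genetic optimization of the kind invoked above can be sketched generically; the toy real-coded GA below (tournament selection, blend crossover, Gaussian mutation, one-slot elitism) minimizes a stand-in objective and is entirely our own illustration, as the actual trajectory optimization involves a far richer objective and operational constraints:

```python
import numpy as np

def genetic_minimize(f, bounds, pop=40, gens=60, seed=0):
    """Toy real-coded genetic algorithm. `bounds` is a list of (lo, hi)
    pairs, one per decision variable. Illustrative only."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    best_x, best_f = None, np.inf
    for _ in range(gens):
        fit = np.array([f(x) for x in P])
        if fit.min() < best_f:                       # track the best ever seen
            best_f, best_x = fit.min(), P[fit.argmin()].copy()
        # binary tournament selection
        i, j = rng.integers(pop, size=(2, pop))
        parents = P[np.where(fit[i] < fit[j], i, j)]
        # blend crossover with a random partner, then Gaussian mutation
        mates = parents[rng.permutation(pop)]
        a = rng.uniform(size=(pop, 1))
        P = np.clip(a * parents + (1 - a) * mates
                    + rng.normal(0.0, 0.02 * (hi - lo), (pop, len(lo))), lo, hi)
        P[0] = best_x                                # elitism
    return best_x, best_f
```

In the departure-trajectory setting, the decision variables would be the discretized lateral and vertical profile parameters, and the fitness would aggregate the noise, NOx, and fuel-burn criteria.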
In support of this study, a novel parameterization approach has been conceived for discretizing the lateral and vertical flight profiles, which reduces the need to include nonlinear side constraints in the multiparameter optimization problem formulation, while still permitting compliance with the complex set of operational requirements pertaining to departure procedures. The resulting formulation avoids infeasible solutions and hence significantly reduces the number of model evaluations required in the genetic optimization process. The efficiency of the developed approach is demonstrated in a case study involving the design of a noise abatement departure procedure at Amsterdam Airport Schiphol in The Netherlands. An efficient stall compliance prediction method for trimmed very light aircraft is proposed, using quick configuration generation, an adapted mesh, high-fidelity analysis, and wind tunnel test data. The three-dimensional Navier-Stokes equations are used to determine the characteristics of the flow field around the aircraft, and the k-ω shear stress transport model is used to model the turbulent flow in the high-fidelity analysis. The calibrated mesh and model are developed by comparing the results with the wind tunnel test and adjusting the adapted mesh to match the wind tunnel data. The calibrated mesh and model are then applied to the full-scale very light aircraft analysis for the clean and full-flap-extended flight conditions to verify compliance with the CS-VLA stall regulations. It is recommended that the flap area be increased in the trimmed full-flap-extended condition. The proposed method demonstrates the feasibility and effectiveness of very light aircraft (VLA) stall compliance prediction in reducing development cost and time with small configuration changes at the preliminary design stage. 
Artificial neural networks are an established technique for constructing non-linear models of multi-input-multi-output systems based on sets of observations. In terms of aerospace vehicle modelling, however, these are currently restricted to either unmanned applications or simulations, despite the fact that large amounts of flight data are typically recorded and kept for reasons of safety and maintenance. In this paper, a methodology for constructing practical models of aerospace vehicles based on available flight data recordings from the vehicles' operational use is proposed and applied to the Jetstream G-NFLA aircraft. This includes a data analysis procedure to assess the suitability of the available flight databases and a neural network based approach for modelling. In this context, a database of recorded landings of the Jetstream G-NFLA, normally kept as part of a routine maintenance procedure, is used to form training datasets for two separate applications. A neural network based longitudinal dynamic model and gust identification system are constructed and tested against real flight data. Results indicate that in both cases, the resulting models' predictions achieve a level of accuracy that allows them to be used as a basis for practical real-world applications. The main specification in verification testing of space hardware's vulnerability to shock excitations is the shock response spectrum. Although it compiles the most relevant information needed to describe the overall shock environment characteristics, shock testing still poses various difficulties and uncertainties concerning the suitability and operation of the shock test system used, and the adequate definition of the underlying test parameters. 
The approach followed from the interpretation of typical shock testing specifications to the development, validation, and characterization of the developed shock test system, including the definition and design of the relevant parameters influencing the attained shock environment, is described in this paper. The shock testing method presented here consists of a pendular in-plane resonant mono-plate shock test apparatus in which the structural response of the ringing plate depends upon well-defined controllable parameters (e.g. impact velocity, striker shape, mass, and contact stiffness), which are parametrically determined to achieve the target shock environment specification. The concept and analytical model of two impacting bodies are used in a preliminary analysis to perform a rigid body motion analysis and contact assessment. A detailed finite element model is developed for the definition of the ringing plate dimensions, analysis of the plate dynamics, and virtual shock testing. The assembled experimental apparatus is described, and a test campaign is undertaken in order to properly characterize and assess the design and test parameters of the system. The developed shock test apparatus and corresponding finite element model are experimentally verified and validated. As a result of this study, a reliable finite element modeling methodology was created for future shock test simulation and prediction of experimental results, providing an important tool for adjusting the shock test input parameters in future work. The developed shock test system was well characterized and is readily available for shock testing of space equipment with varying specifications. In a short period of time, 'climate geoengineering' has been added to the list of technoscientific issues subject to deliberative public engagement. 
Here, we analyse this rapid trajectory of publicization and explore the particular manner in which the possibility of intentionally altering the Earth's climate system to curb global warming has been incorporated into the field of 'public engagement with science'. We describe the initial framing of geoengineering as a singular object of debate and subsequent attempts to 'unframe' the issue by placing it within broader discursive fields. The tension implicit in these processes of structured debate - how to turn geoengineering into a workable object of deliberation without implying a commitment to its reality as a policy option - raises significant questions about the role of 'public engagement with science' scholars and methods in facilitating public debate on speculative technological futures. While the quality of environmental science journalism has been the subject of much debate, a widely accepted benchmark to assess the quality of coverage of environmental topics is missing so far. Therefore, we have developed a set of defined criteria of environmental reporting. This instrument and its applicability are tested in a newly established monitoring project for the assessment of pieces on environmental issues, which refer to scientific sources and therefore can be regarded as a special field of science journalism. The quality is assessed in a kind of journalistic peer review. We describe the systematic development of criteria, which might also be a model procedure for other fields of science reporting. Furthermore, we present results from the monitoring of 50 environmental reports in German media. According to these preliminary data, the lack of context and the deficient elucidation of the evidence pose major problems in environmental reporting. People's attitudes toward climate change differ, and these differences may correspond to distinct patterns of media use and information seeking. 
However, studies extending analyses of attitude types and their specific media diets to countries beyond the United States are lacking. We use a secondary analysis of survey data from Germany to identify attitudes toward climate change among the German public and specify those segments of the population based on their media use and information seeking. Similar to the Global Warming's Six Americas study, we find distinct attitudes (Global Warming's Five Germanys) that differ in climate change-related perceptions as well as in media use and communicative behavior. These findings can help tailor communication campaigns regarding climate change to specific audiences. Publicly funded broadcasters with a track record in science programming would appear ideally placed to represent climate change to the lay public. Free from the constraints of vested interests and the economic imperative, public service providers are better equipped to represent the scientific, social and economic aspects of climate change than commercial media, where ownership conglomeration, corporate lobbyists and online competition have driven increasingly tabloid coverage with an emphasis on controversy. This prime-time snapshot of the Australian Broadcasting Corporation's main television channel explores how the structural/rhetorical conventions of three established public service genres - a science programme, a documentary and a live public affairs talk show - impact the representation of anthropogenic climate change. The study findings note implications for public trust, and discuss possibilities for innovation in the interests of better public understanding of climate change. This article explores the importance of issue politicisation and mediation for the reporting of climate change in UK elite newspapers. Specifically, it investigates how journalistic logic mediates political framing to produce commentaries on and discussion about climate change in the news. 
In analysing elite newspaper coverage over time in this case, the article shows that (1) various frames introduce the issue as a legitimate problem within coverage and that (2) the news stories these inform are opened to specific commentaries according to 'elite journalistic logic'. This configuration of coverage orders the speaking opportunities of established voices of science, politics and industry as well as those less established voices that enter to explain and qualify these elite accounts. The article concludes that the ingrained combination of issue politicisation and journalistic logic observed here will likely shape future elite reporting and those voices that it will include. This study examines non-editorial news coverage in leading US newspapers as a source of ideological differences on climate change. A quantitative content analysis compared how the threat of climate change and efficacy for actions to address it were represented in climate change coverage across The New York Times, The Wall Street Journal, The Washington Post, and USA Today between 2006 and 2011. Results show that The Wall Street Journal was least likely to discuss the impacts of and threat posed by climate change and most likely to include negative efficacy information and use conflict and negative economic framing when discussing actions to address climate change. The inclusion of positive efficacy information was similar across newspapers. Also, across all newspapers, climate impacts and actions to address climate change were more likely to be discussed separately than together in the same article. Implications for public engagement and ideological polarization are discussed. Skepticism toward climate change has a long tradition in the United States. We focus on mass media as the conveyors of the image of climate change and ask: Is climate change skepticism still a characteristic of US print media coverage? If so, to what degree and in what form? 
And which factors might pave the way for skeptics entering mass media debates? We conducted a quantitative content analysis of US print media over one year (1 June 2012 to 31 May 2013). Our results show that the debate has changed: fundamental forms of climate change skepticism (such as denial of anthropogenic causes) have been abandoned in the coverage and replaced by more subtle forms (such as the goal of avoiding binding regulations). We find no evidence for the norm of journalistic balance, nor do our data support the idea that it is the conservative press that boosts skepticism.

This paper is concerned with finite element approximations of W^(2,p) strong solutions of second-order linear elliptic partial differential equations (PDEs) in non-divergence form with continuous coefficients. A nonstandard (primal) finite element method, which uses finite-dimensional subspaces consisting of globally continuous piecewise polynomial functions, is proposed and analyzed. The main novelty of the method is an interior penalty term, which penalizes the jump of the flux across interior element edges/faces, augmenting a non-symmetric, piecewise-defined, PDE-induced bilinear form. Existence, uniqueness and an error estimate in a discrete W^(2,p) energy norm are proved for the proposed method. This is achieved by establishing a discrete Calderón-Zygmund-type estimate and mimicking strong-solution PDE techniques at the discrete level. Numerical experiments are provided to test the performance of the proposed finite element methods and to validate the convergence theory.

We construct H(curl)- and H(div)-conforming finite elements on convex polygons and polyhedra with the minimal possible number of degrees of freedom, i.e., equal to the number of edges or faces of the polygon/polyhedron. The construction is based on generalized barycentric coordinates and the Whitney forms.
In 3D, it currently requires that the faces of the polyhedron be either triangles or parallelograms. Formulas for computing the basis functions are given. The finite elements satisfy discrete de Rham sequences in analogy to the well-known ones on simplices. Moreover, they reproduce existing H(curl)-H(div) elements on simplices, parallelograms, parallelepipeds, pyramids and triangular prisms. The approximation property of the constructed elements is also analyzed by showing that the lowest-order simplicial Nédélec-Raviart-Thomas elements are subsets of the constructed elements on arbitrary polygons and certain polyhedra.

In this paper we give new results on domain decomposition preconditioners for GMRES when computing piecewise-linear finite element approximations of the Helmholtz equation -Δu - (k^2 + iε)u = f, with absorption parameter ε ∈ R. Multigrid approximations of this equation with ε ≠ 0 are commonly used as preconditioners for the pure Helmholtz case (ε = 0). However, a rigorous theory for such (so-called "shifted Laplace") preconditioners, either for the pure Helmholtz equation or even for the absorptive equation (ε ≠ 0), is still missing. We present a new theory for the absorptive equation that provides rates of convergence for (left- or right-) preconditioned GMRES, via estimates of the norm and field of values of the preconditioned matrix. This theory uses a k- and ε-explicit coercivity result for the underlying sesquilinear form and shows, for example, that if |ε| ~ k^2, then classical overlapping additive Schwarz will perform optimally for the damped problem, provided the subdomain and coarse mesh diameters are carefully chosen. Extensive numerical experiments are given that support the theoretical results. While the theory applies to a certain weighted variant of GMRES, the experiments for both weighted and classical GMRES give comparable results.
The theory for the absorptive case gives insight into how its domain decomposition approximations perform as preconditioners for the pure Helmholtz case ε = 0. At the end of the paper we propose a (scalable) multilevel preconditioner for the pure Helmholtz problem that has an empirical computation time complexity of about O(n^(4/3)) for solving finite element systems of size n = O(k^3), where the mesh diameter h ~ k^(-3/2) is chosen to avoid the pollution effect. Experiments on problems with h ~ k^(-1), i.e., a fixed number of grid points per wavelength, are also given.

The numerical simulation of time-harmonic waves in heterogeneous media is a delicate task of reproducing oscillations that become stronger as the frequency increases. High-order finite element methods have demonstrated their capability to reproduce this oscillatory behavior, but they still face limitations in capturing fine-scale heterogeneities. We propose a new approach which can be applied in highly heterogeneous propagation media. It consists in constructing an approximate medium in which computations can be performed for a large variety of frequencies. The construction of the approximate medium can be understood as applying a quadrature formula locally. We establish estimates which generalize existing estimates formerly obtained for homogeneous Helmholtz problems. We then provide numerical results which illustrate the good accuracy of our solution methodology.

In this work, we develop and analyze a Hybrid High-Order (HHO) method for steady nonlinear Leray-Lions problems. The proposed method has several assets, including support for arbitrary approximation orders and general polytopal meshes. This is achieved by combining two key ingredients devised at the local level: a gradient reconstruction and a high-order stabilization term that generalizes the one originally introduced in the linear case.
The convergence analysis is carried out using a compactness technique. Extending this technique to HHO methods has prompted us to develop a set of discrete functional analysis tools whose interest goes beyond the specific problem and method addressed in this work: direct and reverse Lebesgue and Sobolev embeddings for local polynomial spaces, L^p-stability and W^(s,p)-approximation properties for L^2-projectors on such spaces, and Sobolev embeddings for hybrid polynomial spaces. Numerical tests are presented to validate the theoretical results for the original method and variants thereof.

The scalar and vector Laplacians are basic operators in physics and engineering. In applications, they frequently show up perturbed by lower-order terms. The effect of such perturbations on mixed finite element methods is well understood in the scalar case, but not in the vector case. In this paper, we first show that, surprisingly, for certain elements the convergence rates degrade with certain lower-order terms even when both the solution and the data are smooth. We then give a systematic analysis of lower-order terms in mixed methods by extending the Finite Element Exterior Calculus (FEEC) framework, which contains the scalar Laplacian, the vector Laplacian, and many other elliptic operators as special cases. We prove that a stable mixed discretization remains stable with lower-order terms for sufficiently fine discretizations. Moreover, we derive sharp improved error estimates for each individual variable. In particular, this yields new results for the vector Laplacian which are useful in applications such as electromagnetism and acoustics modeling. Further, our results imply many previous results for the scalar problem and thus unify them all under the FEEC framework.
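The abstract above concerns the full FEEC machinery, but the flavor of a stable mixed discretization can be seen in its simplest instance: the 1D mixed Poisson problem -u'' = f with the continuous-P1/piecewise-constant pair (the 1D analogue of lowest-order Raviart-Thomas). The following numpy sketch is an illustration only, not the paper's method; mesh size, test problem and quadrature are assumptions chosen for the demo.

```python
import numpy as np

def mixed_poisson_1d(n, f):
    """Mixed FEM for -u'' = f on (0,1), u(0)=u(1)=0.
    Unknowns: sigma = u' in continuous P1, u in piecewise-constant P0."""
    h = 1.0 / n
    nodes = np.linspace(0.0, 1.0, n + 1)
    mids = 0.5 * (nodes[:-1] + nodes[1:])

    # P1 mass matrix A_ij = (phi_i, phi_j), assembled element by element
    A = np.zeros((n + 1, n + 1))
    for k in range(n):
        A[k:k+2, k:k+2] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])

    # Coupling B_ik = (psi_k, phi_i'), psi_k = indicator of element k
    B = np.zeros((n + 1, n))
    for k in range(n):
        B[k, k] = -1.0
        B[k + 1, k] = 1.0

    # Saddle-point system:  A sigma + B u = 0,   B^T sigma = -F
    F = h * f(mids)                      # midpoint quadrature for (f, psi_k)
    K = np.block([[A, B], [B.T, np.zeros((n, n))]])
    rhs = np.concatenate([np.zeros(n + 1), -F])
    sol = np.linalg.solve(K, rhs)
    return nodes, mids, sol[:n + 1], sol[n + 1:]

# Smooth test problem: u = sin(pi x), sigma = pi cos(pi x)
nodes, mids, sigma, u = mixed_poisson_1d(64, lambda x: np.pi**2 * np.sin(np.pi * x))
err_u = np.max(np.abs(u - np.sin(np.pi * mids)))        # u_h vs exact at midpoints
err_s = np.max(np.abs(sigma - np.pi * np.cos(np.pi * nodes)))
```

Both errors behave like O(h^2) for this smooth problem; the saddle-point matrix is nonsingular because the P1 mass block is SPD and the coupling block has full column rank, which is the 1D shadow of the inf-sup stability discussed above.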
It is shown that the h-adaptive mixed finite element method for the discretization of eigenvalue clusters of the Laplace operator produces optimal convergence rates in terms of nonlinear approximation classes. The results are valid for the typical mixed spaces of Raviart-Thomas or Brezzi-Douglas-Marini type with arbitrary fixed polynomial degree in two and three space dimensions.

We consider three types of subdiffusion models, namely single-term, multi-term and distributed-order fractional diffusion equations, for which the maximum principle holds and which, in particular, preserve nonnegativity; hence the solution is nonnegative for nonnegative initial data. Following earlier work on the heat equation, our purpose is to study whether this property is inherited by certain spatially semidiscrete and fully discrete piecewise-linear finite element methods, including the standard Galerkin method, the lumped mass method and the finite volume element method. It is shown that, as for the heat equation, when the mass matrix is nondiagonal, nonnegativity is not preserved for small time or time steps, but may reappear after a positivity threshold. For the lumped mass method, nonnegativity is preserved if and only if the triangulation underlying the finite element space is of Delaunay type. Numerical experiments illustrate and complement the theoretical results.

We develop and analyze strategies to couple the discontinuous Petrov-Galerkin method with optimal test functions to (i) least-squares boundary elements and (ii) various variants of standard Galerkin boundary elements. An essential feature of our method is that, despite the use of boundary integral equations, optimal test functions have to be computed only locally. We apply our findings to a standard transmission problem in full space and present numerical experiments to validate our theory.
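The loss of nonnegativity for a nondiagonal mass matrix, noted in the subdiffusion abstract above, can be reproduced already for one backward-Euler step of the plain 1D heat equation: with the consistent P1 mass matrix, a small time step turns nonnegative data negative, while the lumped (diagonal) mass matrix yields an M-matrix system and hence preserves nonnegativity. A small numpy sketch (uniform mesh; the parameters are illustrative choices below the positivity threshold, not taken from the paper):

```python
import numpy as np

n = 20                     # interior nodes on (0,1), so h = 1/(n+1)
h = 1.0 / (n + 1)
I = np.eye(n)

# P1 stiffness K and consistent mass M (Dirichlet BCs eliminated)
K = (1.0 / h) * (2 * I - np.eye(n, k=1) - np.eye(n, k=-1))
M = (h / 6.0) * (4 * I + np.eye(n, k=1) + np.eye(n, k=-1))
M_lumped = h * I           # row-sum (lumped) mass matrix

dt = h**2 / 10.0           # time step below the positivity threshold
u0 = np.zeros(n)
u0[n // 2] = 1.0           # nonnegative initial data: a single hat function

# One backward Euler step: (M + dt K) u1 = M u0
u1_consistent = np.linalg.solve(M + dt * K, M @ u0)
u1_lumped = np.linalg.solve(M_lumped + dt * K, M_lumped @ u0)
```

For this dt the off-diagonal entries of M + dt·K are positive (h/6 - dt/h > 0), so the system matrix is not an M-matrix and u1_consistent dips below zero near the support of the data, whereas M_lumped + dt·K has nonpositive off-diagonals and a nonnegative inverse, so u1_lumped stays nonnegative.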
In this paper, we present an hp-version Legendre-Jacobi spectral collocation method for Volterra integro-differential equations with smooth and weakly singular kernels. We establish several new approximation results for Legendre/Jacobi polynomial interpolation of both smooth and singular functions. As applications of these approximation results, we derive hp-version error bounds of the Legendre-Jacobi collocation method in the H^1-norm for Volterra integro-differential equations with smooth solutions on arbitrary meshes and singular solutions on quasi-uniform meshes. We also show exponential rates of convergence for singular solutions by using geometric time partitions and linearly increasing polynomial degrees. Numerical experiments are included to illustrate the theoretical results.

We construct a symplectic, globally defined, minimal-variable, equivariant integrator on products of 2-spheres. Examples of corresponding Hamiltonian systems, called spin systems, include the reduced free rigid body, the motion of point vortices on a sphere, and the classical Heisenberg spin chain, a spatial discretisation of the Landau-Lifshitz equation. The existence of such an integrator is remarkable, as the sphere is neither a vector space nor a cotangent bundle, has no global coordinate chart, and its symplectic form is not even exact. Moreover, the formulation of the integrator is very simple, and resembles the geodesic midpoint method, although the latter is not symplectic.

Given a sparse Hermitian matrix A and a real number, we construct a set of sparse vectors, each approximately spanned only by eigenvectors of A corresponding to eigenvalues near it. This set of vectors spans the column space of a localized spectrum slicing (LSS) operator, and is called an LSS basis set. The sparsity of the LSS basis set is related to the decay properties of matrix Gaussian functions.
We present a divide-and-conquer strategy with controllable error to construct the LSS basis set. This is a purely algebraic process using only submatrices of A, and it can therefore be applied to general sparse Hermitian matrices. The LSS basis set leads to sparse projected matrices of reduced size, which allows the projected problems to be solved efficiently with sparse linear algebra techniques. As an example, we demonstrate that the LSS basis set can be used to solve interior eigenvalue problems for a discretized second-order partial differential operator in one-dimensional and two-dimensional domains, as well as for a matrix of general sparsity pattern.

Cauchy problems with SPDEs on the whole space are localized to Cauchy problems on a ball of radius R. This localization reduces various kinds of spatial approximation schemes to finite-dimensional problems. The error is shown to be exponentially small. As an application, a numerical scheme is presented which combines the localization with the space and time discretization, and is thus fully implementable.

Several convergent and/or asymptotic expansions of the Pearcey integral P(x, y) can be found in the literature for different regions of the complex variables x and y, but they do not cover the whole complex x and y planes. The purpose of this paper is to complete this analysis by giving new convergent and/or asymptotic expansions that, together with the known ones, cover the evaluation of the Pearcey integral in a large region of the complex x and y planes. The accuracy of the approximations derived in this paper is illustrated with some numerical experiments. Moreover, the expansions derived here are simpler than other known expansions, as they are derived from a simple manipulation of the integral definition of P(x, y).

In this paper we extend the recent results of H. Wang et al. [Math. Comp. 81 (2012) and 83 (2014), pp.
861-877 and 2893-2914, respectively], on barycentric Lagrange interpolation at the roots of Hermite, Laguerre and Jacobi orthogonal polynomials, not only to all classical distributions, but also to osculatory Fejér and Hermite interpolation at the roots x_k of the orthogonal polynomials p_n(x) generated by these distributions. More precisely, we present comparatively simple unified proofs of representations for barycentric weights of Fejér, Hermite and Lagrange type in terms of values of p_n and its derivatives at the roots and of the Christoffel numbers λ_k, without any additional assumptions on the classical distributions. The first two representations enable us to design a general O(n^2) algorithm for the simultaneous computation of barycentric weights and Christoffel numbers, based on the stable and efficient divide-and-conquer O(n^2) algorithm for the symmetric tridiagonal eigenproblem due to M. Gu and S. C. Eisenstat [SIAM J. Matrix Anal. Appl. 16 (1995), pp. 172-191]. On the other hand, the third representation can be used to compute all classical barycentric weights in the faster O(n) way proposed for Lagrange interpolation at the roots of Hermite, Laguerre and Jacobi orthogonal polynomials by H. Wang et al. in the second cited paper. Such an essential acceleration requires the use of the algorithm of A. Glaser et al. [SIAM J. Sci. Comput. 29 (2007), pp. 1420-1438] to compute the roots x_k and Christoffel numbers λ_k by applying the Runge-Kutta and Newton methods to solve the Sturm-Liouville differential problem, which is generic for classical orthogonal polynomials. Finally, in the four special important cases of Jacobi weights w(x) = (1-x)^α (1+x)^β with α = ±1/2 and β = ±1/2, that is, of the Chebyshev and Szegő weights of the first and second kind, we present explicit representations of the Fejér and Hermite barycentric weights, which yield an O(1) algorithm.
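For the Chebyshev case mentioned at the end of the preceding abstract, the Lagrange barycentric weights are indeed available in closed form: at the Chebyshev points of the second kind x_j = cos(jπ/n) they are w_j = (-1)^j, halved at the two endpoints, so no precomputation is needed. A short numpy sketch of the resulting barycentric interpolation formula (the test function and evaluation grid are illustrative choices, not from the paper):

```python
import numpy as np

def cheb_barycentric(f, n):
    """Barycentric Lagrange interpolant of f at the n+1 Chebyshev
    points of the second kind; returns a callable evaluator."""
    j = np.arange(n + 1)
    x = np.cos(j * np.pi / n)          # Chebyshev points of the 2nd kind
    w = (-1.0) ** j                    # closed-form barycentric weights...
    w[0] *= 0.5                        # ...halved at the endpoints
    w[-1] *= 0.5
    fx = f(x)

    def p(t):
        t = np.asarray(t, dtype=float)
        num = np.zeros_like(t)
        den = np.zeros_like(t)
        exact = np.full(t.shape, np.nan)
        for k in range(n + 1):
            diff = t - x[k]
            hit = diff == 0.0          # evaluation exactly at a node
            exact[hit] = fx[k]
            diff[hit] = 1.0            # avoid division by zero at nodes
            num += w[k] * fx[k] / diff
            den += w[k] / diff
        out = num / den
        nodes_hit = ~np.isnan(exact)
        out[nodes_hit] = exact[nodes_hit]
        return out

    return p

# Runge's function: notoriously bad on equispaced points, fine here
runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
p = cheb_barycentric(runge, 60)
t = np.linspace(-1.0, 1.0, 1001)
max_err = np.max(np.abs(p(t) - runge(t)))
```

The interpolation error for this analytic function decays geometrically in n, and the barycentric form evaluates each point in O(n) operations with the well-known numerical stability of the formula.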
Studying the factorization theory of numerical monoids relies on understanding several important factorization invariants, including length sets, delta sets, and ω-primality. While progress in this field has been accelerated by the use of computer algebra systems, many existing algorithms are computationally infeasible for numerical monoids with several irreducible elements. In this paper, we present dynamic algorithms for the factorization set, length set, delta set, and ω-primality in numerical monoids, and we demonstrate that these algorithms give significant improvements in runtime and memory usage. In describing our dynamic approach to computing ω-primality, we extend the usual definition of this invariant to the quotient group of the monoid and show that several useful results extend naturally to this broader setting.

We describe a rigorous algorithm to compute Riemann's zeta function on the half line, and its use to isolate the non-trivial zeros of zeta with imaginary part ≤ 30,610,046,000 to an absolute precision of ±2^(-102). In the process, we provide an independent verification of the Riemann Hypothesis to this height.

It is shown that a quintic form over a p-adic field with at least 26 variables has a non-trivial zero, provided that the cardinality of the residue class field exceeds 9.

We evaluate each Stieltjes constant γ_m as a finite sum involving the first m+1 Bernoulli numbers B_k and the first m+1 derivatives of the alternating zeta function at the point 1. In turn, we compute each of these derivatives efficiently by means of a series with geometric rate 1/3, whose coefficients are bounded and decrease slowly to zero. The computational significance of these results is also discussed.

Let E be an elliptic curve over a number field K.
Descent calculations on E can be used to find upper bounds for the rank of the Mordell-Weil group, and to compute covering curves that assist in the search for generators of this group. The general method of 4-descent, developed in the PhD theses of Siksek, Womack and Stamminger, has been implemented in Magma (when K = Q) and works well for elliptic curves with sufficiently small discriminant. By extending work of Bremner and Cassels, we describe the improvements that can be made when E has a rational 2-torsion point. In particular, when E has full rational 2-torsion, we describe a method for 8-descent that is practical for elliptic curves E/Q with large discriminant.

Braces were introduced by Rump to study non-degenerate involutive set-theoretic solutions of the Yang-Baxter equation. We generalize Rump's braces to the non-commutative setting and use this new structure to study not-necessarily-involutive non-degenerate set-theoretic solutions of the Yang-Baxter equation. Based on results of Bachiller and of Catino and Rizzo, we develop an algorithm to enumerate and construct classical and non-classical braces of small size up to isomorphism. This algorithm is used to produce a database of braces of small size. The paper contains several open problems, questions and conjectures.

Conditionally on the Generalized Riemann Hypothesis (GRH), we prove the following results: (1) a cyclic number field of degree 5 is norm-Euclidean if and only if Δ = 11^4, 31^4, 41^4; (2) a cyclic number field of degree 7 is norm-Euclidean if and only if Δ = 29^6, 43^6; (3) there are no norm-Euclidean cyclic number fields of degrees 19, 31, 37, 43, 47, 59, 67, 71, 73, 79, 97. Our proofs contain a large computational component, including the calculation of the Euclidean minimum in some cases; the correctness of these calculations does not depend on the GRH.
Finally, we improve on what is known unconditionally in the cubic case by showing that any norm-Euclidean cyclic cubic field must have conductor f ≤ 157, except possibly when f ∈ (2·10^14, 10^50).

We correct an error in one of the lemmas of "Conditional bounds for the least quadratic non-residue and related problems", Math. Comp. 84 (2015), no. 295.

A colorimetric method for determining cyanuric chloride (CC) and for monitoring its polysaccharide gel activation, before and after ligand binding, was developed. The method is based on the reaction of CC, or of the activated gel, with pyridine and barbituric acid or dimethylbarbituric acid. The product formed yields a purple-red color with λmax at 595 nm and an E_M value of approximately 300,000 after 1 h at room temperature. Due to its high sensitivity, the method can detect traces of CC (1 μM) under the above conditions. The method is fast, reliable and devoid of side reactions. (C) 2017 Elsevier Inc. All rights reserved.

In the present study, a graphite electrode (GE) modified with a conductive film (containing functionalized multi-walled carbon nanotubes (f-MWCNTs), poly(methylene blue) (p(MB)) and gold nanoparticles (AuNPs)) was introduced for the determination of nevirapine (NVP), an anti-HIV drug, by applying the differential pulse anodic stripping voltammetry (DPASV) technique. Modification of the electrode was investigated by scanning electron microscopy (SEM) and electrochemical impedance spectroscopy (EIS). All electrochemical parameters affecting the detection of NVP were optimized, and the oxidation peak current of the drug was used for its monitoring. The results confirmed that the oxidation peak currents increased linearly with NVP concentration in the range 0.1-50 μM, and a detection limit of 53 nM was achieved. The proposed sensor (AuNPs/p(MB)/f-MWCNTs/GE) was successfully applied to the determination of NVP in blood serum and pharmaceutical samples.
The sensor also showed excellent stability, repeatability and reproducibility. (C) 2017 Elsevier Inc. All rights reserved.

Flow cytometric analysis of calcium mobilisation has been in use for many years in the study of specific receptor engagement or isolated cell:cell communication. However, calcium mobilisation/signaling is key to many cell functions, including apoptosis, mobility and immune responses. Here we combine multiplex surface staining of whole spleen with Indo-1 AM to visualise calcium mobilisation and examine calcium signaling in a mixed immune cell culture over time. We demonstrate responses to a TRPV1 agonist in distinct cell subtypes without the need for cell separation. Multiparameter staining alongside Indo-1 AM to demonstrate calcium mobilisation allows the study of real-time calcium signaling in a complex environment. (C) 2017 Published by Elsevier Inc.

Poly(ADP-ribose) polymerases (PARPs) have been implicated in the responses of plants to DNA damage and numerous stresses, although the mechanistic basis of this involvement is often unclear. The identification of specific inhibitors and potential interactors of plant PARPs is therefore desirable. For this purpose, we established an assay based on heterologous expression of PARP genes from the model plant Arabidopsis thaliana in yeast. Expression of AtPARPs inhibited yeast growth to differing extents, and this inhibition was alleviated by inhibitors targeted at human PARPs. This assay provides a fast and simple means to identify target proteins and pharmacological inhibitors of AtPARP1. (C) 2017 Elsevier Inc. All rights reserved.

Post-translational modification (PTM) is a biological reaction that contributes to diversifying the proteome. Among the many modifications with important roles in cellular activity, lysine succinylation has recently emerged as an important PTM mark. It alters the chemical structure of lysines, leading to remarkable changes in the structure and function of proteins.
In contrast to the huge number of proteins being sequenced in the post-genome era, the experimental detection of succinylated residues remains expensive, inefficient and time-consuming. The development of computational tools for accurately predicting succinylated lysines is therefore an urgent necessity. To date, several approaches have been proposed, but their sensitivity has been reportedly poor. In this paper, we propose an approach that utilizes structural features of amino acids to improve lysine succinylation prediction. Succinylated and non-succinylated lysines were first retrieved from 670 proteins, and characteristics such as accessible surface area, backbone torsion angles and local structure conformations were incorporated. We used the k-nearest-neighbors cleaning treatment to deal with class imbalance and designed a pruned decision tree for classification. Our predictor, referred to as SucStruct (Succinylation using Structural features), significantly improved performance compared with previous predictors, with sensitivity, accuracy and Matthews correlation coefficient equal to 0.7334-0.7946, 0.7444-0.7608 and 0.4884-0.5240, respectively. (C) 2017 Elsevier Inc. All rights reserved.

An in-line size-exclusion (SE) ultra-high-performance liquid chromatography (UHPLC)-5,5'-dithiobis(2-nitrobenzoic acid) (DTNB) method to quantify thiols in monoclonal antibodies (mAbs) during the manufacture of antibody-drug conjugates (ADCs) was developed. The mAbs are separated on an SE-UHPLC column and monitored with a UV detector at a wavelength of 280 nm. Eluents are channeled into a reaction coil and mixed with DTNB to form 5-thio-2-nitrobenzoic acid (TNB). Thiol concentration is calculated from the absorbance at 412 nm. Under optimized conditions, partially reduced mAbs can be separated from low-molecular-weight contaminants and undergo the DTNB reaction. The standard curve of L-cysteine showed good linearity between 100 and 1000 μM.
The selectivity, linearity, repeatability, and robustness of the method were evaluated. The calculated free-SH:protein ratios of partially reduced mAbs were consistent between the in-line SE-UHPLC-DTNB method and conventional methods. The SE-UHPLC-DTNB method showed time- and temperature-dependent changes in the free-SH:protein ratio of mAbs during reduction. The changes in drug-antibody ratio (DAR) of ADCs during the conjugation reaction were also evaluated. This method is an inexpensive and versatile alternative to conventional methods of estimating the free-SH:protein ratio of mAbs and the DAR of ADCs, and it also minimizes assay time. (C) 2017 Elsevier Inc. All rights reserved.

Metabolic flux analysis is particularly complex in plant cells because of their highly compartmented metabolism. Analysis of free sugars is of interest because it provides data to define fluxes around the hexose, pentose, and triose phosphate pools in different compartments. In this work, we present a method to analyze the isotopomer distribution of free sugars labeled with carbon-13 using liquid chromatography-high-resolution mass spectrometry, without any derivatization procedure, adapted for metabolic flux analysis. Our results showed good sensitivity and reproducibility, and better accuracy in determining isotopic enrichments of free sugars, compared with our previous methods [5, 6]. (C) 2017 Elsevier Inc. All rights reserved.

Thyme, a perennial herb, has been recognized globally for its antimicrobial, antiseptic and spasmolytic effects. In this investigation, we used non-targeted metabolite and volatile profiling, combined with morpho-physiological parameters, to understand the metabolic and physiological responses of drought-sensitive and drought-tolerant thyme populations. The analysis at the metabolic level identified the significantly affected metabolites.
Significant metabolites belonging to different chemical classes, comprising amino acids, carbohydrates, organic acids and lipids, were compared in tolerant and sensitive plants. These compounds may play a role in mechanisms including osmotic adjustment, ROS scavenging, protection of cellular components, membrane lipid changes and hormone induction; the key metabolites were proline, betaine, mannitol, sorbitol, ascorbate, jasmonate, unsaturated fatty acids and tocopherol. Regarding volatile profiling, sensitive plants showed an increased-then-decreased trend in the major terpenes, apart from alpha-cubebene and germacrene-D. In contrast, tolerant populations had unchanged terpene levels during the water-stress period, with an elevation on the last day. These results suggest that the two populations employ different strategies. The combination of metabolite profiling and physiological parameters helped to precisely characterize the mechanisms of the plant response at the volatile metabolome level. (C) 2017 Elsevier Inc. All rights reserved.

This paper provides an overview of the current state of educational research on multilingualism, as well as the controversies within the field. The starting point is an historical review that traces the development of research on the consequences of migration-related multilingualism for socialization, integration and education. Four aspects of the current state of research are presented: data sources for multilingualism research; the plurilingual development of the individual; the role of home languages in the educational system; and the facilitation and support of multilingualism. This review reveals that, over recent years, there has been an increasing amount of research on the coexistence of several languages at the individual, educational and societal level, while old controversies regarding the benefits of learning through the home language still endure.
The paper concludes with perspectives on future research challenges in the field of multilingualism and education.

Given the increased influx of migrants into the European Union, the German education system is faced with catering to increasing numbers of migrant children who have already acquired one or more languages in their home countries. Helping these children successfully develop language and literacy skills in the new majority language, German, and in the first foreign language taught at German schools, English, will be an important challenge, as will the support of these children's heritage languages. Ultimately, assisting these children in successfully becoming multilingual would substantially benefit the development of their executive control and, in turn, boost their chances of long-term academic success.

This study analyses the relationship between mono- and bilingual language skills and the metalinguistic awareness of primary school children. It examines whether bilingual students use their heritage languages when solving and talking about metalinguistic tasks with a tandem partner in interactive learning settings. In a study of a cohort of school students (N = 400) in years 3 and 4, we first focus on the students' language skills in German, by analysing language profiles, and on bilingual students' skills in their heritage languages, Turkish and Russian. At the second measurement point, we analyse the metalinguistic interactions of mono- and bilingual students and correlate the findings of this analysis with data on their language and cognitive skills. The results show that bilingual students living with two languages have advantages: this group produces metalinguistic comments more frequently and achieves a higher level of metalinguistic proficiency than the monolingual group.
This study analyses students' self-reported acceptance or rejection of offers for Turkish-German bilingual interaction within the context of a peer-learning intervention. Bilingual trainers explicitly and repeatedly provided opportunities to communicate bilingually (e.g., they spoke Turkish during the training). Bilingual speakers of Turkish and German (N = 37) in grades three and four were asked, in the middle and at the end of an eight-week peer-learning intervention, whether they used Turkish during the sessions. The students' reported reasons for their language choice were analyzed using content analysis (Mayring, 2015). At both measurement points, students reported more acceptances of language offers than rejections. The reasons given referred primarily to the individuals themselves (e.g., their language competencies, task-solving competencies, emotions and attitudes). Peers and trainers played a minor role, while the context, an intervention study, was only an argument for acceptance of bilingual interactions, not for rejection.

In this article, the attitudes of preschool teachers, and the relevance of these attitudes, are examined in light of everyday multilingualism in preschools. In models of professionalization, attitudes are regarded as a dimension that substantially shapes people's thinking and actions, and they are therefore important in educational contexts. Besides attitudes, other factors from models of professionalization are examined for their relevance to dispositions and educational performance. For this purpose, a path model and a multilevel model are calculated based on data from 127 preschool teachers from 54 groups in 19 preschools in Germany. The findings reveal that attitudes are significant for educational practice. Furthermore, the preschool teachers' knowledge about multilingualism, as well as institutional conditions, emerge as independent factors.
The results are summarized into a model and discussed with respect to the professionalization of preschool teachers. Normative and empirical research in pedagogy suggests that a monolingual approach at school may result in underperformance of students from immigrant families. This calls for changes in educational concepts. The key question, however, remains: What would a change towards multilingualism imply? This article focuses on the ways in which such a transformation was intended by the Israeli "New Language Education Policy" of 1995, which introduced multilingualism as an educational concept. The present study is based on qualitative data. The analysis illustrates how these changes were perceived by the interviewed multilinguals. The findings indicate the central role of peers' and teachers' reactions in the development of self-perception. Multilinguals who were socialized in the monolingual context remember being insulted and beaten, and mention their wish to assimilate as a reason for having changed their names and for speaking only Hebrew to their families. By contrast, no particular negative experience is narrated by those interview partners who experienced the multilingual context. Strikingly, not all of the interviewed subjects who went to school after 1995 remember having experienced the shift, which opens a further question about the limitations of language policy making. Mixed-age teaching often occurs in small, rural schools due to small numbers of students. Recently, however, larger schools have also been choosing this teaching configuration for pedagogical reasons. This teaching arrangement necessitates an expanded view of assessment. In multi-grade classes, the teacher focuses on individually based formative assessments and applies differentiated learning. Our research question examines whether teachers differ with respect to their practice of formative assessment. 
As part of the project "Schools in Alpine Regions 2", we collected data from a voluntary sample of 280 teachers with multi-grade classes in two regions in Eastern Switzerland and one in Western Austria. We applied a mixed-method research design. A structural equation model reveals that teachers with multi-grade classes use a combination of formative assessment practices and tools in order to individualize and differentiate their teaching and instruction. The frequency with which formative assessment is practiced relates positively to the number of grades in a class and the teacher's self-concept regarding diagnostic observation competency. Results from two latent class analyses show two groups of assessment literacy, with only a small number of teachers using the full potential of formative assessment for learning. The results are discussed with respect to their relevance for professional development and research. Dual study programs have grown rapidly in recent years. In addition to universities of cooperative education (German: Berufsakademie, BA) and the Baden-Württemberg Cooperative State University (German: Duale Hochschule Baden-Württemberg, DHBW), universities of applied sciences in particular are increasingly offering dual study programs. These programs allow for practice-oriented learning in cooperation with companies. To understand who enrolls in a dual study program, we investigated how beginning university undergraduates in dual study programs differ from those in conventional study programs with regard to university entrance scores, self-concept and interdisciplinary competencies. We compared 1612 university novices from 17 state universities of applied sciences in Bavaria. With regard to the dual study programs offered in Bavaria, there were differences between the combined studies program (training + studies) and studies with intensified practice (studies + integrated practical/internship phases). 
Undergraduates in the dual study program tended to have better university entrance scores and higher self-concept, and were more convinced of their independence and motivation for learning than regular students. Aims and objectives: To examine patient experiences of hospital-based discharge preparation for referral for follow-up home care services. To identify aspects of discharge preparation that will assist patients with their transition from hospital-based care to home-based follow-up care. Background: To improve patients' transitions from hospital-based care to community-based home care, hospitals incorporate home care referral processes into discharge planning. This includes patient preparation for follow-up home care services. While there is evidence to support that such preparation needs to be more patient-centred to be effective, there is little knowledge of patient experiences of preparation that would guide improvements. Design: Qualitative descriptive study. Methods: The study was carried out at a supra-regional hospital in Eastern Canada. Findings are based on thematic content analysis of 13 semi-structured interviews of patients requiring home care after hospitalisation on a medical or surgical unit. Most interviews were held within one week of discharge. Results: Patient experiences were associated with patient attitudes and levels of engagement in preparation. Attitudes and levels of engagement were seen as related to one another. Those who 'didn't really think about it' had low engagement, while those with the attitude 'guide me' looked for partnership. Those who had an attitude of 'this is what I want' had a very high level of engagement. Conclusions: Previous experience with home care services influenced patients' level of trust in the health care system, and ultimately shaped their attitudes towards and levels of engagement in preparation. 
Relevance to clinical practice: Patient preparation for follow-up home care can be improved by assessing patients' knowledge of and previous experiences with home care. Patients recognised as using a proactive approach may be highly vulnerable. Aim and objective: To understand the stressors related to life post kidney transplantation, with a focus on medication adherence, and the coping resources people use to deal with these stressors. Background: Although kidney transplantation offers enhanced quality and years of life for patients, the management of a kidney transplant post surgery is a complex process. Design: A descriptive exploratory study. Method: Participants were recruited from five kidney transplant units in Victoria, Australia. From March to May 2014, patients who had either maintained their kidney transplant for 8 months or had experienced a kidney graft loss due to medication nonadherence were interviewed. All audio-recordings of interviews were transcribed verbatim and underwent Ritchie and Spencer's framework analysis. Results: Participants consisted of 15 men and 10 women aged 26-72 years. All identified themes were categorised into: (1) causes of distress and (2) coping resources. Post kidney transplantation, causes of distress included the regimented routine necessary for graft maintenance, and the everlasting fear of potential graft rejection, contracting infections and developing cancer. Coping resources used to manage the stressors were, first, a shift in perspective about how much easier it was to manage a kidney transplant than to be dialysis-dependent and, second, receiving external help from fellow patients, family members and health care professionals in addition to using electronic reminders. Conclusion: An individual well-equipped with coping resources is able to deal with stressors better. 
It is recommended that changes such as providing regular reminders about the lifestyle benefits of kidney transplantation, creating opportunities for patients to share their experiences and promoting the use of a reminder alarm for taking medications be made to reduce the stress of managing a kidney transplant. Relevance to clinical practice: Using these findings to make informed changes to the usual care of a kidney transplant recipient is likely to result in better patient outcomes. Aims and objectives: To assess disease-related knowledge among patients with inflammatory bowel disease and to identify the factors that are possibly associated with the knowledge level. Background: Disease-related knowledge can positively influence the acceptance of the disease, increase treatment compliance and improve the quality of life in patients with inflammatory bowel disease. Design: An observational, cross-sectional study was conducted that prospectively included patients from the inflammatory bowel disease programme between October 2014 and July 2015. Methods: A Spanish-translated version of the 24-item Crohn's and Colitis Knowledge score was used to assess disease-related knowledge. Patients also completed a demographic and clinical questionnaire. Results: A total of 203 patients were included, 62% were female, and 66% were diagnosed with ulcerative colitis; the median age was 34 years (range 18-79), and the median disease duration was four years. The median disease-related knowledge score was 9 (range 1-20). Only 29% of the patients answered more than 50% of the questions correctly. Lower disease-related knowledge was observed in questions related to pregnancy/fertility and surgery/complications. Patients older than 50 years, patients with ulcerative colitis, patients with disease durations of less than five years and patients without histories of surgery exhibited lower disease-related knowledge. There was no association between the knowledge scores and the educational levels. 
Conclusions: The patients who attended our inflammatory bowel disease programme exhibited poor disease-related knowledge, similar to the knowledge levels that have been observed in developed countries. It is necessary to assess patient knowledge in order to develop educational strategies and evaluate the influences of these strategies on patient compliance and quality of life. Relevance to clinical practice: These results will allow the inflammatory bowel disease team to develop educational programmes that account for the disease-related knowledge of each patient. Inflammatory bowel disease nurses should evaluate their interventions to provide evidence that educating our patients contributes to improving their treatment outcomes and overall health statuses. Aims and objectives: To identify self-acceptance and associated socio-demographic and disease factors among Chinese women with breast cancer. Background: Although it is recognised that breast cancer can affect a woman's feelings of self-acceptance, there are few studies concerning the level of self-acceptance among women with breast cancer and the factors associated with self-acceptance in this population. Design: Cross-sectional research design. Methods: Data were collected using the convenience sampling method. A total of 308 women with breast cancer were investigated using the Self-Acceptance Questionnaire. Results: The mean score on the Self-Acceptance Questionnaire was 3979514, indicating that the women in this study had low levels of self-acceptance. Multiple regression analysis indicated that self-acceptance was positively associated with time since diagnosis, household income and the presence of medical insurance/government-funded medical treatment, while Tumour, Lymph Node, Metastasis stage was negatively associated with self-acceptance. 
With respect to work status, retired patients had the highest levels of self-acceptance, those who had returned to work had moderate levels of self-acceptance and those who had not yet returned to work had the lowest levels of self-acceptance. Conclusions: This study demonstrates that the level of self-acceptance among women with breast cancer in China is low and suggests that there is room for improvement. Several factors are significantly associated with the self-acceptance of women with breast cancer. Relevance to clinical practice: Medical staff should realise that the level of self-acceptance among women with breast cancer in China is low and has room to improve. It is important to conduct appropriate interventions to improve self-acceptance among these women, based on an understanding of the factors associated with self-acceptance. Aims and objectives: To identify the contribution of hospital, unit and staff characteristics, staffing adequacy and teamwork to missed nursing care in Iceland hospitals. Background: A recently identified quality indicator for nursing care and patient safety is missed nursing care, defined as any standard, required nursing care omitted or significantly delayed, indicating an error of omission. Former studies point to contributing factors to missed nursing care regarding hospital, unit and staff characteristics, perceptions of staffing adequacy as well as nursing teamwork, displayed in the Missed Nursing Care Model. Design: This was a quantitative cross-sectional survey study. Methods: The samples were all registered nurses and practical nurses (n=864) working on 27 medical, surgical and intensive care inpatient units in eight hospitals throughout Iceland. The response rate was 69.3%. Data were collected in March-April 2012 using the combined MISSCARE Survey-Icelandic and the Nursing Teamwork Survey-Icelandic. Descriptive, correlational and regression statistics were used for data analysis. 
Results: Missed nursing care was significantly related to hospital and unit type, participants' age and role, and their perception of adequate staffing and level of teamwork. The multiple regression testing of Model 1 indicated that unit type, role, age and staffing adequacy predicted 16% of the variance in missed nursing care. Controlling for unit type, role, age and perceptions of staffing adequacy, the multiple regression testing of Model 2 showed that nursing teamwork predicted an additional 14% of the variance in missed nursing care. Conclusions: The results shed light on the correlates and predictors of missed nursing care in hospitals. This study gives direction as to the development of strategies for decreasing missed nursing care, including ensuring appropriate staffing levels and enhanced teamwork. Relevance to clinical practice: By identifying contributing factors to missed nursing care, appropriate interventions can be developed and tested. Aim and objectives: To describe how nurses in a rural hospital in a low-income country experience working with visiting nurses from high-income countries. Background: Nurses in low-income countries work with visiting nurses from high-income countries in various health projects. However, there is a paucity of studies examining how nurses in low-income countries experience working with nurses from such different backgrounds. Design: This study is descriptive, explorative and qualitative. Methods: The data were collected from 10 semi-structured interviews in 2015 and were analysed using qualitative content analysis. The study was conducted with ward nurses in a rural hospital in Tanzania, a sub-Saharan African country characterised as a low-income country. Findings: The data analysis revealed two themes related to the local nurses' experiences of working with visiting nurses from high-income countries: (1) 'To do it our way' and (2) 'Different expectations, benefits and limitations'. 
Conclusion: The findings strongly indicate that the local nurses expected foreign nurses to follow the local system and work under supervision. The local nurses appreciated opportunities to learn from working and sharing knowledge with foreign nurses, but simultaneously expressed that the gained knowledge should be adapted and implemented according to their local health system. Relevance to clinical practice: The findings can inform nurses, humanitarian organisations, hospitals and universities working in international collaborations. Aim and objectives: To illuminate nurses' experiences and opportunities to discuss sexual health with patients in primary health care. Background: Sexual health is a concept associated with many taboos, and research shows that nurses feel uncomfortable talking to patients about sexual health and therefore avoid it. This avoidance forms a barrier between patient and nurse which prevents nurses from giving satisfactory health care to patients. Design: A qualitative descriptive design. Method: Semi-structured interviews were conducted with nine nurses in primary health care in Sweden. Data were analysed using qualitative content analysis. Results: During the analysis phase, five subcategories and two main categories were identified. The two main categories were 'factors that influence nurses' opportunities of talking to patients about sexual health' and 'nurses' experiences of talking to patients about sexual health'. Social norms in society were an obstacle to health professionals' opportunities to feel comfortable and act professionally. The nurses' personal attitudes and knowledge were of great significance in determining whether they brought up the topic of sexual health or not. The nurses found it easier to bring up the topic of sexual health with middle-aged men with, for example, diabetes. One reason for this is that they found it easier to talk to male patients. 
A further reason is that they had received training in discussing matters of sexual health in relation to diabetes and other conditions affecting sexual health. Conclusion: Nurses in primary care express the necessity of additional education and knowledge on the subject of sexual health. The healthcare organisation must be reformed to put focus on sexual health. Relevance for clinical practice: Guidelines for addressing the topic of sexual health must be implemented to establish conditions that will increase nurses' knowledge and provide them with the necessary tools for discussing sexual health with patients. Aims and objectives: This study aims to investigate the problems experienced by nurses and doctors as a result of exposure to surgical smoke and the precautions that need to be taken. Background: Electrosurgery is carried out in almost all operating rooms, and all of those who work in these rooms are exposed to surgical smoke, especially doctors and nurses. A review of the literature reveals that there are very few studies on surgical smoke and no studies on the problems experienced by those working in operating rooms. Design: This descriptive study was conducted between April and June 2015. Methods: The study was carried out in the operating rooms of a Training and Research Hospital with 81 nurses and doctors. Descriptive statistical analyses were performed using IBM SPSS Statistics 23 for Windows. Results: The problems experienced by the nurses and doctors as a result of exposure to surgical smoke included headache (nurses: 48.9%, doctors: 58.3%), watering of the eyes (nurses: 40.0%, doctors: 41.7%), cough (nurses: 48.9%, doctors: 27.8%), sore throat, bad odours absorbed in the hair, nausea, drowsiness, dizziness, sneezing and rhinitis. Regarding the precautions taken to protect themselves from surgical smoke, 91.1% of the nurses and 86.1% of the doctors reported using surgical masks. 
Conclusions: It was found that the participants did not report taking any effective protective measures, and only a few of the nurses reported using special filtration masks. It was observed that the participants widely used surgical masks, which are ineffective in protecting against the effects of surgical smoke. Relevance to clinical practice: Attention is brought to the effects of surgical smoke. The harmful effects of surgical smoke reported by doctors and nurses are presented, and the precautions that can be taken against surgical smoke are identified. Aims and objectives: The purpose of this study was to determine nurses' perceptions about caring for patients with traumatic brain injury. Background: Annually, it is estimated that over 10 million people sustain a traumatic brain injury around the world. Patients with traumatic brain injury and their families are often concerned with expectations about recovery and seek information from nurses. Nurses' perceptions of care might influence the information provided to patients and families, particularly if inaccurate knowledge and perceptions are held. Thus, nurses must be knowledgeable about the care of these patients. Methods: A cross-sectional survey, the Perceptions of Brain Injury Survey (PBIS), was completed electronically by 513 nurses between October and December 2014. Data were analysed with structural equation modelling, factor analysis and pairwise comparisons. Results: Using latent class analysis, the authors were able to divide nurses into three homogeneous sub-groups based on perceived knowledge: low, moderate and high. Findings showed that the nurses who care for patients with traumatic brain injury the most have the highest perceived confidence but the lowest perceived knowledge. Nurses also had significant variations in training. 
Conclusions: As there is limited literature on nurses' perceptions of caring for patients with traumatic brain injury, these findings have implications for training and educating nurses, including direction for the development of nursing educational interventions. Relevance to clinical practice: As the incidence of traumatic brain injury is growing, it is imperative that nurses be knowledgeable about the care of patients with these injuries. The traumatic brain injury PBIS can be used to determine inaccurate perceptions about caring for patients with traumatic brain injury before educating and training nurses. Aims and objectives: To develop and test the feasibility and acceptability of an interactive ICT platform, integrated in a tablet, for collecting and managing patient-reported concerns of older adults in home care. Background: Different ICT applications, for example interactive tablets for self-assessment of health and health issues based on health monitoring, as well as other somatic and psychiatric monitoring systems, may improve quality of life, staff-patient communication and feelings of being reassured. The European Commission hypothesises that the introduction of ICT applications to the older population will enable improved health. However, evidence-based and user-based applications are scarce. Design: The design is underpinned by the Medical Research Council's complex intervention evaluation framework. A mixed-method approach was used, combining interviews with older adults and healthcare professionals with logged quantitative data. Methods: In cooperation with a health management company, a platform operated by an interactive application for reporting and managing health-related problems in real time was developed. Eight older adults receiving home care were recruited to test feasibility. They were equipped with the application, reported three times weekly over four weeks, and were afterwards interviewed about their experiences. Three nurses caring for them were interviewed. 
The logged data were extracted as a coded file. Results: The older adults reported as instructed, in total 107 reports (mean 13). The most frequent concerns were pain, fatigue and dizziness. The older adults experienced the application as meaningful, with overall positive effects as well as potential benefits for the nurses involved. Conclusions: The overall findings in this study indicated high feasibility among older adults using the ICT platform. The study's results support further development of the platform, as well as tests in full-scale studies and in other populations. Relevance to clinical practice: The ICT platform increased the older adults' perception of involvement and facilitated communication between patients and nurses. Aims and objectives: To explore whether and how spatial aspects of children's hospital wards (single and shared rooms) impact upon family-centred care. Background: Family-centred care has been widely adopted in paediatric hospitals internationally. Recent hospital building programmes in many countries have prioritised the provision of single rooms over shared rooms. Limited attention has, however, been paid to the potential impact of spatial aspects of paediatric wards on family-centred care. Design: Qualitative, ethnographic. Methods: Phase 1: observation within four wards of a specialist children's hospital. Phase 2: interviews with 17 children aged 5-16 years and 60 parents/carers. Sixty nursing and support staff also took part in interviews and focus group discussions. All data were subjected to thematic analysis. Results: Two themes emerged from the data analysis: 'role expectations' and 'family-nurse interactions'. The latter theme comprised three subthemes: 'family support needs', 'monitoring children's well-being' and 'survey-assess-interact within spatial contexts'. 
Conclusion: Spatial configurations within hospital wards significantly impacted upon the relationships and interactions between children, parents and nurses, which played out differently in single and shared rooms. Increasing the provision of single rooms within wards is therefore likely to directly affect how family-centred care manifests in practice. Relevance to clinical practice: Nurses need to be sensitive to the impact of spatial characteristics, and particularly of single and shared rooms, on families' experiences of children's hospital wards. Nurses' contribution to and experience of family-centred care can be expected to change significantly when the spatial characteristics of wards change and, as is currently the vogue, hospitals maximise the provision of single rather than shared rooms. Aims and objectives: To provide an insight into the views of healthcare professionals on the presence of family members during brainstem death testing. Background: Brainstem death presents families with a paradoxical death that can be difficult to define. International research suggests families should be given the choice to be present at brainstem death testing, yet it appears few units offer families this choice, and little attention has been paid to developing practice to enable the effective facilitation of choice. Design: A qualitative, exploratory design was adopted to understand the perceptions of healthcare professionals. Individual semi-structured interviews were audio-taped and carried out over two months. Methods: A purposive sample of 10 nurses and 10 doctors from two tertiary intensive care units in the United Kingdom was interviewed, and transcripts were analysed using content analysis to identify emergent categories and themes. Results: Healthcare professionals indicated different perceptions of death in the context of catastrophic brainstem injury. 
The majority of participants favoured offering families the choice to be present while acknowledging the influence of organisational culture. Identified benefits included acceptance, closure and better understanding. Suggested challenges involved the assumption of trauma or disruption and a sense of obligation for families to accept if the choice was offered. Key issues involved improving knowledge and communication skills to individually tailor support for the families involved. Conclusions: If families are to be offered the choice of witnessing brainstem death testing, considering that needs and conventions will differ according to global cultural backgrounds, then key needs must be met to ensure that effective care and support are provided to families and clinicians. Relevance to clinical practice: A proactive approach to facilitating family choice to be present at testing requires the development of guidelines that accommodate cultural and professional variations to provide excellence in end-of-life care. Aims and objectives: To report a study protocol and the theoretical framework, normalisation process theory, that informs this protocol for a case study investigation of private sector nurse practitioners. Background: Most research evaluating nurse practitioner service is focused on public, mainly acute care environments where nurse practitioner service is well established with strong structures for governance and sustainability. Conversely, there is a lack of clarity in governance for emerging models in the private sector. In a climate of healthcare reform, nurse practitioner service is extending beyond the familiar public health sector. Further research is required to inform knowledge of the practice, operational framework and governance of new nurse practitioner models. Design: The proposed research will use a multiple exploratory case study design to examine private sector nurse practitioner service. Methods: Data collection includes interviews, surveys and audits. 
A sequential mixed-method approach to the analysis of each case will be conducted. Findings from within-case analysis will lead to a meta-synthesis across all four cases to gain a holistic understanding of the cases under study, private sector nurse practitioner service. Normalisation process theory will be used to guide the research process, specifically the coding and analysis of data using theory constructs and the relevant components associated with those constructs. Conclusions: This article provides a blueprint for the research and describes a theoretical framework, normalisation process theory, in terms of its flexibility as an analytical framework. Relevance to clinical practice: Consistent with the goals of best research practice, this study protocol will inform the research community in the field of primary health care about emerging research in this field. Publishing a study protocol ensures researcher fidelity to the analysis plan and supports research collaboration across teams. Aims and objectives: To evaluate a self-management programme collaborating with communities and mobilising peer leaders for patients with diabetes in mainland China. Background: The rapidly increasing diabetic epidemic is an overwhelming burden in China. Diabetic self-management programmes that are both effective and feasible despite health resource shortages should be developed and evaluated. Design: Quasi-experimental. Methods: In total, 181 patients with diabetes completed the study (89 in the experimental group and 92 in the control group). Diabetic instruction and peer-led group activities were the two major parts of the programme. Outcome variables, including self-efficacy, social support, self-management behaviours and quality of life, were measured. Participants' perceptions of the programme were also collected. ANOVA/ANCOVA and content analysis were used for data analysis. Results: Social support, self-efficacy and self-management behaviours significantly improved during the study period. 
Although quality of life did not change significantly, the participants provided positive feedback on the programme. Conclusions: The effectiveness of this programme was partially verified. The delivery mode, through trained peer leaders and collaboration with communities, appeared to be feasible. Using a cluster randomised controlled design with a full cost-effectiveness analysis would provide a more rigorous examination of this programme. Relevance to clinical practice: This study adds to the growing evidence of the importance of self-efficacy and social support as mechanisms for achieving behavioural change. This programme appears to be promising in promoting diabetic self-management in China and may be applied to individuals with other chronic diseases and those dwelling in other regions. Aims and objectives: To evaluate the efficacy of applying manual pressure before intramuscular injection and compare it with the standard injection technique in terms of reducing young adult students' postinjection pain. Background: The administration of intramuscular injections is a procedure performed by nurses and one that causes anxiety and pain for the patient. Nurses have ethical and legal obligations to mitigate injection-related pain, and the nurses' use of effective pain management not only provides physical comfort to patients but also improves the patients' experience. Design: Comparative experimental study. Methods: This study was conducted with first-year university students (n=123) who were scheduled for hepatitis A and hepatitis B vaccination via deltoid muscle injection. Students were randomly assigned to the groups. Comparison group students (n=60) were given an injection using the conventional method, that is, without manual pressure being applied prior to the injection. The experimental group students (n=63) received manual pressure at the vaccination site immediately before injection for a period of 10 seconds. The two techniques were used randomly. 
Perceived pain intensity was measured in both groups using a Numerical Rating Scale. Results: Findings demonstrate that students experienced significantly less pain when they received injections with manual pressure compared with the standard injection technique. The postinjection average pain score in the comparison group was higher than that in the experimental group (p < 0.05). Conclusions: This study's results show that the application of manual pressure to the injection site before intramuscular injection reduces postinjection pain intensity in young adult students (p < 0.05). Based on these results, applying manual pressure to the intramuscular injection site before injection is recommended for adults. Relevance to clinical practice: Applying pressure to the injection area is a simple and cost-effective method to reduce the pain associated with injection. Aim: To investigate whether nurse-led care was more beneficial than usual care for implementing chronic kidney disease guidelines and improving multiple risk factors. Background: Several independent clinical trials have been carried out to demonstrate the efficacy of nurse-led care in improving outcomes for patients with chronic kidney disease and addressing the risk factors for renal function decline. However, their results and conclusions were inconsistent. Methods: A meta-analysis was carried out in September 2015 based on previous studies that evaluated the efficacy of the nurse-led care model for patients with chronic kidney disease. Following quality appraisal, four randomised clinical trials that allocated patients with chronic kidney disease to usual care or nurse-coordinated care were included. Primary outcomes, such as kidney failure and cardiovascular events, were analysed.
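A meta-analysis of a small set of trials, as described above, typically pools trial-level effect estimates by inverse-variance weighting. The sketch below is a minimal, generic illustration in Python; the effect estimates and variances are invented placeholders for demonstration, not data from the included studies.

```python
# Minimal fixed-effect, inverse-variance meta-analysis sketch.
# Effects (e.g. log risk ratios) and variances are hypothetical
# placeholders, not values from the cited trials.
import math

def pool_fixed_effect(effects, variances):
    """Return the pooled effect and its standard error."""
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))              # SE of the pooled effect
    return pooled, se

# Four hypothetical trial estimates and their variances.
effects = [-0.20, -0.35, -0.10, -0.25]
variances = [0.04, 0.09, 0.02, 0.05]
pooled, se = pool_fixed_effect(effects, variances)
```

A random-effects variant would additionally estimate between-trial heterogeneity before weighting; with as few as four trials, that heterogeneity estimate is itself imprecise, which is one reason such meta-analyses hedge their conclusions.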
Results: Compared with the usual care group, the nurse-coordinated care model reduced the risk of the composite outcome of death, end-stage renal disease and doubling of serum creatinine. By contrast, nurse-led interventional care slightly increased the occurrence of acute myocardial infarction and heart failure. Limitations: Only five studies were included, and they used nonstandard endpoints, so few studies could be categorised under each outcome in this meta-analysis, leading to heterogeneity and less persuasive results. Conclusions: Intensive interventions delivered by nurse coordinators are expected to help patients attain longer life expectancy and higher quality of life, as well as to improve control of the risk factors implicated in chronic kidney disease progression. Relevance to clinical practice: Based on this study, governments and hospitals should modify the traditional nursing routine to provide a more intensive nurse-coaching care model for patients with chronic kidney disease, including older patients and those with other chronic diseases, which should further help to control risk factors and delay disease progression. Aims and objectives: To explore how couples with Parkinson's disease discuss their needs, concerns and preferences at the advanced stages of illness. Background: The majority of care for people with Parkinson's disease is provided at home by family members. Parkinson's disease is characterised by a slow progressive decline, with care needs often exceeding a decade. Design: A descriptive qualitative study with 14 couples. Methods: Data were collected on two occasions over a one-month period using semi-structured interviews, both individual and couple interviews. Data were analysed thematically by the research team. Results: All participants discussed the strong desire to remain in their homes for as long as possible.
For the people with Parkinson's disease, placement in a long-term care facility was not an option to be considered. For spouses, there was an acknowledgement that there may come a time when they could no longer continue to provide care. Concerns regarding falls, choking, voice production, financial strain and the need for prognostic information from providers influenced what they believed the future would hold and the decisions they would need to make. Conclusions: The need for improved communication between providers and couples affected by Parkinson's disease is evident. Interventions to support couples in their discussions and decision-making regarding whether to remain in the home, and options to support advanced care needs, are required. Relevance to clinical practice: Nurses can help support decision-making by providing tangible information regarding the advanced stages of Parkinson's disease, including adequate prognostic information. Aims and objectives: To explore the experiences of family members of patients treated with extracorporeal membrane oxygenation. Background: Sudden onset of an unexpected and severe illness is associated with an increased stress experience for family members. Only one study to date has explored the experience of family members of patients who are at high risk of dying and treated with extracorporeal membrane oxygenation. Design: A qualitative descriptive research design was used. Methods: A total of 10 family members of patients treated with extracorporeal membrane oxygenation were recruited through a convenience sampling approach. Data were collected using open-ended semi-structured interviews. A six-step process was applied to analyse the data thematically. Four criteria were employed to evaluate methodological rigour. Results: Family members of extracorporeal membrane oxygenation patients experienced psychological distress and strain during and after admission.
Five main themes (Going Downhill, Intensive Care Unit Stress and Stressors, Carousel of Roles, Today and Advice) were identified. These themes were explored from the perspective of the four roles within the Carousel of Roles theme (decision-maker, carer, manager and recorder) that participants experienced. Conclusion: Nurses and other staff involved in the care of extracorporeal membrane oxygenation patients must pay attention to the individual needs of the family and activate all available support systems to help them cope with stress and strain. Relevance to clinical practice: An information and recommendation guide for families and for staff caring for extracorporeal membrane oxygenation patients was developed and needs to be applied cautiously to the individual clinical setting. Aims and objectives: To explore new graduate registered nurses' reflections on bioscience courses during their nursing programme and the relationship between bioscience content and their clinical practice. Background: Undergraduate nursing students internationally find bioscience courses challenging, which may be due to the volume of content and the level of difficulty of these courses. Such challenges may be exacerbated by insufficient integration between bioscience theory and nursing clinical practice. Design: A descriptive, cross-sectional mixed methods study was conducted. Methods: A 30-item questionnaire with five written-response questions, which explored recently registered nurses' reflections on bioscience courses during their nursing degree, was employed. Descriptive analyses were reported for individual items. Qualitative responses were grouped thematically to reveal emerging themes. Results: Registered nurses' (n=22) reflections revealed that bioscience courses were a significant challenge during their undergraduate programme and that they lacked confidence explaining the biological basis of nursing.
Participants would like improved knowledge of the bioscience relevant to nursing and agreed that bioscience courses should be extended into the final undergraduate year. The importance of relating bioscience content to nursing practice was elaborated extensively throughout the written responses. Conclusions: Although registered nurses reflected that bioscience courses were difficult, with large volumes of content, having more bioscience with greater relevance to nursing applications was considered important for their current clinical practice. It is suggested that bioscience academics develop greater contextual links between bioscience content and clinical practice relevant to nursing. Relevance to clinical practice: After working as registered nurses, participants appreciated the relevance of bioscience to clinical practice and believed they would have benefitted from more nursing-related bioscience during their undergraduate programme. Focussed integration of bioscience with clinical nursing courses should be driven by academics, nurse educators and clinical nurses to provide nursing students with a biological basis for patient care. Aims and objectives: To explore specific cultural and religious beliefs and values concerning death and dying, truth telling and advance care planning, and the preferences for end-of-life care among older persons from culturally and linguistically diverse backgrounds. Background: Whilst the literature indicates that culture has a significant impact on end-of-life decision-making, there is limited evidence on the topic. Design: A cross-sectional survey. Methods: A total of 171 older persons in the community who make regular visits to 17 day-care centres expressed in a questionnaire their: (1) beliefs about death and dying, truth telling and advance care planning; and (2) preferences for end-of-life care. Results: More than 92% of respondents believed that dying is a normal part of life, and more than 70% felt comfortable talking about death.
Whilst respondents accepted dying as a normal part of life, 64% of the Eastern European and 53% of the Asia/Pacific groups believed that death should be avoided at all costs. People from the Asia/Pacific group reported the most consensual view against all of the life-prolonging measures. Conclusion: Cultural and religious beliefs and values may have an impact on preferences for treatment at the end of life. The study offers nurses empirical data to help shape conversations about end-of-life care, and thus to enhance their commitment to helping people 'die well'. Relevance to clinical practice: Acquiring information to extend understanding of each individual before proceeding with documentation of advance care planning is essential and should include eliciting individuals' cultural and religious beliefs and values, and preferences for care. An institutional system and/or protocol that promotes conversations about these among nurses and other healthcare professionals is warranted. Aims and objectives: To report the findings from a unique analysis of naturally occurring data regarding self-harm behaviour generated through the global social media site Twitter. Background: Self-harm behaviours are of global concern for health and social care practice. However, little is known about the experiences of those who self-harm and the attitudes of the general public towards such behaviours. A deeper, richer and more organic understanding of this is vital to informing global approaches to supporting individuals through treatment and recovery. Design: Exploratory, qualitative design. Methods: Three hundred and sixty-two Twitter messages were subjected to inductive thematic analysis. Results: Five themes were identified: (1) celebrity influence; (2) self-harm is not a joke (with subthemes of 'you wouldn't laugh if you loved me' and 'you think it's funny, I think it's cruel'); (3) support for and from others; (4) eating disorders and self-harm; and (5) videos and personal stories.
Conclusions: The findings indicate that self-harm behaviour continues to be largely misunderstood by the general public and is often the source of ridicule, which may contribute to delays in accessing treatment. Whilst Twitter may provide a source of valuable support for those who self-harm, the sense of community, relatedness and understanding generated by such support may contribute to normalising self-harm and perpetuating the behaviours. Relevance to clinical practice: Our understanding of the complexity, aetiology and most effective treatment options for self-harm behaviours is still unclear. The findings demonstrate that there is a critical opportunity to conduct further qualitative research to better understand self-harm and to use these valuable and internationally relevant data to support the development of effective public education campaigns and personally tailored treatment options. Aims and objectives: To evaluate the effect of an 'insulin introduction' group visit on insulin initiation and A1C in adults with type 2 diabetes. Background: The clinical course of type 2 diabetes involves eventual beta-cell failure and the need for insulin therapy. Patient psychological insulin resistance, provider-related delays and system barriers to timely initiation of insulin are common. Group visits are widely accepted by patients and represent a potential strategy for improving insulin initiation. Design: A single two-hour group visit in English or Spanish, facilitated by advanced practice nurses, addressed psychological insulin resistance and encouraged mock injections to overcome needle anxiety. Methods: A retrospective review of 273 patients referred from 2008-2012 determined characteristics of group attenders, rates of mock self-injection, rates of insulin initiation and changes in A1C from baseline to 2-6 and 7-12 months post-group. Change in A1C was compared with that of patients referred to the group who did not attend ('nonattenders').
Results: Of 241 patients eligible for analysis, 87.6% were racial/ethnic minorities, with an average A1C of 9.99%. The group attendance rate was 66%; 92% of attenders performed a mock injection, and 55% subsequently started insulin. By 2-6 months, A1C had decreased by 1.37% among group attenders, and by 1.6% in those who did a mock injection and started insulin. Fewer nonattenders started insulin in primary care (40%), experiencing an A1C reduction of 0.56% by 2-6 months. A1C improvements were sustained at 7-12 months among group attenders and among nonattenders who started insulin. Relevance to clinical practice: Nurses can effectively address patient fears and engage patients in reframing insulin therapy within group visits. Conclusions: This one-time nurse-facilitated group visit addressing psychological barriers to insulin in a predominantly minority patient population resulted in increased insulin initiation rates and clinically meaningful A1C reductions. Aims and objectives: To explore the lived experience and meaning of being diagnosed with multiple sclerosis for the individual's sense of self. Background: The time leading up to and immediately following the diagnosis of multiple sclerosis has been identified as a period shrouded in uncertainty and one where individuals have a heightened desire to seek accurate information and support. The diagnosis brings changes to the way one views the self, which has consequences for biographical construction. Design: A hermeneutic phenomenological study. Methods: In-depth qualitative interviews were conducted with 10 people recently diagnosed with multiple sclerosis. The data were analysed using interpretative phenomenological analysis. Findings: This study presents three master themes: 'the road to diagnosis', 'the liminal self' and 'learning to live with multiple sclerosis'.
The diagnosis of multiple sclerosis may be conceptualised as a 'threshold moment' where the individual's sense of self is disrupted from the former taken-for-granted way of being; we propose a framework which articulates this transition. Conclusion: The findings highlight the need for healthcare professionals to develop interventions to better support people affected by a new diagnosis of multiple sclerosis. The conceptual framework developed from the data and presented in this study provides a new way of understanding the impact of the diagnosis on the individual's sense of self. This framework can guide healthcare professionals in the provision of supportive care around the time of diagnosis. Relevance for clinical practice: The findings provide practitioners with a new way of understanding the impact of the diagnosis on the individual's sense of self and a framework which can guide them in the provision of supportive care around the time of diagnosis. Aims and objectives: To describe the variation in conceptions of being ill and hospitalised from the perspective of healthcare-professional patients. Background: Previous literature focuses on either physicians' or nurses' experiences of being a patient, without aiming to determine the variation in ways of understanding that phenomenon. Nor have we been able to identify any study reporting other healthcare professionals' experiences. Design: This study has an inductive descriptive design. Methods: Qualitative interviews with healthcare professionals (n=16) who had been hospitalised for at least two days. Phenomenographic data analysis was conducted. Results: Feelings of security were based on knowledge, insight and trust, and acceptance of the healthcare system. Being exposed and totally dependent due to illness provoked feelings of vulnerability and insecurity. The patients used their knowledge to achieve participation in their care.
The more severe they perceived their illness to be, the less they wanted to participate and the more they expressed a need to be allowed to surrender control. The patients' ideal picture of care was sometimes disrupted and, based on their experience, they criticised the care and made suggestions that could contribute to general care improvements. Conclusions: Healthcare-professional patients have various conceptions of being ill and hospitalised. Based on the general nature of the many needs expressed, we believe that the insights provided in this study can be transferred so as to also be valid for lay patients. Possibly, an overhaul of routines for discharge planning and follow-up, and the adoption of a person-centred approach to care, could resolve some of the identified shortcomings. Relevance to clinical practice: The results can be used to develop knowledge for the healthcare professions and for educational purposes. Aim and objective: To report an analysis of the concept of safety climate in healthcare providers. Background: Compliance with safe work practices is essential to patient safety and care outcomes. Analysing the concept of safety climate from the perspective of healthcare providers could improve understanding of the correlations between safety climate and healthcare provider compliance with safe work practices, thus enhancing the quality of patient care. Design: Concept analysis. Data sources: The electronic databases CINAHL, MEDLINE, PubMed and Web of Science were searched for literature published between 1995 and 2015. Searches used the keywords 'safety climate' or 'safety culture' with 'hospital' or 'healthcare'. Method: The concept analysis method of Walker and Avant was used to analyse safety climate from the perspective of healthcare providers.
Results: Three attributes defined safety climate as understood by healthcare providers: (1) creation of a safe working environment by senior management in healthcare organisations; (2) shared perceptions among healthcare providers about the safety of their work environment; and (3) effective dissemination of safety information. Antecedents included the characteristics of healthcare providers and of healthcare organisations as a whole, and the types of work in which they are engaged. Consequences consisted of safety performance and safety outcomes. Most studies developed and assessed survey tools for safety climate or safety culture, with a minority examining interventional measures for improving safety climate. Conclusion: More prospective studies are needed to create interventional measures for improving the safety climate of healthcare providers. This study is provided as a reference for developing multidimensional safety climate assessment tools and interventional measures. Relevance to clinical practice: The values healthcare teams emphasise with regard to safety can serve to improve safety performance. An understanding of the concept of safety climate, and of interventional measures for improving it, allows healthcare providers to ensure the safety of their operations and their patients. The 21st-century work environment calls for team members to be more engaged in their work and to exhibit more creativity in completing their job tasks. The purpose of this study was to examine whether team performance pressure and individual goal orientation would moderate the relationships between individual autonomy in teams and individual engagement and creativity. A sample consisting of 209 team members and 45 team managers from 45 work teams in 14 companies completed survey measures. To test our hypotheses, we used multilevel modeling with random intercepts and slopes because the individual-level data were nested within the team-level data.
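The nested design just described (individuals within teams, with random intercepts and slopes) corresponds to a standard two-level data-generating process. The Python sketch below simulates such data; all variable names and parameter values are hypothetical illustrations, not the study's data, and the pooled OLS at the end is only a naive benchmark for the average slope, not the mixed-model estimator the study used.

```python
# Sketch of the two-level data-generating process behind a multilevel model
# with random intercepts and slopes: individuals (level 1) nested in teams
# (level 2). All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_teams, n_per_team = 200, 30
gamma0, gamma1 = 2.0, 0.5            # fixed effects: grand intercept and slope

xs, ys = [], []
for _ in range(n_teams):
    u0 = rng.normal(0, 0.8)          # team-specific intercept deviation
    u1 = rng.normal(0, 0.3)          # team-specific slope deviation
    x = rng.normal(0, 1, n_per_team)             # e.g. individual job autonomy
    e = rng.normal(0, 1, n_per_team)             # level-1 residual
    ys.append((gamma0 + u0) + (gamma1 + u1) * x + e)  # e.g. engagement
    xs.append(x)

x_all, y_all = np.concatenate(xs), np.concatenate(ys)
# Pooled OLS recovers the *average* slope but ignores the clustering that a
# mixed model with random intercepts and slopes explicitly accounts for.
ols_slope = np.cov(x_all, y_all)[0, 1] / np.var(x_all, ddof=1)
```

In practice a mixed-model routine (e.g. lme4 in R or statsmodels' MixedLM in Python) would estimate the fixed effects together with the random-effect variances, and cross-level moderators such as team performance pressure enter as predictors of the team-specific slopes.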
Hierarchical linear modeling showed that team-level performance pressure attenuated the positive relations between job autonomy and the three dimensions of engagement. There were also three-way interactions between job autonomy, psychological performance pressure, and learning goal orientation in predicting the three dimensions of engagement and creativity. This study highlights the importance of exploring the moderating effects of team-level task characteristics and individual differences on the relationships between job autonomy and individual engagement and creativity. Organizations need to carefully consider both individual learning goals and performance pressure when empowering team members with job autonomy. This is one of the first studies to explore the association between individual job autonomy in teams and individual outcomes in a contingency model. We are among the first to introduce team performance pressure as a moderator of job autonomy and to examine the three-way interaction effects of performance pressure, individual job autonomy, and learning goal orientation. Our objective was to generate, define, and evaluate behavioral dimensions of ethical performance at work that are common across United States occupations. This project involved three studies. Study 1 involved (a) a qualitative review of published literature, professional codes of ethics, and critical incidents of (un)ethical performance, resulting in (b) behavioral dimensions and ethical performance rating scales. The second and third studies used a retranslation methodology to evaluate the ethical performance dimensions from Study 1. The behavioral dimensions were linked to the performance determinants (personal attributes) in Study 3. Study 1 resulted in draft dimension definitions and rating scales for 10 ethical performance dimensions. In Studies 2 and 3, retranslation data provided strong support for 10 behavioral dimensions of ethical performance at work.
Results from Study 3 shed light on possible relationships among the performance dimensions based on their underlying performance determinants. Communicating an organization's ethical standards to employees is important because some ethical breakdowns can be attributed to simply failing to recognize an ethical matter (De Cremer, Managerial Ethics: Managing the Psychology of Morality, Routledge, New York, 2011). Definitions of ethical behavior in the workplace provide a tool for researchers, employers, and employees to communicate about ethical situations and a foundation for folding ethics into employee training and performance management. These studies provide a taxonomy of ethical performance at work that generalizes to a diverse array of occupations and industries, and its dimensions and rating scales have value for performance management, training/curriculum development, job analysis, predictor development and/or validation, and additional research. The purpose of this study was to investigate relationships between dimensions of work ethic and dimensions of organizational citizenship behavior (OCB) and counterproductive work behavior (CWB). Data were collected from employed individuals in MBA and undergraduate management courses and their work supervisors (N = 233). Participants represented diverse occupations with respect to job levels and industries. Participants completed the work ethic inventory, and participants' managers completed ratings of OCB and CWB. The work ethic dimension of centrality of work was positively related to both dimensions of OCB (i.e., OCB-I and OCB-O), and the work ethic dimension of morality/ethics was negatively related to one of the dimensions of CWB (i.e., CWB-I). Modern perspectives on job performance recognize the multidimensional nature of the domain (i.e., the expanded criterion domain). In addition, noncognitive predictors such as work ethic have value as individual differences that are associated with performance outcomes.
The assessment of such constructs can help inform selection and placement activities where a focus on OCB and CWB is important to managers. This study provides additional evidence on the relationship between work ethic and performance outcomes. Previous research has provided limited information on the relationship between dimensions of work ethic and dimensions of OCB, and no information existed on the relationship between work ethic dimensions and CWB. This study investigated the convergence of knowledge, skills, abilities, and other characteristics (KSAOs) required for either face-to-face (FtF) or text-based computer-mediated (CM) communication, the latter being frequently mentioned as core twenty-first-century competencies. In a pilot study (n = 150, paired self- and peer reports), data were analyzed to develop a measurement model for the constructs of interest. In the main study, FtF and CM communication KSAOs were assessed via an online panel (n = 450, paired self- and peer reports). Correlated-trait-correlated-method minus one models were used to examine the convergence of FtF and CM communication KSAOs at the latent variable level. Finally, we applied structural equation modeling to examine the influence of communication KSAOs on communication outcomes within (e.g., CM KSAOs on CM outcomes) and across contexts (e.g., CM KSAOs on FtF outcomes). Self-reported communication KSAOs showed only low to moderate convergence between FtF and CM contexts. Convergence was somewhat higher in peer reports, but still suggested that the contextualized KSAOs are separable. Communication KSAOs contributed significantly to communication outcomes; context-incongruent KSAOs explained less variance in outcomes than context-congruent KSAOs. The results imply that FtF and CM communication KSAOs are distinct, thus speaking to the consideration of CM KSAOs as twenty-first-century competencies and not just a derivative of FtF communication competencies.
This study is the first to examine the convergence of context-specific communication KSAOs within a correlated-trait-correlated-method minus one framework using self- and peer reports. Even though stereotypes suggest that older generational cohorts (e.g., Baby Boomers) endorse higher levels of work ethic than younger generations (e.g., Millennials), both the academic literature and the popular press have found mixed evidence as to whether or not generational differences actually exist. To examine whether generational differences exist in work ethic, a dataset was compiled (k = 105) of all published studies that provided an average sample age and average work ethic score, with each sample becoming an observation and being assigned a generational cohort based on its average age. Three hierarchical multiple regressions found no effect of generational cohort on work ethic endorsement. In two of the three phases, results showed a main effect of sample type, such that industry samples had higher work ethic endorsement than student samples. Implications for applied practitioners and future research streams for generational and work ethic research are discussed. This research advances understanding of empirical time modeling techniques in self-regulated learning research. We intuitively explain several such methods by situating their use in the extant literature. Further, we note key statistical and inferential assumptions of each method while making clear the inferential consequences of inattention to such assumptions. Using a population model derived from a recent large-scale review of the training and work learning literature, we employ a Monte Carlo simulation fitting six variations of linear mixed models, seven variations of latent common factor models, and a single latent change score model to 1500 simulated datasets.
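The Monte Carlo logic described above (simulate many datasets from a known population model, fit competing models, and compare how well each recovers the true parameters) can be sketched generically. The Python toy below is purely illustrative: the population model is a simple linear-growth process with random intercepts and slopes, only one estimator is shown, and only one comparison criterion (estimation precision of the average improvement) is computed; none of the values are from the cited review.

```python
# Toy Monte Carlo: simulate repeated-measures learning data from a known
# linear-growth population model, then check how precisely a simple
# estimator recovers the average learner improvement (true mean slope).
# All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
TRUE_MEAN_SLOPE = 0.4          # average improvement per measurement occasion
N, T, REPS = 100, 5, 200       # learners, occasions, simulated datasets
t = np.arange(T)

def simulate_and_estimate():
    slopes = rng.normal(TRUE_MEAN_SLOPE, 0.2, N)     # random slopes
    intercepts = rng.normal(0.0, 1.0, N)             # random intercepts
    y = intercepts[:, None] + slopes[:, None] * t + rng.normal(0, 0.5, (N, T))
    # Per-learner OLS slope, then average across learners.
    tc = t - t.mean()
    per_learner = ((y - y.mean(1, keepdims=True)) @ tc) / (tc ** 2).sum()
    return per_learner.mean()

estimates = np.array([simulate_and_estimate() for _ in range(REPS)])
bias = estimates.mean() - TRUE_MEAN_SLOPE       # should be near zero
precision = estimates.std(ddof=1)               # Monte Carlo SD of estimator
```

A full study of the kind described would fit each candidate model (linear mixed, latent common factor, latent change score) to every simulated dataset and compare bias, precision, and Type I/II error rates across models, rather than a single estimator as here.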
The latent change score model outperformed all six of the linear mixed models and all seven of the latent common factor models with respect to (1) estimation precision of the average learner improvement, (2) correctly rejecting a false null hypothesis about such average improvement, and (3) correctly failing to reject a true null hypothesis about between-learner differences (i.e., random slopes) in average improvement. The latent change score model is a more flexible method of modeling time in self-regulated learning research, particularly for learner processes consistent with twenty-first-century workplaces. Consequently, defaulting to linear mixed or latent common factor modeling methods may have adverse inferential consequences for better understanding self-regulated learning in twenty-first-century work. Ours is the first study to critically, rigorously, and empirically evaluate self-regulated learning modeling methods and to provide a more flexible alternative consistent with modern self-regulated learning knowledge. The purpose of these studies was to evaluate the effectiveness of a modified introductory engineering class that used scrum practices to develop students' twenty-first-century skills related to self-awareness, collaboration, and problem-solving. We conducted an evaluation of modified engineering courses in two universities. In Study 1, 250 students completed end-of-semester surveys about the impact of the course on student development. In Study 2, we collected survey data and course grades from 125 students completing the modified course and 109 completing the standard course. In Study 1, students reported that the class increased their excitement about pursuing a career in engineering, and reported improvement in all leadership skills assessed. In Study 2, students in the modified course enjoyed the course more than those in the standard course.
In all individual and team behaviors assessed, students in the modified course reported more improvement than students in the standard courses, although none of these differences reached statistical significance. The future of engineering is likely to be shaped by our ability to bolster twenty-first-century skills in engineering education. These studies provide initial evidence that infusing leadership development into the engineering curriculum through scrum practices is effective at helping engineering students develop critical twenty-first-century skills. Integrating leadership development into the engineering curriculum is not yet commonplace, with many institutions separating it from engineering coursework. This paper describes an approach for integrating the two and provides initial evidence that it can be done effectively, without sacrificing students' experience or mastery of engineering content. We investigate the effect of banks' corporate control on the cost of bank lending. We exploit the fact that the shares banks hold in a fiduciary capacity can give them control over firms' voting rights that are separate from firms' cash flow rights. We find that banks offer firms an interest rate discount that is increasing with the bank's voting stake in the firm. This finding is robust and appears to be driven by the bank's voting rights. Banks offer larger discounts when they have more authority to exercise their voting rights, and no discount when they hold an equity stake that does not give them voting authority. Additionally, banks offer larger interest rate discounts to riskier borrowers, and firms that borrow from banks with voting rights experience lower stock volatility after they take out loans. We investigate the impact of information and communication technologies (ICT) on local branch managers' (LBMs) autonomy in small business lending. Using a unique dataset of nearly 300 Italian banks, we show that banks holding more ICT capital delegate more decision-making power to their LBMs.
Evidence from a variety of identification strategies suggests that our results are not driven by unobserved heterogeneity. We also find that the positive effect of ICT on delegation is stronger for banks that rely more on soft information (i.e., those specialized in small business lending and with a longer tenure of LBMs in the same branch). While the literature on debt restructuring usually assumes that banks behave uniformly towards distressed firms, we demonstrate that banks follow different strategies when they decide whether or not to take part in the workout process. Using a survey of Italian banks, we link this heterogeneity to banks' internal organization and lending technologies (transactional versus relationship lending). The probability of debt restructuring is higher when the bank is geographically closer to the borrowing firm, relies more on soft information than on credit scoring (except when credit scoring is used for monitoring purposes), or adopts a decentralized structure. In this paper, we analyze the performance persistence and survivorship bias of Islamic funds. The remarkable growth of these types of ethical funds raises the question of how non-financial attributes, including beliefs and value systems, influence performance and its persistence. A procedure commonly used in prior literature to assess persistence is to measure the performance of investment strategies based on past performance. In this context, we propose a refined version of this methodology that controls for the cross-sectional significance of the performance of these strategies. This procedure correctly identifies whether abnormal performance is due to a dynamic investment strategy based on past performance, or whether it is obtained by investing in a particular set of mutual funds. The significance of the persistence varies depending on the time horizon (yearly/half-yearly), survivorship, or the tail of the distribution. 
In particular, we find that persistence only exists for the best funds, whereas for the worst funds, the results are not significant. We examine the intraday informational and liquidity effects of unexpected duration between trades on bid-ask spreads and depths. The difference between realized duration and the predicted duration from an autoregressive conditional duration model is used as a proxy for unexpected duration. We find that unexpected short duration alone permanently increases the quoted spread and positively correlates with the adverse-selection component of the effective spread, despite the presence of a liquidity component in the spread adjustment. Unexpected duration for a buyer-initiated trade has a stronger impact on the quoted spread than that for a seller-initiated trade. These results support the implications of information uncertainty in Easley and O'Hara (J Financ 47(2):577-605, 1992) and short-sales constraints in Diamond and Verrecchia (J Financ Econ 18:277-311, 1987) for price adjustment behavior. Moreover, we show that unexpected short duration for a seller-initiated (buyer-initiated) trade permanently increases (slightly reduces) the bid (ask) depth and that there is also a liquidity component in the adjustment in depths. We attribute the asymmetric effects on depths to the differential informativeness of buyer- and seller-initiated trades. In fin-de-siècle France, hypnotism enjoyed an unprecedented level of medico-scientific legitimacy. Researchers studying hypnotism nonetheless had to manage relations between their new 'science' and its widely denigrated precursor, magnétisme animal, because too great a resemblance between the two could damage the reputation of 'scientific' hypnotism. They did so by engaging in the rhetorical activity of boundary-work. This paper analyses such demarcation strategies in major texts from the Salpêtrière and Nancy Schools - the rival groupings that dominated enquiry into hypnotism in the 1880s. 
Researchers from both Schools depicted magnétisme as 'unscientific' by emphasizing the magnetizers' tendency to interpret phenomena in wondrous or supernatural terms. At the same time, they acknowledged and recuperated the 'portions of truth' hidden within the phantasmagoria of magnétisme; these 'portions' function as positive facts in the texts on hypnotism, immutable markers of an underlying natural order that accounts for similarities between the phenomena of magnétisme and hypnotism. If this strategy allows for both continuities and discontinuities between the two fields, it also constrains the scope for theoretical speculation about hypnotism, as signalled, finally, by a reading of one fictional study of the question, Anatole France's 'Monsieur Pigeonneau'. During the late nineteenth century, Spanish physicians had few chances to observe how hypnosis worked within a clinical context. However, they had abundant opportunities to watch lay hypnotizers in action during private demonstrations or on stage. Drawing on the exemplary cases of the magnetizers Alberto Santini Sgaluppi (a.k.a. Alberto Das) and Onofroff, in this paper I discuss the positive influence of stage magnetizers on medical hypnosis in Spain. I argue that, owing to the absence of medical training in hypnosis, the stage magnetizers' demonstrations became practical hypnosis lessons for many physicians willing to learn from them instead of condemning them. I conclude that Spain might be no exception in this regard, and that further research should be undertaken into practices in other countries. In the late 1870s, a small group of Italian psychiatrists became interested in hypnotism in the wake of the studies conducted by the French neurologist Jean-Martin Charcot. 
Eager to engage in hypnotic research, these physicians referred to the scientific authority of French and German scientists in order to overcome the scepticism of the Italian medical community and establish hypnotism as a research subject based on Charcot's neuropathological model. In the following years, French studies on hypnotism continued to exert a strong influence in Italy. In the mid-1880s, studies on hypnotic suggestion by the Salpêtrière and Nancy Schools of hypnotism gave further impetus to research and therapeutic experimentation and inspired the emergence of an interpretative framework that combined theories from both hypnotic schools. By the end of the decade, however, uncertainties had arisen around both hypnotic theory and the therapeutic use of hypnotism. These uncertainties, linked both to the crisis of the neuropathological paradigm that had largely framed the understanding of hypnotism in Italy and to theoretical disagreements among the psychiatrists engaged in hypnotic research, ultimately led to a decline in interest in hypnotism in Italy. In May 1892, Belgium adopted a law on the exercise of hypnotism. The signing of the law constituted a temporary endpoint to six years of debate on the dangers and promises of hypnotism, a process of negotiation between medical doctors, members of parliament, legal professionals and lay practitioners. The terms of the debate were not very different from those elsewhere in Europe, where, since the mid-1880s, hypnotism had become an object of public concern. The Belgian law was nevertheless unique in its combined effort to regulate the use of hypnosis in public and private, for purposes of entertainment, research and therapy. My analysis shows how the making of the law was a process of negotiation in which local, national and transnational networks and allegiances each played a part. 
While the transnational atmosphere of moral panic had created a seedbed for the law, its eventual outlook owed much to the powerful lobbying of an essentially local network of lay magnetizers, and to the renown of Joseph Delboeuf, professor at the University of Liège, whose work in the field of hypnotism stimulated several liberal doctors and members of Parliament from the Liège region to defend a more lenient law. In the late nineteenth century, German-speaking physicians and psychiatrists intensely debated the benefits and risks of treatment by hypnotic suggestion. While practitioners of the method sought to provide convincing evidence of its therapeutic efficacy in many medical conditions, especially nervous disorders, critics pointed to dangerous side effects, including the triggering of hysterical attacks or the deterioration of nervous symptoms. Other critics claimed that patients merely simulated hypnotic phenomena in order to appease their therapist. A widespread concern was the potential for abuses of hypnosis, either by giving criminal suggestions or in the form of sexual assaults on hypnotized patients. Official inquiries by the Prussian Minister for Religious, Educational and Medical Affairs in 1902 and 1906 indicated that relatively few doctors practised hypnotherapy, whereas the method was increasingly used by lay healers. Although the Ministry found no evidence of serious harm caused by hypnotic treatments, whether performed by doctors or by lay healers, many German doctors seem to have regarded hypnotic suggestion therapy as a problematic method and abstained from using it. 
Lurid tales of the criminal use of hypnosis captured both popular and scholarly attention across Europe during the closing decades of the nineteenth century, culminating not only in the invention of fictional characters such as du Maurier's Svengali but also in heated debates between physicians over the possibilities of hypnotic crime and the application of hypnosis for forensic purposes. The scholarly literature and expert advice that emerged on this topic at the turn of the century highlighted the transnational nature of research into hypnosis and the struggle of physicians in a large number of countries to prise hypnotism from the hands of showmen and amateurs once and for all. Making use of the 1894 Czynski trial, in which a Baroness was putatively hypnotically seduced by a magnetic healer, this paper will examine the scientific, popular and forensic tensions that existed around hypnotism in the German context. Focusing, in particular, on the expert testimony about hypnosis and hypnotic crime during this case, the paper will show that, while such trials offered opportunities to criminalize and pathologize lay hypnosis, they did not always provide the ideal forum for settling scientific questions or disputes. Clustering by fast search and find of density peaks (DPC), introduced by Alex Rodriguez and Alessandro Laio, has attracted much attention in the field of pattern recognition and artificial intelligence. However, DPC still has several unresolved defects. Firstly, the local density rho(i) of point i is affected by the cutoff distance dc, which can influence the clustering result, especially for small real-world datasets. Secondly, the number of clusters is still determined intuitively, by using the decision diagram to select the cluster centers. In order to overcome these defects, this paper proposes an automatic density peaks clustering approach using a DNA genetic algorithm optimized data field and Gaussian process (referred to as ADPC-DNAGA). 
ADPC-DNAGA can extract the optimal threshold value using the potential entropy of the data field and automatically determine the cluster centers by the Gaussian method. For any data set to be clustered, the threshold can be calculated objectively from the data set itself rather than estimated empirically. The proposed clustering algorithm is benchmarked on publicly available synthetic and real-world datasets that are commonly used for testing the performance of clustering algorithms. The clustering results are compared not only with those of DPC but also with those of several well-known clustering algorithms such as Affinity Propagation, DBSCAN and spectral clustering. The experimental results demonstrate that our proposed clustering algorithm can find the optimal cutoff distance dc, automatically identify clusters regardless of their shape and the dimension of the embedded space, and often outperform the comparison algorithms. Sequence assembly is one of the important topics in bioinformatics research. Sequence assembly algorithms have long faced the problems of poor assembly precision and low efficiency. In view of these two problems, this paper designs and implements a precise assembly algorithm based on MapReduce, under the strategy of finding the source of reads (SA-BR-MR), together with the Eulerian path algorithm. Computational results show that SA-BR-MR is more accurate than other algorithms. At the same time, SA-BR-MR assembles 54 sequences randomly selected from NCBI, covering animals, plants and microorganisms, with base lengths from hundreds to tens of thousands. All matching rates of the 54 sequences are 100%. For each species, the algorithm summarizes the range of K that yields matching rates of 100%. In order to verify the K value range for hepatitis C virus (HCV) and related variants, eight randomly selected HCV variants are assembled. The results verify the correctness of the K range for hepatitis C and related variants from NCBI. 
The experimental results provide a basis for sequencing other variants of HCV. In addition, the Spark platform is a new computing platform based on in-memory computation, which is highly efficient and suitable for iterative calculation. Therefore, this paper also designs and implements a sequence assembly algorithm on the Spark platform under the strategy of finding the source of reads (SA-BR-Spark). In comparison with SA-BR-MR, SA-BR-Spark shows a superior computational speed. In the data mining field, frequent approximate subgraph (FAS) mining has become an important technique with a broad spectrum of real-life applications. This is because several real-life phenomena can be modeled by graphs. In the literature, several algorithms have been reported for mining frequent approximate patterns on simple-graph collections; however, there are applications where more complex data structures, such as multi-graphs, are needed for modeling the problem. However, to the best of our knowledge, there is no FAS mining algorithm designed for dealing with multi-graphs. Therefore, in this paper, a canonical form (CF) for simple-graphs is extended to allow representing multi-graphs, and a state-of-the-art algorithm for FAS mining is also extended for processing multi-graph collections by using the extended CF. Our experiments over different synthetic and real-world multi-graph collections show that the proposed algorithm has a good performance in terms of runtime and scalability. Additionally, we show the usefulness of the patterns computed by our algorithm in an image classification problem where images are represented as multi-graphs. Race identification is an essential ability of human vision. Race classification by machine, based on face images, can be used in several practical application fields. Employing holistic face analysis, local feature extraction and 3D models, many race classification methods have been introduced. 
In this paper, we propose a novel fusion feature based on periocular region features for classifying East Asian from Caucasian faces. With the periocular region landmarks, we extract five local texture or geometrical features in regions of interest that contain discriminating race information. These five effective features are then fused into a single remarkable feature by Adaboost training. On the composed OFD-FERET face database, our method achieves perfect performance in terms of average accuracy rate. Meanwhile, we conduct plenty of additional experiments to discuss the effects of gender, landmark detection, glasses and image size on performance. There exist a large number of distance functions that allow one to measure similarity between feature vectors and thus can be used for ranking purposes. When multiple representations of the same object are available, distances in each representation space may be combined to produce a single similarity score. In this paper, we present a method to build such a similarity ranking out of a family of distance functions. Unlike other approaches that aim to select the best distance function for a particular context, we use several distances and combine them in a convenient way. To this end, we adopt a classical similarity learning approach and treat the problem as a standard supervised machine learning task. As in most similarity learning settings, the training data are composed of a set of pairs of objects that have been labeled as similar/dissimilar. These are first used as input to a transformation function that computes new feature vectors for each pair by using a family of distance functions in each of the available representation spaces. Then, this information is used to learn a classifier. The approach has been tested using three different repositories. 
Results show that the proposed method outperforms other alternative approaches in high-dimensional spaces and highlight the benefits of using multiple distances in each representation space. In order to improve the error resilience of H.264/AVC compressed video streams in wireless channel transmission, this paper presents a spatial error concealment algorithm based on an adaptive edge threshold and directional weights. Firstly, the algorithm uses the Sobel gradient operator for image edge detection to detect the edges of the macroblocks adjacent to a damaged macroblock; secondly, according to the specific information of those adjacent macroblocks, it sets an adaptive gradient threshold; thirdly, it performs direction-weighted interpolation on the damaged macroblock using the detected edge directions. Experiments show that the image reconstruction quality is greatly improved by this algorithm, which not only improves the quality of image restoration but also has higher application value for different video sequences compared to traditional spatial error concealment algorithms. To remove image noise without considering the noise model, a dual-tree wavelet thresholding method (CDOA-DTDWT) based on noise variance optimization is proposed. Instead of building a noise model, the proposed approach uses the improved chaotic Drosophila optimization algorithm (CDOA) to estimate the noise variance, and the estimated noise variance is utilized to modify the wavelet coefficients in the shrinkage function. To verify the optimization ability of the improved CDOA, comparisons with the basic DOA, GA, PSO and VCS are performed as well. The proposed method is tested on the removal of additive noise and multiplicative noise, and the denoising results are compared with those of other representative methods, e.g. 
the Wiener filter, median filter, discrete wavelet transform-based thresholding (DWT), and non-optimized dual-tree wavelet transform-based thresholding (DTDWT). Moreover, CDOA-DTDWT is applied as a pre-processing step for tracking the roller of a mining machine. The experimental and application results prove the effectiveness and superiority of the proposed method. Agricultural robots for mechanical harvesting require automatic detection and counting of fruits in the tree canopy. Because of color similarity, shape irregularity, and background complexity, fruit identification is a very difficult task, let alone executing a picking action. Green cucumber detection within a complex background is therefore a challenging task due to all the above-mentioned problems. In this paper, a technique based on texture analysis and color analysis is proposed for detecting cucumbers in a greenhouse. The RGB image was converted to a grayscale image and an HSI image, respectively, for processing. Color analysis was carried out in the first stage to remove background such as soil, branches, and sky, while keeping as many green pixels representing cucumbers and leaves as possible. In parallel, MSER and HOG were applied for texture analysis in the grayscale image; candidate regions containing cucumbers were obtained by MSER. The support vector machine is the classifier used for the identification task. In order to further remove false positives, key points were detected by the SIFT algorithm. Then, the results of color analysis and texture analysis were merged to get candidate cucumber regions. In the last stage, mathematical morphology operations were applied to obtain the complete cucumber. To improve rate stability and achieve a balance between different viewpoints in a distributed multi-view video coding (DMVC) system, a novel symmetric DMVC (SDMVC) scheme is proposed in this paper. 
In the proposed scheme, every frame from all views adopts the same encoding mode and stable output rates are achieved, which is significant for improving transmission efficiency in the channel. Both temporal and spatial correlations are exploited; in addition, a novel side information (SI) generation algorithm, aimed at better exploiting the correlations of the proposed scheme, is introduced to obtain better performance. The simulation results show that the proposed SDMVC scheme achieves a much more stable rate than the asymmetric scheme, with only a negligible bit-rate increase. Meanwhile, the proposed SI generation algorithm significantly improves the coding performance. Face recognition is an important aspect of biometric surveillance systems. Generally, face recognition is a type of biometric system that can identify a specific individual by analyzing and comparing patterns in facial images. A distinct advantage that face recognition has over other biometrics is that it is a noncontact process. It has a wide variety of applications in both law enforcement and non-law-enforcement settings. When low-resolution face images are used, recognition performance degrades. In this paper, to enhance the performance rate for low-resolution images, a fractional Bat algorithm and a multi-kernel-based spherical SVM classifier are proposed. Initially, the low-resolution image is converted into a high-resolution image by the kernel regression method. The GWTM process is utilized for feature extraction via the Gabor filter, wavelet transform and local binary pattern (texture descriptors). Then, the super-resolution images are subjected to feature-level fusion using the fractional Bat algorithm, which combines fractional theory with the Bat algorithm. Finally, the multi-kernel-based spherical SVM classifier is introduced for the recognition of feature images. 
The experimental results and performance are evaluated against existing systems using the comparison metrics FAR, FRR and accuracy. The proposed system achieves a highest accuracy of 95%, depending on the training data samples, stopping criterion and number of draw attempts. The a priori signal-to-noise ratio (SNR) plays an essential role in many speech enhancement systems. Most of the existing approaches to estimating the a priori SNR exploit only the amplitude spectra while neglecting the phase. Considering the fact that incorporating phase information into a speech processing system can significantly improve speech quality, this paper proposes a phase-sensitive decision-directed (DD) approach for the a priori SNR estimate. By representing the short-time discrete Fourier transform (STFT) signal spectra geometrically in a complex plane, the proposed approach estimates the a priori SNR using both the magnitude and phase information while making no assumptions about the phase difference between clean speech and noise spectra. Objective evaluations in terms of spectrograms, segmental SNR, log-spectral distance (LSD) and short-time objective intelligibility (STOI) measures are presented to demonstrate the superiority of the proposed approach compared to several competitive methods under different noise conditions and input SNR levels. Domain knowledge for hierarchical task network (HTN) planning usually involves logical expressions with predicates. One needs to master two different languages: one used to describe domain knowledge and another to implement the planner. This presents enormous challenges for most programmers who are not familiar with logical expressions. To solve this problem, a state-variable representation designed from the programmer's point of view is introduced. This method has powerful expressivity and can unify the representations of domain knowledge and the planning algorithm. 
In Pyhop, an HTN planner written in Python, methods and operators are all ordinary Python functions rather than constructs in a special-purpose language. Pyhop uses a Python object that contains variable bindings and does not include a Horn-clause inference engine for evaluating the preconditions of operators and methods. Using a simple travel-planning problem, we show that the method is easy to understand and makes it straightforward to integrate planning into ordinary programming. Recently, big data have become a research hotspot and been successfully exploited in a few applications such as data mining and business modeling. Although big data contain plenty of treasures for all fields of computer science, it is very difficult for current computing paradigms and computer hardware to efficiently process and utilize big data to achieve what is expected of them. In this work, we explore the possibility of employing big data in recommendation systems. We propose a simple recommendation system framework, BDRSF (Big Data Recommendation System Framework), which is based on big data with social-context theories and obtains the Recommender via supervised learning over big data training. Its main idea can be divided into three parts: (1) reduce the scale of current recommendation problems according to the essence of recommending; (2) design a rational Recommender and propose a novel supervised learning algorithm to obtain it; (3) utilize the Recommender to deal with later recommendation problems. Experimental results show that BDRSF outperforms conventional recommendation systems, which clearly indicates the effectiveness and efficiency of big data with social context in personalized recommendation. Family and nonfamily firms both must align owner and employee interests. However, family firms may experience lower labor productivity because of adverse selection problems from labor market sorting and attenuation. 
Incentive compensation reduces alignment-of-interest problems in family and nonfamily firms. Importantly, incentive compensation signals to potential employees that performance will be rewarded, which should improve relative labor productivity in family firms by reducing adverse selection. Analysis of matched data on 216,768 firms supports our hypotheses, implying that incentive compensation has a broader impact on firm performance than commonly recognized in the family firm or human resource literatures. We draw on the socioemotional wealth perspective to examine the influence of family ownership on firms' noncompliance with corporate governance codes. Our results yield an inverted U-shaped effect of family ownership on noncompliance. While the family influence and control dimension leads to high levels of noncompliance, the socioworthiness stemming from the image and reputation dimension lessens noncompliance. In the presence of potential agency conflict, the control dimension prevails over reputation, even in countries with strong governance institutions. Our findings have critical implications for family business theory, for governance policy making and for better understanding corporate governance in family firms. Small family firms have many unique relational qualities with implications for how knowledge is passed between individuals. Extant literature posits leadership approach as important in explaining differences in knowledge-sharing climate from one firm to another. This study investigates how leadership approaches interact with family influence to inform perceptions of knowledge sharing. We utilize survey data (n = 110) from owner-managers of knowledge-intensive small family firms in Scotland. Our findings present a choice in leadership intention, contrasting organization-focused participation against family-influenced guidance. Insight is offered on the implications of this leadership choice at both the organizational and familial levels. 
Family firms are distinguished theoretically from nonfamily firms by their pursuit of unique, family-related aspirations and goals. The pursuit of these aspirations and goals leads many family firms to define success or failure in terms of a broader set of outcomes than nonfamily firms. Despite this, family firm research has generally taken a constricted view of family firm outcomes by concentrating on narrowly defined financial performance as measured by accounting and/or market-based indicators. We contend that this somewhat myopic focus has slowed the field's development to some degree by constraining our ability to test its fundamental tenets. To address this, we draw on several disciplines to systematically order family firm outcomes within a family firm outcomes model that encompasses both financial and nonfinancial dimensions. While financial performance is important in research and practice, herein we refer to both financial and nonfinancial outcomes and explain how these outcomes map onto the family unit and the family firm. Furthermore, we suggest measures that can be used and explain how the model can be applied when researchers select financial and nonfinancial outcomes important to family members as the family firm's success or failure is gauged. Background: Few articles in the literature identify and describe the instruments that are regularly used by scholars to measure cultural competence in healthcare providers. Purpose: This study reviews the psychometric properties of several instruments that are used regularly to assess the cultural competence of healthcare providers. 
Methods: Researchers conducted a systematic review of the relevant articles that were published between 1983 and 2013 and listed on academic and government Web sites or on one or more of the following databases: CINAHL, MEDLINE, ERIC, PsycINFO, PsycARTICLES, PubMed, Cochrane, ProQuest, Google Scholar, CNKI (China), and the National Digital Library of Theses and Dissertations (Taiwan). Results: This study included 57 articles. Ten instruments from these articles were identified and analyzed. These instruments included five that were presented in English and five that were presented in Chinese. All were self-administered and based on respondent perceptions. Five of the 10 instruments were designed to measure cultural competence, two were designed to measure cultural sensitivity, two were designed to measure transcultural self-efficacy, and one was designed to measure cultural awareness. The six cultural dimensions addressed by these instruments were attitudes, knowledge, skills, behaviors, desires, and encounters. An expert panel validated the content of the 10 instruments. The subscales explained 33%-90% of the variance in scores for eight of the instruments. The reliability of the 10 instruments was estimated based on internal consistency, which ranged from .57 to .97. Conclusions: This systematic review may assist researchers to choose appropriate instruments to assess the cultural competence of healthcare providers. The findings of this review indicate that no single instrument is adequate to evaluate cultural competence in all contexts. Background: Mild cognitive impairment (MCI) is characterized by a decrease in cognitive abilities that does not affect the ability to perform activities of daily living (ADLs). Therefore, this condition is easily overlooked. The prevalence of and factors of influence for MCI in older people living in publicly managed congregate housing are currently unknown. 
Purpose: This study investigated the prevalence and distribution of MCI in older people living in publicly managed congregate housing and assessed the correlations among quality of life (QoL), ADL, and MCI. Methods: This study used a correlational design. The participants were older people who met the study criteria and who lived in public housing in Wanhua District, Taipei City, Taiwan. One-on-one interviews were conducted to measure the cognitive abilities of the participants, and 299 valid samples were collected. Results: The prevalence of MCI in older people living in publicly managed congregate housing was 16.1%. A chi-square test was used to evaluate the distribution of MCI prevalence and indicated that the group with higher MCI prevalence exhibited the following characteristics: older than 81 years; married; lived in public housing for more than 20 years; cohabiting; had a history of drinking; and exhibited severe memory regression, physical disabilities, psychological distress, and low QoL. The difference between the groups achieved statistical significance (p < .05). After performing logistic regression analysis to control for demographic variables, we found that QoL and ADL were critical predictors of MCI. Conclusions/Implications for Practice: This study confirmed that QoL and ADL correlate significantly with MCI in older people. Maintaining an open and supportive community enables older people to maintain sufficient mental activity, which has been shown to reduce MCI. These findings may provide an important reference for policy makers, educators, researchers, and community practitioners in their development of service strategies for older people. Background: Many institutions have conducted research on the subject of bullying. The literature includes many studies of the effects of widespread bullying among primary and secondary school students.
Bullying against hospital nurses and also bullying against university students are well-known and frequently discussed research topics. Yet, the exposure of nursing students to bullying has not been sufficiently explored, and few studies have focused on the issue of bullying against nursing students. Purpose: The aim of this study is to examine bullying against nursing students, including the rate of bullying, types of bullying, and responses to the negative effects of bullying. Methods: This study was conducted on 202 nursing students (including sophomores, juniors, and seniors) during the 2013-2014 academic year. The participation rate was 88.5%. The Negative Attitudes Scale was used to collect data, and descriptive statistics were used in data analysis. Results: Participants were evenly distributed between women (49.5%) and men (50.5%). The mean age of participants was 21.58 +/- 2.28 years; the frequency of bullying was 78.1%. The types of bullying were pejorative statements about the nursing profession (11.3%); low grades used as a form of punishment (9.9%); work, homework, and job rotation used as punishment in lieu of training (9.4%); impossible workloads (9.0%); and the spreading of rumors and gossip (7%). Conclusions/Implications for Practice: This study indicates that the participants were exposed to high levels of bullying. As exposure to bullying negatively affects the job attitudes of nursing students, further studies are necessary to develop strategies to prevent horizontal bullying. Background: Few studies have completely evaluated the changes in quality of life (QOL) that occur from pretreatment through the first four consecutive cycles of chemotherapy or identified its determinants in patients with advanced non-small-cell lung cancer (NSCLC). Purpose: The aim of this study was to explore the changes to and determinants of QOL in patients with advanced NSCLC under initial chemotherapy from pretreatment through Cycle 4 of chemotherapy.
Methods: The QOL of 139 patients with advanced NSCLC was assessed from prechemotherapy through Cycle 4 of chemotherapy. Changes to and determinants of QOL were evaluated using multivariate linear regression with generalized estimating equation models. Results: No significant changes were observed in the global QOL or the physical, role, emotional, or cognitive functional domains of QOL during the course of chemotherapy. However, the social functional domain of QOL improved significantly at Cycle 3 in comparison with the prechemotherapy values. Determinants of better global QOL included better performance status, less frequent physical symptoms, and less severe anxiety and depressive symptoms. Important determinants of better QOL in the five functional domains included younger age, better performance status, less frequent physical symptoms, less severe anxiety and depressive symptoms, and weaker perceived social support. Furthermore, patients who achieved a partial response after chemotherapy showed greater improvements in global QOL and the QOL emotional functional domain than those who did not. Conclusions: To help patients with advanced NSCLC optimize their QOL, healthcare professionals should enhance their ability to identify patients who are at elevated risk of poor QOL throughout the course of chemotherapy and to appropriately detect and manage the related physical symptoms and side effects, strengthen patients' social support, and lessen the anxiety and depressive symptoms of patients. Background: Social engagement is known to be an important factor that affects the quality of life and the psychological well-being of residents in long-term care settings. Few studies have examined social engagement in long-term care facilities in non-Western countries.
Purpose: This study aimed to evaluate the validity and reliability of the revised index for social engagement (RISE), which was derived from the Korean version of the interRAI Long Term Care Facilities instrument. Methods: Three hundred fourteen older adults from 10 nursing homes in Korea were included in the study. Convergent and discriminant validities were tested using correlation analysis and t tests, respectively. Factor analysis was used to examine the factor structure. The reliability of the RISE was tested using Cronbach's alpha values for internal consistency, and interrater reliability was tested using item kappa values and intraclass correlation coefficients. Results: The RISE showed excellent convergent validity with the average time involved in activities (r = .58). The known-group comparison showed a significant difference in the means of RISE between the group with cognitive impairment and the group without cognitive impairment, indicating satisfactory discriminant validity. Factor analysis showed a good model fit for two factors in the RISE: group involvement and interaction with others. The RISE showed satisfactory internal consistency (alpha >= .70) and adequate interrater reliability (>= .40). Conclusions/Implications for Practice: The RISE is a valid and reliable tool for measuring the social engagement of nursing home residents in Korea. Furthermore, this tool may be a useful instrument for assessing older ethnic Korean residents who reside in nursing homes that are located outside Korea. Background: The concept of quality of life (QOL) has increasingly attracted the interest of healthcare providers and is considered a valid end point for assessing the overall mental health of patients and their caregivers. Instruments with psychometric and cross-cultural validity are recommended for making accurate QOL assessments.
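Several of these validation studies report internal consistency as Cronbach's alpha. The statistic can be computed directly from item-level scores; the sketch below uses invented ratings (not data from any study above) purely to illustrate the formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
# Cronbach's alpha from raw item scores -- illustrative sketch with invented data.

def cronbach_alpha(item_scores):
    """item_scores: list of k items, each a list of n respondent scores."""
    k = len(item_scores)

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    totals = [sum(scores) for scores in zip(*item_scores)]  # total score per respondent
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Four hypothetical 5-point items answered by six respondents
items = [
    [3, 4, 4, 2, 5, 3],
    [3, 5, 4, 2, 4, 3],
    [4, 4, 5, 1, 5, 2],
    [2, 4, 4, 2, 5, 3],
]
print(round(cronbach_alpha(items), 2))  # -> 0.93
```

Values at or above .70, as reported for the RISE, are conventionally read as satisfactory internal consistency.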
Purpose: The aim of this study was to provide further validation of the Arabic World Health Organization (WHO) QOL-BREF for use among family caregivers of relatives with psychiatric illnesses in Jordan. Of the 26 items that constitute the scale, 24 are in the domains of physical health, psychological health, social interactions, and environment. Method: Of the 328 family caregivers approached, data for 266 respondents were retained for analysis. The Arabic WHOQOL-BREF internal consistency, item internal consistency, item discriminant validity, and construct validity were evaluated. Results: The Cronbach's alpha coefficient was >= 0.7. The 24 items constituting the evaluated domains reported an item internal consistency of >= 0.4 and met the item discriminant validity criterion of having higher correlations with their corresponding domains than with other domains. Factor analysis revealed four strong factors that constituted the same constructs as in the WHO report. Conclusions: This study provides further support for the validity of the Arabic WHOQOL-BREF scale for use among family caregivers of relatives with psychiatric illnesses in Jordan. Background: Developing interventions that improve deep sleep and quiet awake states is important for improving the quality of care that is provided to preterm infants. Purpose: The aim of this study was to compare the effects of kangaroo care and in-arms-holding on the sleep and wake states of preterm infants. Methods: A randomized controlled trial design was employed in 2011-2012. Seventy-two stable preterm infants with gestational ages of 32-37 weeks and their mothers were recruited from the neonatal intensive care unit of Valiasr Hospital in Tehran, Iran. Seventy participants completed the trial. In the preintervention phase, nurses placed all of the infants, clad only in diapers, in supine position in their incubator for 20 minutes.
Next, the infants in the kangaroo care group were placed onto their mothers' bare chest, whereas those infants in the in-arms-holding group were cradled in their mothers' arms, with the head and back supported by the mother's left arm. The intervention period lasted for 70 minutes. In the postintervention phase, the infants were returned to their incubators and placed in supine position for 20 minutes. The observer recorded the status of the infants during the three phases of study. Results: There were no significant differences between the two groups in terms of state distribution in the preintervention phase. However, the kangaroo care group had longer periods in deep sleep (p < .001) and in the quiet awake/alert state (p = .004) during the intervention phase and less time in the light sleep or drowsy state (p < .001) and in the actively awake state (p = .02) than the in-arms-holding group. No significant group differences were found in terms of crying. Conclusions: Kangaroo care appears to increase the length of time that preterm infants spend in deep sleep and quiet awake states as compared with simply being held in their mothers' arms. Replication of this research will strengthen the results. Background: Most prenatal preventive interventions for unmarried mothers do not integrate fathers or help the parents plan for the development of a functional coparenting alliance after the baby's arrival. Furthermore, properly trained professionals have only rarely examined the fidelity of these interventions. Purpose: This report examines whether experienced community interventionists (home visitors, health educators, fatherhood service personnel) with no formal couples' therapy training are capable of pairing together to deliver with adequate fidelity a manualized dyadic intervention designed for expectant unmarried mothers and fathers. 
Methods: Three male and four female mentors (home visitors, health educators, fatherhood personnel) working in paired male-female co-mentor teams delivered a seven-session "Figuring It Out for the Child" curriculum (six prenatal sessions, one booster) to 14 multirisk, unmarried African American families (parent ages ranging from 14 to 40). Parental well-being and views of fatherhood were assessed before the intervention and again 3 months after the baby's birth. Quality assurance analysts evaluated mentor fidelity (adherence to the curriculum, competence in engaging couples with specified curricular content) through a review of the transcripts and audiotapes from the sessions. Mentors also rated their own adherence. Results: Although the mentors overestimated adherence, quality assurance analyst ratings found acceptable levels of adherence and competence, with no significant male-female differences in fidelity. Adherence and competence were marginally higher in sessions that required fewer direct couples' interventions. Parents reported satisfaction with the interventions and showed statistically significant improvement in the family dimensions of interest at 3-4 months posttreatment. Conclusions/Implications for Practice: Findings support the wisdom of engaging men both as interventionists and as recipients of prenatal coparenting interventions, even in families where the parents are uncoupled and non-co-residential. Background: Patients with multiple sclerosis practice help-seeking behaviors largely because of the progression of this physically exhausting disease, which has far-reaching psychosocial consequences and requires hospitalization during severe disease exacerbations. Purpose: The aim of this study was to describe the perspectives and experiences of Iranian patients with multiple sclerosis regarding help-seeking behaviors. Methods: A qualitative design, based on the content analysis approach, was used.
Data were drawn from unstructured interviews that were held with 23 participants, who were referred from two teaching hospitals and from the Multiple Sclerosis Society in Ahvaz, Iran. Results: During the data analysis, four main themes emerged, including "reliance on God and recourse to the imams," "striving to gain caring knowledge," "a need for comprehensive support," and "attention to spiritual care." Conclusions/Implications for Practice: Healthcare team members, especially nurses, should pay attention to the perspectives of patients with multiple sclerosis and try to address these patients' help-seeking behaviors to provide high-quality care. The authors hope that the findings of this study will inform the construction of interventional strategies to improve nursing care and facilitate the provision of better support services for people with MS. Background: Caring for the bereaved is an intrinsic part of intensive care practice, with family bereavement support being an important aspect of the nursing role at the end of life. However, provision of intensive care family bereavement support at a national level has not been well reported since an Australian paper was published ten years ago. Objectives: The objective was to investigate provision of family bereavement support in intensive care units (ICU) across New Zealand (NZ) and Australia. Method: A cross-sectional exploratory descriptive web-based survey was used. All ICUs (public/private; neonatal, pediatric, and adult) were included. The survey was distributed to one nursing leader from each identified ICU (n = 229; 188 in Australia, 41 in NZ). Internal validity of the survey was established through piloting. Descriptive statistics were used to analyse the data. Ethical approval was received from the ethics committees of two universities. Results: One hundred and fifty-three (67%) responses were received from across New Zealand and Australia, with 69.3% of respondents from the public sector.
Whilst respondents reported common bereavement practices to include debriefing for staff after a traumatic death (87.9%), there was greater variation in sending a sympathy card to families (NZ 54.2%, Australia 20.8%). Fifty percent of responding New Zealand units had a bereavement follow-up service compared to 28.3% of Australian unit respondents. Of those with follow-up services, 92.3% of New Zealand units undertook follow-up calls to families compared to 76.5% of Australian units. Bereavement follow-up services were mainly managed by social workers in Australia and nursing staff in New Zealand. Conclusions: This is the first Australia and New Zealand-wide survey on ICU bereavement support services. Whilst key components of family bereavement support have remained consistent over the past decade, there were fewer bereavement follow-up services in responding Australian ICUs in 2015. As a quality improvement initiative, support for this area of family care remains important, with rigorous evaluation essential. (C) 2016 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved. Objectives: To provide a snapshot of the prevalence of abnormal body mass index (BMI) in a sample of intensive care unit (ICU) patients; to identify if any medical specialty was associated with abnormal BMI and to explore associations between BMI and ICU-related outcomes. Background: Obesity is an escalating public health issue across developed nations, but there are few data pertaining to critically ill patients, whose care is expensive. Methods: Retrospective observational audit of 735 adult patients (median age 58 years) admitted to the Sir Charles Gairdner Hospital 23-bed tertiary ICU between November 2012 and June 2014. The primary outcome measure was patient BMI: underweight (<18.5 kg/m(2)), normal weight (18.5-24.99 kg/m(2)), overweight (25-29.99 kg/m(2)), obese (30-39.99 kg/m(2)) or extreme obese (40 kg/m(2) or above).
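The BMI bands used in the audit above can be expressed as a simple threshold lookup; the function name below is ours, not the study's, and the band edges are taken directly from the abstract.

```python
# BMI categorisation using the audit's bands (kg/m^2); function name is hypothetical.

def bmi_category(bmi):
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25:
        return "normal weight"
    elif bmi < 30:
        return "overweight"
    elif bmi < 40:
        return "obese"
    return "extreme obese"

# The cohort's reported median BMI of 27.9 falls in the overweight band
print(bmi_category(27.9))  # -> overweight
```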
Other measures included gender, Acute Physiology and Chronic Health Evaluation (APACHE) II score, admission specialty, length of mechanical ventilation (MV), length of stay (LOS) and mortality. Results: Compared to the general population, there was a higher proportion of obese patients within the cohort, with the majority of patients overweight (33.9%) or obese (36.5%) and a median BMI of 27.9 (IQR 7.9). There were no significant differences between specialties for BMI (p = 0.103) and abnormal BMI was not found to impact negatively on mortality (ICU, p = 0.373; hospital, p = 0.330). Normal BMI patients had shorter length of MV than other BMI categories and the impact of BMI on ICU LOS was dependent on length of MV. Overweight patients ventilated for five days or more had a shorter LOS, and extremely obese non-ventilated patients had a longer LOS, compared to normal weight patients. Conclusions: Although the obesity-disease relationship is increasingly complex and the data presented reflect categorical BMI for patients admitted to a single ICU site, it may be important to consider the cost implications of caring for this cohort especially with regard to MV and LOS. (C) 2016 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved. Objectives: To explore how intensive care physicians conceptualise and prioritise patient health-related quality of life in their decision-making. Research methodology/design: General qualitative inquiry using elements of Grounded Theory. Six ICU physicians participated. Setting: A large, closed, mixed ICU at a university-affiliated hospital, Australia. Results: Three themes emerged: (1) Multi-dimensionality of HRQoL: HRQoL was described as difficult to understand; the patient was viewed as the best informant. Proxy information on HRQoL and health preferences was used to direct clinical care, despite not always being trusted.
(2) Prioritisation of HRQoL within decision-making: this varied across the patient's health care trajectory. Premorbid HRQoL was prioritised when making admission decisions and used to predict future HRQoL. (3) Role of the physician in decision-making: the physicians described their role as representing society, with peers influencing their decision-making. All participants considered their practice to be similar to that of their peers, referring to their practice as the "middle of the road". This is a novel finding, emphasising other important influences in high-stakes decision-making. Conclusion: Critical care physicians conceptualised HRQoL as a multi-dimensional subjective construct. Patient (or proxy) voice was integral in establishing patient HRQoL and future health preferences. HRQoL was important in high-stakes decision-making, including initiating invasive and burdensome therapies or redirecting therapeutic goals. (C) 2016 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved. Background: Patients admitted to Australian intensive care units are often critically unwell and present the challenge of increasing mortality due to an ageing population. Several of these patients have terminal conditions, requiring withdrawal of active treatment and commencement of end-of-life (EOL) care. Objectives: The aim of the study was to explore the perspectives and experiences of physicians and nurses providing EOL care in the ICU. In particular, perceived barriers, enablers and challenges to providing EOL care were examined. Methods: An interpretative, qualitative inquiry was selected as the methodological approach, with focus groups as the method for data collection. The study was conducted in Melbourne, Australia in a 24-bed ICU. Following ethics approval, intensive care physicians and nurses were recruited to participate. Focus group discussions were discipline-specific.
All focus groups were audio-recorded then transcribed for thematic data analysis. Results: Five focus groups were conducted with 11 physicians and 17 nurses participating. The themes identified are presented as barriers, enablers and challenges. Barriers include conflict between the ICU physicians and external medical teams, the availability of education and training, and environmental limitations. Enablers include collaboration and leadership during transitions of care. Challenges include communication and decision making, and expectations of the family. Conclusions: This study emphasised that positive communication, collaboration and culture are vital to achieving safe, high quality care at EOL. Greater use of collaborative discussions between ICU clinicians is important to facilitate improved decisions about EOL care. Such collaborative discussions can assist in preparing patients and their families when transitioning from active treatment to initiation of EOL care. Another major recommendation is to implement EOL care leaders of nursing and medical backgrounds, and patient support coordinators, to encourage clinicians to communicate with other clinicians, and with family members about plans for EOL care. (C) 2016 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved. Background: Patients who are intubated in the ICU are at risk of developing pressure injuries to the mouth and lips from endotracheal tubes. Clear documentation is important for pressure wound care; however, no validated instruments currently exist for the staging of pressure injuries to the oral mucosa. Instruments designed for the assessment of pressure injuries to other bodily regions are anatomically unsuited to the lips and mouth. Objectives: This study aimed to develop and then assess the reliability of a novel scale for the assessment of pressure injuries to the mouth and oral mucosa. 
Methods: The Reaper Oral Mucosa Pressure Injury Scale (ROMPIS) was developed in consultation with ICU nurses, clinical nurse educators, intensivists, and experts in pressure wound management. ICU nurses and portfolio-holders in pressure wound care from Peninsula Health (Victoria, Australia) were invited to use the ROMPIS to stage 19 de-identified clinical photographs of oral pressure injuries via a secure online survey. Inter-rater reliability (IRR) was calculated using Krippendorff's alpha (alpha). Results: Among ICU nurses (n = 52), IRR of the ROMPIS was alpha = 0.307, improving to alpha = 0.463 when considering only responses where injuries were deemed to be stageable using the ROMPIS (i.e. excluding responses where respondents considered an injury to be unstageable). Among a cohort of experts in pressure wound care (n = 8), IRR was alpha = 0.306, or alpha = 0.443 excluding responses indicating that wounds were unstageable. Conclusions: An instrument for the assessment and monitoring of pressure injuries to the mouth and lips has practical implications for patient care. This preliminary study indicates that the ROMPIS instrument has potential to be used clinically for this purpose; however, the performance of this scale may be somewhat reliant on the confidence or experience of the ICU nurse utilising it. Further validation is required. (C) 2016 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved. Background: Observational work to develop the ACCCN Competency Standards was undertaken more than 20 years ago. Since then, the landscape of critical care nursing as a specialty has changed, and it was not known whether the Competency Standards still reflected contemporary practice. Objectives: To revise the ACCCN Competency Standards for Specialist Critical Care Nurses to ensure they continue to meet the needs of critical care nurses and reflect current practice. Methods: A two-phased project was undertaken.
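Krippendorff's alpha, the inter-rater reliability statistic used for the ROMPIS above, compares observed within-item disagreement against disagreement expected by chance. A simplified sketch for nominal ratings with no missing data follows; the ratings are invented, not from the study.

```python
# Krippendorff's alpha (nominal data, no missing values) -- simplified sketch.
# Ratings below are invented and do not come from the ROMPIS study.
from collections import Counter

def krippendorff_alpha_nominal(units):
    """units: list of rating lists, one list per rated item (e.g. photograph)."""
    o = Counter()  # coincidence matrix: weighted (value, value) ordered-pair counts
    for ratings in units:
        m = len(ratings)
        for i in range(m):
            for j in range(m):
                if i != j:
                    o[(ratings[i], ratings[j])] += 1 / (m - 1)
    n_c = Counter()  # marginal totals per category
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())  # total number of pairable values
    d_obs = sum(w for (c, k), w in o.items() if c != k) / n
    d_exp = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1 - d_obs / d_exp

# Five photographs, each staged by three hypothetical raters
ratings = [[1, 1, 1], [2, 2, 3], [1, 2, 1], [3, 3, 3], [2, 2, 2]]
print(round(krippendorff_alpha_nominal(ratings), 3))  # -> 0.622
```

Alpha is 1 for perfect agreement and near 0 for chance-level agreement, which is why the ROMPIS values around 0.3-0.46 are read as preliminary rather than clinically sufficient reliability.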
In Phase I, focus groups were held in all states. Thematic analysis was conducted using two techniques. The standards were revised based on the main themes. Phase II consisted of an eDelphi technique. A national panel of critical care nurses responded to three survey rounds using a 7-point Likert-type scale to indicate their level of agreement with the revised standards. A 70% agreement level for each statement was determined a priori. Results: Phase I: 12 focus groups (79 participants) were conducted. Phase II: A panel of specialist critical care nurses (research, management, clinical practice and education) responded to round 1 (n = 64), round 2 (n = 56), and round 3 (n = 40). Fifteen practice standards with elements and performance criteria were grouped into four domains (professional practice, provision and coordination of care, critical thinking and analysis, collaboration and leadership). The revised Practice Standards for Specialist Critical Care Nurses build upon and are additional to the Nursing & Midwifery Board of Australia National Competency Standards for Registered Nurses. The standards reflect contemporary critical care nurse practices using an expanded range of technologies to care for complex critically ill patients across the lifespan in diverse settings. Conclusion: The national study has resulted in the 3rd edition of the Practice Standards for Specialist Critical Care Nurses. There was input from stakeholders and agreement that the revised standards capture contemporary Australian critical care nursing practice. (C) 2016 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved. Evidence suggests that when an immediate family member of a spouse is hospitalised, the partner's risk of death significantly increases. Hospitalisation can represent a time of great vulnerability and imposed stress for both the patient and their family members.
Family members have been reported to give priority to the welfare of their ill relative and, in their heightened emotional state, often put their own health at risk. The paper presents a case study highlighting how an intensive care hospitalisation and discharge to rehabilitation experience for a patient's mother triggered an episode of myocardial infarction for her adult son. Discussion focuses on the caregiving burden and potential mechanisms for how hospitalisation may contribute to the health risk of immediate family members of hospitalised patients. Discussion also highlights the importance of family members receiving clear, continuous and consistent information from a limited number of clinicians to help reduce the stress associated with caregiving during acute hospitalisation. (C) 2016 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved. This paper mainly investigates the stochastic character of tumor growth and extinction in the presence of the immune response of a host organism. Firstly, the mathematical model describing the interaction and competition between the tumor cells and the immune system is established based on Michaelis-Menten enzyme kinetics. Then, the threshold conditions for extinction, weak persistence and stochastic persistence of tumor cells are derived through rigorous theoretical proofs. Finally, stochastic simulations are performed to substantiate and illustrate the conclusions we have derived. The modeling results will be beneficial for understanding the concept of immunoediting and developing cancer immunotherapy. In addition, our simple theoretical model can help to provide new insight into the complexity of tumor growth. (C) 2017 Elsevier B.V. All rights reserved.
The investigation of complex time series properties through graph-theoretical tools has greatly benefited from the recently developed visibility graph (VG) method, which acts as a hub between nonlinear dynamics, graph theory and time series analysis. In this work, earthquake time series, a representative example of a complex system, were studied by using the VG method. The examined time series were extracted from the Corinth rift seismic catalogue. By using a sliding-window approach, the temporal evolution of the exponent gamma of the degree distribution was studied. It was found that the time period of the most significant event (seismic swarm after a major earthquake) in the examined seismic catalogue coincides with the time period where the exponent gamma reaches its minimum. (C) 2017 Elsevier B.V. All rights reserved. In a realistic scenario, the evolution of the rotational dynamics of a celestial or artificial body is subject to dissipative effects. Time-varying non-conservative forces can be due to, for example, a variation of the moments of inertia or to tidal interactions. In this work, we consider a simplified model describing the rotational dynamics, known as the spin-orbit problem, where we assume that the orbital motion is provided by a fixed Keplerian ellipse. We consider different examples in which a non-conservative force acts on the model and we propose an analytical method, which reduces the system to a Hamiltonian framework. In particular, we compute a time parametrisation in a series form, which allows us to transform the original system into a Hamiltonian one. We also provide applications of our method to study the rotational motion of a body with time-varying moments of inertia, e.g., an artificial satellite with flexible components, as well as a body subject to a tidal torque depending linearly on the velocity. (C) 2017 Elsevier B.V. All rights reserved.
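The natural visibility graph construction used in the earthquake study above maps a time series to a graph: each sample is a node, and two samples are linked if every sample between them lies strictly below the straight line connecting them. The sketch below is an illustrative O(n^2) implementation, not the authors' code; the degree-distribution exponent gamma they track would then be fitted to the resulting node degrees.

```python
# Natural visibility graph: nodes are time points (index, value); points a and b
# are connected if no intermediate point reaches the line joining them.

def visibility_edges(y):
    n = len(y)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            # point c blocks visibility if it touches or crosses the connecting line
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges

series = [3, 1, 2, 5, 1]
print(visibility_edges(series))
# -> [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
```

Note that the peak at index 3 becomes the highest-degree node, which is the mechanism by which extreme events (such as large earthquakes) shape the degree distribution in a sliding window.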
During recent years it has been shown that hidden oscillations, whose basin of attraction does not overlap with small neighborhoods of equilibria, may significantly complicate simulation of dynamical models, lead to unreliable results and wrong conclusions, and cause serious damage in drilling systems, aircraft control systems, electromechanical systems, and other applications. This article provides a survey of various phase-locked loop based circuits (used in satellite navigation systems and in optical and digital communication), where such difficulties take place in MATLAB and SPICE simulations. The considered examples can be used for testing other phase-locked loop based circuits and simulation tools, and motivate the development and application of rigorous analytical methods for the global analysis of phase-locked loop based circuits. (C) 2017 Elsevier B.V. All rights reserved. Nonlinear interaction of electromagnetic solitons leads to a plethora of interesting physical phenomena in diverse areas of science, including the magneto-optics-based data storage industry. We investigate the nonlinear magnetization dynamics of a one-dimensional anisotropic ferromagnetic nanowire. The famous Landau-Lifshitz-Gilbert (LLG) equation describes the magnetization dynamics of the ferromagnetic nanowire and the Maxwell's equations govern the propagation dynamics of the electromagnetic wave passing through the axis of the nanowire. We perform a uniform expansion of magnetization and magnetic field along the direction of propagation of the electromagnetic wave in the framework of the reductive perturbation method. The excitation of magnetization of the nanowire is restricted to the normal plane at the lowest order of perturbation and goes out of plane for higher orders. The dynamics of the ferromagnetic nanowire is governed by the modified Korteweg-de Vries (mKdV) equation and the perturbed modified Korteweg-de Vries (pmKdV) equation for lower and higher values of damping, respectively.
We invoke the Hirota bilinearization procedure for the mKdV and pmKdV equations to construct multi-soliton solutions, and explicitly analyze the collision phenomena of the co-propagating EM solitons for the above-mentioned lower and higher values of Gilbert damping due to the precessional motion of the ferromagnetic spin. The EM solitons appearing in the higher-damping regime exhibit elastic collision, thus yielding the fascinating state-restoration property, whereas those of the lower-damping regime exhibit inelastic collision, yielding solitons of suppressed intensity profiles. The propagation of EM solitons in nanoscale magnetic wires has potential technological applications in optimizing magnetic storage devices and magneto-electronics. (C) 2017 Elsevier B.V. All rights reserved. This study is concerned with the Lie symmetry group analysis of Fractional Integro-Differential Equations (FIDEs) with nonlocal structures, based on a new development of the prolongation formula. A new prolongation for FIDEs is derived, and invariant solutions are presented for some illustrative examples. (C) 2017 Elsevier B.V. All rights reserved. In this paper, we introduce symbolic phase transfer entropy (SPTE) to infer the direction and strength of information flow among systems. The advantages of the proposed method are investigated by simulations on synthetic signals and real-world data. We demonstrate that symbolic phase transfer entropy is a robust and efficient tool to infer the information flow between complex systems. Based on the study of the synthetic data, we find that a significant advantage of SPTE is its reduced sensitivity to noise. In addition, SPTE requires less data than symbolic transfer entropy (STE). We analyze the direction and strength of information flow between six stock markets during the period from 2006 to 2016. The results indicate that the information flow among stocks varies over different periods.
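SPTE applies transfer entropy to ordinal symbols derived from instantaneous phases. The sketch below illustrates only the symbolic transfer-entropy core, applied to raw amplitudes rather than Hilbert-transform phases (that phase-extraction step is omitted); it is an illustrative reading of the estimator, not the authors' implementation, and the lagged-copy example data are made up.

```python
import math
import random
from collections import Counter

def symbolize(x, m=2):
    """Map each length-m window of x to its ordinal (rank) pattern."""
    return [tuple(sorted(range(m), key=lambda k: x[t + k]))
            for t in range(len(x) - m + 1)]

def transfer_entropy(x, y, m=2):
    """Symbolic transfer entropy from x to y (bits):
    T = sum p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]."""
    sx, sy = symbolize(x, m), symbolize(y, m)
    n = min(len(sx), len(sy)) - 1
    triples = Counter((sy[t + 1], sy[t], sx[t]) for t in range(n))
    pairs_yy = Counter((sy[t + 1], sy[t]) for t in range(n))
    pairs_yx = Counter((sy[t], sx[t]) for t in range(n))
    singles = Counter(sy[t] for t in range(n))
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_yx[(y0, x0)]            # p(y+ | y, x)
        p_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y+ | y)
        te += (c / n) * math.log2(p_full / p_self)
    return te

# Driving series x and a one-step-lagged copy y: information flows x -> y.
rng = random.Random(0)
x = [rng.random() for _ in range(400)]
y = [0.0] + x[:-1]
```

On the seeded data above, the estimated flow from x to y should comfortably exceed the reverse-direction estimate, matching the intended directionality.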
We also find that the interaction network pattern among stocks undergoes hierarchical reorganization with the transition from one period to another. It is shown that the clusters are classified mainly by period and then by region; stocks from the same time period fall into the same cluster. (C) 2017 Elsevier B.V. All rights reserved. In this paper the dynamics of the nonlinear mass-in-mass system, the basic subsystem of an acoustic metamaterial, is investigated. The excitation of the system is in the form of a Jacobi elliptic function. The corresponding model for this forcing is the mass-in-mass system with cubic nonlinearity of the Duffing type. The mathematical model of the motion is a system of two coupled, strongly nonlinear and nonhomogeneous second-order differential equations. A particular solution to the system is obtained. The analytical solution of the problem is based on the simple and double integrals of the Jacobi cosine function; in the paper these integrals are given in the form of series of trigonometric functions. These results are new. After some modification, a simplified solution in the first approximation is obtained, which is convenient for discussion. Conditions for the elimination of the motion of mass 1 by connection of a nonlinear dynamic absorber (a mass-spring system) are defined. In the analysis, an effective mass ratio is introduced for the nonlinear mass-in-mass system. A negative effective mass ratio gives absorption of vibrations at certain frequencies. The advantage of the nonlinear subunit in comparison to the linear one is that the frequency gap is significantly wider. Nevertheless, it has to be mentioned that the amplitude of vibration differs from zero by a small value. The analytical results are compared with numerical ones and are in agreement. (C) 2017 Published by Elsevier B.V.
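The negative-effective-mass mechanism invoked above is easiest to see in the standard linear mass-in-mass unit; the paper's nonlinear Duffing-type extension is not reproduced here, and all parameter values below are illustrative.

```python
import math

# Illustrative parameters (not the paper's): an outer mass m1 carries an
# internal resonator of mass m2 attached by a spring of stiffness k2.
m1, m2, k2 = 1.0, 0.5, 50.0
w0 = math.sqrt(k2 / m2)   # internal resonance frequency, here 10 rad/s

def m_eff(w):
    """Frequency-dependent effective mass of the linear mass-in-mass unit."""
    return m1 + m2 * w0 ** 2 / (w0 ** 2 - w ** 2)

# m_eff is negative in the band w0 < w < w0 * sqrt(1 + m2 / m1); a negative
# effective mass is where the unit attenuates incoming vibration (band gap).
```

With these numbers the gap runs from 10 to about 12.25 rad/s; widening this gap is precisely the advantage the nonlinear subunit is claimed to offer.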
In this paper we explore the stability of an inverted pendulum with generalized parametric excitation described by a superposition of N sines with different frequencies and phases. We show that when the amplitude is scaled with the frequency we obtain stabilization of a real inverted pendulum, i.e., with values of g appropriate for planet Earth (g ≈ 9.8 m/s²), at high frequencies. By randomly sorting the frequencies, we obtain a critical amplitude in light of perturbation theory in classical mechanics, which is numerically tested by exploring its validity regime in many alternatives. We also analyse the effects when different values of N as well as the pendulum size l are taken into account. (C) 2017 Elsevier B.V. All rights reserved. Weighted rating networks are commonly used by e-commerce providers nowadays. In order to generate an objective ranking of online items' quality according to users' ratings, many sophisticated algorithms have been proposed in the complex networks domain. In this paper, instead of proposing new algorithms, we focus on a more fundamental problem: nonlinear rating projection. The basic idea is that even though the rating values given by users are linearly separated, the real preference of users for items between the different given values is nonlinear. We thus design an approach to project the original ratings of users onto more representative values. This approach can be regarded as a data pretreatment method. Simulations in both artificial and real networks show that the performance of the ranking algorithms can be improved when the projected ratings are used. (C) 2017 Elsevier B.V. All rights reserved. In the normal dispersion regime of optical fibers with third-order dispersion and self-steepening higher-order effects, the propagation of ultrashort pulses is governed by the multi-component defocusing Hirota system. The general N-dark vector soliton solution is first constructed by the binary Darboux transformation.
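The N = 1, single-frequency limit of the parametric excitation studied in the inverted-pendulum paper above is the classical Kapitza problem, where scaling the amplitude with the frequency stabilizes the upright state. A minimal numerical sketch (all parameter values illustrative, not the paper's):

```python
import math

# Pivot oscillates vertically as a*sin(w*t); theta is measured from upright.
g, l, w = 9.8, 0.5, 500.0     # gravity (m/s^2), length (m), drive (rad/s)

def accel(theta, t, a):
    """Angular acceleration of the driven inverted pendulum."""
    return (g - a * w * w * math.sin(w * t)) / l * math.sin(theta)

def peak_angle(theta0, a, dt=1e-4, T=2.0):
    """Integrate with classical RK4 and return the largest |theta| seen."""
    th, om, t, peak = theta0, 0.0, 0.0, abs(theta0)
    for _ in range(int(T / dt)):
        k1t, k1o = om, accel(th, t, a)
        k2t, k2o = om + 0.5 * dt * k1o, accel(th + 0.5 * dt * k1t, t + 0.5 * dt, a)
        k3t, k3o = om + 0.5 * dt * k2o, accel(th + 0.5 * dt * k2t, t + 0.5 * dt, a)
        k4t, k4o = om + dt * k3o, accel(th + dt * k3t, t + dt, a)
        th += dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6
        om += dt * (k1o + 2 * k2o + 2 * k3o + k4o) / 6
        t += dt
        peak = max(peak, abs(th))
    return peak

# Classical fast-forcing stabilization condition: a*w > sqrt(2*g*l).
# Here a = 0.02 gives a*w = 10 > sqrt(9.8) ~ 3.13, so the upright state
# holds; with a = 0 the pendulum falls over.
```

Starting 0.1 rad from vertical, the forced run stays near upright while the unforced run topples, which is the single-frequency analogue of the stabilization reported above.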
From the obtained multi-dark-soliton solutions, it is found that the collisions between or among vector dark solitons in this multi-component system are elastic: the vector dark solitons retain their shape and intensity, with only a slight change in their phase. The results might be useful for applications of vector dark solitons in fiber directional couplers, optical switching and quantum information processors. (C) 2017 Elsevier B.V. All rights reserved. The wave-particle resonant interaction plays an important role in charged particle energization by trapping (capture) into resonance. For systems with waves propagating through inhomogeneous plasma, the key small parameter is the ratio of the wave wavelength to a characteristic spatial scale of inhomogeneity. When that parameter is very small, asymptotic methods are applicable for the system description, and the resultant energy distribution of the trapped particle ensemble has a typical Gaussian profile around some mean value. However, for moderate values of that parameter, the energy distribution has a fine structure including several maxima, each corresponding to the discrete number of oscillations a particle makes in the trapped state. We explain this novel effect, which can play an important role in the generation of unstable distributions of accelerated particles in many space plasma systems. (C) 2017 Elsevier B.V. All rights reserved. The separation properties of self-similar sets are discussed in this article. An open set condition for the (n, m)-iterated function system is introduced, and the concepts of self-similarity, similarity dimension and Hausdorff dimension of the attractor generated by an (n, m)-iterated function system are studied. It is proved that the similarity dimension and the Hausdorff dimension of the attractor of an (n, m)-iterated function system are equal under this open set condition.
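For a classical iterated function system of contracting similarities with ratios r_i satisfying the open set condition, the similarity dimension discussed above is the root s of Moran's equation, sum over i of r_i^s = 1. The sketch below solves that equation numerically; the (n, m)-IFS generalization studied in the paper is not reproduced here.

```python
def similarity_dimension(ratios, tol=1e-12):
    """Solve Moran's equation sum(r**s for r in ratios) == 1 for s by
    bisection; expects at least two contraction ratios in (0, 1)."""
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:          # f decreases in s; widen until a sign change
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For the middle-thirds Cantor set (two maps of ratio 1/3) this returns log 2 / log 3 ≈ 0.6309, and for the Sierpinski gasket (three maps of ratio 1/2) it returns log 3 / log 2 ≈ 1.585, both of which equal the Hausdorff dimension under the open set condition.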
Further, a necessary and sufficient condition for a set to satisfy the open set condition is established. (C) 2017 Elsevier B.V. All rights reserved. A study of the axisymmetric ionizing gas flows in a channel of a quasi-steady plasma accelerator is presented. The model is based on the MHD and radiation transport equations. The MHD model for a three-component medium consisting of atoms, ions and electrons takes into account the basic mechanisms of electrical conductivity and heat transport. The model of radiation transport includes the basic mechanisms of emission and absorption for the different parts of the spectrum. Results of the numerical studies of the ionization process and radiation transport are obtained in the approximation of local thermodynamic equilibrium. (C) 2017 Elsevier B.V. All rights reserved. Lagrangian simulations of unsteady vortical flows are accelerated by the multi-level fast multipole method (FMM). The combination of the FMM algorithm with a discrete vortex method (DVM) is discussed for free-domain and periodic problems, with a focus on implementation details that reduce numerical dissipation and avoid spurious solutions in unsteady inviscid flows. An assessment of the FMM-DVM accuracy is presented through a comparison with the direct calculation of the Biot-Savart law for the simulation of the temporal evolution of an aircraft wake in the Trefftz plane. The role of several parameters such as the time step restriction, truncation of the FMM series expansion, number of particles in the wake discretization and machine precision is investigated, and we show how to avoid spurious instabilities. The FMM-DVM is also applied to compute the evolution of a temporal shear layer with periodic boundary conditions. A novel approach is proposed to achieve accurate solutions in the periodic FMM.
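The direct Biot-Savart evaluation that serves as the reference for the FMM-DVM comparison above reduces, for 2D point vortices, to an O(N·M) pairwise sum. A minimal sketch of that free-domain kernel (the periodic cotangent kernel and any desmoothing details are not shown):

```python
import math

def biot_savart_2d(targets, vortices):
    """Direct O(N*M) Biot-Savart velocity at each (x, y) target induced by
    point vortices given as (x, y, gamma); this brute-force sum is the
    reference calculation that a fast multipole method accelerates."""
    out = []
    for xt, yt in targets:
        u = v = 0.0
        for xv, yv, g in vortices:
            dx, dy = xt - xv, yt - yv
            r2 = dx * dx + dy * dy
            if r2 == 0.0:
                continue          # skip self-induction at a vortex location
            c = g / (2.0 * math.pi * r2)
            u += -c * dy          # velocity is gamma/(2*pi*r^2) * (-dy, dx)
            v += c * dx
        out.append((u, v))
    return out
```

A single vortex of circulation 2*pi at the origin induces unit counterclockwise speed on the unit circle, which makes a convenient sanity check before comparing against an accelerated solver.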
This approach avoids a spurious precession of the periodic shear layer, and solutions are shown to converge to the direct Biot-Savart calculation using a cotangent function. (C) 2017 Elsevier B.V. All rights reserved. A non-autonomous version of the standard map with a periodic variation of the perturbation parameter is introduced and studied via an autonomous map obtained by iterating the non-autonomous map over a period. Symmetry properties in the variables and parameters of the map are found and used to derive relations between rotation numbers of invariant sets. The role of the non-autonomous dynamics in period-one orbits, stability and bifurcation is studied. The critical boundaries for global transport and for the destruction of invariant circles with fixed rotation number are studied in detail using direct computation and a continuation method. In the case of global transport, the critical boundary has a particular symmetrical horn shape. The results are contrasted with similar calculations found in the literature. (C) 2017 Elsevier B.V. All rights reserved. We analyze the effect of central bank transparency on cross-border bank activities. Based on a panel gravity model for cross-border bank claims for 21 home and 47 destination countries from 1998 to 2010, we find strong empirical evidence that a rise in central bank transparency in the destination country, on average, increases cross-border claims. Using interaction models, we find that the positive effect of central bank transparency on cross-border claims is only significant if the central bank is politically independent and operates in a stable economic environment. Central bank transparency and credibility are thus considered complements by banks investing abroad. (C) 2017 Elsevier Ltd. All rights reserved. Can political interference deconstruct credibility that was hard-earned through successful stabilization policy?
We analyze the recent switch in the conduct of monetary policy by the Central Bank of Brazil (BCB). Brazil is the largest Emerging Market Economy to formally target inflation, having adopted the Inflation Targeting (IT) regime in 1999. In the early years of IT, the BCB engaged in constructing credibility with price-setting agents and succeeded in anchoring inflation expectations to its target even under adverse conditions such as exchange rate crises. We argue that this effort to maintain IT rules-based policy ended in 2011, as a new country president and BCB board came to power. We then discuss the consequences of this credibility loss. Our main results can be summarized as follows: (i) we provide strong empirical evidence of the BCB's shift toward looser, discretionary policy after 2011; (ii) preliminary evidence suggests that this shift has affected agents' inflation expectations, generating social and economic costs. (C) 2017 Elsevier Ltd. All rights reserved. Gold is widely perceived as a good diversification or safe-haven tool for general financial markets, especially in market turmoil. To fully understand this potential, this study constructs an asymmetric multivariate range-based volatility model to investigate the dependence and volatility structures of gold, stock, and bond markets, and further to compare the financial crisis and post-financial crisis periods. We find that price range information has striking explanatory power for volatility structures, and we find significant evidence of asymmetric dependence across gold, stock, and bond markets. We implement an asset-allocation strategy incorporating asymmetric dependence and price range information to explore their economic importance. The out-of-sample results show that between 35 and 517 basis points and between 90 and 1111 basis points are earned annually when acknowledging asymmetric dependence and price range information, respectively.
These economic benefits are inversely related to the level of investors' risk aversion and are particularly significant in the period of the global financial crisis. (C) 2017 Elsevier Ltd. All rights reserved. Stock splits have long presented financial puzzles: Why are they undertaken? Why are they associated with abnormal returns? Abnormal returns, particularly those coming shortly before a split's announcement date, should raise strong suspicions of insider trading, particularly in nations with weak regulatory structures. We examined the 718 split events in the emerging stock market of Vietnam from 2007 through 2011. We found evidence consistent with illegal insider trading, particularly in firms that were vulnerable to insider manipulation and, therefore, more likely to split their stocks. When vulnerable firms' stocks did split, they provided significant excess short-term returns. Tellingly, the abnormal returns on those stocks prior to the split announcements were also extremely high, indeed higher than their abnormal post-announcement returns. Moreover, trading volume increased prior to the split announcement date. This suspicious pattern is what we would expect if insiders were trading on their knowledge. We propose that illegal insider trading in contexts where it is possible to escape serious penalty provides a previously undiscussed and cogent explanation for both stock splits and abnormal short-term returns. (C) 2017 Elsevier Ltd. All rights reserved. We evaluate three alternative predictors of house price corrections: anticipated tightenings of monetary policy, deviations of house prices from fundamentals, and rapid credit growth. A new cross-country measure of monetary policy expectations based on an international term structure model with time-varying risk premiums is constructed. House price overvaluation is estimated via an asset pricing model. 
The variables are incorporated into a panel logit regression model that estimates the likelihood of a large house price correction in 18 OECD countries. The results show that corrections are predicted by increases in the market's forecast of higher policy rates. The estimated degree of house price overvaluation also contains significant information about subsequent price reversals. In contrast to the financial crisis literature, credit growth is less important. All of these variables help forecast recessions. (C) 2017 Elsevier Ltd. All rights reserved. Has financial globalisation compromised central banks' effectiveness in managing domestic financial conditions? This paper tackles this question by studying the dynamics of bond yields encompassing 31 advanced and emerging market economies. To gauge the extent to which external financial conditions complicate the conduct of monetary policy, we isolate a "contagion" component by focusing on comovements in measures of bond return risk premia that are unrelated to domestic economic fundamentals. Our contagion measure is designed to more accurately capture, relative to simple yield correlation, spillovers driven by exogenous global shifts in risk preference or appetite. In contrast to what simple comovements in bond yields suggest, emerging market economies appear to be much less susceptible to global contagion than advanced economies and the overall sensitivities to contagion have not increased after the global financial crisis. The extent to which financial spillovers have compromised policy traction thus appears to be lower relative to studies based on common variation in bond yields. (C) 2017 Elsevier Ltd. All rights reserved. This paper investigates for the first time the effects of oil demand shocks and oil supply shocks on stock order flow imbalances leading to changes in stock returns. 
Through the estimation of a structural VAR model, positive oil demand shocks are able to explain almost 36% of the observed variation in the daily average stock order flow imbalances measured by the buy/sell trades ratio; these shocks consequently lead to a negative rather than positive stock-return reaction. In contrast, oil supply shocks exhibit a negative and marginally significant effect on stock order flow imbalances. Our aggregate analysis suggests that positive shocks to stock order flow imbalances are negatively related to stock returns. These effects are stronger for oil-related sectors when compared with the rest of the equity sectors. (C) 2017 Elsevier Ltd. All rights reserved. This paper studies the effects of fiscal consolidation on the debt-to-GDP ratio of 11 Euro area countries over the period 2000Q1-2012Q1. Using a quarterly panel VAR allows us to trace out the dynamics of the debt-to-GDP ratio following a fiscal shock and to disentangle the main channels through which fiscal consolidation affects the debt ratio. We define a fiscal consolidation episode as self-defeating if the level of the debt-to-GDP ratio does not decrease compared to the pre-shock level. Our main finding is that when consolidation is implemented via a cut in government primary spending, the debt ratio, after an initial increase, falls below its pre-shock level. When instead the consolidation is implemented via an increase in government revenues, the initial increase in the debt ratio is stronger and, eventually, the debt ratio reverts to its pre-shock level, resulting in what we call self-defeating consolidation. (C) 2017 Published by Elsevier Ltd. I analyze the recent experience of unconventional monetary policy in Sweden to study the interest rate transmission mechanisms of government bond purchases when interest rates are away from the lower bound.
Using dynamic term structure models and event study regressions, I find that government bond purchases have important portfolio balance and signaling effects. The signaling channel operates mainly by lowering short-rate expectations in the intermediate segment of the yield curve, while the portfolio balance channel is effective in lowering longer-maturity term premia. In addition, I find that target interest rate policy and government bond purchases operate in different segments of the yield curve. This suggests that a combination of the two policies can be used to lower interest rates across the whole maturity spectrum, making monetary policy more expansionary. (C) 2017 Elsevier Ltd. All rights reserved. We sort currencies into portfolios by countries' past consumption growth. The excess return of the highest- over the lowest-consumption-growth portfolio, our consumption carry factor, compensates for negative returns during worldwide downturns and prices the cross-section of portfolio-sorted and of bilateral currency returns. Empirically, sorting currencies on consumption growth is very similar to sorting currencies on interest rates. We interpret these stylized facts in a habit formation model: sorting currencies on past consumption growth approximates sorting on risk aversion. Low (high) risk-aversion currencies have high (low) interest rates and depreciate (appreciate) in times of global turmoil. (C) 2017 Elsevier Ltd. All rights reserved. We use the demise of silver-based standards in the 19th century to explore price dynamics when a commodity-based money ceases to function as a global unit of account. We develop a general equilibrium model of the global economy with gold and silver money. Calibration of the model shows that silver ceased functioning as a global price anchor in the mid-1890s: the price of silver is positively correlated with agricultural commodities through the mid-1890s, but not thereafter.
In contrast to Fisher (1911) and Friedman (1990), both of whom predict greater price stability under bimetallism, our model suggests that a global bimetallic system, in which the gold price of silver fluctuates, has higher price volatility than a global monometallic system. We confirm this result using agricultural commodity price data for 1870-1913. (C) 2017 Published by Elsevier Ltd. The objective of this paper is three-fold. First, the monetary and exchange rate regimes of the Asian countries are described and analyzed. The degrees of flexibility in exchange rates and capital controls vary across countries. Some countries have adopted a flexible inflation targeting framework, while others have pursued exchange rate targeting. The paper presents a new result: a tradeoff between price stability and exchange rate stability, in the form of a hyperbolic relationship for the Asian countries. Second, a framework that analyzes and quantifies the degree of currency internationalization is proposed and applied to the RMB. On every indicator, the RMB's weight in international finance has grown in the last several years, in both the private and public sectors. In its settlement role, the RMB is ranked 8th in the BIS survey and 7th in SWIFT usage. This paper exploits data from a recent period when the RMB became de-pegged from the USD and shows that some of the emerging Asian currencies co-move with the RMB, more so than with the USD. In the official sector, the RMB is also increasing its weight. The Chinese central bank has extended currency-swap agreements with some 30 countries, so that the RMB can be used for trade finance and liquidity assistance. The RMB was adopted as a composition currency of the Special Drawing Rights (SDR), effective in October 2016, with a weight of 10.92%, ranking third and surpassing the JPY and GBP. Finally, potential impending changes in the Asian monetary and exchange rate regimes are discussed.
Projecting the growth of the Chinese economy into the future, the weight of the RMB in the financial markets will increase globally as well as in Asia. (C) 2017 Elsevier Ltd. All rights reserved. We study how the financial conditions in the Center Economies [the U.S., Japan, and the Euro area] impact other countries over the period 1986 through 2015. Our methodology relies upon a two-step approach. We focus on five possible linkages between the center economies (CEs) and the non-Center economies, or peripheral economies (PHs), and investigate the strength of these linkages. For each of the five linkages, we first regress a financial variable of the PHs on financial variables of the CEs while controlling for global factors. Next, we examine the determinants of sensitivity to the CEs as a function of country-specific macroeconomic conditions and policies, including the exchange rate regime, currency weights, monetary, trade and financial linkages with the CEs, the levels of institutional development, and international reserves. Extending our previous work (Aizenman et al., 2016), we devote special attention to the impact of currency weights in the implicit currency basket, balance sheet exposure, and the currency composition of external debt. We find that for both policy interest rates and the real exchange rate (REER), the link with the CEs has been pervasive for developing and emerging market economies in the last two decades, although the movements of policy interest rates are found to be more sensitive to global financial shocks around the time of the emerging markets' crises in the late 1990s and early 2000s, and since 2008. When we estimate the determinants of the extent of connectivity, we find evidence that the weights of major currencies, external debt, and currency compositions of debt are significant factors.
More specifically, having a higher weight on the dollar (or the euro) makes the response of a financial variable such as the REER and exchange market pressure in the PHs more sensitive to a change in key variables in the U.S. (or the euro area) such as policy interest rates and the REER. While having more exposure to external debt would have similar impacts on the financial linkages between the CEs and the PHs, the currency composition of international debt securities does matter. Economies more reliant on dollar-denominated debt issuance tend to be more vulnerable to shocks emanating from the U.S. (C) 2017 Elsevier Ltd. All rights reserved. We analyze and evaluate novel data on exchange rate expectations after the collapse of Lehman Brothers for more than 60 economies over different horizons. At a first stage, we establish a potential discrepancy between statistical and economic measures. Market expectations are superior compared to trend and carry trade strategies based on economic evaluation criteria despite a weak statistical performance. We then turn to determinants of both expectations and resulting forecast errors. We find that monetary policy effects on expectations are time-varying and identify substantial international spillovers over the recent period of unconventional monetary policy. Our results also indicate that markets have been surprised by monetary policy effects on the exchange rates and point to an unexpected safe haven status of the US dollar after 2009. (C) 2017 Elsevier Ltd. All rights reserved. After the global financial crisis (GFC), most major currencies had higher interest rates than the US dollar on forward contract because of increased demand for the US dollar as international liquidity. However, unlike the other major currencies, the Australian dollar and the NZ dollar had lower interest rates than the US dollar on forward contract in the post GFC period. 
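The covered interest parity condition referenced in the Australia/New Zealand study above equates the domestic interest rate with the synthetic rate earned by lending in foreign currency fully hedged forward; the deviation (the basis) is the gap between the two. A minimal sketch under one quoting convention (domestic currency per unit of foreign), with made-up rates:

```python
# All rates and prices below are made up for illustration, not taken from
# the paper's data.
def cip_deviation(spot, forward, i_dom, i_for):
    """CIP deviation (basis): domestic interest rate minus the synthetic
    domestic rate from converting at spot, lending at the foreign rate,
    and selling the proceeds forward."""
    synthetic = (1.0 + i_for) * forward / spot - 1.0
    return i_dom - synthetic

# Under exact CIP, forward = spot * (1 + i_dom) / (1 + i_for) and the
# deviation is zero; a forward priced below that level leaves a positive
# basis in favor of the domestic money market.
```

With a domestic rate of 3%, a foreign rate of 0.5% and spot at 1.40, the CIP-consistent forward is 1.40 · 1.03 / 1.005 ≈ 1.4348; any cheaper forward produces a positive deviation of the kind the paper ties to liquidity risk.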
The purpose of this paper is to explore why this happened, by estimating the covered interest parity (CIP) condition. In the analysis, we focus on a unique feature of Australia and New Zealand, where short-term interest rates remained significantly positive even after the GFC. The paper first constructs a theoretical model where increased liquidity risk causes deviations from the CIP condition. It then tests this theoretical implication by using daily data for six major currencies. We find that both money market risk measures and policy rates had significant effects on the CIP deviations. The result implies that the unique features of monetary policy in Australia and New Zealand made deviations from the CIP condition distinct in forward contracts. (C) 2017 Elsevier Ltd. All rights reserved. This paper develops a Bayesian Global VAR (GVAR) model to track the international transmission dynamics of two stylized shocks, namely a supply and a demand shock to US-based safe assets. Our main findings can be summarized as follows. First, we find that (positive) supply-side shocks lead to pronounced increases in economic activity which spill over to foreign countries. The impact of supply-side shocks can also be seen in other quantities of interest, most notably equity prices and exchange rates in Europe. Second, a demand-side shock leads to an appreciation of the US dollar and generally lower yields on US securities, forcing investors to shift their portfolios towards foreign fixed income securities. This yields sizable positive effects on US output and equity prices and a general decrease in financial market volatility. (C) 2017 Elsevier Ltd. All rights reserved. This paper reconsiders the successful currency outcome of the first arrow of Abenomics. The Japanese yen depreciation against the U.S. dollar after the introduction of the first arrow co-moves tightly with long-term yield differentials between Japan and the United States.
The estimated term structure of the sensitivity of the currency return of the Japanese yen to the two-country interest rate differential indeed shifts up and becomes steeper after the onset of Abenomics. To explain this structural change in the term structure of the Fama regression coefficient, we employ a long-run risk model endowed with real and nominal conditional volatilities as in Bansal and Shaliastovich (2013). Under a plausible calibration, the model replicates the structural change when nominal uncertainty dominates real uncertainty in the U.S. bond market. We conjecture that the arrow was shot off from the U.S. side, not the Japan side. (C) 2017 Elsevier Ltd. All rights reserved. Exchange rate shocks have mixed effects on economic activity in both theory and empirical VAR models. In this paper, we extend the empirical literature by considering the implications of a positive shock to the U.S. dollar in a factor-augmented vector autoregression (FAVAR) model for the U.S. and three large Asian economies: Korea, Japan and China. The FAVAR framework allows us to represent a country's aggregate economic activity by a latent factor, generated from a broad set of underlying observable economic indicators. To control for global conditions, we also include in the FAVAR a "global conditions index," which is another latent factor generated from the economic indicators of major trading partners. We find that a dollar appreciation shock reduces economic activity and inflation not only for the U.S. economy, but also for all three Asian economies. This result, which is robust to a number of alternative specifications, suggests that in spite of their disparate economic structures and policy regimes, the dollar appreciation shock affects the Asian economies primarily through its impact on U.S. aggregate demand; and this demand channel dominates the expenditure-switching channel that affects a country's export competitiveness. (C) 2017 Published by Elsevier Ltd. 
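The Fama regression mentioned in the Abenomics study earlier regresses the currency return (the change in the log spot rate) on the lagged two-country interest differential; uncovered interest parity predicts a unit slope, and the classic puzzle is a slope well below one. A minimal OLS sketch on synthetic inputs (not the authors' data or estimator):

```python
def ols_slope(x, y):
    """Slope and intercept of a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    beta = sxy / sxx
    return beta, my - beta * mx

def fama_beta(spot_log, diff):
    """Fama regression slope: one-period currency returns (log spot
    changes) regressed on the lagged interest-rate differential."""
    returns = [s1 - s0 for s0, s1 in zip(spot_log, spot_log[1:])]
    return ols_slope(diff[:len(returns)], returns)[0]
```

Fed with monthly log spot rates and the Japan-U.S. yield differential, the time variation of this slope across maturities is the "term structure of the Fama coefficient" whose post-Abenomics shift the paper documents.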
In this paper we study the relationship between foreign currency international reserve holdings and global interest rates. To guide empirical work, we solve a simple, small open-economy model with money, where the central bank manages international reserves to smooth inflation over time. This model shows that changes in interest rates are positively related to the target level of reserves. As a consequence, interest rate hikes increase reserve transfers, defined as the change in international reserves net of the interest earned on reserves. Using quarterly data for 75 countries between 2000 and 2013, we document a positive relationship between interest-rate changes and reserve transfers as a share of GDP that is consistent with the model. (C) 2017 Elsevier Ltd. All rights reserved. Teachers' efficacy beliefs exert a significant influence on their practice and their students' learning. This study investigates the contribution of two six-month Professional Learning Community (PLC) interventions to the self-efficacy of 10 novice and experienced English as a Foreign Language (EFL) teachers. The data were collected through pre- and post-interviews with the participants, their reflective journals, and recordings of the PLC meetings. The findings suggest that the experienced teachers' self-efficacy improved in terms of employing innovative instructional strategies and language proficiency. An increase was also observed in the novice teachers' self-efficacy for classroom management, their autonomy, and their perceived language proficiency. Finally, the participants in both groups developed a stronger sense of professional community membership, as reflected in their focus on their collective efficacy toward the end of the PLCs. (C) 2017 Elsevier Ltd. All rights reserved.
To assist in the research of second language (L2) listeners, especially for novice researchers in the area, this article outlines and examines techniques which can be used to study some of the key cognitive processes specific to L2 listening. The article also highlights studies which have used many of these techniques, studies which researchers may refer to in order to help guide their own listener research. Throughout the article, I also infuse insights from my own experiences as a developing researcher of L2 listeners to show how I have grappled personally with many of the issues involving the various techniques which are discussed. The article concludes with a brief sketch of the directions in which L2 listener research is likely heading, and lists some learner attributes and associated techniques likely to assist in those endeavours. (C) 2017 Elsevier Ltd. All rights reserved. This study examined the interrelationship among L2 Chinese learners' use of reading strategies, L1 background, and L2 proficiency. Sixty-eight L2 Chinese learners of three different proficiency levels (i.e., elementary, intermediate and advanced) participated in the study. They were categorized further into two L1 groups: those within the Chinese cultural sphere (CCS) and those from the non-Chinese cultural sphere (NCCS). The results, based on analyses of think-aloud reports during reading, are as follows: (a) The use of L2 Chinese reading strategies was affected by L2 proficiency, since there was notable variation in reading strategy type frequency between the elementary level and the intermediate level, but no significant improvement from the intermediate level to the advanced level. (b) The use of decoding strategies persisted regardless of readers' L2 proficiency level. 
(c) Readers of CCS background appeared to have an advantage in decoding compared to NCCS readers at the elementary level; yet, such an advantage vanished as readers' L2 proficiency level increased, as both CCS and NCCS readers adopted similar types of decoding strategies at the intermediate and advanced levels. Discussion is provided regarding the intricate relationships among L1 background, L2 proficiency, and reading strategy use in L2 Chinese. (C) 2017 Elsevier Ltd. All rights reserved. Plagiarism is a major problem for universities worldwide and has been a constant cause of concern in higher education. Previous research has focused on Chinese students' attitudes toward, knowledge of, and engagement in plagiarism in Chinese and overseas educational contexts, and there is also a growing body of research on Chinese teachers' understandings of and stance on plagiaristic practices. However, little research attention has been given to institutional policies on plagiarism in the Chinese context, though similar research has been conducted in other settings. This paper reports on a study that examines the plagiarism policies made publicly available by eight major universities of foreign studies in mainland China. Both the structure and content of these universities' policy documents are analyzed to identify institutional understandings of, attitudes toward, and sanctions on plagiarism. The analysis reveals that despite inter-institutional variations, the policy documents are dominated by moralistic and regulatory discourses and are characterized by the conspicuous lack of an educative approach to plagiarism. We argue that such an institutional approach to plagiarism is unlikely to be effective because it largely fails to support students' acquisition of academic literacy and legitimate intertextual practices. (C) 2017 Elsevier Ltd. All rights reserved. 
Empirical studies in first language (L1) research support the use of inserted adjunct questions to facilitate L1 reading comprehension. The status of this comprehension technique for second language (L2) readers, however, remains unclear. Given the possibility that adjunct questions augment the cognitive demands of the task, the current study investigated the relationship between working memory capacity (WMC) and text adjuncts, as well as the effect of inserted adjuncts on L2 reading comprehension. Seventy learners of intermediate Spanish read two texts that contained either targeted segment ("what") questions inserted into both passages, elaborative interrogation ("why") questions inserted into both passages, or no questions in either of the two passages. Participants were administered an L1 working memory (WM) test, the Reading Span, and three comprehension assessments. Although the "why" questions were slightly more facilitative than the "what" questions and no questions, results indicate no significant effect of adjunct condition. When interactions with WM surfaced as significant, the pattern was apparent: the greater the WMC, the more beneficial the adjunct questions were for L2 readers. These findings suggest that, for intermediate learners of Spanish, there is no advantage to including inserted adjuncts in L2 expository texts, but that WM may explain performance differences in some cases. (C) 2017 Elsevier Ltd. All rights reserved. Recently, there has been considerable research concerning the effect of CLIL on English language learners' competence. However, it remains unclear if the positive effects found are due to CLIL or to time. To clarify this issue, this paper focuses on the vocabulary output of CLIL and non-CLIL EFL learners after an equal number of hours of English exposure. 
The objectives were twofold: (1) to ascertain whether the CLIL group retrieves a higher number of English words than the non-CLIL group; (2) to determine whether the two groups produce the same or different words. The sample comprised 70 Spanish EFL learners in their 8th and 10th year of secondary education. The data collection instrument was a lexical availability task consisting of ten prompts. The data were edited, coded, and subjected to quantitative and qualitative analyses. The results showed that the CLIL group retrieved a higher number of words than the non-CLIL group. However, both groups exhibited similarities concerning most and least productive prompts, first word responses, word frequency, and word level. The findings suggest a need to conduct equal comparisons of CLIL and non-CLIL groups as well as to examine the task effect and the vocabulary input received by learners. (C) 2017 Elsevier Ltd. All rights reserved. This article reports on a four-year-long ethnographic study on a curriculum innovation project introducing a weak form of communicative language teaching (CLT) at a Chinese secondary school. A total of ten teachers, who taught twelve project classes, were observed across five stages of the project: the pre-project stage, the top-down stage, the bottom-up stage, the exam preparation stage, and the post-project stage, in an attempt to explore the changes that took place in the teachers' receptivity and classroom behaviors. Focusing on a focal informant (Marian, pseudonym), this paper illustrates how teacher cognition changed in accordance with the project goal and highlights how the trajectory of change was much more tangled and complicated than what was initially expected. Changes in the project teacher's teaching practices reflect the consistency between teacher cognition and classroom practices at the pre-project, the bottom-up and the post-project stages. 
In contrast, at the top-down and the exam stages of the innovation project, changes in the teacher's cognition did not conform to changes in her classroom practices. These findings suggest that the external pressure caused by top-down imperatives and high-stakes exams might have caused the cognition-practices incongruence, which deserves language teacher educators' and administrators' further attention when promoting curriculum innovation. (C) 2017 Published by Elsevier Ltd. This article reports three trials of a pen-and-paper experiment in which adult L2 learners' recollection of glossed words was tested after they had read a text with or without pictures included in the glosses. Unlike previous studies in which a superiority of multimodal glosses over text-only glosses was claimed, the experiment furnished no evidence that the addition of pictures helped the learners to retain the glossed words any better than providing glosses containing only verbal explanations. When learners were prompted to recall the written form of the words, the gloss condition with pictures in fact led to the poorest performance. The results suggest that the provision of pictures alongside textual information to elucidate the meaning of novel words may reduce the amount of attention that L2 readers give to the words proper. (C) 2017 Elsevier Ltd. All rights reserved. This study aims to explore the validity of syntactic, lexical, and morphological complexity measures in capturing topic and proficiency differences in L2 writing. The additional purpose of this study is to examine how these measures gauge distinct dimensions of complexity. To these ends, this study examined a corpus of 1198 argumentative essays on two different topics written by college-level Chinese EFL learners. The essays were analyzed for topic effects (within-subjects) and for development across proficiency levels (between-subjects), as well as for the multidimensional construct of complexity. 
The results indicated strong topic effects on the majority of complexity measures (i.e., more complex language in a topic more relevant to writers' experiences). There were significant changes across proficiency levels in phrase-level syntactic, lexical, and morphological measures but not in clause-level measures. Last, a factor analysis result showed that lexical and morphological dimensions of complexity loaded on one construct and that the unit-length measures with different base units loaded on different constructs. The results of this study are interpreted in terms of topic relevance and the validity of the multidimensional construct of complexity. (C) 2017 Elsevier Ltd. All rights reserved. High-variability phonetic training is effective in the acquisition of foreign language sounds. Previous studies have largely focused on small sets of contrasts, and have not controlled for the quantity of prior or simultaneous exposure to new sounds. The current study examined the effectiveness of phonetic training in full-inventory foreign language consonant acquisition by listeners with no previous exposure to the language. Chinese adult listeners underwent an intensive training programme, bracketed by tests that measured both assimilation of foreign sounds to native categories and foreign category identification rates and confusions. Very rapid learning was evident in the results, with initial misidentification rates halving by the time of the mid-test, and continuing to fall in subsequent training sessions. Changes as a result of training in perceptual assimilation together with improved identifications and reduced response dispersion suggest an expansion of listeners' native categories to accommodate the foreign sounds and an incipient process of foreign language category formation. (C) 2017 Elsevier Ltd. All rights reserved. 
This study examines the effects of a receptive-productive integration task on the development of collocation knowledge of form, form and meaning, and grammar in comparison with the effects of productive and receptive tasks. In addition, the study investigates how prior knowledge of component nouns in verb-noun collocations affects collocation learning. Four intact classes of Chinese English as a foreign language (EFL) sophomores were randomly assigned to a productive group, a receptive group, a receptive-productive integration group, and a control group. The 12 target verb-noun collocations fell into two categories, one containing a known noun and one containing an unknown noun. The study found that (1) in light of both immediate and long-term gains, the integration group performed better than both the receptive and productive groups and (2) participants produced significantly more correct responses for collocations without unknown words than for collocations with unknown words across two post-test sessions. (C) 2017 Elsevier Ltd. All rights reserved. This study examines the impact of L1 literacy and reading habits on the L2 achievement of two groups of bilingual adult learners of EFL (52 beginners, 88 intermediate) in a language school with low L1 literacy students. Participants were tested on two L1 literacy measures (L1 reading comprehension, L1 spelling) and on L2 achievement, and reported on two reading habits measures: reading quantity and reading enjoyment. Results for the beginner group suggest that L1 literacy acts as a threshold to L2 achievement for academically disadvantaged learners, and provide evidence of the enduring influence of early L1 literacy skills on L2 achievement in adulthood. Conversely, for intermediate students, reading habits are the only literacy-related factor impacting L2 outcomes. The study concludes that educators need some awareness of adult EFL learners' L1 literacy level to help them achieve their language learning goals. 
(C) 2017 Elsevier Ltd. All rights reserved. Air traffic management (ATM) performance and the metrics used in its assessment are investigated for the first time across the three largest ATM world regions: Europe, the US and China. The market structure and flow management practices of each region are presented. A wide range of performance data across these three regions is synthesised. For topological and performance assessment, the notion of a 'sufficient' sample is often non-intuitive: many metrics may behave non-monotonically as a function of sampling fraction. Missing and under-developed metrics are identified, and the need for a balance between standardisation and flexibility is proposed. Longitudinal and cross-sectional metric trade-offs are identified. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. The Chinese air transport system has witnessed an important evolution in the last decade, with a strong increase in the number of flights operated and a consequent reduction of their punctuality. In this contribution, we propose modelling the process of delay propagation by using complex networks, in which nodes are associated with airports, and links between pairs of them are assigned when a delay propagation is detected. Delay time series are analysed through the well-known Granger causality test, which detects whether one time series helps forecast the dynamics observed in a second one. Results indicate that delays are mostly propagated from small and regional airports, and through flights operated by turbo-prop aircraft. These insights can be used to design strategies for delay propagation dampening, for instance by including small airports in the system's Collaborative Decision Making. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. Robustness of transportation networks is one of the major challenges of the 21st century. 
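A bare-bones version of the Granger-causality screening used in the delay-propagation study above can be sketched as a pair of lag regressions: if adding lags of airport A's delays sharply reduces the residual sum of squares (RSS) when predicting airport B's delays, A is said to Granger-cause B. The delay series below are simulated, and the 50% RSS-reduction check is an arbitrary illustration rather than the study's actual statistical test.

```python
import numpy as np

def granger_rss(y, x, p=2):
    """RSS of an AR(p) model for y without (restricted) and with (unrestricted)
    p lags of x. A large drop in RSS suggests x Granger-causes y."""
    T = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    ones = np.ones((T - p, 1))
    Xr = np.hstack([ones, lags_y])           # restricted: own lags only
    Xu = np.hstack([ones, lags_y, lags_x])   # unrestricted: add lags of x
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
    return rss(Xr), rss(Xu)

rng = np.random.default_rng(1)
# Hypothetical delay series: airport B's delays follow airport A's with a lag.
a = rng.normal(size=500)
b = np.zeros(500)
for t in range(1, 500):
    b[t] = 0.8 * a[t - 1] + 0.1 * rng.normal()

rss_r, rss_u = granger_rss(b, a, p=2)
print(rss_u < 0.5 * rss_r)   # → True: lags of A explain much of B's delays
```

In practice an F-test on the RSS drop decides significance, and the direction is tested both ways before drawing a propagation link.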
This paper investigates the resilience of global air transportation from a complex network point of view, with a focus on attacking strategies in the airport network, i.e., removing airports from the system and observing how this affects the air traffic system from a passenger's perspective. Specifically, we identify commonalities and differences between several robustness measures and attacking strategies, proposing a novel notion of functional robustness: unaffected passengers with rerouting. We apply twelve attacking strategies to the worldwide airport network with three weights, and evaluate three robustness measures. We find that degree- and Bonacich-based attacks harm the passenger-weighted network most. Our evaluation is geared toward a unified view on air transportation network attack and serves as a foundation for developing effective mitigation strategies. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. Analyzing airports' role in global air transportation and monitoring their development over time provides an additional perspective on the dynamics of network evolution. In order to understand the different roles airports can play in the network, an integrated and multidimensional approach is needed. Therefore, an approach to airport classification through hierarchical clustering considering several parameters from network theory is presented in this paper. By applying a 29-year record of global flight data and calculating conditional transition probabilities, the results are displayed as an evolution graph similar to a discrete-time Markov chain. With this analytical concept, the meaning of airports is analyzed from a network perspective and a new airport taxonomy is established. The presented methodology allows tracking the development of airports from certain categories into others over time. 
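A toy version of the degree-based attacking strategy from the robustness study above: remove the highest-degree airports and measure the fraction of nodes left in the giant connected component. The hub-and-spoke network and node counts are hypothetical, and connectivity stands in for the study's richer passenger-weighted measures.

```python
from collections import defaultdict

def giant_component_fraction(nodes, edges):
    """Fraction of nodes in the largest connected component (DFS)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            n = stack.pop(); comp += 1
            for m in adj[n]:
                if m not in seen:
                    seen.add(m); stack.append(m)
        best = max(best, comp)
    return best / len(nodes)

def degree_attack(nodes, edges, k):
    """Remove the k highest-degree nodes; return the surviving network."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
    removed = set(sorted(nodes, key=lambda n: -deg[n])[:k])
    kept = [n for n in nodes if n not in removed]
    return kept, [(u, v) for u, v in edges if u not in removed and v not in removed]

# Toy hub-and-spoke "airport network": one hub connected to 9 spokes.
nodes = list(range(10))
edges = [(0, i) for i in range(1, 10)]
print(round(giant_component_fraction(nodes, edges), 2))     # → 1.0
kept, surviving = degree_attack(nodes, edges, 1)            # remove the hub
print(round(giant_component_fraction(kept, surviving), 2))  # → 0.11 (isolated spokes)
```

The single hub removal collapses the network, which is why degree-based attacks are among the most damaging strategies in such topologies.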
Results show that airports of the same class run through similar stages of development with a limited number of alternatives, indicating clear evolutionary patterns. Apart from giving an overview of the results, the paper illustrates the exact data-driven approach and suggests an evaluation scheme. The methodology can help the public and industry sectors to make informed strategy decisions when it comes to air transportation infrastructure. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. The objective of this study is to improve the methods of determining unimpeded (nominal) taxiing time, which is the reference time used for estimating taxiing delay, a widely accepted performance indicator of airport surface movement. After reviewing existing methods used widely by different air navigation service providers (ANSPs), new methods relying on computer software and statistical tools, as well as econometric regression models, are proposed. Regression models are highly recommended because they require less detailed data and can serve the needs of general performance analysis of airport surface operations. The proposed econometric model outperforms existing ones by introducing more explanatory variables, in particular by taking aircraft passing and over-passing into account in the queue length calculation and by including runway configuration, ground delay program, and weather factors. The length of the aircraft queue in the taxiway system and the interaction between queues are major contributors to long taxi-out times. The proposed method provides a consistent and more accurate way of calculating taxiing delay and can be used for ATM-related performance analysis and international comparison. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. Airports are being developed and expanded rapidly in China to accommodate and promote a growing aviation market. 
The future Beijing Daxing International Airport (DAX) will serve as the central airport of the JingJinJi megaregion, knitting the Beijing, Tianjin, and Hebei regions together. DAX will be a busy airport from its inception, relieving congestion and accommodating growth from Beijing Capital International Airport (PEK), currently the second busiest airport in the world in passengers moved. We aim to model terminal airspace designs and possible conflicts in the future Beijing Multi-Airport System (MAS). We investigate standard arrival procedures and mathematically model current and future arrival trajectories into PEK and DAX by collecting large quantities of publicly available track data from historical arrivals operating within the Beijing terminal airspace. We find that (1) trajectory models constructed from real data capture aberrations and deviations from standard arrival procedures, validating the need to incorporate data on historical trajectories with standard procedures when evaluating the airspace and (2) given all existing constraints, DAX may be restricted to using north and east arrival flows, constraining the capacity required to handle the increases in air traffic demand to Beijing. The results indicate that the terminal airspace above Beijing, and the future JingJinJi region, requires careful consideration if the full capacity benefits of the two major airports are to be realized. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. The multilayered structure of the European airport network (EAN), composed of connections and flights between European cities, is analyzed through the k-core decomposition of the connections network. This decomposition allows the identification of the core, bridge, and periphery layers of the EAN. The core layer includes the best-connected cities, among them important business air traffic destinations. 
The periphery layer includes cities with fewer connections, which serve sparsely populated areas where air travel is an economic alternative. The remaining cities form the bridge of the EAN, including important leisure travel origins and destinations. The multilayered structure of the EAN affects network robustness, as the EAN is more robust to the isolation of core nodes than to the isolation of a combination of core and bridge nodes. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. In aircraft wing design, engineers aim to provide the best possible aerodynamic performance under cruise flight conditions in terms of lift-to-drag ratio. Conventional control surfaces such as flaps, ailerons, variable wing sweep and spoilers are used to trim the aircraft for other flight conditions. The appearance of the morphing wing concept launched a new challenge in the area of overall wing and aircraft performance improvement during different flight segments by locally altering the flow over the aircraft's wings. This paper describes the development and application of a control system for an actuation mechanism integrated in a new morphing wing structure. The controlled actuation system includes four similar miniature electromechanical actuators disposed in two parallel actuation lines. The experimental model of the morphing wing is based on a full-scale portion of an aircraft wing, which is equipped with an aileron. The upper surface of the wing is flexible, extending close to the wing tip; the flexible skin is made of light composite materials. The four actuators are controlled in unison to change the flexible upper surface to improve the flow quality on the upper surface by delaying or advancing the transition point from laminar to turbulent regime. The actuators transform the torque into vertical forces. 
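The k-core decomposition used in the EAN analysis above can be sketched as an iterative peeling routine: repeatedly strip nodes of degree below k, and record the largest k each node survives as its core number. The toy network and node labels below are hypothetical, not EAN data.

```python
from collections import defaultdict

def core_numbers(edges):
    """Core number of each node via iterative peeling: at each level k,
    repeatedly remove nodes whose remaining degree is below k; a node's core
    number is the largest k for which it survives the peeling."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    deg = {n: len(adj[n]) for n in adj}
    core, remaining, k = {}, set(adj), 0
    while remaining:
        k += 1
        changed = True
        while changed:
            changed = False
            for n in list(remaining):
                if deg[n] < k:
                    core[n] = k - 1
                    remaining.discard(n)
                    for m in adj[n]:
                        if m in remaining:
                            deg[m] -= 1
                    changed = True
    return core

# Toy network: a triangle (2-core) with one pendant node (1-core).
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
print(sorted(core_numbers(edges).items()))  # → [('A', 2), ('B', 2), ('C', 2), ('D', 1)]
```

In the EAN setting, the highest-core nodes form the core layer, intermediate cores the bridge, and low-core nodes the periphery.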
Their bases are fixed on the wing ribs and their top link arms are attached to supporting plates fixed onto the flexible skin with screws. The actuators push or pull the flexible skin using the necessary torque until the desired vertical displacement of each actuator is achieved. The four vertical displacements of the actuators, correlated with the new shape of the wing, are provided by a database obtained through a preliminary aerodynamic optimization for specific flight conditions. The control system is designed to control the positions of the actuators in real time in order to obtain and to maintain the desired shape of the wing for a specified flight condition. The feasibility and effectiveness of the developed control system, which uses a proportional fuzzy feed-forward methodology, are demonstrated experimentally through bench and wind tunnel tests of the morphing wing model. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. The tracking characteristics of tracer particles for particle image velocimetry (PIV) measurements in supersonic flows were investigated. The experimental tests were conducted at Mach number 4 in the Multi-Mach Wind Tunnel (MMWT) of Shanghai Jiao Tong University. The motion of tracer particles carried by the supersonic flow across shockwaves was theoretically modelled, and their aerodynamic characteristics with compressibility and rarefaction effects were then evaluated. According to the proposed selection criterion of tracer particles, the PIV results clearly showed that the shockwave amplitude is in good agreement with theory and with Schlieren visualizations. For nanoscale tracer particles, their effective aerodynamic sizes in the diagnostic zone can be faithfully estimated to characterize the tracking capability and dispersity performance based on their relaxation motion across oblique shockwaves. 
On the other hand, the seeding system provided the tracer particles with well-controlled and repeatable dispersity despite storage and humidity effects. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. An experimental study of the local and average heat transfer characteristics of a single round jet impinging on concave surfaces was conducted in this work to gain in-depth knowledge of the curvature effects. The experiments were conducted by employing a piccolo tube with one single jet hole over a wide range of parameters: jet Reynolds number from 27000 to 130000, relative nozzle-to-surface distance from 3.3 to 30, and relative surface curvature from 0.005 to 0.030. Experimental results indicate that the surface curvature has opposite effects on heat transfer characteristics. On one hand, an increase of the relative nozzle-to-surface distance (in fact, an increasing jet diameter) enhances the average heat transfer around the surface for the same curved surface. On the other hand, the average Nusselt number decreases as the relative nozzle-to-surface distance increases for a fixed jet diameter. Finally, experimental data-based correlations of the average Nusselt number over the curved surface were obtained with consideration of the surface curvature effect. This work contributes to a better understanding of the curvature effects on heat transfer of a round jet impinging on concave surfaces, which is of high importance to the design of aircraft anti-icing systems. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. Combustion mode transition is a valuable and challenging research area in dual-mode scramjet engines. The thermal behavior of an isolator with mode-transition-inducing backpressure is investigated by direct-connect dual-mode scramjet experiments and theoretical analysis. Combustion experiments are conducted under incoming airflow conditions with a total temperature of 1270 K and Mach 2. 
A small increment of the fuel equivalence ratio is scheduled to trigger mode transition. Correspondingly, the variation of the coolant flow rate is very small. Based on the measured wall pressures, the heat-transfer model can quantify the thermal state variation of the engine with active cooling. Compared with the combustor, mode transition has a greater effect on the isolator thermal behavior, and it significantly changes the isolator heat flux and wall temperature. To further study the isolator thermal behavior from flight Mach 4 to Mach 7, a theoretical analysis is carried out. Around the critical point of combustion mode transition, sudden changes of the isolator flowfield and thermal state are discussed. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. To satisfy the validation requirements of flight control laws for advanced aircraft, wind tunnel based virtual flight testing has been implemented in a low speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled dynamically similar model of the aircraft, which was equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer and servo-actuators. The model, which could be rotated around its center of gravity freely by the aerodynamic moments, together with the flow field, operator and real time control system made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, and it can fly stably in the wind tunnel with the control augmentation function of the flight control laws. The experimental results indicate that the model responds well to the operator's instructions. The response of the model in the tests shows reasonable agreement with the simulation results. The difference in the angle-of-attack response is less than 0.5 degrees. 
The effect of the stability augmentation and attitude control laws was validated in the tests; meanwhile, the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. A systematic methodology for formulating, implementing, solving and verifying the discrete adjoint of the compressible Reynolds-averaged Navier-Stokes (RANS) equations for aerodynamic design optimization on unstructured meshes is proposed. First, a general adjoint formulation is constructed for the entire optimization problem, including parameterization, mesh deformation, flow solution and computation of the objective function, which is followed by detailed formulations of the matrix-vector products arising in the adjoint model. According to this formulation, procedural components implementing the required matrix-vector products are generated by means of automatic differentiation (AD) in a structured and modular manner. Furthermore, a duality-preserving iterative algorithm is employed to solve the flow adjoint equations arising in the adjoint model, ensuring identical convergence rates for the tangent and the adjoint models. A three-step strategy is adopted to verify the adjoint computation. The proposed method has several remarkable features: the use of AD techniques avoids tedious and error-prone manual derivation and programming; duality is strictly preserved so that consistent and highly accurate discrete sensitivities can be obtained; and efficiency comparable to hand-coded implementations can be achieved. Based on the current discrete adjoint method, a gradient-based optimization framework has been developed and applied to a drag reduction problem. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. A flight dynamics model based on elastic blades for helicopters is developed. 
Modal shape analysis is used to describe the rotating elastic blades for the purpose of reducing the elastic degrees of freedom of the blades. The analytical result is employed to predict the rotor forces and moments. The equilibrium equation of the flight dynamics model is then constructed for the elastic motion of the blades and the rigid motion of the other parts. The nonlinear equation is further simplified, and the gradient descent algorithm is adopted to implement the trim simulation. The trim analysis shows that the effect of blade elasticity on the accuracy of rotor forces and moments is apparent at high speed, and the proposed method presents good accuracy for trim performance. The time-domain response is realized by a combination of the Newmark method and the adaptive Runge-Kutta method. The helicopter control responses to collective pitch show that the response accuracy of the model at a yaw-and-pitch attitude is improved. Finally, the influence of blade elasticity on the helicopter dynamic response in low-altitude wind shear is investigated. An increase in blade elasticity reduces the oscillation amplitude of the yaw angle and the vertical speed by more than 70%. Compared with a rigid blade, an elastic blade reduces the vibration frequency of the angular velocity and results in a fast return of the helicopter to its stable flight. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. This paper describes a method proposed for modeling large deflections of aircraft in nonlinear aeroelastic analysis by developing a reduced order model (ROM). The method is applied to solving the static aeroelastic and static aeroelastic trim problems of flexible aircraft containing geometric nonlinearities; meanwhile, the non-planar effects of aerodynamics and the follower force effect have been considered. 
ROMs are computationally inexpensive mathematical representations compared to the traditional nonlinear finite element method (FEM), especially in aeroelastic solutions. The approach for structure modeling presented here is based on the combined modal/finite element (MFE) method that characterizes the stiffness nonlinearities, and we apply that structure modeling method as a ROM in aeroelastic analysis. Moreover, the non-planar aerodynamic force is computed by the non-planar vortex lattice method (VLM). Structure and aerodynamics can be coupled with the surface spline method. The results show that both the static aeroelastic analysis and the trim analysis of aircraft based on the structural ROM achieve good agreement with analysis based on the FEM and with experimental results. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. The structure and dynamics of an oblique shock train in a duct model are investigated experimentally in a hypersonic wind tunnel. Measurements of the pressure distribution in front of and across the oblique shock train have been taken, and the dynamics of upstream propagation of the oblique shock train have been analyzed from schlieren imaging synchronized with the dynamic pressure measurements. The formation and propagation of the oblique shock train are initiated by the throttling device at the downstream end of the duct model. Multiple reflected shocks, expansion fans and separated flow bubbles exist in the unthrottled flow, causing three adverse-pressure-gradient phases and three favorable-pressure-gradient phases upstream of the oblique shock train. The leading edge of the oblique shock train propagates upstream and becomes asymmetric as the backpressure increases. 
The upstream propagation rate of the oblique shock train increases rapidly when the leading edge of the oblique shock train encounters the separation bubble near the shock reflection point and the adverse-pressure-gradient phase, while the oblique shock train moves slowly when its leading edge is in a favorable-pressure-gradient phase of the unthrottled flow. The asymmetric flow pattern and oscillatory nature of the oblique shock train are observed throughout the whole upstream propagation process. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. Three-dimensional numerical computations are conducted to investigate the effects of the blowing ratio and corrugation geometry on the adiabatic film cooling effectiveness as well as the heat transfer coefficient over a transverse corrugated surface. It is noticeable that the adiabatic wall temperature on the wavy valley of the transverse corrugated surface is relatively lower than that on the wavy peak. Surface corrugation has a relatively obvious influence on the laterally-averaged adiabatic film cooling effectiveness in the region where the effusion film layer is developed, but has little influence in the front region. Compared to a flat surface, the transverse corrugated surface produces a smaller adiabatic film cooling effectiveness and a higher heat transfer coefficient ratio. The effusion cooling difference between the flat and corrugated surfaces is more pronounced at a small aspect ratio of the wavy corrugation. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. The present work concerns the stall margin enhancement ability of a stall precursor-suppressed (SPS) casing treatment when a fan/compressor suffers from a radial total pressure inlet distortion. 
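For reference, the two quantities varied and reported in the film cooling study above are conventionally defined as follows (standard definitions, not taken verbatim from the paper): the blowing ratio $M$ and the adiabatic film cooling effectiveness $\eta_{aw}$,

```latex
M = \frac{\rho_c\,u_c}{\rho_\infty\,u_\infty},
\qquad
\eta_{aw} = \frac{T_\infty - T_{aw}}{T_\infty - T_c},
```

where subscript $c$ denotes the coolant, $\infty$ the mainstream, and $T_{aw}$ the adiabatic wall temperature; $\eta_{aw}=1$ corresponds to a wall fully protected by coolant, $\eta_{aw}=0$ to no film protection.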
Experimental research is conducted on a low-speed compressor with and without SPS casing treatment under radially distorted inlet flow of different levels as well as uniform inlet flow. The distorted flow fields of different levels are generated by annular distortion flow generators of different heights. The characteristic curves under these conditions are measured and analyzed. The results show that the radial inlet distortion can cause a stall margin loss from 2% to 30% at different distortion levels. The SPS casing treatment remedies this stall margin loss at small distortion levels and only partly makes up the stall margin loss caused by large-level distortion, without leading to perceptible additional efficiency loss or obvious change of the characteristic curves. The pre-stall behavior of the compressor is investigated to reveal the mechanism of this stall margin improvement ability of the SPS casing treatment. The results show that this casing treatment delays the occurrence of rotating stall by weakening the pressure perturbations and suppressing the nonlinear amplification of the stall precursor waves in the compression system. (C) 2016 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. In order to solve the aero-propulsion system acceleration optimal control problem, the necessity of inlet control is discussed, and an entirely new aero-propulsion system acceleration process control design including the inlet, engine, and nozzle is proposed in this paper. In the proposed propulsion system control scheme, the inlet, engine, and nozzle are simultaneously adjusted through the FSQP method. In order to implement the control scheme design, an aero-propulsion system component-level model is built to simulate the inlet working performance and the matching problems between the inlet and engine. Meanwhile, a stabilizing inlet control scheme is designed to solve the inlet control problems. 
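The FSQP-based simultaneous adjustment of inlet, engine, and nozzle amounts to a smooth constrained optimization. FSQP itself is a proprietary code, so as a hedged sketch the snippet below solves a toy problem of the same class with SciPy's SLSQP; the variables (fuel flow `wf`, inlet ramp angle `a`, nozzle area `an`), the surrogate objective, and the surge-margin-like constraint are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def neg_excess_torque(u):
    # Invented surrogate: maximize excess torque for fast acceleration.
    wf, a, an = u
    return -(wf * (1.0 + 0.1 * np.cos(a)) - 0.2 * an)

def surge_margin(u):
    # Inequality constraint, must remain >= 0 (invented stand-in).
    wf, a, an = u
    return 0.15 - (0.1 * wf - 0.05 * an)

res = minimize(neg_excess_torque, x0=[0.5, 0.1, 0.5],
               bounds=[(0.0, 1.0), (0.0, 0.3), (0.2, 1.0)],
               constraints=[{"type": "ineq", "fun": surge_margin}],
               method="SLSQP")
```

The real problem replaces the surrogate with the component-level engine model and adds limits such as surge margin and turbine temperature; taking the inlet ramp angle as an active optimization variable (rather than a passively scheduled one) is exactly the design choice the abstract argues for.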
In optimal control of the aero-propulsion system acceleration process, the inlet is emphasized as a control unit in the optimal acceleration control system. Two inlet control patterns are discussed in the simulation. The simulation results prove that by taking the inlet ramp angle as an active control variable instead of modulating it passively, acceleration performance could be markedly enhanced. Acceleration objectives could be obtained with an acceleration time about 5% shorter. (C) 2017 Production and hosting by Elsevier Ltd. This article studies the elastic properties of several biomimetic micro air vehicle (BMAV) wings that are based on a dragonfly wing. BMAVs are a new class of unmanned micro-sized air vehicles that mimic the flapping wing motion of flying biological organisms (e.g., insects, birds, and bats). Three structurally identical wings were fabricated using different materials: acrylonitrile butadiene styrene (ABS), polylactic acid (PLA), and acrylic. Simplified wing frame structures were fabricated from these materials, and then a nanocomposite film that mimics the membrane of an actual dragonfly was adhered to them. These wings were then attached to an electromagnetic actuator and passively flapped at frequencies of 10-250 Hz. A three-dimensional high-frame-rate imaging system was used to capture the flapping motions of these wings at a resolution of 320 pixels x 240 pixels and 35000 frames per second. The maximum bending angle, maximum wing tip deflection, maximum wing tip twist angle, and wing tip twist speed of each wing were measured and compared to each other and to those of the actual dragonfly wing. The results show that the ABS wing has considerable flexibility in the chordwise direction, whereas the PLA and acrylic wings show better conformity to an actual dragonfly wing in the spanwise direction. 
Past studies have shown that the aerodynamic performance of a BMAV flapping wing is enhanced if its chordwise flexibility is increased and its spanwise flexibility is reduced. Therefore, the ABS wing (fabricated using a 3D printer) shows the most promising results for future applications. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. In this study, a process for establishing design requirements and selecting alternative configurations for the conceptual phase of aircraft design has been proposed. The proposed process uses system-engineering-based requirement-analysis techniques such as the objective tree, the analytic hierarchy process, and quality function deployment to establish logical and quantitative standards. Moreover, in order to perform a logical selection of alternative aircraft configurations, it uses advanced decision-making methods such as the morphological matrix and the technique for order preference by similarity to ideal solution (TOPSIS). In addition, a preliminary sizing tool has been developed to check the feasibility of the established performance requirements and to evaluate the flight performance of the selected configurations. The present process has been applied to a two-seater very light aircraft (VLA), resulting in a set of tentative design requirements and two families of VLA configurations: a high-wing configuration and a low-wing configuration. The resulting set of design requirements consists of three categories: customer requirements, certification requirements, and performance requirements. The performance requirements include two mission requirements for the flight range and the endurance, reflecting the customer requirements. The flight performances of the two configuration families were evaluated using the developed sizing tool, and the low-wing configuration with conventional tails was selected as the best baseline configuration for the VLA. (C) 2017 Production and hosting by Elsevier Ltd. 
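The TOPSIS step named above ranks alternatives by closeness to an ideal solution. A minimal, generic implementation follows; the VLA criteria and the numbers in the example are hypothetical, not taken from the study:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: (m, n) alternatives x criteria; weights sum over criteria;
    benefit[j] is True when larger is better for criterion j."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # vector-normalize each criterion column, then apply weights
    V = w * X / np.linalg.norm(X, axis=0)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # closeness to ideal; higher is better

# Hypothetical example: criteria = [range (km), endurance (h), cost index],
# where cost is a non-benefit criterion.
scores = topsis([[1200, 5.0, 90], [1000, 6.0, 80]],
                weights=[0.4, 0.3, 0.3],
                benefit=[True, True, False])
```

The closeness scores lie in (0, 1), so configuration families can be ranked directly by score, which is how the baseline configuration would be selected among alternatives.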
A stratospheric airship is a special near-space air vehicle with more advantages than other air vehicles, such as long endurance, strong survivability, excellent resolution, and low cost, which make it an ideal stratospheric platform. It is of great significance to choose a reasonable and effective way to launch a stratospheric airship to near space, for both academic research and engineering applications. In this paper, the non-forming launch method is studied, and the differential pressure gradient method is used to study the evolution of the airship's envelope shape during the ascent process. Numerical simulation results show that the head of the envelope will maintain the inflatable shape and that the envelope below the zero-pressure level will be compressed into a wide range of wrinkles during the ascent process. The airship's envelope will expand as the airship ascends, and the position of the zero-pressure level will move downward constantly. At the same time, the envelope will gradually develop a certain degree of stiffness under the action of the inner and external differential pressure. The experimental results agree well with the analytical results, which shows that the non-forming launch method is effective and reliable and that the analytical method is accurate and feasible. (C) 2017 Production and hosting by Elsevier Ltd. on behalf of Chinese Society of Aeronautics and Astronautics. In this paper, two typical stealth aircraft concepts (wing-fuselage-blended and flying-wing) were designed. Then, three gradually varied surface distribution models with the same planform were created for each concept. Based on the multilevel fast multipole algorithm (MLFMA), the vertical polarization transmitting/vertical polarization receiving (VV) and horizontal polarization transmitting/horizontal polarization receiving (HH) radar cross section (RCS) characteristics were simulated at five frequencies between 0.1 and 1.0 GHz. 
The influences and mechanisms of aircraft surface distribution on electromagnetic scattering characteristics were investigated. The results show that for the wing-fuselage-blended concept, the VV RCS in this frequency range is higher than the HH RCS in most cases, while the opposite holds for the flying-wing concept. For the two aircraft concepts, the RCS levels of HH and VV both decrease with increasing frequency, but the HH RCS has a faster downward trend. The surface distribution has little influence on the HH RCS characteristics. On the contrary, it has a significant impact on the VV RCS characteristics, and the amplitude of the VV RCS increases with the surface thickness. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. A computational homogenization technique (CHT) based on the finite element method (FEM) is discussed to predict the effective elastic properties of honeycomb structures. The need for periodic boundary conditions (BCs) is revealed through analysis of the in-plane and out-of-plane shear moduli of models with different cell numbers. After applying periodic BCs on the representative volume element (RVE), a comparison between the volume-average stress method and the boundary stress method is performed, and a new method based on the equality of strain energy to obtain all non-zero components of the stiffness tensor is proposed. Results of finite element (FE) analysis show that the volume-average stress and the boundary stress remain consistent across different cell geometries and forms. The strain energy method obtains values that differ from those of the volume-average method for non-diagonal terms in the stiffness matrix. Numerical results are analyzed for thin-walled honeycombs and different angles between the oblique and vertical walls. The inaccuracy of the volume-average method in terms of the strain energy is shown by numerical benchmarks. 
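The two extraction routes compared in the homogenization study can be stated compactly (standard relations in generic notation, not the paper's exact formulation). The average-stress route identifies the effective stiffness $C^{\mathrm{eff}}$ from volume averages over the RVE, while the energy route enforces equality of strain energy:

```latex
\bar{\sigma} = \frac{1}{V}\int_V \sigma\,\mathrm{d}V,\quad
\bar{\varepsilon} = \frac{1}{V}\int_V \varepsilon\,\mathrm{d}V,\quad
\bar{\sigma} = C^{\mathrm{eff}}\,\bar{\varepsilon},
\qquad
\tfrac{1}{2}\,\bar{\varepsilon} : C^{\mathrm{eff}} : \bar{\varepsilon}
  = \frac{1}{V}\int_V \tfrac{1}{2}\,\sigma : \varepsilon\,\mathrm{d}V .
```

The two routes agree only when the average of the energy equals the energy of the averages (the Hill-Mandel condition), which is what suitable periodic BCs are meant to secure; the reported discrepancies in the off-diagonal stiffness terms mark where the routes diverge in practice.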
(C) 2017 Production and hosting by Elsevier Ltd. on behalf of Chinese Society of Aeronautics and Astronautics. This paper proposes a fault-tolerant strategy for hypersonic reentry vehicles with mixed aerodynamic surfaces and reaction control systems (RCS) under external disturbances and subject to actuator faults. Aerodynamic surfaces are treated as the primary actuator in normal situations, and they are driven by a continuous quadratic programming (QP) allocator to generate the torque commanded by a nonlinear adaptive feedback control law. When aerodynamic surfaces encounter faults, they may not be able to provide sufficient torque as commanded, and RCS jets are activated to augment the aerodynamic surfaces to compensate for the insufficient torque. Partial loss of effectiveness and stuck faults are considered in this paper, and observers are designed to detect and identify the faults. Based on the fault identification results, an RCS control allocator using integer linear programming (ILP) techniques is designed to determine the optimal combination of activated RCS jets. By treating the RCS control allocator as a quantization element, closed-loop stability with both continuous and quantized inputs is analyzed. Simulation results verify the effectiveness of the proposed method. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. A constrained adaptive neural network control scheme is proposed for a multi-input and multi-output (MIMO) aeroelastic system in the presence of wind gust, system uncertainties, and input nonlinearities consisting of input saturation and dead-zone. In regard to the input nonlinearities, the right-inverse function block of the dead-zone is added before the input nonlinearities, which simplifies them into an equivalent input saturation. To deal with the equivalent input saturation, an auxiliary error system is designed to compensate for the impact of the input saturation. 
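The continuous QP allocation step for the aerodynamic surfaces is, in essence, a bounded least-squares problem: find surface deflections u whose generated torque B u best matches the commanded torque. A hedged sketch with an invented effectiveness matrix and deflection limits (SciPy's `lsq_linear` solves exactly this box-constrained form):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Invented effectiveness matrix for 3 aerodynamic surfaces mapping
# deflections to roll/pitch/yaw torque (illustration only).
B = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.3],
              [0.2, 0.1, 1.0]])
tau_c = np.array([0.4, -0.2, 0.1])  # commanded torque from the feedback law

# Box-constrained least squares: min ||B u - tau_c||^2, -0.5 <= u <= 0.5
sol = lsq_linear(B, tau_c, bounds=(-0.5, 0.5))
u = sol.x
tau = B @ u
shortfall = np.linalg.norm(tau - tau_c)
# A nonzero shortfall is what would trigger the ILP-allocated RCS jets
# in the faulted case described in the abstract.
```

Here the deflection limits are generous enough that the command is met exactly; under partial-loss or stuck-surface faults the effective B shrinks, the shortfall becomes nonzero, and the discrete RCS allocation takes over the remainder.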
Meanwhile, uncertainties in pitch stiffness, plunge stiffness, and pitch damping are all considered, and radial basis function neural networks (RBFNNs) are applied to approximate the system uncertainties. In combination with the designed auxiliary error system and the backstepping control technique, a constrained adaptive neural network controller is designed, and it is proven that all the signals in the closed-loop system are semi-globally uniformly bounded via the Lyapunov stability analysis method. Finally, extensive digital simulation results demonstrate the effectiveness of the proposed control scheme towards flutter suppression in spite of the integrated effects of wind gust, system uncertainties, and input nonlinearities. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. In this paper, the flight formation control problem of a group of quadrotor unmanned aerial vehicles (UAVs) with parametric uncertainties and external disturbances is studied. Unit quaternions are used to represent the attitudes of the quadrotor UAVs. Separating the model into a translational subsystem and a rotational subsystem, an intermediary control input is introduced to track a desired velocity and extract desired orientations. Then, considering the internal parametric uncertainties and external disturbances of the quadrotor UAVs, an a priori bounded intermediary adaptive control input is designed for velocity tracking and formation keeping, from which the bounded control thrust and the desired orientation can be extracted. Thereafter, an adaptive control torque input is designed for the rotational subsystem to track the desired orientation. With the proposed control scheme, the desired velocity is tracked and a desired formation shape is built up. Global stability of the closed-loop system is proven via Lyapunov-based stability analysis. 
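The RBFNN approximation of the uncertainties rests on writing a smooth unknown function as a weighted sum of Gaussian basis functions. The sketch below fits the weights offline by least squares to a stand-in target; in the actual controller the weights would instead be adapted online by the Lyapunov-derived law. The centers, width, and target function are invented for illustration:

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis functions phi_i(x) = exp(-||x - c_i||^2 / width^2)."""
    d = x - centers
    return np.exp(-np.sum(d * d, axis=1) / width ** 2)

# 9 centers spread over the operating range, width chosen to overlap neighbors.
centers = np.linspace(-1, 1, 9).reshape(-1, 1)
xs = np.linspace(-1, 1, 50).reshape(-1, 1)
f = lambda x: np.sin(2.0 * x)  # stand-in for the unknown model uncertainty

Phi = np.array([rbf_features(x, centers, 0.5) for x in xs])
w, *_ = np.linalg.lstsq(Phi, f(xs).ravel(), rcond=None)
approx = Phi @ w
err = np.max(np.abs(approx - f(xs).ravel()))
```

The small residual over a modest basis is what justifies the controller's standard assumption of a bounded approximation error inside a compact set.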
Numerical simulation results are presented to illustrate the effectiveness of the proposed control scheme. (C) 2017 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. In this paper, the formulas of the elasto-hydrodynamic traction coefficients of three Chinese aviation lubricating oils, 4109, 4106 and 4050, were obtained from a large number of elasto-hydrodynamic traction tests. The nonlinear dynamic differential equations of a high-speed angular contact ball bearing were built on the basis of the dynamic theory of rolling bearings and solved by the Gear Stiff (GSTIFF) integration algorithm with variable step size. The impact of the lubricant traction coefficient on the cage's dynamic characteristics in a high-speed angular contact ball bearing was investigated, and Poincare maps were used to analyze the impact of the three types of aviation lubricating oils on the dynamic response of the cage's mass center. Then, the period of the dynamic response of the cage's mass center and the slip ratio of the cage were used to assess the stability of the cage under various working conditions. The results of this paper provide a theoretical basis for the selection and application of aviation lubricating oils. (C) 2016 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. In the process of composite prepreg tape winding, the compaction force influences the quality of winding products. According to analysis and experiments, during the winding process of a rocket motor nozzle aft exit cone with a winding angle, there is an error between the deposition speed of the tape layers and the feeding speed of the compaction roller, which influences the compaction force. Both a lack of compaction and overcompaction related to the feeding of the compaction roller can result in defects of winding nozzles. Thus, a flexible winding system has been developed for rocket motor nozzle winding. 
In the system, the feeding of the compaction roller can be adjusted in real time to achieve an invariable compaction force. According to experiments, the force-deformation model of the winding tape is a time-varying system. Thus, a forgetting-factor recursive-least-squares parameter-estimation proportional-integral-derivative (PID) controller has been developed, which estimates the time-varying parameter and controls the compaction force by adjusting the feeding of the compaction roller during the winding process. According to the experimental results, a winding nozzle with fewer voids and a smooth surface can be wound under the invariable compaction force in the flexible winding system. (C) 2016 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. A Cr-Si co-alloyed layer was successfully deposited on TA15 alloy by the double glow plasma surface technology to improve its poor wear resistance at elevated temperature. The microstructure, composition, and phase structure of the layer were investigated by SEM, EDS, and XRD. The tribological behaviors of the Cr-Si co-alloyed layer at 20 degrees C and 500 degrees C were analyzed in detail. The results indicated that the friction coefficient and wear rate of the Cr-Si co-alloyed layer at 20 degrees C and 500 degrees C were much lower than those of the substrate, owing to its higher hardness and superior elastic modulus. This layer may provide an effective approach to improving the wear resistance of TA15 alloy at elevated temperature. (C) 2017 Production and hosting by Elsevier Ltd. on behalf of Chinese Society of Aeronautics and Astronautics. Fragmented marketing debates concerning the role of alternative economies are attributable to the lack of a meaningful macromarketing dimension to which alternative economic practices can be anchored. 
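The forgetting-factor recursive least squares estimator at the heart of that controller can be sketched generically. The scalar stiffness-tracking example is invented for illustration; in the real system the estimated quantity is the time-varying force-deformation parameter of the tape, and the estimate feeds the PID loop on the roller:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.95):
    """One forgetting-factor recursive least squares update.
    theta: parameter estimate, P: covariance, phi: regressor vector,
    y: measurement, lam: forgetting factor (0 < lam <= 1)."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                # discount old information
    return theta, P

# Illustration: track a time-varying stiffness k in y = k * x.
rng = np.random.default_rng(1)
theta = np.zeros(1)
P = np.eye(1) * 100.0
for t in range(200):
    k_true = 2.0 if t < 100 else 3.0             # parameter jumps mid-run
    x = rng.uniform(0.5, 1.5)
    y = k_true * x
    theta, P = rls_step(theta, P, np.array([x]), y)
```

With lam below 1 the estimator's effective memory is roughly 1/(1 - lam) samples, which is what lets it follow the jump instead of averaging over the whole history; lam = 1 recovers ordinary recursive least squares.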
This research frames an evaluation of existing macromarketing developments aimed at reformulating the mindless pursuit of economic growth. Raising concerns with the treadmill dynamics of marketing systems, three different approaches - green growth, a-growth and degrowth - are critically evaluated to: (a) introduce degrowth as a widely overlooked concept in the macromarketing literature; (b) expose how each perspective entails a specific organization of provisioning activities; and (c) foreground the role of alternative economic practices beyond the growth paradigm. We conclude that socially sustainable degrowth is the missing voice within macromarketing debates that is central to elucidating the future direction of alternative economic practices. The dominant social paradigm (DSP) defines the basic belief structures and practices of marketplace actors and is manifested in existing exchange structures. Sustainability - a so-called megatrend - challenges the DSP by questioning its underlying assumptions, resulting in tensions or conflicts for different marketplace actors. This study examines a specific case of an alternative market arrangement that bridges tensions between the DSP and environmental concerns. Ethnography in the context of retail food waste disposition reveals tensions experienced by several marketplace actors - namely consumers, retail firms and regulators - and investigates an alternative market arrangement that alleviates those tensions by connecting the actors and their practices in a creative new way. We identify complementarity as the underlying mechanism of connection and resolution. Compared to previously identified alternative market arrangements that are either oppositional or parallel to the DSP, complementarity opens another path toward greater environmental sustainability through market-level solutions. 
The introduction of money into previously non-monetary, alternative economies can lead to many socio-cognitive tensions, if money is perceived as having been imposed 'from the outside' and disconnected from traditional ways of life. In this paper, we employ the lens of institutional theory to frame the phenomenon of money-use in remote Indigenous Australia. Through an immersive study in two remote communities, we develop themes of socio-cognitive tensions that arise as a result of disparity in exchange logics governing marketplace exchange in monetary marketplaces vis-a-vis historically non-monetary alternative economies. We draw upon emergent insights and derive macro-marketing implications for the design of marketplace-literacy education, aimed at alleviating these tensions and enhancing well-being. This paper explores the institutionalization process of complementary currencies; understanding the institutionalization of alternative economies is key in order to assess their sustainability. Drawing from neo-institutional theory, the paper examines the objectification processes in a sample of recently formed Spanish and Greek time banks. The paper focuses on the rationalization stage, i.e. when rules, practices and symbols are embedded in the organization, and studies how symbols and norms establish the framework for social interaction and make the common space of action visible. The main finding of this paper is that timebanking is subject to multiple logics, both across and within time banks. These logics lead to the adoption of different organizational forms, the promotion of disparate forms of actorhood, and the adoption of different pricing and accounting systems. Yet, these objects are not aligned with one another, and tensions between the symbolic and the functional are found. Institutionalization is not yet complete, as blended models that bridge multiple logics are still missing. 
This study examines how object circulation - the recurrent transferring of objects among members of a group - can be used to foster a hybrid value regime in alternative economies. Prior research notes that alternative economies harbor multiple conceptions of what is valuable, suggesting that hybridity can help sustain alternative economies. This study mobilizes ethnographic and netnographic data to examine the circulation of singularized objects in a religion-based alternative economy in Brazil. It focuses on value creation through object circulation to shed light on the constitution of value regime hybridity. Findings explain how the governing institution in this economy - the church - fosters a hybrid value regime through promoting the creation of multiple types of value outcomes and incentivizing their intertwining. We discuss how value regime hybridity reduces tension and criticism directed at the alternative economy and promotes resource dependence among heterogeneous participants. Alternative markets are often described as exchange systems for people who wish to escape the capitalist markets. In this article, we argue that alternative markets may also emerge in a context of economic constraint, in which people have restricted access to mainstream markets. In order to survive, they have to find alternative ways of obtaining goods and services. To understand the emergence and characteristics of the constrained type of alternative market, we examine the bartering systems established in Greece in the aftermath of the financial crisis and the imposition of austerity measures by international and European institutions on the country's economic system. This study offers theoretical insights for the literature on alternative markets by conceptualizing bartering systems as a complementary market to the capitalist markets. Alternative economies are built on shared commitments to improve subjects' well-being. 
Traditional commercial markets, premised upon growth driven by separate actors pursuing personal material gain, lead to exploitation of some actors and to negligible well-being gains for the rest. Through resocializing economic relations and expanding the recognition of interdependence among the actors in a marketing system, economic domination and exploitation can be mitigated. We define shared commitments as a choice of a course of action in common with others. We empirically demonstrate the existence of shared commitments through an in-depth study of a spatially extended alternative food network in Turkey. Finally, we offer an inductive model of how shared commitments can be developed between local and non-local actors to bring new economies into being and improve the well-being of consumers and producers, localities, markets, and society. In this study, we used eye-tracking methodology for deeper understanding of the refutation text effect on online text comprehension. A refutation text acknowledges the reader's alternative conceptions about a phenomenon, refutes them and presents the correct conceptions. We tested two hypotheses about its facilitation effect: the coherence hypothesis (refutation text is more coherent than standard text, thus facilitating comprehension) and the elaboration hypothesis (refutation text involves deeper processing, thus facilitating comprehension). Forty university students read one refutation text and one non-refutation text about two science topics. Offline data confirmed that refutation text readers recall more scientific facts than non-refutation text readers. Online eye-tracking measures revealed both an increase and a decrease in reading time in response to the refutation statements. Topic-medial text sentences with the correct science facts were fixated for a shorter time when first encountered in the refutation text. 
Refutation statements, however, increased integrative processing at the end of each text paragraph, as indexed by longer look-back fixation times on topic-final sentences with the science concepts, as well as longer look-back fixation times directed to the refutation statements. These findings support the elaboration hypothesis and are discussed in the light of current accounts of the refutation effect for theory development and educational practice. When students solve problems on the Internet, they have to find a balance between quickly scanning large sections of information in web pages and deeply processing those that are relevant for the task. We studied how high school students articulate scanning and deeper processing of information while answering questions using a Wikipedia document, and how their reading comprehension skills and the question type interact with these processes. By analyzing retrospective think-aloud protocols and eye-tracking measures, we found that scanning of information led to poor hypertext comprehension, while deep processing of information produced better performance, especially in location questions. This relationship between scanning, deep processing, and performance was qualified by reading comprehension skills in an unexpected way: Scanning led to lower performance especially for good comprehenders, while the positive effect of deep processing was independent of reading comprehension skills. We discussed the results in light of our current knowledge of Internet problem solving. Nowadays, almost everyone uses the World Wide Web (WWW) to search for information of any kind. In education, students frequently use the WWW for selecting information to accomplish assignments such as writing an essay or preparing a presentation. The evaluation of sources and information is an important sub-skill in this process. But many students have not yet optimally developed this skill. 
On the basis of verbal reports, eye-tracking data and navigation logs, this study investigated how novices in the domain of psychology evaluate Internet sources as compared to domain experts. In addition, two different verbal reporting techniques, namely thinking aloud and cued retrospective reporting, were compared in order to examine students' evaluation behaviour. Results revealed that domain expertise has an impact on individuals' evaluation behaviour during Web search, such that domain experts showed a more sophisticated use of evaluation criteria to judge the reliability of sources and information and selected more reliable information than domain novices. Furthermore, the different verbal reporting techniques did not lead to different conclusions on criteria use in relation to domain expertise, although in general more utterances concerning evaluation of sources and information were expressed during cued retrospective reporting. The objective of this study was to determine the usefulness of augmented reality (AR) in teaching. An experiment was conducted to examine children's learning performances, which included the number of errors they made, their ability to remember the content of what they had read and their satisfaction with the three types of teaching materials: a picture book, physical interactions and an AR graphic book. The three teaching materials were designed to demonstrate the characteristics of six bacteria with 2D graphics, 3D physical objects, and 3D virtual objects, respectively. Seventy-two fifth-grade children were randomly selected to participate in the study, and they were divided into three groups, each of which used the assigned teaching material to learn the names of the six different bacteria in intervals of 1, 2 and 3 min. Results showed that the AR graphic book offers a practical and hands-on way for children to explore and learn about the bacteria. 
Follow-up interviews indicated that the children liked the AR graphic book the most, and they preferred it to the other materials. A quasi-experimental study was set up in secondary education to study the role of teachers while implementing tablet devices in science education. Three different classroom scripts that guided students' and teachers' actions during the intervention on two social planes (group and classroom level) are compared. The main goal was to investigate which classroom script leads to the best results regarding progress in domain-specific knowledge and inquiry skills. Besides student achievement, students' experiences regarding the role of the teacher and students' perceptions of learning with tablets within the three conditions were investigated. In the first condition, the classroom script included learning activities that were balanced between the group and the classroom level. In the second condition, the learning activities occurred predominantly on the group level. The third condition entailed the classroom script as the control condition, in which the learning activities were situated only on the classroom level, with the tablet used in a traditional way or as a 'book behind glass'. Results show that students perform better on domain-specific knowledge in the conditions where the teacher intervened on the classroom level. Regarding the acquisition of inquiry skills, students performed best in the condition where the learning activities were balanced between the group and the classroom level. Moreover, students who perceived more structure achieved better results. These results indicate that the role of the teacher cannot be ignored in technology-enhanced learning. Moreover, these results seem to suggest that one of the best apps remains the teacher. Metacomprehension, as reflected in judgements of one's own learning, is crucial for self-regulated study, yet the accuracy of such judgements is often low. 
We investigated text difficulty as a constraint on metacomprehension accuracy in text learning. A total of 235 participants studied a 10-section expository text and afterwards took a knowledge test. They made judgements of learning after each section. Sections were of high, medium or low difficulty; we manipulated between participants the order of difficulty levels across sections. In blocked orders, texts in each block (sections 1-4; sections 5-6; sections 7-10) were of the same difficulty level. In mixed orders, difficulty varied throughout the learning unit, either from easy to difficult or from difficult to easy. Our general tenet was that orders would trigger different extents of experience-based processing and thus influence metacomprehension accuracy to different degrees. As hypothesized, accuracy was higher for blocked difficulty orders. Late-section judgement magnitude decreased more strongly in the blocked groups. At the same time, late-section judgement accuracy was higher in the blocked groups. We discuss implications and limitations of the influence of fluctuations in text difficulty on judgements of learning accuracy together with some avenues for further research. Objective: To quantify the population at risk of serious adverse reactions to replicating smallpox vaccine. Design and Sample: Conditions known or suspected to carry risk were identified via Centers for Disease Control and Prevention planning documents, other federal publications, and peer-reviewed literature. Conditions identified were categorized as historically recognized risks or more recently recognized immunocompromised states that may pose risk. Major historical risk factors were as follows: eczema/atopic dermatitis, pregnancy, HIV, and primary immunodeficiency. 
More recently identified states were as follows: rheumatoid arthritis, inflammatory bowel disease, dialysis, bone marrow transplant recipients within 24 months post-transplant, solid-organ transplant recipients within 3 months post-transplant, age under 1 year, and systemic lupus erythematosus. Measures: The estimated prevalence or absolute number of affected individuals for each condition was ascertained from peer-reviewed studies, vital statistics, and registry databases. Results: An estimated 48,121,280 to 50,028,045 individuals (15.2-15.8% of the U.S. population) have potential contraindications to replicating smallpox vaccine. This rises to 119,244,531 to 123,669,327 (37.4-38.8%) if household contacts are included. Conclusions: These figures are substantial and larger than those reported in the only previously published study. Understanding this number allows for improved clinical utilization, equitable attention to the health needs of a vulnerable population, and strategic vaccine stockpiling. Objectives: Transgender women experience a variety of factors that may contribute to HIV risk. The purpose of this study was to explore links among HIV risk perception, knowledge, and sexual risk behaviors of transgender women. Design and Sample: A descriptive, correlational study design was used. Fifty transgender women from the South Florida area were enrolled in the study. Measures: Transgender women completed a demographic questionnaire and standardized instruments measuring HIV risk perception, knowledge, and sexual risk behaviors. Results: Transgender women reported low levels of HIV risk perception, and had knowledge deficits regarding HIV risk/transmission. Some participants engaged in high-risk sexual behaviors. Predictors of sexual risk behaviors among transgender women were identified. Conclusions: More research with a larger sample size is needed to continue studying factors that contribute to sexual risk behaviors in the understudied population of transgender women. 
Evidence-based guidelines are available to assist public health nurses in providing care for transgender women. Nurses must assess HIV risk perception and HIV knowledge and provide relevant education to transgender women on ways to minimize sexual risk. Objective: Little nutrition research has been conducted among families with unstable housing. The objective of this study was to examine the role of food stamps (i.e., Supplemental Nutrition Assistance Program; SNAP) in home food availability and dietary intake among WIC families who experienced unstable housing. Design and Sample: Cross-sectional study among vulnerable families: low-income, multiethnic families with children participating in WIC (n=54). Measures: Dietary intake was assessed with 24-hr recalls. Home food availability was assessed with an adapted home food inventory for low-income, multiethnic families. Validation results from the adapted home food inventory for these families are also reported. Results: SNAP households had more foods available than non-SNAP households; few significant associations were observed between food availability and child dietary intake. Conclusions: With few exceptions, the home food environment was not related to children's dietary intake among these vulnerable families. More research is needed on food access for families facing unstable housing. Objectives: To pilot a group health service delivery model, CenteringParenting, for new parents, to assess its feasibility and impact on maternal and infant outcomes. Design and Sample: Families attended six 2-hr group sessions in their child's first year of life with three to seven other families. Health assessments, parent-led discussions, and vaccinations occurred within the group. Measures: Demographic, breastfeeding, vaccination, maternal psychosocial health, parenting, and satisfaction data were collected and compared to a representative cohort. Results: Four groups ran in two clinics. 
Four to eight parent/infant dyads participated in each group, 24 total dyads. Most participating parents were mothers. Dyads in the group model received 12 hr of contact with Public Health over the year compared to 3 hr in the typical one-on-one model. Participants were younger, more likely to have lower levels of education, and had lower household income than the comparison group. Parents reported improvements in parenting experiences following the program. At 4 months, all CenteringParenting babies were vaccinated compared to 95% of babies in the comparison group. Conclusions: The pilot was successfully completed. Additional research is required to examine the effectiveness of CenteringParenting. Data collected provide insight into potential primary outcomes of interest and inform larger, rigorously designed longitudinal studies. Objective: Describe the rates of CPR/AED training in high schools in the state of Washington after passage of legislation mandating CPR/AED training. Design and Sample: A web-based survey was sent to administrators at 660 public and private high schools in the state of Washington. Results and Conclusions: The survey was completed by 148 schools (22%); 64% reported providing CPR training and 54% provided AED training. Reported barriers to implementation included instructor availability, cost, and a lack of equipment. Descriptive statistics were used to describe the sample characteristics and implementation rates. Mandates without resources and support do not ensure implementation of CPR/AED training in high schools. Full public health benefits of a CPR mandate will not be realized until barriers to implementation are identified and eliminated through use of available, accessible public health resources. Objective: To describe how characteristics of food retail stores (potential access) and other factors influence self-reported food shopping behavior (realized food access) among low-income, rural Central Appalachian women. 
Design and Sample: Cross-sectional descriptive. Potential access was assessed through store mapping and in-store food audits. Factors influencing consumers' realized access were assessed through in-depth interviews. Results were merged using a convergent parallel mixed-methods approach. The sample comprised food stores (n=50) and adult women (n=9) in a rural Central Appalachian county. Results: Potential and realized food access were described across five dimensions: availability, accessibility, affordability, acceptability, and accommodation. Supermarkets had better availability of healthful foods, followed by grocery stores, dollar stores, and convenience stores. On average, participants lived within 10 miles of 3.9 supermarkets or grocery stores, and traveled 7.5 miles for major food shopping. Participants generally shopped at the closest store that met their expectations for food availability, price, service, and atmosphere. Participants' perceptions of stores diverged from each other and from in-store audit findings. Conclusions: Findings from this study can help public health nurses engage with communities to make affordable, healthy foods more accessible. Recommendations are made for educating low-income consumers and partnering with food stores. Objective: Prior research suggests that adverse neighborhood conditions are related to preterm birth. One potential pathway by which neighborhood conditions increase the risk for preterm birth is by increasing women's psychological distress. Our objective was to examine whether psychological distress mediated the relationship between neighborhood conditions and preterm birth. Design and Sample: One hundred and one pregnant African-American women receiving prenatal care at a medical center in Chicago participated in this cross-sectional study. Measures: Women completed self-report instruments about their perceived neighborhood conditions and psychological distress between 15-26 weeks of gestation. 
Objective measures of the neighborhood were derived using geographic information systems (GIS). Birth data were collected from medical records. Results: Perceived adverse neighborhood conditions were related to psychological distress: perceived physical disorder (r=.26, p=.01), perceived social disorder (r=.21, p=.03), and perceived crime (r=.30, p=.01). Objective neighborhood conditions were not related to psychological distress. Psychological distress mediated the effects of perceived neighborhood conditions on preterm birth. Conclusions: Psychological distress in the second trimester mediated the effects of perceived, but not objective, neighborhood conditions on preterm birth. If these results are replicated in studies with larger sample sizes, intervention strategies could be implemented at the individual level to reduce psychological distress and improve women's ability to cope with adverse neighborhood conditions. Community asset mapping (CAM) is the collective process of identifying local assets and strategizing processes to address public health issues and concerns and improve quality of life. Prior to implementing a community-based physical activity intervention with Latinas in the Texas Lower Rio Grande Valley, promotoras [community health workers] conducted 16 interactive sessions in 8 colonias. The analysis of the transcribed CAM recordings and on-site observational data resulted in the construction of Living in Limbo as the thematic representation of these Latinas' social isolation and marginalization associated with pervasive poverty, undocumented immigration status or lack of citizenship, their fears emanating from threats to physical and emotional safety, and the barriers created by lack of availability and access to resources. Objective: This study sought to better understand and improve influenza vaccination in low-income populations regardless of their health insurance/immigration status. 
It assessed client satisfaction and experiences with services provided at community-based flu outreach clinics in South Los Angeles. The clinics represent a community-public agency partnership, a model of vaccine delivery that was relatively novel to the region. Design and Sample: During 2011-2012, a self-administered questionnaire was distributed to clients of the local health department's 39 flu outreach clinics in South Los Angeles. Measures: The study utilized a 10-item satisfaction scale and survey questions that gauged client history and experiences with present and prior vaccinations. Results: Of 4,497 adults who were eligible, 3,860 completed the survey (participation rate=86%). More than 90% were satisfied with their experiences at the clinics. Younger adults were significantly more likely than adults aged 65+ to report not having been vaccinated in the previous year (p<.05). No statistical differences were observed by gender or race/ethnicity. Conclusions: High satisfaction with flu outreach services in South Los Angeles suggests that this model for vaccine delivery could lead to a meaningful client experience of care. Local health departments could capitalize on this model to improve preventive services delivery for the underserved. Background: Breastfeeding is a global initiative of the World Health Organization and the U.S. domestic health agenda, Healthy People 2020; both recommend exclusive breastfeeding, defined as providing breast milk only via breast or bottle, through the first 6 months of an infant's life. Previous literature has shown the correlation between socioeconomic status and breastfeeding, with higher maternal education and income as predictors of sustained breastfeeding. This same population of women is more likely to be employed outside the home. 
Methods: PubMed and the Cochrane Database of Systematic Reviews were searched using inclusion and exclusion criteria to identify the effect of maternity leave length and workplace policies on the sustainment of breastfeeding for employed mothers. Results: Common facilitators of sustained breastfeeding included a longer length of maternity leave as well as adequate time and space for pumping breast milk once the mother returned to the workplace. Barriers included inconsistency in policy and the lack of enforcement of policies in different countries. Conclusions: There is a lack of consistency globally on maternity leave length and workplace policy as determinants of sustained breastfeeding for employed mothers. A consistent approach is needed to achieve the goal of exclusive breastfeeding for infants. A paucity of nursing literature is available on disaster-related community resilience. Using a nursing method for analyzing concepts, this article attempts to clarify the meaning of this novel concept to encourage nursing research and practice. This concept analysis provides an introduction to the phenomenon of disaster-related community resilience for nurses and consumers of nursing research. The article proposes the definition, antecedents, attributes, consequences, and empirical referents of disaster-related community resilience and provides suggestions for nursing research and practice. It also provides nurses a foundation for participating in resilience-building activities that may save lives and allow communities to recover more rapidly postdisaster. Alternative high school (AHS) students are at risk for school dropout and engage in high levels of health-risk behaviors that should be monitored over time. They are excluded from most public health surveillance efforts (e.g., the Youth Risk Behavior Survey; YRBS), hindering our ability to monitor health disparities and allocate scarce resources to the areas of greatest need. 
Using active parental consent, we recruited 515 students from 14 AHSs in Texas to take a modified YRBS. We calculated three different participation rates, tracked participation by age of legal consent (18 and <18 years), and identified other considerations for obtaining quality data. Being required to use active consent resulted in a much lower cooperation rate among students <18 years (32%) versus those who were 18 years old and could provide their own consent (57%). Because chronic truancy is prevalent in AHS students, cooperation rates may be more accurate than participation rates based on enrollment or attendance. Requiring active consent and not having accurate participation rates may result in surveillance data that are of disparate quality. This threatens to mask the needs of AHS students and perpetuate disparities because we are likely missing the highest-risk students within a high-risk sample and cannot generalize findings. In this essay, we describe the construction and use of the Cut-Score Operating Function in aiding standard-setting decisions. The Cut-Score Operating Function shows the relation between the cut-score chosen and the consequent error rate. It allows error rates to be defined by multiple loss functions and will show the behavior of each loss function. One strength of the Cut-Score Operating Function is that it shows how robust error rates are to the choice of cut-score and identifies the regions of extreme sensitivity relative to that choice. In this article, we extend the methodology of the Cut-Score Operating Function that we introduced previously and apply it to a testing scenario with multiple independent components and different testing policies. We derive analytically the overall classification error rate for a test battery under the policy where several retakes are allowed for individual components and also for when one is required to retake the whole battery. 
We derive the overall classification error rate using a flexible cost function defined by weights assigned to false negative and false positive errors. The result, shown graphically, is that the competing demands of minimizing both false positive and false negative errors yield a unique optimal value for the cut-score. This cut-score can be estimated numerically for any number of components and any number of retakes. Among the results we obtain is that the more lenient the retake policy, the higher one must set the cut-score to minimize the error rate. This article promotes the use of modern test theory in testing situations where sum scores for binary responses are now used. It directly compares the efficiencies and biases of classical and modern test analyses and finds an improvement in the root mean squared error of ability estimates of about 5% for two designed multiple-choice tests and about 12% for a classroom test. A new parametric density function for ability estimates, the tilted scaled distribution, is used to resolve the nonidentifiability of the univariate test theory model. Item characteristic curves (ICCs) are represented as basis function expansions of their log-odds transforms. A parameter cascading method along with roughness penalties is used to estimate the corresponding log odds of the ICCs and is demonstrated to be sufficiently computationally efficient that it can support the analysis of large data sets. When a multisite randomized trial reveals between-site variation in program impact, methods are needed for further investigating heterogeneous mediation mechanisms across the sites. We conceptualize and identify a joint distribution of site-specific direct and indirect effects under the potential outcomes framework. A method-of-moments procedure incorporating ratio-of-mediator-probability weighting (RMPW) consistently estimates the causal parameters. 
This strategy conveniently relaxes the assumption of no Treatment × Mediator interaction while greatly simplifying the outcome model specification without invoking strong distributional assumptions. We derive asymptotic standard errors that reflect the sampling variability of the estimated weight. We also offer an easy-to-use R package, MultisiteMediation, that implements the proposed method. It is freely available at the Comprehensive R Archive Network (http://cran.r-project.org/web/packages/MultisiteMediation). This paper reports a typical synthesis of a nanocomposite of functionalized graphene quantum dots and imprinted polymer at the surface of a screen-printed carbon electrode using N-acryloyl-4-aminobenzamide as a functional monomer and an anticancer drug, ifosfamide, as a print molecule (test analyte). Herein, the graphene quantum dots in the nanocomposite induced electrocatalytic activity by lowering the oxidation overpotential of the test analyte and thereby amplifying electronic transmission, without any interfacial barrier between the film and the electrode surface. The differential pulse anodic stripping signal at the functionalized graphene quantum dots based imprinted sensor was about 3- and 7-fold higher than at traditionally made imprinted polymers prepared in the presence and the absence of (un-functionalized) graphene quantum dots, respectively. This may be attributed to a pertinent synergism between the positively charged functionalized graphene quantum dots in the film and the target analyte, enhancing the electro-conductivity of the film and thereby the electrode kinetics. In fact, the covalent attachment of graphene quantum dots to N-acryloyl-4-aminobenzamide molecules might exert an extended conjugation at their interface, facilitating electron conduction and providing channelized pathways for electron transport. 
The proposed sensor is practically applicable to the ultratrace evaluation of ifosfamide in real (biological/pharmaceutical) samples, with a detection limit as low as 0.11 ng mL(-1) (S/N=3) and without any matrix effect, cross-reactivity, or false positives. We have demonstrated a palm-size NanoAptamer analyzer capable of detecting bisphenol A (BPA) at environmentally relevant concentrations (<1 ng/mL or ppb). It is designed to perform reaction and fluorescence measurement on a single cuvette sample. A modified NanoGene assay was used as the sensing mechanism, in which signaling DNA and QD(655) were tethered to QD(565) and a magnetic bead via the aptamer. Aptamer affinity with BPA resulted in the release of the signaling DNA and QD(655) from the complex and hence a corresponding decrease in the QD(655) fluorescence signal. Baseline characterization was first performed with empty cuvettes, quantum dots and magnetic beads under near-ideal conditions to establish essential functionality of the NanoAptamer analyzer. The duration of incubation time, number of rinse cycles, and necessity of cuvette vibration were also investigated. In order to demonstrate the capability of the NanoAptamer analyzer to detect BPA, samples with BPA concentrations ranging from 0.0005 to 1.0 ng/mL (ppb) were used. The performance of the NanoAptamer analyzer was further examined using a laboratory protocol and a commercial spectrofluorometer as references. Correlation between the NanoAptamer analyzer and the laboratory protocol, as well as the commercial spectrofluorometer, was evaluated via correlation plots and correlation coefficients. In this paper, a novel signal-on electrochemical biosensor based on Hg2+-triggered nicking endonuclease-assisted target recycling and hybridization chain reaction (HCR) amplification tactics was developed for sensitive and selective detection of Hg2+. The hairpin-shaped capture probe A (PA) contained a specific sequence which was recognized by nicking endonuclease (NEase). 
In the presence of Hg2+, probe B (PB) hybridized with PA to form stand-up duplex DNA strands via the Hg2+-mediated thymine-Hg2+-thymine (T-Hg2+-T) structure, which automatically triggered NEase to selectively digest the duplex region at the recognition sites, spontaneously dissociating PB and Hg2+ and leaving the remnant initiators. The released PB and Hg2+ could be reused to initiate the next cycle, and more initiators were generated. The long nicked double helices were formed through the HCR event, which was triggered by the initiators and two hairpin-shaped signal probes labeled with methylene blue, resulting in a significant signal increase. Under optimum conditions, the resultant biosensor showed high sensitivity and selectivity for the detection of Hg2+ in a linear range from 10 pM to 50 nM (R² = 0.9990), with a detection limit as low as 1.6 pM (S/N=3). Moreover, the proposed biosensor was successfully applied to the detection of Hg2+ in environmental water samples with satisfactory results. Copper(II) is one of the most important cofactors for numerous enzymes and has captured broad attention due to its role as a neurotransmitter in physiological and pathological functions. In this article, we present a reaction-based fluorescent sensor for Cu2+ detection (NIR-Cu) with near-infrared excitation and emission, covering probe design, structure characterization, optical property tests and biological imaging applications. NIR-Cu is equipped with a functional group, 2-picolinic ester, which hydrolyzes in the presence of Cu2+ with high selectivity over competing cations. Under optimized experimental conditions, NIR-Cu (5 μM) exhibits a linear response for Cu2+ over the range 0.1 to 5 μM, with a detection limit of 29 nM. NIR-Cu also shows excellent water solubility and is highly responsive, both desirable properties for Cu2+ detection in water samples. 
In addition, due to its near-infrared excitation and emission properties, NIR-Cu demonstrates outstanding fluorescence imaging in living cells and tissues. The fabrication of a nitrogen-doped carbon dots (N-CDs) electrode for the screening of purine metabolic disorders is described in this paper. Peroxynitrite is a short-lived oxidant species that is a potent inducer of cell death. Uric acid (UA) can scavenge peroxynitrite to avoid the formation of nitrotyrosine, which is formed from the reaction between peroxynitrite and tyrosine (Tyr). Scavenging peroxynitrite avoids the inactivation of cellular enzymes and modification of the cytoskeleton. A reduced level of UA decreases the body's ability to prevent peroxynitrite toxicity. On the other hand, an abnormally high level of UA leads to gout and hyperuricemia. Allopurinol (AP) is administered in UA-lowering therapy. Thus, the simultaneous determination of UA, Tyr and AP using an N-CDs modified glassy carbon (GC) electrode was demonstrated for the first time. Initially, N-CDs were prepared from L-asparagine by pyrolysis and characterized by different spectroscopic and microscopic techniques. The HR-TEM image shows that the average size of the prepared N-CDs was 1.8 +/- 0.03 nm. Further, the N-CDs were directly attached to the GC electrode by simple immersion, following Michael nucleophilic addition. XPS of the N-CDs shows a peak at 285.3 eV corresponding to the formation of the C-N bond. The GC/N-CDs electrode shows higher electrocatalytic activity towards UA, Tyr and AP than the bare GC electrode, not only shifting their oxidation potentials toward less positive values but also enhancing their oxidation currents. The GC/N-CDs electrode shows a limit of detection of 13 x 10(-10) M (S/N=3) and a sensitivity of 924 A mM(-1) cm(-2) for the determination of UA. Finally, the N-CDs modified electrode was utilized for the determination of UA, Tyr and AP in human blood serum and urine samples. 
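As an aside on the S/N=3 convention quoted in the electroanalytical abstracts above: a limit of detection at S/N=3 is conventionally estimated as three times the standard deviation of repeated blank measurements divided by the calibration slope. A minimal sketch of that calculation, using invented numbers rather than data from any of these studies:

```python
import statistics

def detection_limit(concs, currents, blank_currents, k=3.0):
    # LOD = k * SD(blank) / calibration slope, the usual S/N = 3 convention
    # (k = 3). Slope comes from an ordinary least-squares fit.
    n = len(concs)
    mean_x = sum(concs) / n
    mean_y = sum(currents) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(concs, currents)) \
        / sum((x - mean_x) ** 2 for x in concs)
    return k * statistics.stdev(blank_currents) / slope

# Hypothetical calibration: concentration (M) vs. oxidation current (A).
concs = [1e-9, 2e-9, 4e-9, 8e-9]
currents = [1.0e-6, 2.0e-6, 4.0e-6, 8.0e-6]          # idealized linear response
blanks = [1.0e-7, 1.2e-7, 0.9e-7, 1.1e-7, 0.8e-7]    # repeated blank readings
lod = detection_limit(concs, currents, blanks)        # about 4.7e-11 M here
```

With real instrument data the blank replicates and calibration points would come from the measurement series itself; the k=3 factor is what the "(S/N=3)" annotations in these abstracts refer to.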
In this work, a novel kind of water-dispersible, molecularly imprinted, conductive polyaniline (PANI) particle was prepared through a facile and efficient macromolecular co-assembly of polyaniline with an amphiphilic copolymer, and applied as the molecular recognition element to construct a protein electrochemical sensor. In our strategy, an amphiphilic copolymer P(AMPS-co-St) was first synthesized using 2-acrylamido-2-methyl-1-propanesulfonic acid (AMPS) and styrene (St) as monomers; it could co-assemble with PANI in aqueous solution to generate PANI particles, driven by the electrostatic interaction. During this process, ovalbumin (OVA) as the template protein was added and trapped in the PANI particles owing to their interactions, resulting in the formation of molecularly imprinted polyaniline (MIP-PANI) particles. When utilizing the MIP-PANI particles as the recognition element, the resultant imprinted PANI sensor not only exhibited good selectivity toward the template protein (the imprinting factor α is 5.31), but also exhibited a wide linear range over OVA concentrations from 10(-11) to 10(-6) mg mL(-1) with a significantly lower detection limit of 10(-12) mg mL(-1), which outperformed most reported OVA detection methods. In addition, an ultrafast response time of less than 3 min has also been demonstrated. The superior performance is ascribed to the water compatibility and large specific surface area of the PANI particles and the electrical conductivity of PANI, which provides a direct path for the conduction of electrons from the imprinting sites to the electrode surface. The outstanding sensing performance, combined with its facile, quick, green preparation procedure as well as low production cost, makes the MIP-PANI particles attractive in specific protein recognition and sensing. 
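When a sensor's linear range spans several decades of concentration, as with the 10(-11) to 10(-6) mg mL(-1) range reported above, calibration is conventionally performed against the logarithm of concentration. A hedged sketch of that workflow with made-up response values, not data from the study:

```python
import math

def fit_log_calibration(concs, signals):
    # Least-squares line of signal vs. log10(concentration);
    # returns (slope, intercept) of the calibration line.
    xs = [math.log10(c) for c in concs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(signals) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, signals)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def to_concentration(signal, slope, intercept):
    # Invert the calibration line to recover a concentration estimate.
    return 10 ** ((signal - intercept) / slope)

# Hypothetical calibration points over a wide range (mg/mL vs. arbitrary signal).
concs = [1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6]
signals = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]      # idealized linear-in-log response
slope, intercept = fit_log_calibration(concs, signals)
est = to_concentration(7.0, slope, intercept)   # about 3.2e-9 mg/mL here
```

Working in log-concentration is what makes a single straight-line fit usable across five orders of magnitude; the unknown is then recovered by inverting the fitted line.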
In this work, aiming at the construction of a disposable, wireless, low-cost and sensitive system for bioassay, we report a closed bipolar electrode electrochemiluminescence (BPE-ECL) sensing platform based on graphite paper as the BPE for the first time. Graphite paper is well qualified as a BPE due to its unique properties such as excellent electrical conductivity, uniform composition and ease of use. This simple BPE-ECL device was applied to the quantitative analysis of an oxidant (H2O2) and a biomarker (CEA), according to the charge-balance principle of BPE sensing. For the H2O2 analysis, Pt NPs were electrodeposited onto the cathode through a bipolar electrodeposition approach to improve the sensing performance. As a result, this BPE-ECL device exhibited a wide linear range of 0.001-15 mM with a low detection limit of 0.5 μM (S/N=3) for H2O2 determination. For the determination of CEA, chitosan-multi-walled carbon nanotubes (CS-MWCNTs) were employed to supply a hydrophilic interface for immobilizing the primary antibody (Ab(1)), and Au@Pt nanostructures were conjugated with the secondary antibody (Ab(2)) as catalysts for H2O2 reduction. Under the optimal conditions, the BPE-ECL immunodevice showed a wide linear range of 0.01-60 ng mL(-1) with a detection limit of 5.0 pg mL(-1) for CEA. Furthermore, it also displayed satisfactory selectivity, excellent stability and good reproducibility. The developed method opens a new avenue for clinical bioassays. MicroRNAs (miRNAs) play important roles in gene regulation and cancer development. Nowadays, it is still a challenge to detect low-abundance miRNAs. Here, we present a magnetic fluorescent miRNA sensing system for the rapid and sensitive detection of miRNAs from cell lysates and serum samples. In this system, albumin nanoparticles (Alb NPs) were prepared from inherently biocompatible bovine serum albumin (BSA). 
A large number of fluorescent dye molecules were loaded into the Alb NPs so that they could serve as signal nanocarriers for signal amplification. Benefiting from the reactive functional groups (carboxyl groups) of the Alb NPs, p19 protein, a viral protein that can bind and sequester short RNA duplexes effectively and selectively, was successfully conjugated to the surface of the dye-loaded Alb NPs, thus enabling recognition and binding of the probe:target miRNA duplex. Following the introduction of gold nanoparticle-coated magnetic microbeads (Au NPs-MBs), which were prepared through a novel and simple method, the system combined the rapid and efficient collection afforded by the MBs with the good affinity for probe molecules endowed by the gold coating. A broad linear detection range of 10 fM-10 nM and a low detection limit of 9 fM were obtained within 100 min by detecting a model target, miRNA-21. The feasibility of this method for rapid and sensitive quantification might significantly advance the use of miRNAs as biomarkers in clinical practice. Microbial electrochemical technologies (METs) are one of the emerging green bioenergy domains, utilizing microorganisms for wastewater treatment or electrosynthesis. Real-time monitoring of the bioprocess during operation is a prerequisite for understanding and further improving bioenergy harvesting. Optical methods are powerful tools for this, but require transparent, highly conductive and biocompatible electrodes. Although indium tin oxide (ITO) is a well-known transparent conductive oxide, it is a non-ideal platform for biofilm growth. Here, a straightforward approach of surface modification of ITO anodes with gold (Au) is demonstrated, to enhance direct microbial biofilm cultivation on their surface and to improve the produced current densities. 
The trade-off between the electrode transmittance (critical for the underlying integrated sensors) and the enhanced growth of biofilms (crucial for direct monitoring) is studied. Au-modified ITO electrodes show faster and reproducible biofilm growth, with three times higher maximum current densities and about 6.9 times thicker biofilms compared to their unmodified ITO counterparts. Electrochemical analysis confirms the enhanced performance and the reversibility of the ITO/Au electrodes. The catalytic effect of Au on the ITO surface seems to be the key factor in the observed performance improvement, since the changes in electrode conductivity and surface wettability are relatively small and within the range of ITO. An integrated platform combining the ITO/Au transparent electrode with light-emitting diodes was fabricated, and its feasibility for optical biofilm thickness monitoring is demonstrated. Such transparent electrodes with embedded catalytic metals can serve as multifunctional windows for biofilm diagnostic microchips. A new, precise, and highly selective method for the assessment of histidine as a biomarker for the early diagnosis of histidinemia in newborn children was developed. The method depends on the formation of an ion-pair associate between histidine and the nano-optical samarium tetracycline [Sm(TC)2]+ complex doped in a sol-gel matrix in a borate buffer of pH 9.2. The [Sm(TC)2]+ complex has a +1 net charge, which makes it very selective and sensitive for the histidine anion at pH 9.2 in serum and urine samples from histidinemia patients. Histidine enhances the luminescence intensity of the nano-optical [Sm(TC)2]+ complex at 645 nm after excitation at 400 nm, in borate buffer, pH 9.2. 
The remarkable enhancement of the luminescence intensity at 645 nm of the nano [Sm(TC)2]+ complex doped in a sol-gel matrix by various concentrations of histidine was successfully used as an optical probe for the assessment of histidine in serum and urine samples of newborn children affected by histidinemia. The calibration plot was linear over the concentration range 1.4×10^-5 to 6.5×10^-1 mol L^-1 histidine, with a correlation coefficient of 0.998 and a detection limit of 3.2×10^-10 mol L^-1. The sensitivity (98.88%) and specificity (97.41%) of histidine as a biomarker were calculated. We developed a biosensor for nitrite ion on an electrode surface modified with M13 viruses and gold nanostructures. Gold dendritic nanostructures (Au-DNs) were electrochemically co-deposited onto the ITO electrode from an electrolyte containing the 4E peptide-engineered M13 virus (M13(4E)). The M13(4E) could specifically nucleate the Au precursor (gold(III) chloride), enabling the efficient growth of dendritic nanostructures, whereas such dendritic structures were not obtained in the presence of wild-type or Y3E peptide-engineered M13 viruses. The structural features of the Au-DNs and their interfacing mechanism with the ITO electrode were characterized by SEM, EDX and XRD analyses. The growth of Au-DNs at the ITO electrode was monitored by a time-dependent SEM study. The M13(4E) induces the formation of the Au dendrites and plays a crucial role in shaping their morphology. A biosensor electrode for nitrite ions was constructed using the Au-DN-modified electrode and showed improved sensitivity relative to sensor electrodes prepared from wild-type M13, from Y3E peptide-engineered M13, or without M13. The sensor electrode exhibited good selectivity toward the target analyte over possible interferences. 
Furthermore, the native 4E peptide was used as an additive to deposit Au nanostructures, and their structure and reactivity were compared with those of the Au nanostructures prepared in the presence of M13(4E). Our novel biosensor fabrication can be extended to other metal and metal oxide nanostructures, and the approach might be useful for developing novel biosensor electrodes for a variety of biomolecules. With the continuing growth of widespread adulteration of botanical drugs, the need for drug quality monitoring has become more pressing than ever. Considering that antioxidants are widely found in natural plant pharmaceuticals, gallic acid (GA) is often regarded as the reference standard for determining whether such drugs are up to grade, as guided by the Chinese Pharmacopoeia. Herein, a novel Bi2MoO6/Bi2S3 photoelectrochemical sensor was successfully developed for selective GA analysis to supervise drug quality, in which γ-Bi2MoO6 nanobelts served as the template nanocrystal and scaffold. The Bi2S3 accommodated in the Bi2MoO6 nanobelts endows the platform with excellent light-harvesting capability, selectivity and reproducibility. The underlying mechanism was pursued in depth through theoretical computation and morphology analysis, which inferred that two aspects mainly contribute to the findings: (1) engineering the particular structure brings about surface dangling bonds, which raises the likelihood of electrostatic interaction with opposite charges; (2) appending Bi2S3 to the Bi2MoO6 nanobelts provides a new avenue to mediate the photoelectrochemical behavior, nearly devoid of interference effects. Our work opens up broad possibilities for finely distinguishing different antioxidants. As an extension of this simple and valid strategy, photoelectrochemistry could become a potent backing for quality assurance in the drug field, helping to ensure good consistency in batch production. 
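The histidine biomarker study above reports a diagnostic sensitivity of 98.88% and specificity of 97.41%. These figures follow from a standard confusion-matrix calculation, sketched here with hypothetical counts chosen only to illustrate the formulas (the actual cohort sizes are not given in the abstract):

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of diseased samples correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of healthy samples correctly cleared."""
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts, for illustration only.
tp, fn = 89, 1    # diseased samples: detected vs missed
tn, fp = 188, 5   # healthy samples: cleared vs falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.2%}")
print(f"specificity = {specificity(tn, fp):.2%}")
```

High sensitivity matters most for a newborn screening test, where a missed case is far more costly than a false alarm that is resolved by confirmatory testing.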
Novel water-compatible C60-monoadduct-based imprinted micelles were synthesized by the self-assembly of a vinylic C60-monoadduct with a sodium dodecyl sulfate micellar system, in the presence of chlorambucil (an anticancer drug) as a model template. After template retrieval with acetonitrile, these imprinted micelles were immobilized at the surface of an ionic liquid-decorated carbon ceramic electrode. Herein, the C60-monoadduct (the head group of the micelle) served as a nanomediator for electronic transmission across multiple interfaces. This modification induced electrocatalytic characteristics by decreasing the analyte oxidation overpotential and thereby augmented the electrode kinetics. Consequently, the differential pulse anodic stripping response was approximately four-fold that of the corresponding electrode modified without the C60-monoadduct, revealing the potential role of fullerene as a nanomediator in signal transduction. The ionic liquids facilitated electron transport through the carbon layers without any interfacial barrier, giving a two-fold improvement over modified ceramic electrodes made in the absence of ionic liquids. Excellent linearity of the current-concentration profile under optimal conditions was observed for analyte concentrations in the range 1.47-247.20 ng mL^-1, with detection limits down to 0.36 ng mL^-1 (S/N = 3) in aqueous and real samples. The development of smartphone-based biosensors for point-of-care testing (POCT) applications allows the realization of "all-in-one" instruments with large potential for distribution among the general population. In this respect, paper-based colorimetric detection performed by reflectance measurement is the most popular, simple, inexpensive and straightforward method. 
Despite the large number of scientific publications related to these biosensors, they still suffer from poor detectability and from inhomogeneity of color development, which leads to low assay reproducibility. To overcome these problems, we propose a smartphone paper-based biosensor in which all the reagents necessary to complete the analysis are co-entrapped on paper in a "wafer"-like bilayer film of polyelectrolytes (poly(allylamine hydrochloride)/poly(sodium 4-styrenesulfonate)). Using low-cost 3D printing technology, we fabricated the smartphone-based device, which consists of a cover accessory attached to the smartphone that incorporates a light diffuser over the flash to improve image quality, a mini dark box, and a disposable analytical cartridge containing all the reagents necessary for the complete analysis. The biosensor was developed by exploiting coupled enzyme reactions for quantifying L-lactate in oral fluid, which is considered a biomarker of poor tissue perfusion, a key element in the management of severe sepsis and septic shock and in sports performance evaluation. The developed method is sensitive and rapid, and it allows the detection of L-lactate in oral fluid in the relevant physiological range, with a limit of detection of 0.1 mmol L^-1. The extreme simplicity of assay execution (no reagents need to be added) and the flexibility of device fabrication, together with the high assay versatility (any oxidase can be coupled with the HRP-based color-change reaction), make our approach suitable for the realization of smartphone-based biosensors able to non-invasively detect a large variety of analytes of clinical interest. Combining the dual signal channels of differential pulse voltammetry (DPV) and the amperometric i-t curve, we developed an amplified sandwich-type electrochemical immunosensor for the ultrasensitive detection of prostate-specific antigen (PSA). 
Due to their large specific surface area and good adsorption properties, 3-aminopropyltriethoxysilane-functionalized mesoporous CeO2 nanoparticles (NH2-M-CeO2) were used to support toluidine blue (TB) as the electron transfer mediator and to carry anti-PSA for the signal response. To prevent leakage of TB and to facilitate the electron transfer of NH2-M-CeO2, ionic liquid-doped carboxymethyl chitosan (CMC/ILs) was incorporated, and the resulting TB/M-CeO2/CMC/ILs complex was used as the label for the secondary anti-PSA antibody (Ab2). Meanwhile, Au-CoS/graphene served as the matrix material modifying the electrode to immobilize the primary anti-PSA antibody (Ab1). Specifically, Au-CoS/graphene produces no electrochemical signal in the DPV method but provides an obvious electrochemical signal for the catalytic reduction of hydrogen peroxide (H2O2) in the amperometric i-t curve method. During detection, with increasing concentrations of PSA, the DPV signals from TB/M-CeO2/CMC/ILs increased while the amperometric i-t curve signals from the Au-CoS/graphene decreased. Under optimum conditions, the immunosensor demonstrated remarkable analytical performance, with a linear range of 0.5 pg/mL to 50 ng/mL and a detection limit of 0.16 pg/mL for the quantitative detection of PSA (S/N = 3). Sialoglycan expression is critical for assessing the progression of various diseases; in particular, abnormal levels are commonly believed to be associated with tumors and metastatic cancer types. However, complicated structures, multiple types and dynamic distributions make it challenging to investigate sialoglycans in situ under physiological conditions. Herein, we developed a 4-mercaptophenylboronic acid (MPBA)-based surface-enhanced Raman scattering (SERS) nanosensor to study in situ the sialoglycan levels and dynamic expression processes of different cell types, based on molecular recognition between phenylboronic acid and sialoglycans under physiological conditions. 
This nanosensor consists of an MPBA-decorated silver nanoparticle (AgNP), which is unique and multifunctional because of its three-in-one role: the AgNP serves as the Raman signal enhancer, and the MPBA serves as both the sensing reporter and the target receptor, based on the recognition between phenylboronic acid and sialoglycans. When this nanosensor binds to sialoglycans, the molecular vibrational modes of MPBA change, which can be traced by the ultrasensitive SERS technique. The strength of this study is that we relate the spectral changes of MPBA (relative intensities) upon molecular recognition to the dynamic sialoglycan expression of cells. We believe that our SERS strategy could be further extended to explore crucial physiological processes and significant biological systems in which glycans are involved. Nanostructured artificial receptor materials with an unprecedented hierarchical structure for the determination of human serum albumin (HSA) are designed and fabricated. For that purpose, a new hierarchical template was prepared. This template allowed simultaneous structural control of the deposited molecularly imprinted polymer (MIP) film on three length scales. Colloidal crystal templating with optimized electrochemical polymerization of 2,3'-bithiophene enables deposition of the MIP film in the form of an inverse opal. The thickness of the deposited polymer film is precisely controlled by the number of current oscillations during potentiostatic deposition of the imprinted poly(2,3'-bithiophene) film. Prior immobilization of HSA on the colloidal crystal allows the formation of molecularly imprinted cavities exclusively on the internal surface of the pores. Furthermore, all binding sites are located on the surface of the imprinted cavities at positions corresponding to the functional groups present on the surface of HSA molecules, owing to prior derivatization of the HSA molecules with appropriate functional monomers. 
This synergistic strategy results in a material with superior recognition performance. Integration of the MIP film as a recognition unit with a sensitive extended-gate field-effect transistor (EG-FET) transducer leads to highly selective HSA determination in the femtomolar concentration range. S-nitrosylation is a posttranslational modification of protein cysteine residues leading to the formation of S-nitrosothiols, and its detection is crucial to the understanding of redox regulation and NO-based signaling. Prototypical detection methods for S-nitrosylation are always carried out ex situ. However, its reversible nature and tendency toward transnitrosylation highlight the necessity of probing it in intact, live biological contexts. Herein we provide a fluorogenic chemical probe for the detection of S-nitrosylation in live endothelial cells. The probe is weakly emissive alone and becomes highly fluorescent only after undergoing a reaction with S-nitrosothiols in the live cellular environment. This probe features a high degree of specificity and desirable sensitivity. Furthermore, it has been successfully applied to image the dynamic change of protein S-nitrosylation in live endothelial cells. The applicability of the probe in complex biological systems has been additionally verified by imaging a known target of S-nitrosylation, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), in live cells. Owing to the versatility exemplified, this probe holds great promise for exploring the role of protein S-nitrosylation in the pathophysiological processes of a variety of vascular diseases. In this research, we found that the peroxidase-like activities of noncovalent DNA-Pt hybrid nanoparticles could be markedly inhibited when the Pt nanoparticles (PtNPs) were synthesized in situ using DNA as a template. Moreover, this self-assembled synthetic process was very convenient and rapid (within a few minutes), and the DNA-mediated inhibition was also very effective. 
First, using a paper-based analytical device (PAD), we found that the catalytic activities of the DNA-Pt hybrid nanoparticles exhibited a linear response to the concentration of DNA in the range from 0.0075 to 0.25 μM. Then, with a magnetic-bead isolation system and target-DNA-induced hybridization chain reaction (HCR), we realized specific target DNA analysis with a low detection limit of 0.228 nM, and demonstrated its effectiveness in distinguishing the target DNA from other interferences. To our knowledge, this is the first report using the nanoassembly between DNA and PtNPs for the colorimetric detection of nucleic acids based on DNA-mediated inhibition of the catalytic activities of platinum nanoparticles. The results may be useful for understanding the interactions between DNA and metal nanoparticles, and for the development of other convenient and effective analytical strategies. Current diagnostic tools for Mycobacterium tuberculosis (Mtb) have many disadvantages, including low sensitivity, slow turnaround times, or high cost. Accurate, easy-to-use, and inexpensive point-of-care molecular diagnostic tests are urgently needed for the analysis of multidrug-resistant (MDR) and extensively drug-resistant (XDR) Mtb strains that are emerging globally as a public health threat. In this study, we established proof of concept for a novel diagnostic platform (TB-DzT) for Mtb detection and the identification of drug-resistant mutants using binary deoxyribozyme (BiDz) sensors. TB-DzT combines multiplex PCR with single-nucleotide polymorphism (SNP) detection using highly selective BiDz sensors targeting loci associated with species typing and resistance to rifampin, isoniazid and fluoroquinolone antibiotics. Using the TB-DzT assay, we demonstrated accurate detection of Mtb and of five mutations associated with resistance to three anti-TB drugs in clinical isolates. 
The assay also enables detection of a minority population of drug-resistant Mtb, a clinically relevant scenario referred to as heteroresistance. Additionally, we show that TB-DzT can detect the presence of unknown mutations at target loci using combinatorial BiDz sensors. This diagnostic platform provides the foundation for the development of cost-effective, accurate and sensitive alternatives for the molecular diagnostics of MDR- and XDR-TB. Here we prepared an electrochemical immunosensor employing an Au sheet as the working electrode, Fe3O4 magnetic nanoparticles (MNPs) as the supporting matrix and a hemin/G-quadruplex DNAzyme as the signal amplifier for the determination of hepatitis B virus surface antigen (HBsAg). First, the primary anti-HBs antibody (Ab1) was immobilized on the surface of the carboxyl-modified MNPs. Then, an assembly of antibody and alkylthiol/G-quadruplex DNA/hemin on gold nanoparticles was used as a bio-bar-coded nanoparticle probe. The protein target was sandwiched between the primary anti-HBs antibody (Ab1) immobilized on the MNPs and the hemin bio-barcoded AuNP probe labeled with the secondary antibody (Ab2). The hemin/G-quadruplex structure, acting as an HRP-mimicking DNAzyme, significantly improved the catalytic reduction of H2O2 with the oxidation of methylene blue (MB). Square-wave voltammetry signals of MB provided quantitative measurement of HBsAg, with a linear concentration range of 0.3-1000 pg mL^-1 and a detection limit of 0.19 pg mL^-1. Owing to the efficient catalytic activity of the HRP-mimicking DNAzyme, the proposed immunosensor exhibited high sensitivity; it holds great promise for clinical application and provides a new platform for immunosensor development and fast disease diagnosis. Label-free approaches to assessing cell properties ideally suit the requirements of cell-based therapeutics, since they permit the characterization of cells with minimal perturbation and manipulation, with the benefit of sample recovery and re-employment for treatment. 
For this reason, label-free techniques would find valuable application in adoptive T cell-based immunotherapy. In this work, we describe the label-free, single-cell detection of in vitro-activated T lymphocytes in flow using an electrical impedance-based setup. We describe a novel platform featuring 3D free-standing microelectrodes with passive upstream and downstream extensions, integrated into microfluidic channels. We employ this device to measure the impedance change associated with T cell activation at electrical frequencies that maximize the difference between non-activated and activated T cells. Finally, we harness the impedance signature of unstimulated T cells to set a boundary separating activated and non-activated clones, so as to characterize the selectivity and specificity of the system. In conclusion, the strategy proposed here highlights the possible employment of impedance to assess T cell activation in a label-free manner. We explore a graphene oxide (GO) nanosheet-functionalized dual-peak long-period grating (dLPG)-based biosensor for ultrasensitive label-free antibody-antigen immunosensing. The GO linking layer provides a remarkable analytical platform for the bioaffinity binding interface due to its favorable combination of an exceptionally high surface-to-volume ratio and excellent optical and biochemical properties. A new GO deposition technique based on chemical bonding in conjunction with physical adsorption was proposed to offer the advantages of strong bonding between GO and the fiber device surface and a homogeneous GO overlay with desirable stability, repeatability and durability. The surface morphology of the GO overlay was characterized by atomic force microscopy, scanning electron microscopy, and Raman spectroscopy. 
By depositing GO with a thickness of 49.2 nm, the refractive index (RI) sensitivity of the dLPG was increased to 2538 nm/RIU, 200% of that of the non-coated dLPG, in the low-RI region (1.333-1.347) where bioassays and biological events are usually carried out. IgG was covalently immobilized on the GO-dLPG via EDC/NHS heterobifunctional cross-linking chemistry, leaving the binding sites free for target analyte recognition. The immunosensing performance was evaluated by monitoring the kinetic bioaffinity binding between IgG and specific anti-IgG in real time. The GO-dLPG-based biosensor demonstrates ultrahigh sensitivity, with a limit of detection of 7 ng/mL, which is 10-fold better than the non-coated dLPG biosensor and 100-fold better than an LPG-based immunosensor. Moreover, the reusability of the GO-dLPG biosensor is facilitated by a simple regeneration procedure based on stripping off the bound anti-IgG. The proposed ultrasensitive biosensor can be further adapted as a biophotonic platform, opening up potential applications in food safety, environmental monitoring, clinical diagnostics and medicine. Detecting viable circulating tumor cells (CTCs) without disrupting their functions for in vitro culture and functional study could unravel the biology of metastasis and promote the development of personalized antitumor therapies. However, existing CTC detection approaches commonly involve CTC isolation and subsequent destructive identification, which damage CTC viability and function and generate substantial CTC loss. To address the challenge of efficiently detecting viable CTCs for functional study, we developed a nanosphere-based, cell-friendly, one-step strategy. Immunonanospheres with prominent magnetic/fluorescence properties and extraordinary stability in complex matrices enable simultaneous efficient magnetic capture and specific fluorescence labeling of tumor cells directly in whole blood. 
The collected cells with fluorescent tags can be reliably identified, free of the tedious and destructive manipulations of conventional CTC identification. Hence, as few as 5 tumor cells in ca. 1 mL of whole blood can be efficiently detected with only a 20 min incubation, and the strategy also shows good reproducibility, with a relative standard deviation (RSD) of 8.7%. Moreover, owing to the time-saving and gentle processing and the minimal disruption of the immunonanospheres to cells, 93.8 ± 0.1% of the detected tumor cells retain viability and proliferation ability with negligible changes in cell functions, enabling functional studies on cell migration, invasion and glucose uptake. Additionally, this strategy achieved successful CTC detection in 10/10 peripheral blood samples from cancer patients. Therefore, this nanosphere-based, cell-friendly, one-step strategy enables viable CTC detection and further functional analyses, which will help to unravel tumor metastasis and guide treatment selection. Coupling the light-harvesting capabilities of semiconductors with the catalytic power of bacteria is a promising way to increase the efficiency of bioelectrochemical systems. Here, we report the enhanced photocurrents produced by the synergy of a hematite nanowire-arrayed photoanode and bio-engineered Shewanella oneidensis MR-1 in a solar-assisted microbial photoelectrochemical system (solar MPS) under visible light. To increase the supply of bioelectrons, the D-lactate transporter SO1522 was overexpressed in the recombinant S. oneidensis (T-SO1522), which could digest D-lactate 61% faster than the wild-type S. oneidensis. Without light illumination, the addition of either the wild-type or the recombinant S. oneidensis to the system did not induce any obvious increase in the current output. However, under one-sun illumination, the photocurrent of the abiotic control was 16 ± 2 μA cm^-2 at 0.8 V vs. Ag/AgCl, and the addition of the wild-type S. 
oneidensis and the recombinant S. oneidensis increased the photocurrent to 70 ± 6 and 95 ± 8 μA cm^-2, respectively, at 0.8 V vs. Ag/AgCl. Moreover, the solar MPS with T-SO1522 presented quick and repeatable responses to the on/off illumination cycles, and had relatively stable photocurrent generation over the 273-h operation. Scanning electron microscope (SEM) images showed that the cell density on the hematite photoelectrode was similar between the recombinant and the wild-type S. oneidensis. These findings reveal the pronounced influence of metabolic rates on the light-to-electricity conversion in the complex photocatalyst-electricigen hybrid system, which is important for promoting the development of the solar MPS for electricity production and wastewater treatment. This work presents a simple, sensitive and label-free electrochemical method for the detection of microRNAs (miRNAs). It is based on the boronate-ester covalent interaction between 4-mercaptophenylboronic acid (MPBA) and the cis-diol at the 3'-terminus of miRNAs, and on the MPBA-induced in situ formation of citrate-capped silver nanoparticle (AgNP) aggregates as labels on the electrode surface. In this design, MPBA acted as the cross-linker of the AgNP assembly. Specifically, the thiolated hairpin-like DNA probe was assembled onto the gold nanoparticle (nano-Au)-modified electrode surface through the Au-S interaction. After hybridization with the target miRNAs, MPBA was anchored onto the 3'-terminus of the miRNA through the formation of a boronate ester bond and then captured an AgNP via the Ag-S interaction. Meanwhile, free MPBA molecules in solution induced the in situ assembly of AgNPs on the electrode surface through covalent interactions between the α-hydroxycarboxylate of citrate and the boronate of MPBA and through the formation of Ag-S bonds. The electrochemical signal was therefore amplified due to the formation of an AgNP network architecture. 
To demonstrate the feasibility and analytical performance of the method, miRNA-21 was determined as a model analyte. The detection limit was found to be 20 aM. The viability of our method for biological sample assays was demonstrated by measuring the miRNA-21 contents of three human serum samples. In contrast to other signal-amplified electrochemical strategies for miRNA detection, our method requires only a simple detection principle and an easy operation procedure, and it obviates the specific modification of nanoparticles and capture/detection probes. In this work, novel silver nanoclusters (AgNCs) were synthesized in situ and used as versatile electrochemiluminescence (ECL) and electrochemical (EC) signal probes for thrombin detection, using a DNAzyme-assisted target recycling and hybridization chain reaction (HCR) multiple-amplification strategy. The presence of the target thrombin first opened the hairpin DNA, followed by DNAzyme-catalyzed recycling cleavage of excess substrates, generating a large number of substrate fragments (s1). These s1 fragments were then captured by SH-DNA on the Au nanoparticle-modified electrode, which further triggered the subsequent HCR of the hairpin DNA probes (H1 and H2) to form long dsDNA. Numerous AgNCs were then synthesized in situ by incubating the dsDNA template (with cytosine-rich loops)-modified electrode in a solution of AgNO3 and sodium borohydride. By integrating the DNAzyme recycling and HCR dual amplification strategies, the amount of AgNCs is dramatically increased, leading to substantially amplified ECL and electrochemical signals for sensitive thrombin detection. Importantly, this design introduces the novel AgNCs into versatile ECL and EC bioassays via a multiple-amplification strategy; thus, it promises a highly sensitive platform for various target biomolecules. We demonstrate a flexible strain-gauge sensor and its use in a wearable application for heart rate detection. 
This polymer-based strain-gauge sensor was fabricated using a double-sided fabrication method with polymer and metal, i.e., polyimide and nickel-chrome. The fabrication process for this strain-gauge sensor is compatible with conventional flexible printed circuit board (FPCB) processes, facilitating its commercialization. The fabricated sensor showed a linear response for applied normal pressures of up to more than 930 kPa, with a minimum detectable pressure of 6.25 Pa. The sensor can also linearly detect a bending radius from 5 mm to 100 mm. It is a thin, flexible, compact, and (for mass production) inexpensive heart rate detection sensor that is highly sensitive compared with established optical photoplethysmography (PPG) sensors. It can detect not only the timing of the heart pulsation, but also the amplitude and shape of the pulse signal. The proposed strain-gauge sensor is applicable to various smart-device applications requiring heartbeat detection. The early diagnosis of pathogenic bacteria is significant for bacterial identification and antibiotic resistance. Offering rapid, sensitive, and specific detection, molecular diagnosis has been considered complementary to conventional bacterial culture. Composite microparticles of a primer-immobilized network (cPIN) are developed for the multiplex detection of pathogenic bacteria with real-time polymerase chain reaction (qPCR). A pair of specific primers is incorporated and stably conserved in each cPIN particle. One primer is crosslinked to the polymer network, and the other is bound to carbon nanotubes (CNTs) in the particle. At the initiation of qPCR, the latter primer is released from the CNTs and participates in the amplification. The amplification efficiency of this cPIN qPCR is estimated at more than 90%, with suppressed non-specific signals from complex samples. In multiplexing, four infective pathogens are successfully discriminated using this cPIN qPCR. 
The multiplex qPCR conforms to the corresponding singleplex assays, proving independent amplification in each particle. Four bacterial targets from clinical samples are differentially analyzed in 30 min in a single qPCR trial with multiple cPIN particles. The severe background fluorescence and scattered light of real biological or environmental samples largely reduce the sensitivity and accuracy of fluorescence resonance energy transfer sensors based on fluorescent quantum dots (QDs). To solve this problem, we designed a novel target-sequence DNA biosensor based on phosphorescence resonance energy transfer (PRET). This sensor relied on a Mn-doped ZnS (Mn-ZnS) room-temperature phosphorescence (RTP) QDs/poly(diallyldimethylammonium chloride) (PDADMAC) nanocomposite (QDs(+)) as the energy donor and single-strand DNA-ROX as the energy acceptor. Thereby, an RTP biosensor was built and used to quantitatively detect target-sequence DNA. This biosensor had a detection limit of 0.16 nM and a linear range of 0.5-20 nM for target-sequence DNA. The dependence on the RTP of the QDs effectively avoided interference from background fluorescence and scattered light in biological samples. Moreover, this sensor did not need sample pretreatment. Thus, compared with FRET sensors, this sensor is more feasible for quantitative detection of target-sequence DNA in biological samples. Interestingly, the QDs(+) nanocomposite prolonged the phosphorescence lifetime of the Mn-ZnS QDs by 2.6 times, to 4.94 ms, which is 5-6 orders of magnitude longer than that of fluorescent QDs. Thus, this sensor largely improves the optical properties of QDs and permits chemical reactions on a sufficiently long time scale. Alkaline phosphatase (ALP), an essential enzyme, plays an important role in clinical diagnosis and biomedical research. Hence, the development of a convenient and sensitive assay for monitoring ALP is extremely important. 
In this work, on the basis of a chemical redox strategy to modulate the fluorescence of nitrogen-doped graphene quantum dots (NGQDs), a novel label-free fluorescent sensing system for the detection of alkaline phosphatase (ALP) activity has been developed. The fluorescence of the NGQDs is first quenched by ultrathin cobalt oxyhydroxide (CoOOH) nanosheets and then restored by ascorbic acid (AA), which can reduce CoOOH to Co2+; ALP can thus be monitored via the enzymatic hydrolysis of L-ascorbic acid-2-phosphate (AAP) by ALP to generate AA. Quantitative evaluation of ALP activity over a range from 0.1 to 5 U/L, with a detection limit of 0.07 U/L, can be realized with this sensing system. Endowed with high sensitivity and selectivity, the proposed assay is capable of detecting ALP in biological systems with satisfactory results. Meanwhile, this sensing system can be easily extended to the detection of various AA-involved analytes. Drug-induced organ damage has been considered a grave public-health problem; hence, an effective method for in vivo detection of drug-induced organ damage is of great significance. Herein we developed a ratiometric fluorescent nanoprobe (NPs-A), prepared by loading the probe molecules into a phospholipid bilayer, for assaying the level of hydrogen peroxide (H2O2, an organ damage biomarker) in vivo. The photophysical behavior of the probe molecule depends on the electron-withdrawing ability of the group at the 6-position of the anthracene ring, onto which the recognition moiety for hydrogen peroxide (a dicarbonyl coupled with nitrophenyl, referred to as nitrophenyl-dicarbonyl) was introduced. 
Upon the reaction of the probe with H2O2, the nitrophenyl-dicarbonyl group transforms into a carboxyl group, and owing to the change in the electron-withdrawing ability of the 6-position substituent, the fluorescent properties of the probe molecule alter accordingly, enabling ratiometric detection of H2O2 with high selectivity and a detection limit of 0.49 μM. In addition, the nanoprobe (NPs-A) was applied to cell and in vivo imaging; the results indicate that it can detect and track the level of H2O2 in living cells and can monitor and spatially map endogenous H2O2 levels in a drug-induced organ damage model of zebrafish. The accurate and highly sensitive detection of prostate specific antigen (PSA) is particularly important, especially for obese men and patients. In this report, we present a novel aptamer-based surface-enhanced Raman scattering (SERS) sensor that employs magnetic nanoparticle (MNP) core-Au nanoparticle (AuNP) satellite assemblies to detect PSA. The highly specific biorecognition between the aptamer and PSA caused the dissolution of the core-satellite assemblies, so the concentration of functionalized AuNPs (signal probes) in the supernatant rose with the continued addition of PSA. The aptamer-modified MNPs were used as supporting materials and separation tools in the present sensor. With the assistance of a magnet, the mixture was removed from the supernatant for the concentration effect. It was found that the corresponding SERS signals from the supernatant were in direct correlation with PSA concentration over a wide range, and the limit of detection (LOD) was as low as 5.0 pg/mL. Excellent recovery was also obtained in assessing the feasibility of this method for detection in human serum samples. All of these results show a promising application of this method, and this novel sensor can be used for the accurate and highly sensitive detection of PSA in clinical samples in the future. 
Carbon quantum dots (CQDs) obtained from natural organics attract significant attention due to the abundance of carbon sources, the variety of heteroatom doping (such as N, S, P), and the good biocompatibility of the precursors. In this study, CQDs with tunable fluorescence emission derived from chlorophyll were synthesized and characterized. The fluorescence emission can be effectively quenched by gold nanoparticles (Au NPs) via fluorescence resonance energy transfer (FRET). Thiocholine, produced from acetylthiocholine (ATC) by butyrylcholinesterase (BChE)-catalyzed hydrolysis, could cause the aggregation of Au NPs and the corresponding recovery of the FRET-quenched fluorescence emission. The catalytic activity of BChE could be irreversibly inhibited by organophosphorus pesticides (OPs); thus, the recovery effect was reduced. By evaluating the fluorescence emission intensity of the CQDs, a FRET-based sensing platform for OP determination was established. Paraoxon was studied as an example of an OP. The sensing platform displayed a linear relationship with the logarithm of the paraoxon concentration in the range of 0.05-50 μg L(-1), and the limit of detection (LOD) was 0.05 μg L(-1). Real-sample studies in tap and river water revealed that this sensing platform was repeatable and accurate. The results indicate that the OP sensor is promising for applications in food safety and environmental monitoring. InGaN/GaN nanowire heterostructures are presented as nanophotonic probes for the light-triggered photoelectrochemical detection of NADH. We demonstrate that photogenerated electron-hole pairs give rise to a stable anodic photocurrent whose potential and pH dependences exhibit broad applicability. In addition, the simultaneous measurement of the photoluminescence provides an additional tool for the analysis and evaluation of light-triggered reaction processes at the nanostructured interface. 
InGaN/GaN nanowire ensembles can be excited over a wide wavelength range, so interference with the photoelectrochemical response by the absorption properties of the compounds to be analyzed can be avoided by adjusting the excitation wavelength. The photocurrent of the nanostructures shows an NADH-dependent magnitude. The anodic current increases with rising analyte concentration in a range from 5 μM to 10 mM, at a comparatively low potential of 0 mV vs. Ag/AgCl. Here, the InGaN/GaN nanowires reach high sensitivities of up to 91 μA mM(-1) cm(-2) (in the linear range) and provide good reusability for repetitive NADH detection. These results demonstrate the potential of InGaN/GaN nanowire heterostructures for the defined conversion of this analyte, paving the way for the realization of light-switchable sensors for the analyte, or of biosensors by combination with NADH-producing enzymes. Single-nucleotide mutation (SNM) has proven to be associated with a variety of human diseases. The development of reliable methods for the detection of SNM is crucial for molecular diagnosis and personalized medicine. Sandwich assays are widely used tools for detecting nucleic acid biomarkers due to their low cost and rapid signaling. However, the poor hybridization specificity of the signal probe at room temperature hampers the discrimination of mutant and wild type. Here, we demonstrate a dynamic sandwich assay on magnetic beads for SNM detection based on the transient binding between signal probe and target. By taking advantage of the mismatch-sensitive thermodynamics of transient DNA binding, the dynamic sandwich assay exhibits a high discrimination factor for mutants over a broad range of salt concentrations at room temperature. The beads used in this assay serve as a separation tool and might help to enhance SNM selectivity. 
Flexible design of the signal probe and facile magnetic separation allow multiple modes of downstream analysis, including colorimetric detection and isothermal amplification. With this method, BRAF mutations in genomic DNA extracted from cancer cell lines were tested, allowing sensitive detection of SNM at very low abundances (0.1-0.5% mutant/wild type). A variety of electrical activities occur depending on the functional state in each section of the gut, but the application of microelectrode arrays (MEAs) has been rather limited. We thus developed a dialysis membrane-enforced technique to investigate the diverse and complex spatio-temporal electrical activity in the gut. Muscle sheets isolated from the gastrointestinal (GI) tract of mice were, along with a piece of dialysis membrane, woven over and under strings to fix them to an anchor rig, and mounted on an 8x8 MEA (inter-electrode distance = 150 μm). Small molecules (molecular weight < 12,000) were exchanged through the membrane, maintaining a physiological environment. A low-impedance MEA was used to measure electrical signals over a wide frequency range. We demonstrated the following examples: 1) pacemaker activity-like potentials accompanied by bursting spike-like potentials in the ileum; 2) electrotonic potentials reflecting local neurotransmission in the ileum; 3) myoelectric complex-like potentials consisting of slow and rapid oscillations accompanied by spike potentials in the colon. Despite their limited spatial resolution, these recordings detected transient electrical activities that optical probes follow only with difficulty. In addition, propagation of pacemaker-like potentials was visualized in the stomach and ileum. These results indicate that the dialysis membrane-enforced technique largely extends the application of MEAs, probably owing to stabilisation of the access resistance between each sensing electrode and the reference electrode and improved electrical separation between sensing electrodes. 
We anticipate that this technique will be utilized to characterise spatio-temporal electrical activities in the gut in health and disease. This work reports a novel optical microfluidic biosensor with highly sensitive organic photodetectors (OPDs) for absorbance-based detection of salivary protein biomarkers at the point of care. The compact and miniaturized biosensor comprises OPDs with a polythiophene-C-70 bulk heterojunction as the photoactive layer; a calcium-free cathode interfacial layer made of linear polyethylenimine was incorporated into the photodetectors to keep the cost low. The OPDs, realized on a glass chip, were aligned to antibody-functionalized chambers of a poly(methyl methacrylate) microfluidic chip, where immunogold-silver assays were conducted. The biosensor detected IL-8, IL-1 beta, and MMP-8 protein in spiked saliva with high detection specificity and short analysis time, exhibiting detection limits between 80 pg mL(-1) and 120 pg mL(-1). The result for IL-8 was below the clinically established cut-off of 600 pg mL(-1), which reveals the potential of the biosensor for early detection of oral cancer. The detection limits were also comparable to those of previously reported immunosensors using bulky instrumentation or inorganic photodetectors. The optical detection sensitivity of the polythiophene-C-70 OPD was enhanced by optimizing the thicknesses of the photoactive layer and the anode interfacial layer prior to the saliva immunoassays. Further, the biosensor was tested with unspiked human saliva samples, and the results of measuring IL-8 and IL-1 beta were in statistical agreement with those provided by two commercial ELISA assays. The optical microfluidic biosensor reported herein offers an attractive and cost-effective tool for diagnostic or screening purposes at the point of care. 
A new type of sensing protocol, based on the high-precision metrology of quantum weak measurement, was proposed for the first time for a molecularly imprinted polymer (MIP) sensor. The feasibility, sensitivity, and selectivity of the weak measurement-based MIP (WMMIP) sensor were experimentally demonstrated with bovine serum albumin (BSA). The weak measurement system exhibits high sensitivity to the optical phase shift corresponding to the refractive index change induced by the specific capture of target protein molecules at the recognition sites. The recognition process can then be characterized by the central wavelength shift of the output spectra through weak value amplification. In our experiment, we prepared BSA@MIP with a modified reversed-phase microemulsion method and coated it on the internal surface of the measuring channels assembled into a Mach-Zehnder (MZ) interferometer-based optical weak measurement system. The design of this home-built optical system makes it possible to detect the analyte in real time. The dynamic process of specific adsorption and the concentration response to BSA from 5x10(-4) to 5x10(-1) μg/L were recorded, with a limit of detection (LOD) of 8.01x10(-12) g/L. This WMMIP shows superiority in accuracy, fast response, and low cost. Furthermore, the real-time monitoring system can greatly promote the performance of MIPs in molecular analysis. This paper introduces a new and simple concept for fabricating low-cost, easy-to-use capillary microchannel (CMC)-assisted thread-based microfluidic analytical devices (CMCA-μTADs) for bipolar electrochemiluminescence (BP-ECL) applications. A thread with patterns of carbon screen-printed electrodes and bare thread zones (BTZs) is embedded into a CMC. Such CMCA-μTADs produce a strong and stable BP-ECL signal and have an extremely low cost ($0.01 per device). 
Interestingly, the CMCA-μTADs are ultraflexible and can be bent to a 135° bending angle at the BTZ or a 150° bending angle at the middle of the bipolar electrode (BPE) with no loss of analytical performance. Additionally, the two commonly used ECL systems, Ru(bpy)(3)(2+)/TPA and luminol/H2O2, are applied to demonstrate the quantitative capability of the BP-ECL CMCA-μTADs. It is shown that the proposed devices successfully accomplish the detection of TPA and H2O2, with detection limits of 0.00432 mM and 0.00603 mM, respectively. Based on the luminol/H2O2 ECL system, the CMCA-μTADs are further applied to glucose measurement, with a detection limit of 0.0205 mM. Finally, the applicability and validity of the CMCA-μTADs are demonstrated by measuring H2O2 in milk, and glucose in human urine and serum. The results indicate that the proposed devices have the potential to become an important new tool for a wide range of applications. The traditional microbial fuel cell (MFC) sensor with a bioanode as the sensing element delivers limited sensitivity for toxicity monitoring, is restricted to anaerobic and organic-rich water bodies, and carries an increased risk of faulty warnings under a combined shock of organic matter and toxicity. In this study, a biocathode for the oxygen reduction reaction was employed for the first time as the sensing element in an MFC sensor for toxicity monitoring. The results showed that the sensitivity of the MFC sensor with the biocathode sensing element (7.4 +/- 2.0 to 67.5 +/- 4.0 mA %(-1) cm(-2)) was much greater than that with the bioanode sensing element (3.4 +/- 1.5 to 5.5 +/- 0.7 mA %(-1) cm(-2)). The biocathode sensing element achieved the lowest detection limit reported to date for an MFC sensor for formaldehyde detection (0.0005%), while the bioanode was more applicable at higher concentrations ( > 0.0025%). The biocathode sensing element also responded more quickly with increasing conductivity and dissolved oxygen (DO). 
The biocathode sensing element enabled the MFC sensor to be applied directly to clean-water monitoring, e.g., of drinking water and reclaimed water, without amendment with background organic matter, and it also decreased warning failures when challenged by a combined shock of organic matter and toxicity. Specific peptide aptamers can be used in place of expensive antibody proteins, and they are gaining importance as sensing probes due to their potential in the development of non-immunological assays with high sensitivity, affinity, and specificity for human chorionic gonadotropin (hCG) protein. We combined graphene oxide (GO) sheets with a specific peptide aptamer to create a novel, simple, and label-free tool to detect abnormalities at an early stage of pregnancy: a GO-peptide-based surface plasmon resonance (SPR) biosensor. This is the first binding-interface experiment to successfully demonstrate binding specificity in the kinetic analysis of peptide aptamers and GO sheets. In addition to the improved affinity offered by the high compatibility with the target hCG protein, the major advantages of GO-peptide-based SPR sensors were their reduced nonspecific adsorption and enhanced sensitivity. The calculated total electric field intensity (Delta E) at the GO-based sensing interface was enhanced by up to 1.2 times that of a conventional SPR chip. The GO-peptide-based chip (1 mM) had a high affinity (K-A) of 6.37x10(12) M(-1), a limit of detection of 0.065 nM, and an ultra-high sensitivity of 16 times that of a conventional SPR chip. The sensitivity, as the slope ratio of the low-concentration hCG protein assay in linear regression analysis, was GO-peptide (1 mM) : GO-peptide (0.1 mM) : conventional chip (8-mercaptooctanoic acid)-peptide (0.1 mM) = 8.6 : 3.3 : 1. 
In summary, the excellent binding affinity, low detection limit, high sensitivity, good stability, and specificity suggest the potential of this GO-peptide-based SPR chip detection method for clinical application. The development of real-time whole-blood analytic and diagnostic tools to detect abnormalities at an early stage of pregnancy is a promising technique for future clinical application. A new core-shell nanostructured composite composed of an Fe(III)-based metal organic framework (Fe-MOF) and mesoporous Fe3O4@C nanocapsules (denoted Fe-MOF@mFe(3)O(4)@mC) was synthesized and developed as a platform for determining trace heavy metal ions in aqueous solution. Herein, the mFe(3)O(4)@mC nanocapsules were prepared by calcining hollow Fe3O4@C obtained using SiO2 nanoparticles as the template, followed by compositing with the Fe-MOF. The Fe-MOF@mFe(3)O(4)@mC nanocomposite demonstrated excellent electrochemical activity, water stability, and a high specific surface area, consequently resulting in strong biobinding with heavy-metal-ion-targeted aptamer strands. Furthermore, exploiting the conformational transition caused by the formation of a G-quadruplex between a single-stranded aptamer and a high adsorbed amount of heavy metal ions, the developed aptasensor exhibited a good linear relationship with the logarithm of the heavy metal ion (Pb2+ and As3+) concentration over the broad range from 0.01 to 10.0 nM. The detection limits were estimated to be 2.27 and 6.73 pM for Pb2+ and As3+, respectively. The proposed aptasensor showed good regenerability, excellent selectivity, and acceptable reproducibility, suggesting promising applications in environmental monitoring and biomedical fields. An efficient electrochemical impedance genosensing platform has been constructed based on a graphene/zinc oxide nanocomposite produced via a facile and green approach. 
Highly pristine graphene was synthesised from graphite through liquid-phase sonication and then mixed with zinc acetate hexahydrate for the synthesis of the graphene/zinc oxide nanocomposite by solvothermal growth. The as-synthesised graphene/zinc oxide nanocomposite was characterised with scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman spectroscopy, and X-ray diffractometry (XRD) to evaluate its morphology, crystallinity, composition, and purity. An amino-modified single-stranded DNA oligonucleotide probe, synthesised based on the complementary Coconut Cadang-Cadang Viroid (CCCVd) RNA sequence, was covalently bonded onto the surface of the graphene/zinc oxide nanocomposite by the bio-linker 1-pyrenebutyric acid N-hydroxysuccinimide ester. The hybridisation events were monitored by electrochemical impedance spectroscopy (EIS). Under optimised sensing conditions, the single-stranded CCCVd RNA oligonucleotide target could be quantified over a wide range of 1.0x10(-11) M to 1.0x10(-6) M with good linearity (R = 0.9927) and high sensitivity, with a low detection limit of 4.3x10(-12) M. Differential pulse voltammetry (DPV) was also performed to estimate the nucleic acid density on the graphene/zinc oxide nanocomposite-modified sensing platform. The current work demonstrates an important advancement towards the development of sensitive detection assays for various diseases involving RNA agents such as CCCVd in the future. There is a pressing need for determination of aflatoxin B-1 (AFB(1)) in food products to avoid the distribution and consumption of contaminated food products. In this study, an accurate electrochemical sensing strategy is presented for the detection of AFB(1) based on an aptamer (Apt)-complementary strands of aptamer (CSs) complex, which forms a pi-shape structure on the surface of the electrode, and exonuclease I (Exo I). The presence of the pi-shape structure as a double-layer physical barrier allowed detection of AFB(1) with high sensitivity. 
In the absence of AFB(1), the pi-shape structure remained intact, so only a weak peak current was recorded. Upon the addition of AFB(1), the pi-shape structure was disassembled, and a strong current was recorded following the addition of Exo I. Under optimal conditions, the electrochemical signal increased with AFB(1) concentration over a dynamic range of 7-500 pg/mL, with a limit of detection (LOD) of 2 pg/mL. The developed aptasensor was also used to analyze AFB(1)-spiked human serum and grape juice samples, and the recoveries were 95.4-108.1%. The widespread presence of cadmium in soil and water systems is a consequence of industrial and agricultural processes. Subsequent accumulation of cadmium in food and drinking water can result in accidental consumption of dangerous concentrations. As such, cadmium environmental contamination poses a significant threat to human health. Development of microbial biosensors, as a novel alternative method for in situ cadmium detection, may reduce human exposure by complementing traditional analytical methods. In this study, a multiplex cadmium biosensing construct was assembled by cloning a single-output cadmium biosensor element, cadRgfp, and a constitutively expressed mrfp1 onto a broad-host-range vector. Incorporation of the duplex fluorescent output (green and red fluorescent proteins) allowed measurement of biosensor functionality and viability. The biosensor construct was tested in several Gram-negative bacteria, including Pseudomonas, Shewanella and Enterobacter. The multiplex cadmium biosensors were responsive to cadmium concentrations ranging from 0.01 to 10 μg ml(-1), as well as to several other heavy metals, including arsenic, mercury and lead at similar concentrations. The biosensors responded within 20-40 min following exposure to 3 μg ml(-1) cadmium. This study highlights the importance of testing biosensor constructs, developed using synthetic biology principles, in different bacterial genera. 
In this work, a novel time-resolved ratiometric fluorescent probe based on dual lanthanide (Tb: terbium, and Eu: europium)-doped complexes (Tb/DPA@SiO2-Eu/GMP) has been designed for detecting an anthrax biomarker (dipicolinic acid, DPA), a unique and major component of anthrax spores. In this complexes-based probe, Tb/DPA@SiO2 serves as a stable reference signal with green fluorescence and Eu/GMP acts as a sensitive response signal with red fluorescence for ratiometric fluorescent sensing of DPA. Additionally, the probe exhibits a long fluorescence lifetime, which can significantly reduce autofluorescence interference from biological samples when time-resolved fluorescence measurement is used. More significantly, a paper-based visual sensor for DPA has been devised using filter paper embedded with Tb/DPA@SiO2-Eu/GMP, and we have proved its utility for fluorescent detection of DPA with only a handheld UV lamp. In the presence of DPA, the paper-based visual sensor, illuminated by a handheld UV lamp, shows an obvious fluorescence color change from green to red, which can easily be observed with the naked eye. The paper-based visual sensor is stable, portable, disposable, cost-effective, and easy to use. The feasibility of using a smartphone with an easy-to-access color-scanning app as the detection platform for quantitative scanometric assays has also been demonstrated by coupling it with our proposed paper-based visual sensor. This work unveils an effective method for accurate, sensitive, and selective monitoring of the anthrax biomarker with background-free and self-calibrating properties. Bovine serum albumin (BSA) was implemented for the first time as an effective sensitivity enhancer for a peptide-based amperometric biosensor for the ultrasensitive detection of prostate specific antigen (PSA). 
A porous and conductive substrate of chitosan-lead ferrocyanide-(poly(diallyldimethylammonium chloride)-graphene oxide) was generated in situ on a glassy carbon electrode (GCE), in which Pb2[Fe(CN)(6)] served as a novel redox species with a strong current signal at -0.46 V (vs. Ag/AgCl). Poly(diallyldimethylammonium chloride)-graphene oxide was applied to improve the conductivity of the substrate. After adsorbing Pb2+ for signal amplification, chitosan provided active sites to simultaneously immobilize peptides and 1-aminopropyl-3-methylimidazolium chloride via glutaraldehyde. To enhance the sensitivity, BSA was chemically linked to the immobilized peptide, producing a marked decrease in the current signal because BSA hinders electron transfer. A dramatic increase in the current signal of the biosensor was then obtained when PSA cleaved the immobilized BSA-peptide. The proposed biosensor exhibited a detection limit of 1 fg mL(-1) for PSA, and its sensitivity was seven-fold higher than that of previous works. Extracellular vesicles (EVs) are abundant in various biological fluids, including blood, saliva, and urine, as well as in the extracellular milieu. Accumulating evidence has indicated that EVs, which contain functional proteins and small RNAs, facilitate intercellular communication between neighbouring cells and are critical to maintaining various physiological processes. In contrast, EV-derived toxic signals can spread over the tissues adjacent to the injured area in certain diseases, including brain tumors and neurodegenerative disorders. This demands better characterization of EVs, which can be employed clinically for liquid biopsy as well as for the study of intercellular signalling. Exosomes and microvesicles (MVs) share a number of similar characteristics, but it is important to distinguish between these two types of EVs. 
Here, we report for the first time that our in-house developed localized surface plasmon resonance (LSPR) biosensor with self-assembled gold nanoislands (SAM-AuNIs) can be used to detect and distinguish exosomes from MVs isolated from A-549 cells, SH-SY5Y cells, and blood serum and urine from a lung cancer mouse model. Exosomes, compared with MVs, produced a distinguishable response on the bare LSPR biosensor without functionalization, suggesting a different biophysical interaction of exosomes and MVs with the SAM-AuNIs. The sensor attains a limit of detection of 0.194 μg/ml, and the linear dynamic range covers 0.194-100 μg/ml. This discovery not only gives great insight into the distinctive membrane properties of tumor-derived exosomes and MVs, but also facilitates the development of novel LSPR biosensors for direct detection and isolation of heterogeneous EVs. Despite all the efforts made over the years to study cancer expression and the metastasis event, there is no clear understanding of its origins or effective treatment. Therefore, more specialized and rapid techniques are required for studying cell behaviour under different drug-based treatments. Here we present a quantum dot signalling-based cell assay carried out in a segmental microfluidic device that allows studying the effect of anticancer drugs on cultured cell lines by monitoring the phosphatidylserine translocation that occurs in early apoptosis. The developed platform combines the automatic generation of a drug concentration gradient, allowing exposure of cancer cells to different doses, with immunolabeling of the apoptotic cells using quantum dot reporters. Thereby a complete cell-based assay for efficient drug screening is performed, showing a clear correlation between drug dose and the number of cells undergoing apoptosis. 
A conventional test strip usually has only one electrochemical reaction channel, which requires two finger punctures for the self-management of patients suffering from both diabetes and gout. Considering the large number of such patients, and to reduce their pain, we report an enzymatic test strip which can simultaneously monitor glucose and uric acid (UA) with only one fingertip blood droplet. The proposed test strip is composed of dual channels: the glucose in blood is detected in the first channel on the top of the substrate, and the UA is characterized in the second channel located at the bottom of the substrate. The proposed design closely matches the requirements of patients simultaneously suffering from diabetes and gout. We carried out comparative investigations of the proposed test strip and a clinical biochemical analyser, which indicated good agreement and proved the reliability and accuracy of the proposed test strip, a promising solution for the fast-growing family health management market. Accurate and sensitive quantification of a specific class of mycotoxins at trace levels in complex matrices with greener approaches is of significant importance. In this study, a green and economical protocol for a magnetic microsphere-based cytometric bead array (CBA) assay on the indirect competitive principle was developed for sensitive and rapid detection of ochratoxin A (OTA) in malts with a small number of standard and sample solutions. The protocol included the competition between OTA in malt samples and OTA covalently coupled on the surface of the microspheres for its monoclonal antibodies, the separation and aggregation of the magnetic microspheres, and the fluorescence detection of fluorescein isothiocyanate-labeled goat anti-mouse immunoglobulin G probes. 
The magnetic microsphere-based CBA assay achieved an ultralow limit of detection (0.025 μg kg(-1)) for OTA and showed higher sensitivity compared with the common polystyrene bead-based CBA method. This is the first report of a magnetic microsphere-based CBA assay using a simple and easy-to-operate magnetic separator for highly sensitive and rapid detection of OTA in complex malt samples. By consuming less solvent, time, and cost, as well as fewer standards and samples, the developed green protocol shows high potential for on-site real-time detection of trace components in complex matrices. A virulent phage named PaP1 was isolated from hospital sewage based on a lambda phage isolation protocol. This phage showed a strong and highly specific binding ability to Pseudomonas aeruginosa (P. aeruginosa). Using this isolated phage as a recognition agent, a novel electrochemiluminescent (ECL) biosensor was developed for label-free detection of P. aeruginosa. The biosensor was fabricated by depositing phage-conjugated carboxyl graphene onto the surface of a glassy carbon electrode. After specific binding of the host bacteria through the adsorption of the P. aeruginosa cell wall by the phage tail fibers and baseplate, the ECL signal of luminol decreased, since the non-conductive biocomplex formed obstructed the interfacial electron transfer and blocked the diffusion of the ECL-active molecules. The ECL emission declined linearly with P. aeruginosa concentration in the range of 1.4x10(2)-1.4x10(6) CFU mL(-1), with a very low detection limit of 56 CFU mL(-1). The whole detection process could be completed within 30 min when a ready-for-use biosensor was adopted. This biosensor was successfully applied to quantitate P. aeruginosa in milk, glucose injection, and human urine with acceptable recovery values ranging from 78.6% to 114.3%. The development of a versatile microbiosensor for hydrogen detection is reported. 
Carbon-based microelectrodes were modified with a [NiFe]-hydrogenase embedded in a viologen-modified redox hydrogel for the fabrication of a sensitive hydrogen biosensor. By integrating the microbiosensor in a scanning photoelectrochemical microscope, it was capable of serving simultaneously as a local light source to initiate photo(bio)electrochemical reactions while acting as a sensitive biosensor for the detection of hydrogen. A hydrogen evolution biocatalyst based on photosystem 1-platinum nanoparticle biocomplexes embedded into a specifically designed redox polymer was used as a model for proving the capability of the developed hydrogen biosensor for the detection of hydrogen upon localized illumination. The versatility and sensitivity of the proposed microbiosensor as a probe tip allow simplification of the set-up used for the evaluation of complex electrochemical processes and the rapid investigation of the local photoelectrocatalytic activity of biocatalysts towards light-induced hydrogen evolution. Electrochemical sensing is moving to the forefront of point-of-care and wearable molecular sensing technologies due to the ability to miniaturize the required equipment, a critical advantage over optical methods in this field. Electrochemical sensors that employ roughness to increase their microscopic surface area offer a strategy for combatting the loss in signal associated with the loss of macroscopic surface area upon miniaturization. A simple, low-cost method of creating such roughness has emerged with the development of shrink-induced high-surface-area electrodes. Building on this approach, we demonstrate here a greater than 12-fold enhancement in electrochemically active surface area over conventional electrodes of equivalent on-chip footprint areas. This two-fold improvement over previous performance is obtained via the creation of a superwetting surface condition facilitated by a dissolvable polymer coating. 
As a test bed to illustrate the utility of this approach, we further show that electrochemical aptamer-based sensors exhibit exceptional signal strength (signal-to-noise) and excellent signal gain (relative change in signal upon target binding) when deployed on these shrink electrodes. Indeed, the 330% gain we observe for a kanamycin sensor is 2-fold greater than that seen on planar gold electrodes. DNA repair processes are responsible for maintaining genome stability. Ligase and polynucleotide kinase (PNK) have important roles in ligase-mediated DNA repair. The development of analytical methods to monitor these enzymes involved in DNA repair pathways is of great interest in biochemistry and biotechnology. In this work, we report a new strategy for label-free monitoring of PNK and ligase activity using dumbbell-shaped DNA-templated copper nanoparticles (CuNPs). In the presence of PNK and ligase, the dumbbell-shaped DNA probe (DP) was locked, could resist digestion by exonucleases, and then served as an efficient template for synthesizing fluorescent CuNPs. However, in the absence of ligase or PNK, the nicked DP could be digested by exonucleases and failed to template fluorescent CuNPs. Therefore, the fluorescence changes of the CuNPs could be used to evaluate the activity of these enzymes. Under the optimal conditions, highly sensitive detection of ligase activity down to about 1 U/mL and PNK activity down to 0.05 U/mL is achieved. To challenge the practical applicability of this strategy, the detection of analytes in diluted cell extracts was also investigated and showed similar linear relationships. In addition to ligase and PNK, this sensing strategy was also extended to the detection of phosphatase, which illustrates its versatility. Herein, an environmentally friendly paper-based biobattery is demonstrated that yields a power of 12.5 W/m(3). 
Whatman filter papers were used not only as the support for electrode fabrication but also as the separator of the biobattery. To provide electrical conductivity to the paper-based cathode and anode, commercially available eyeliner containing carbon nanoparticles and Fe3O4 was directly employed as a conductive ink without any binder. With an instant start-up, the as-fabricated biocompatible electrodes could hold bacteria in an active form at the anode, allowing chemical oxidation of the organic fuel to produce current. The facile process delineated here can be employed for the tailored electrode fabrication of various flexible energy-harnessing devices. Since HCV and HIV share a common transmission path, highly sensitive detection of HIV and HCV genes is of significant importance for improving diagnostic accuracy and cure rates at an early stage for HIV-infected patients. In our investigation, a novel nanozyme-based bio-barcode fluorescence amplified assay is successfully developed for simultaneous detection of HIV and HCV DNAs with excellent sensitivity under enzyme-free and label-free conditions. Here, bimetallic nanoparticles, PtAuNPs, present outstanding peroxidase-like activity and act as the barcode to catalyze oxidation of the nonfluorescent substrate amplex red (AR) into fluorescent resorufin, generating a stable and sensitive "turn-on" fluorescent output signal; this is the first time such a signal has been integrated with the bio-barcode strategy for fluorescence DNA detection. Interestingly, a cascaded INHIBIT-OR logic gate is integrated with the biosensor for the first time to distinguish the individual target DNAs from each other under logic-function control, which holds great promise for the development of rapid and intelligent detection. The homeostasis of lysosomal pH is crucial in cell physiology. 
Developing small fluorescent nanosensors for lysosome imaging and ratiometric measurement of pH is in high demand yet challenging. Herein, a pH-sensitive fluorescein-tagged aptamer AS1411 has been utilized to covalently modify label-free fluorescent silicon nanodots via a crosslinker for the construction of a ratiometric pH biosensor. The established aptasensor exhibits the advantages of ultrasmall size, hypotoxicity, excellent pH reversibility and good photostability, which favor its application in an intracellular environment. Using human breast MCF-7 cancer cells and MCF-10A normal cells as the model, this aptasensor shows cell specificity for cancer cells and displays a wide pH response range of 4.5-8.0 in living cells. The results demonstrate that the lysosomal pH of MCF-7 cells is 5.1, which is the expected value for acidic organelles. Lysosome imaging and accurate measurement of pH in MCF-7 cells have been successfully conducted based on this nanosensor via fluorescence microscopy and flow cytometry. This study presents efficient acoustic and hybrid three-dimensional (3D)-printed electrochemical biosensors for the detection of liver cancer cells. The biosensors function by recognizing the highly expressed tumor marker CD133, which is located on the surface of liver cancer cells. Detection was achieved by recrystallizing a recombinant S-layer fusion protein (rSbpA/ZZ) on the surface of the sensors. The fused ZZ-domain enables immobilization of the anti-CD133 antibody in a defined manner. These highly accessible anti-CD133 antibodies were employed as a sensing layer, thereby enabling the efficient detection of liver cancer cells (HepG2). The recognition of HepG2 cells was investigated in situ using a quartz crystal microbalance with dissipation monitoring (QCM-D), which enabled the label-free, real-time detection of living cells on the modified sensor surface under controlled conditions. 
Furthermore, the hybrid 3D additive printing strategy for biosensors facilitates both rapid development and small-scale manufacturing. The hybrid strategy of combining 3D-printed parts and more traditionally fabricated parts enables the use of optimal materials: a ceramic substrate with noble metals for the sensing element and 3D-printed capillary channels to guide and constrain the clinical sample. Cyclic voltammetry (CV) measurements confirmed the efficiency of the fabricated sensors. Most importantly, these sensors offer low-cost and disposable detection platforms for real-world applications. Thus, as demonstrated in this study, both fabricated acoustic and electrochemical sensing platforms can detect cancer cells and therefore may have further potential in other clinical applications and drug-screening studies. In this study, a green and fast method was developed to synthesize high-yield carbon dots (CDs) via one-pot microwave treatment of banana peels without using any other surface passivation agents. The as-prepared CDs were then used as the reducing agent and stabilizer to synthesize a Pd-Au@CDs nanocomposite by a simple sequential reduction strategy. Finally, a Pd-Au@CDs nanocomposite modified glassy carbon electrode (Pd-Au@CDs/GCE) was obtained as a biosensor for target DNA after a single-stranded probe DNA was immobilized via a carboxyl-amino condensation reaction. Under the optimal conditions, the sensor could detect target DNA concentrations in the range from 5.0x10(-16) to 1.0x10(-10) mol L-1. The detection limit (LOD) was estimated to be 1.82x10(-17) mol L-1, which showed higher sensitivity than other reported electrochemical biosensors. In addition, the DNA sensor was also successfully applied to detect colitoxin DNA in human serum. 
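Wide dynamic ranges such as the 5.0x10(-16)-1.0x10(-10) mol L-1 window quoted for the Pd-Au@CDs DNA sensor above are conventionally handled by regressing the signal against log-concentration. A minimal sketch of building and inverting such a log-linear calibration; all of the numbers below are synthetic illustrations, not values from the paper:

```python
import math

# Synthetic calibration: peak current (arbitrary units) versus DNA
# concentration (mol/L), spanning a range like the one quoted above.
conc = [5e-16, 5e-15, 5e-14, 5e-13, 5e-12, 5e-11, 1e-10]
signal = [2.1, 4.0, 6.2, 8.1, 10.0, 12.2, 12.9]

x = [math.log10(c) for c in conc]
n = len(x)
mean_x = sum(x) / n
mean_y = sum(signal) / n

# Ordinary least-squares fit of signal = slope * log10(conc) + intercept.
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, signal)) \
        / sum((xi - mean_x) ** 2 for xi in x)
intercept = mean_y - slope * mean_x

def concentration_from_signal(s):
    """Invert the calibration: log10(C) = (s - intercept) / slope."""
    return 10 ** ((s - intercept) / slope)
```

An unknown reading of, say, 7.0 a.u. then maps back to a concentration inside the calibrated window via `concentration_from_signal(7.0)`.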
The current epidemic caused by the Zika virus (ZIKV) and the devastating effects of this virus on fetal development, which result in an increased incidence of congenital microcephaly symptoms, have prompted the World Health Organization (WHO) to declare the ZIKV a public health issue of global concern. Efficient probes that offer high detection sensitivity and specificity are urgently required to aid in the point-of-care treatment of the virus. In this study, we show that localized surface plasmon resonance (LSPR) signals from plasmonic nanoparticles (NPs) can be used to mediate the fluorescence signal from semiconductor quantum dot (Qdot) nanocrystals in a molecular beacon (MB) biosensor probe for ZIKV RNA detection. Four different plasmonic NPs functionalized with 3-mercaptopropionic acid (MPA), namely MPA-AgNPs, MPA-AuNPs, core/shell (CS) Au/AgNPs, and alloyed AuAgNPs, were synthesized and conjugated to L-glutathione-capped CdSeS alloyed Qdots to form the respective LSPR-mediated fluorescence nanohybrids. The concept of the plasmonic NP-Qdot-MB biosensor involves using LSPR from the plasmonic NPs to mediate a fluorescence signal to the Qdots, triggered by the hybridization of the target ZIKV RNA with the DNA loop sequence of the MB. The extent of the fluorescence enhancement based on ZIKV RNA detection was proportional to the LSPR-mediated fluorescence signal. The limits of detection (LODs) of the nanohybrids were as follows: alloyed AuAgNP-Qdot646-MB (LOD = 1.7 copies/mL) > CS Au/AgNP-Qdot646-MB (LOD = 2.4 copies/mL) > AuNP-Qdot646-MB (LOD = 2.9 copies/mL) > AgNP-Qdot646-MB (LOD = 7.6 copies/mL). The LSPR-mediated fluorescence signal was stronger for the bimetallic plasmonic NP-Qdots than for the single metallic plasmonic NP-Qdots. The plasmonic NP-Qdot-MB biosensor probes exhibited excellent selectivity toward ZIKV RNA and could serve as potential diagnostic probes for the point-of-care detection of the virus. 
In this work, we prepared glutathione (GSH)-capped copper nanoclusters (Cu NCs) with red emission by simply adjusting the pH of a GSH/Cu2+ mixture at room temperature. A photoluminescence light-up method for detecting Zn2+ was then developed based on the aggregation-induced emission enhancement of the GSH-capped Cu NCs. Zn2+ could trigger the aggregation of the Cu NCs, inducing an enhancement of luminescence and an increase of the absolute quantum yield from 1.3% to 6.2%. The GSH-capped Cu NCs and the formed aggregates were characterized, and the possible mechanism was also discussed. The prepared GSH-capped Cu NCs exhibited a fast response towards Zn2+ and a wide detection range from 4.68 to 2240 mu M. The detection limit (1.17 mu M) is much lower than the level permitted by the World Health Organization in drinking water. Furthermore, taking advantage of the low cytotoxicity, large Stokes shift, red emission and light-up detection mode, we explored the use of the prepared GSH-capped Cu NCs in the imaging of Zn2+ in living cells. The developed luminescence light-up nanoprobe may hold potential for Zn2+-related drinking-water safety and biological applications. In the present work, the electrogenerated chemiluminescence (ECL) of luminol was investigated under neutral conditions at a gold electrode in the presence of silicon quantum dots (SiQDs). The results revealed that SiQDs can not only greatly enhance luminol ECL, but also act as an energy acceptor to construct a novel ECL resonance energy transfer (ECL-RET) system with luminol. As a result, a strong anodic ECL signal was obtained under neutral conditions at the bare gold electrode, which is suitable for biosensing applications. Lysozyme exhibited an apparent inhibitory effect on the ECL-RET system, based on which an ECL aptasensor was fabricated for the sensitive detection of lysozyme. 
The proposed method showed high sensitivity, good selectivity, and wide linearity for the detection of lysozyme in the range of 5.0x10(-14)-5.0x10(-9) g mL(-1) with a detection limit of 5.8x10(-15) g mL(-1) (3 sigma). The results suggested that the as-proposed luminol/SiQDs ECL biosensor will be promising for enzyme detection. Hydrogen peroxide (H2O2), one of the reactive oxygen species (ROS), plays vital roles in diverse physiological processes. Imbalance of H2O2 is associated with serious diseases such as cardiovascular disorders, neurodegenerative diseases, Alzheimer's disease and cancer. Therefore, it is critical to develop efficient methods for monitoring H2O2 in vivo. In this work, a two-photon excitation (860 nm) NIR fluorescent turn-on probe for H2O2, TPNR-H2O2, based on a dicyanomethylene-4H-pyran fluorophore is reported, which can be used for detection in solution with 13.2-fold NIR fluorescence enhancement, fast response (completed within 40 min), excellent sensitivity (detection limit 72.48 nM), and low cellular autofluorescence interference. Importantly, the excellent photostability of TPNR-H2O2 clearly demonstrated that the probe could be applied to imaging intracellular H2O2 for a long time without photobleaching. In addition, through two-photon imaging, this probe was shown to be cell-permeable and was used to monitor the levels of endogenous and exogenous H2O2, with promising biological applications. The frequency of breathing and the peak flow rate of exhaled air are necessary parameters to detect chronic obstructive pulmonary diseases (COPDs) such as asthma, bronchitis, or pneumonia. We developed a lung function monitoring point-of-care-testing device (LFM-POCT) consisting of a mouthpiece, a paper-based humidity sensor, a micro-heater, and a real-time monitoring unit. Fabrication of a mouthpiece of optimal length ensured that the exhaled air was focused on the humidity sensor. 
The resistive relative humidity sensor was developed using a filter paper coated with nanoparticles, which could easily follow the frequency and peak flow rate of human breathing. Adsorption followed by condensation of the water molecules of the humid air on the paper sensor during forced exhalation reduced the electrical resistance of the sensor, which was converted to an electrical signal for sensing. A micro-heater composed of a copper coil embedded in a polymer matrix helped in maintaining an optimal temperature on the sensor surface. Thus, water condensed on the sensor surface only during forcible breathing, and the sensor recovered rapidly after the exhalation was complete through desorption of water molecules from the sensor surface. Two types of real-time monitoring units were integrated into the device, based on light-emitting diodes (LEDs) and smartphones. The LED-based unit displayed the diseased, critical, and fit conditions of the lungs by flashing LEDs of different colors. For the mobile-based monitoring unit, in comparison, an application was developed employing open-source software, which established wireless connectivity with the LFM-POCT device to perform the tests. The detection of microRNA plays an important role in early cancer diagnosis. Herein, a dual-mode electronic biosensor was developed for microRNA-21 (miRNA-21) detection based on gold nanoparticle-decorated MoS2 nanosheets (AuNPs@MoS2). A classical DNA "sandwich" structure was employed to construct the MoS2-based electrochemical sensor, including capture DNA, target miRNA-21 and a DNA-modified nanoprobe. [Fe(CN)(6)](3-/4-) and [Ru(NH3)(6)](3+) were selected as electrochemical indicators to monitor the preparation process and evaluate the performance of the MoS2-based electrochemical biosensor by electrochemical impedance spectroscopy (EIS) and differential pulse voltammetry (DPV), respectively. 
The MoS2-based biosensor exhibited excellent performance for miRNA-21 detection in the range from 10 fM to 1 nM, with detection limits of 0.78 fM and 0.45 fM for the DPV and EIS techniques, respectively. Furthermore, the proposed MoS2-based biosensor displayed high selectivity and stability, and could be used to determine miRNA-21 in human serum samples with satisfactory results. All data suggested that this MoS2-based nanocomposite may be a potential candidate for biosensing, ranging from nucleic acid to protein detection. Development of rapid and multiplexed diagnostic tools is a top priority to address the current epidemic problem of sexually transmitted diseases. Here we introduce a novel nanoplasmonic biosensor for simultaneous detection of the two most common bacterial infections: Chlamydia trachomatis and Neisseria gonorrhoeae. Our plasmonic microarray is composed of gold nanohole sensor arrays that exhibit the extraordinary optical transmission (EOT), providing highly sensitive analysis in a label-free configuration. The integration in a microfluidic system and the precise immobilization of specific antibodies on the individual sensor arrays allow for selective detection and quantification of the bacteria in real time. We achieved outstanding sensitivities for direct immunoassay of urine samples, with a limit of detection of 300 colony-forming units (CFU)/mL for C. trachomatis and 1500 CFU/mL for N. gonorrhoeae. The multiplexing capability of our biosensor was demonstrated by analyzing different urine samples spiked with either C. trachomatis or N. gonorrhoeae, as well as samples containing both bacteria. We could successfully detect, identify and quantify the levels of the two bacteria in a one-step assay, without the need for DNA extraction or amplification techniques. This work opens up new possibilities for the implementation of point-of-care biosensors that enable fast, simple and efficient diagnosis of sexually transmitted infections. 
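At the decision stage, a multiplexed readout like the nanohole-array assay above reduces to comparing each channel's estimated titer against its own detection limit. A hypothetical sketch: the LODs are the ones quoted in the abstract, but the sample titers and the thresholding rule are invented for illustration:

```python
# Per-bacterium limits of detection (CFU/mL) reported for the
# nanohole-array sensor described above.
LOD = {"C. trachomatis": 300, "N. gonorrhoeae": 1500}

def call_sample(titers):
    """Flag each analyte whose estimated titer meets or exceeds its LOD."""
    return {bug: titer >= LOD[bug] for bug, titer in titers.items()}

# A hypothetical co-infected urine sample: both channels above their limits.
result = call_sample({"C. trachomatis": 5000, "N. gonorrhoeae": 2000})
```

In practice a single assay run would produce one such titer estimate per sensor array, so a co-infected sample is flagged on both channels in the same one-step measurement.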
A sensitive electrochemiluminescent (ECL) sandwich immunosensor was proposed herein based on tris(2-phenylpyridine)iridium [Ir(ppy)(3)]-doped silica nanoparticles (SiO2@Ir) with improved ECL emission as signal probes and a glucose oxidase (GOD)-based in situ enzymatic reaction to generate H2O2 for efficiently quenching the ECL emission of SiO2@Ir. Typically, the SiO2@Ir not only increased the loading amount of Ir(ppy)3 as ECL indicators with high ECL emission, but also improved their water solubility, which efficiently enhanced the ECL emission. Furthermore, through the efficient quenching effect of H2O2 from the in situ glucose oxidase (GOD)-based enzymatic reaction on the ECL emission of SiO2@Ir, a signal-off ECL immunosensor could be established for sensitive assays. With the N-terminal of the prohormone brain natriuretic peptide (BNPT) as a model, the proposed ECL assay exhibited high sensitivity and a low detection limit. Importantly, the proposed sensitive ECL strategy is not only suitable for the detection of BNPT for acute myocardial infarction, but also reveals a new avenue for early diagnosis of various diseases via proteins, nucleotide sequences, microRNAs and cells. In this work, we report a novel iridium(III)-based luminescent on-off-on switch probe for the in vitro and in vivo detection of sulfide ions. The mechanism of this platform is based on the effective charge-transfer quenching of the iridium(III) complex 1 by Fe3+, followed by the restoration of luminescence upon the addition of Na2S. The probe, hereinafter referred to as 1-Fe3+, exhibited a linear range of detection for Na2S from 0.01 to 1.5 mM, with a detection limit of 2.9 mu M at a signal-to-noise ratio (S/N) of 3. We also demonstrate the utility of 1-Fe3+ for cell-based imaging as well as for the detection of enzymatic sulfide generation in living zebrafish. 
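Detection limits quoted at a signal-to-noise ratio of 3, like the one for the sulfide probe above, conventionally follow the 3-sigma rule: LOD = 3 x (standard deviation of replicate blank measurements) / (calibration slope). A minimal sketch with entirely made-up numbers:

```python
import statistics

# Hypothetical replicate blank readings (arbitrary units) and a
# hypothetical calibration slope (signal units per micromolar).
blank_readings = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3]
slope = 0.27

sigma_blank = statistics.stdev(blank_readings)  # noise of the blank signal
lod_uM = 3 * sigma_blank / slope                # 3-sigma limit of detection
```

The same arithmetic, with the real blank noise and calibration slope, is what yields figures such as the detection limits reported throughout these abstracts.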
The determination of ethyl [4-oxo-8-(3-chlorophenyl)-4,6,7,8-tetrahydroimidazo[2,1-c][1,2,4]triazin-3-yl]acetate (ETTA), a new anticancer prodrug, using adsorptive stripping voltammetry (AdSV) is described for the first time. The method is based on the adsorptive/reductive behaviour of ETTA at an in situ plated bismuth film electrode (BiFE) as a sensor. A number of experimental variables (e.g., the composition and pH of the supporting electrolyte, the conditions of bismuth film deposition, the accumulation potential and time, the scan rate, etc.) were thoroughly studied in order to achieve high sensitivity. Experimental results under optimized conditions revealed an excellent linear correlation between the monitored voltammetric peak current and the ETTA concentration in the range of 2-50 mu g L-1 following an accumulation time of 300 s. The corresponding limit of detection (LOD) for ETTA was 0.4 mu g L-1. The proposed facile, sensitive and inexpensive method was successfully applied to the determination of ETTA in serum. The investigated prodrug was extracted from serum using the SPE method. Ultrasensitive biosensing technologies without gene amplification hold great promise for the direct detection of DNA. Herein we report a novel biosensing method combining a target-recycling signal-amplification strategy and a homemade electrochemical device. Notably, the target recycling was achieved by a strand displacement process, without the help of any nucleases. In the presence of target DNA, the recycling system could be activated to generate a cascade of assembly steps with three hairpin DNA segments. Each recycling process was accompanied by a disassembly step in which the last hairpin DNA segment displaces the target DNA from the complex at the end of each circulation, freeing targets to activate the self-assembly of more trefoil DNA structures. 
This biosensing method could detect target DNA at the aM level and could distinguish target DNA from interfering DNAs, demonstrating its high sensitivity and high selectivity. Importantly, the biosensing method also worked well with serum samples. This paper demonstrates a new strategy for developing a fluorescent glycosyl-imprinted polymer for pH- and temperature-regulated sensing of a target glycopeptide antibiotic. The technique uses amino-modified Mn-doped ZnS QDs as fluorescent supports, 4-vinylphenylboronic acid as a covalent monomer, N-isopropyl acrylamide as a thermo-responsive monomer in combination with acrylamide as a non-covalent monomer, and the glycosyl moiety of a glycopeptide antibiotic as a template to produce a fluorescent molecularly imprinted polymer (FMIP) in aqueous solution. The FMIP can alter its functional moieties and structure with pH and temperature stimulation. This allows recognition of target molecules through control of pH and temperature. The fluorescence intensity of the FMIP was enhanced gradually as the concentration of telavancin increased, and showed selective recognition toward the target glycopeptide antibiotic preferentially among other antibiotics. Using the FMIP as a sensing material, good linear correlations were obtained over the concentration range of 3.0-300.0 mu g/L, with a low limit of detection of 1.0 mu g/L. The analytical results for telavancin in real samples were consistent with those obtained by liquid chromatography-tandem mass spectrometry. An ultrasensitive sandwich-type electrochemical biosensor for DNA detection is developed based on spherical silicon dioxide/molybdenum selenide (SiO2@MoSe2) and graphene oxide-gold nanoparticle (GO-AuNPs) hybrids as carriers to trigger hybridization chain reaction (HCR), coupled with multi-signal amplification. The proposed sensing assay utilizes spherical SiO2@MoSe2/AuNPs as the sensing platform and GO-AuNPs hybrids as carriers to supply vast binding sites. 
The H2O2 + HQ system is used for DNA detection, with HCR as the signal and selectivity enhancer. The sensor is designed in a sandwich type to increase the specificity. As a result, the present biosensor exhibits a good dynamic range from 0.1 fM to 100 pM with a low detection limit of 0.068 fM (S/N=3). This work shows considerable potential for quantitative detection of DNA in early clinical diagnostics. As one of the most exciting building blocks, DNA has gained increasing attention in the construction of promising nanostructures for various biological and medical purposes. In this contribution, we have developed an easily constructed DNA nanoassembly-based biosensing system that consists of one signal hairpin probe (SHP) and one label-free hairpin probe (LHP) for target p53 gene analysis. The probes SHP and LHP were designed to be incapable of interacting with each other in the absence of the p53 gene. When the target gene is introduced, the 3' end of SHP (or LHP) hybridizes with the middle region of LHP (or SHP), leading to polymerase-sustained DNA nanoassembly. Because one target species can exhaust many building scaffolds to execute the programmable nanoassembly in a one-pot approach, the fluorescence intensity of SHP is greatly enhanced in the presence of the target gene in a simple and robust manner. The practical applicability was successfully demonstrated by screening target gene extracted from cancer cells. We believe this intriguing sensing strategy and desirable assay ability would provide new opportunities to develop versatile biochemical analysis methods. Development of resistance to chemotherapy treatments is a major challenge in the battle against cancer. Although a vast repertoire of chemotherapeutics is currently available for treating cancer, a technique for rapidly identifying the right drug based on the chemoresistance of the cancer cells is not available, and it currently takes weeks to months to evaluate the response of cancer patients to a drug. 
A sensitive, low-cost diagnostic assay capable of rapidly evaluating the effect of a series of drugs on cancer cells can significantly change the paradigm in cancer treatment management. Integration of microfluidics and an electrical sensing modality in a 3D tumour microenvironment may provide a powerful platform to tackle this issue. Here, we report a 3D microfluidic platform that could potentially be used for a real-time deterministic analysis of the success rate of a chemotherapeutic drug in less than 12 h. The platform (66 mm x 50 mm; L x W) is integrated with microsensors (interdigitated gold electrodes with width and spacing of 10 mu m) that can measure the change in the electrical response of cancer cells seeded in a 3D extracellular matrix when a chemotherapeutic drug is flown next to the matrix. B16-F10 mouse melanoma, 4T1 mouse breast cancer, and DU 145 human prostate cancer cells were used as clinical models. The changes in impedance magnitude on flowing chemotherapeutic drugs, measured at 12 h for drug-susceptible and drug-tolerant breast cancer cells compared to control, were 50,552 +/- 144 Omega and 28,786 +/- 233 Omega, respectively, while those of drug-susceptible and drug-tolerant melanoma cells were 40,197 +/- 222 Omega and 4069 +/- 79 Omega, respectively. In the case of prostate cancer, the impedance changes for susceptible and resistant cells were 8971 +/- 1515 Omega and 3281 +/- 429 Omega, respectively, which demonstrated that the microfluidic platform was capable of delineating drug-susceptible, drug-tolerant, and drug-resistant cells in less than 12 h. The accuracy of bioassays based on smartphone-integrated fluorescent biosensors has been limited due to the occurrence of false signals from non-specific reactions as well as a high background and low signal-to-noise ratios for complementary metal oxide semiconductor image sensors. To overcome this problem, we demonstrate dual-wavelength fluorescent detection of biomolecules with high accuracy. 
Fluorescence intensity can be quantified using dual wavelengths simultaneously, where one decreases and the other increases as the target analytes bind to the split capture and detection aptamer probes. To do this, we performed smartphone-imaging-based fluorescence microscopy using a microarray platform on a substrate with metal-enhanced fluorescence (MEF) using an Ag film and an Al2O3 nano-spacer. The results showed that the sensitivity and specificity of the dual-wavelength fluorescent quantitative assay for the target biomolecule 17-beta-estradiol in water were significantly increased through the elimination of false signals. The detection limit was 1 pg/mL, and the area under the receiver operating characteristic curve of the proposed assay (0.922) was comparable to that of an enzyme-linked immunosorbent assay (0.956) in statistical accuracy tests using spiked wastewater samples. This novel method has great potential as an accurate point-of-care testing technology based on mobile platforms for clinical diagnostics and environmental monitoring. A pH-responsive colorimetric strategy was designed for sensitive and convenient biosensing by introducing acetylcholinesterase (AChE)-catalyzed hydrolysis of acetylcholine to change the solution pH, with phenol red as an indicator. Using DNA as a target model, this technique was successfully employed for sensitive DNA analysis by labeling DNA with AChE. The sensitivity could be greatly improved by coupling a newly designed magnetic probe with target DNA-triggered nonenzymatic cascade amplification. In the presence of a helper DNA (H) and the functional probe, the cascade assembly via toehold-mediated strand displacement released the AChE-conjugated sequence from the magnetic beads, which could be simply separated from the reaction mixture to catalyze the hydrolysis of ACh in the detection solution. 
The color change of the detection solution from pink to orange-red, orange-yellow and ultimately yellow could be used for target DNA detection by the naked eye and by colorimetry, with the absorbance ratio of the detection solution at 558 nm to 432 nm as the signal. The nonenzymatically sensitized colorimetric strategy showed a linear range from 50 pM to 50 nM with a detection limit of 38 pM, indicating a promising application in DNA analysis. Peptide-protein interactions mediate numerous biological processes and provide great opportunities for developing peptide probes and analytical approaches for detecting and interfering with recognition events. Molecular interactions usually take place on the heterogeneous surface of proteins, and the spatial distribution and arrangement of probes are therefore crucial for achieving high specificity and sensitivity in bioassays. In this study, small linear peptides, homogeneous peptide dimers and heterobivalent peptides were designed for site-specific recognition of human serum albumin (HSA). Three hydrophilic regions located at different subdomains of HSA were chosen as targets for the molecular design. The binding affinity, selectivity and kinetics of the candidates were screened with surface plasmon resonance imaging (SPRi) and fluoroimmunoassays. Benefiting from the synergistic effect of the surface-targeted peptide binders and the flexible spacer, a heterodimeric peptide (heter-7) with fast binding and slow dissociation behavior was identified as the optimized probe. Heter-7 specifically recognizes the target protein HSA and effectively blocks the binding of antibody to HSA. Its inhibitory activity was estimated as 83 nM. It is noteworthy that heter-7 can distinguish serum albumins from different species despite the high similarities in sequence and structure of these proteins. 
This hetero-bivalent peptide shows promise for use in serum proteomics, disease detection, and drug transport, and provides an effective approach for promoting the affinity and selectivity of ligands to achieve desirable chemical and biological outcomes. An innovative electrochemical sensor, based on a carbon paste electrode (CPE) modified with graphene (GR) and a boron-embedded duplex molecularly imprinted hybrid membrane (B-DMIHM), was fabricated for the highly sensitive and selective determination of lamotrigine (LMT). Density functional theory (DFT) was employed to study the interactions between the template and monomers to screen appropriate functional monomers for rational design of the B-DMIHM. The distinct synergic effect of GR and B-DMIHM was evidenced by the positive shift of the reduction peak potential of LMT at the B-DMIHM/GR-modified CPE (B-DMIHM/GR/CPE) by about 300 mV, and the 13-fold amplification of the peak current, compared to the bare CPE. The electrochemical reduction mechanism of lamotrigine was investigated by different voltammetric techniques. It was shown that square wave voltammetry (SWV) was more sensitive than differential pulse voltammetry (DPV) for the quantitative analysis of LMT. Thereafter, a highly sensitive electroanalytical method for LMT was established by SWV at the B-DMIHM/GR/CPE, with good linear relationships from 5.0 x 10^-8 to 5.0 x 10^-5 and from 5.0 x 10^-5 to 3.0 x 10^-4 mol L^-1, and a low detection limit (1.52 x 10^-9 mol L^-1) based on the lower linear range (S/N = 3). The practical application of the sensor was demonstrated by determining the concentration of LMT in pharmaceutical and biological samples with good precision (RSD 1.04-4.41%) and acceptable recoveries (92.40-107.0%). A toehold-mediated strand displacement (TMSD)-based cross-catalytic hairpin assembly (C-CHA) is demonstrated in this study, achieving exponential amplification of nucleic acids.
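Several of the detection limits quoted in these sensing abstracts follow the S/N = 3 (equivalently 3-sigma) convention, i.e. LOD = 3 x (standard deviation of the blank) / (slope of the calibration line). A minimal, self-contained illustration with synthetic numbers (not data from any of the papers above):

```python
import statistics

def calibration_slope(concs, signals):
    """Ordinary least-squares slope of signal versus concentration."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(signals) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(concs, signals))
    var = sum((x - mx) ** 2 for x in concs)
    return cov / var

def lod_3sigma(blank_signals, concs, signals):
    """Detection limit as 3 x (std. dev. of blank) / calibration slope."""
    s_blank = statistics.stdev(blank_signals)
    return 3 * s_blank / calibration_slope(concs, signals)

# Synthetic calibration: signal = 2 + 5*c; blank replicates with sd = 0.1
concs = [0.0, 1.0, 2.0, 3.0, 4.0]
signals = [2.0, 7.0, 12.0, 17.0, 22.0]
blanks = [2.0, 2.1, 1.9]
print(lod_3sigma(blanks, concs, signals))  # 3 * 0.1 / 5 = 0.06
```

Whether the blank standard deviation or the baseline noise is used, and how many blank replicates are measured, varies between papers; this sketch only shows the arithmetic behind the convention.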
Functionally, this system consists of four hairpins (H1, H2, H3, and H4) and one single-stranded initiator (I). Upon the introduction of I, the first CHA reaction (CHA1) is triggered, leading to the self-assembly of the hybrid H1·H2, which then initiates the second CHA reaction (CHA2) to obtain the hybrid H3·H4. Since the single-stranded region in H3·H4 is identical to I, a new CHA1 is initiated, which thus achieves cross operation of CHA1 and CHA2 and exponential growth kinetics. Interestingly, because the C-CHA performs in a cascade manner, this system can be considered as multi-level molecular logic circuits with a feedback mechanism. Moreover, through incorporating G-quadruplex subunits and fluorescein isothiocyanate (FITC) in the product H1·H2, this C-CHA is readily utilized to fabricate a chemiluminescence resonance energy transfer (CRET) biosensing platform, achieving sensitive and selective detection of DNA and microRNA in real samples. Since the high background signal induced by FITC in the absence of initiator is greatly reduced by labeling a quencher on H1, the signal-to-noise ratio and detection sensitivity are improved significantly. Therefore, our proposed C-CHA protocol holds great potential for further applications not only in building complex autonomous systems but also in the development of biosensing platforms and DNA nanotechnology. Sensitive and rapid diagnostic systems for avian influenza (AI) virus are required to screen large numbers of samples during a disease outbreak and to prevent the spread of infection. In this study, we employed a novel fluorescent dye for the rapid and sensitive recognition of AI virus. The styrylpyridine phosphor derivative was synthesized by adding allyl bromide as a stable linker and covalently immobilizing it on latex beads with antibodies, generating the unique Red dye 53-based fluorescent probe.
The performance of the innovative rapid fluorescent immunochromatographic test (FICT) employing Red dye 53 in detecting the AI virus (A/H5N3) was 4-fold and 16-fold higher than that of the Europium-based FICT and the rapid diagnostic test (RDT), respectively. In clinical studies, the presence of human nasopharyngeal specimens did not alter the performance of the Red dye 53-linked FICT for the detection of H7N1 virus. Furthermore, in influenza A virus-infected human nasopharyngeal specimens, the sensitivity of the Red dye 53-based assay and the RDT was 88.89% (8/9) and 55.56% (5/9) relative to rRT-PCR, respectively. The photostability of Red dye 53 was higher than that of fluorescein isothiocyanate (FITC), showing a stronger fluorescent signal persisting up to 8 min under UV. Red dye 53 could therefore be a potential probe for rapid fluorescent diagnostic systems that can recognize AI virus in clinical specimens. Herein, we demonstrate the exfoliation of bulk graphitic carbon nitride (g-C3N4) into ultra-thin (~3.4 nm) two-dimensional (2D) nanosheets and their functionalization with protons (g-C3N4H+). The layered semiconductor g-C3N4H+ nanosheets were doped with cylindrical spongy-shaped polypyrrole (CSPPy-g-C3N4H+) using a chemical polymerization method. The as-prepared nanohybrid composite was utilized to fabricate cholesterol biosensors after immobilization of cholesterol oxidase (ChOx) at physiological pH. The large specific surface area and positively charged nature of the CSPPy-g-C3N4H+ composite generate strong electrostatic attraction with the negatively charged ChOx, and as a result they form a stable bionanohybrid composite with high enzyme loading.
A detailed electrochemical characterization of the as-fabricated biosensor electrode (ChOx/CSPPy-g-C3N4H+/GCE) exhibited high sensitivity (645.7 µA mM^-1 cm^-2) in a wide linear range of 0.02-5.0 mM, a low detection limit (8.0 µM), a fast response time (~3 s), long-term stability, and good selectivity during cholesterol detection. To the best of our knowledge, this novel nanocomposite was utilized for the first time for cholesterol biosensor fabrication, resulting in high sensing performance. Hence, this approach opens a new prospect for utilizing the CSPPy-g-C3N4H+ composite as a cost-effective, biocompatible, eco-friendly, and superior electrocatalytic as well as electroconductive material with great application potential, which could pave the way to the fabrication of many other new sensors and to biomedical applications. Herein, a super-labeled immunoassay was fabricated for matrix metalloproteinase-2 (MMP-2) detection. A self-corrosion ITO micro circuit board was designed in this sensing platform to reduce random error under the same testing conditions, and the self-constructed sensing platform is portable and inexpensive. K-modified graphene (K-GS) was utilized as the matrix material; it was synthesized for the first time from phenylate and phenanthrene through the polar bond of the nonpolar phenylate molecule and π-π interactions. Aptamer-based labels built on Au nanoparticles (AuNPs), thionine (Th), and horseradish peroxidase (HRP) were applied as the signal source for triple amplification. This fabricated super-labeled immunoassay exhibits excellent performance for MMP-2 detection. It displayed a broad linear range of 10^-4 to 10 ng/mL with a low detection limit of 35 fg/mL, which may have potential application in clinical diagnosis. Novel fluorescent DNA quantum dots (QDs) were synthesized by hydrothermal treatment of G-/T-rich ssDNA at a relatively low reaction temperature.
The obtained DNA QDs demonstrate unique optical properties and maintain the basic structure and biological activities of the ssDNA precursors, which makes the DNA QDs able to specifically bind with arsenite, driving the (GT)29 region to undergo conformational evolution and form well-ordered assemblies rather than random aggregations. We speculate that the strong inter-molecular interaction and efficient stacking of base pairs stiffen the assembly structure, block the nonradiative relaxation channels, and populate the radiative decay, thus making the assembly highly emissive as a new fluorescence center. The arsenite-induced specific fluorescence enhancement facilitates the use of DNA QDs as light-up probes for arsenite sensing. Under optimal conditions, a linear relationship between the increased fluorescence intensity of the DNA QDs and the logarithmic values of arsenite concentration in the range of 1-150 ppb, with a detection limit of 0.2 ppb (3σ), was obtained. The nanosensor shows excellent selectivity for "turn-on" arsenite determination, and arsenate does not show any interference, facilitating its application in complex real water analysis. We demonstrated the quantitative electrophysiological monitoring of histamine and antihistamine drug effects on live cells via reusable sensor platforms based on carbon nanotube transistors. This method enabled us to monitor the real-time electrophysiological responses of a single HeLa cell to histamine at different concentrations. The measured electrophysiological responses were attributed to the activity of histamine type 1 receptors on the HeLa cell membrane upon histamine exposure. Furthermore, the effects of antihistamine drugs such as cetirizine or chlorphenamine on the electrophysiological activities of HeLa cells were also evaluated quantitatively.
Significantly, we utilized only a single device to monitor the responses of multiple HeLa cells to each drug, which allowed us to quantitatively analyze the antihistamine drug effects on live cells without errors from device-to-device variation in device characteristics. Such quantitative evaluation capability of our method promises versatile applications such as drug screening and nanoscale biosensor research. This work demonstrates a facile and efficient preparation protocol for a beta-cyclodextrin-reduced graphene oxide modified glassy carbon electrode (beta-CD/RGO/GCE) sensor for impressive chiral selectivity analysis of phenylalanine enantiomers. In this work, the immobilization of beta-CD over graphene sheets allows excellent enantiomer recognition due to the large surface area and high conductivity of the graphene sheets and the extraordinary supramolecular (host-guest interaction) property of beta-CD. The proposed sensor was well characterized by scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and electrochemical impedance spectroscopy (EIS) techniques. The analytical studies demonstrated that the beta-CD/RGO/GCE exhibits superior chiral recognition toward L-phenylalanine as compared to D-phenylalanine. Under optimum conditions, the developed sensor displayed a good linear range from 0.4 to 40 µM with limit of detection (LOD) values of 0.10 µM and 0.15 µM for L- and D-phenylalanine, respectively. Furthermore, the proposed sensor exhibits good stability and regeneration capacity. Thus, the as-synthesized material can be successfully exploited for electrochemical enantiomer recognition. The occurrence and development of many complex diseases are associated with various molecules, whose contents are extremely low in the early stage of the disease. Thus, a universal platform for the ultrasensitive detection of multilevel biomarkers should be developed.
In this study, we introduced an electrochemical biosensing system based on a nicking endonuclease (Nt.BbvCI)-assisted DNA walking strategy. We successfully constructed a universal signal-off-on ratiometric electrochemical biosensor for various biomolecules, including small molecules, nucleic acids, and proteins, by progressively optimizing the schematics (schemes 1, 2, and 3). The MB-hairpin probes (MB-HPs) acted as the signal-off probe, and the nanocomposites (MPNs@DOX@DNA2) acted as a conventional signal-on probe (scheme 3). With the aid of the MPNs@DOX@DNA2 and the Nt.BbvCI-assisted DNA walking mechanism, the designed ratiometric electrochemical biosensor showed high sensitivity and a broad detection range. In addition, the proposed method can be utilized to detect diverse targets quantitatively by changing the sequence of the aptamers under optimum experimental conditions. Furthermore, it proved able to deliver a reliable signal response in identifying complex samples, suggesting broad prospects for bioanalysis and clinical diagnosis. Researchers and educators continuously remark on the importance of integrating creativity into the learning process. This study proposes a creativity approach to facilitating participatory learning for the sustained engagement of young learners based on the principle of remix practice, which consists of learning to generate online artefacts, endless hybridization, and scaffolding. This study investigated students' engagement in and perceptions of the creative learning process during a two-year participatory learning program. Data collected included students' flow perceptions during a 39-week activity, their motivation and creative self-efficacy before and after the intervention, as well as their creative products.
The findings indicated that the remix-oriented approach led to a higher level of intrinsic motivation, especially interest and curiosity, and more sustained flow than a model-based approach in this participatory learning program. The approach also helped the students to perceive a significant increase in their level of creative self-efficacy associated with strategies for generating creative ideas. The results of this study suggest that the principle of remix practice is helpful for leveraging knowledge acquisition and the creative nature of participatory learning activities to sustain student engagement in participatory learning programs. (C) 2017 Elsevier Ltd. All rights reserved. The promise of social network technology for learning purposes has been heavily debated, with proponents highlighting its transformative and opponents its distracting potential. However, little is known about the actual, everyday use of ubiquitous social network sites for learning and study purposes in secondary schools. In the present work, we present findings from two survey studies of representative samples of Israeli, Hebrew-speaking teenagers (N1 = 206 and N2 = 515) which explored the scope, characteristics, and reasons behind such activities. Study 1 shows that these can best be described as online knowledge sharing, that is, the up- and downloading of knowledge and knowledge sources to social network-based peer groups. Findings were replicated in study 2 to further support the claim that school-related knowledge sharing is common and widespread and entails different types of knowledge. Findings from study 2 furthermore show that sharing is mainly motivated by prosocial motives, as well as expectations of future reciprocation. Sharing is predicted by individual differences, such as gender, collectivist values, mastery goal orientations, and academic self-efficacy.
Relations between competitive-individualist values and sharing are more complex and are, among others, moderated by expectations of future benefits. Implications for educational practices and for learning are discussed. (C) 2017 Elsevier Ltd. All rights reserved. During the widespread development of open-access online course materials in the last two decades, advances have been made in understanding the impact of instructional design on quantitative outcomes. Much less is known about the experiences of learners that affect their engagement with the course content. Through a case study employing text analysis of interview transcripts, we revealed the authentic voices of participants and gained a deeper understanding of motivations for and barriers to course engagement experienced by students participating in Massive Open Online Courses (MOOCs). We sought to understand why learners take the courses, specifically Introduction to Chemistry or Data Analysis and Statistical Inference, and to identify factors both inside and outside of the course setting that impacted engagement and learning. Thirty-six participants in the courses were interviewed; these students varied in age, experience with the subject matter, and worldwide geographical location. Most of the interviewee statements were neutral in attitude; of the statements that were either extremely positive or extremely negative, sentiment analysis of the interview transcripts found 80 percent to be positive. This is important because an overall positive climate is known to correlate with higher academic achievement in traditional education settings. When demographic data were added to the sentiment analysis, students who had already earned bachelor's degrees were found to be more positive about the courses than students with either more or less formal education, a highly statistically significant result.
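The sentiment tallies reported above (e.g., 80 percent of the extreme statements being positive) come from classifying transcript statements as positive, neutral, or negative. As a toy illustration of one common way such tallies are produced, here is a lexicon-based scorer; the word lists and data are invented for illustration and are not the study's actual instrument:

```python
# Toy lexicon-based sentiment scoring of transcript statements.
# The word lists below are illustrative only, not the study's tool.
POSITIVE = {"great", "love", "helpful", "clear", "enjoyed"}
NEGATIVE = {"confusing", "hate", "boring", "slow", "frustrating"}

def score(statement):
    """Positive-minus-negative word count for one statement."""
    words = statement.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def share_positive(statements):
    """Share of non-neutral statements that score positive."""
    scores = [score(s) for s in statements]
    non_neutral = [s for s in scores if s != 0]
    return sum(s > 0 for s in non_neutral) / len(non_neutral)

statements = [
    "I love the clear examples",       # scores +2: positive
    "the pacing was slow and boring",  # scores -2: negative
    "lectures were helpful",           # scores +1: positive
    "I watched every video",           # scores 0: neutral, excluded
]
print(share_positive(statements))  # 2 of 3 non-neutral statements are positive
```

Production sentiment tools additionally handle negation, intensity, and context, which a bare lexicon count ignores.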
In general, students from America were more critical than students from Africa and Asia, and the sentiments of female participants' comments were generally less positive than those of male participants. An examination of student statements related to motivations revealed that knowledge, work, convenience, and personal interest were the most frequently coded nodes (more generally referred to as "codes"). On the other hand, lack of time was the most prevalently coded barrier for students. Other barriers and challenges cited by the interviewed learners included previous bad classroom experiences with the subject matter, inadequate background, and lack of resources such as money, infrastructure, and internet access. These results are enriched by illustrative quotes from interview transcripts and compared and contrasted with previous findings reported in the literature; thus this study enhances the field by providing the voices of the learners. (C) 2017 Elsevier Ltd. All rights reserved. The main aim of the study was to investigate the effects of mimicking gestures on learning from animations and static graphics. In Experiment 1, 48 university students learned to write Mandarin characters, and in Experiment 2, 44 young children learned to write Persian characters. In both experiments, participants were randomly assigned to one of four conditions: animations without gestures, animations with gestures, statics without gestures, or statics with gestures. All groups viewed instructional content showing how to write the foreign characters, and then were tested. In the gesturing conditions, participants were required to mimic the character writing at the same time as watching the instructional presentation, and in the non-gesturing conditions, mimicking was prevented. Results from both experiments indicated a presentation-gesturing interaction, where gesturing was an advantage for static graphics but not animations.
Experiment 2 found an advantage for animations over static graphics, and for gesturing compared to not gesturing. (C) 2017 Published by Elsevier Ltd. Serious mini-games are promising tools to raise awareness. They motivate and enhance players' interest in a particular topic, and only require a small time investment. The games should focus on a single concept or learning goal and should be carefully designed. This study therefore explores the usefulness of informant design when developing such serious mini-games. Informant design is a framework that involves stakeholders at different stages of the design process depending on their expertise, which maximizes the value of their contributions. When developing awareness campaigns, various stakeholders, with different goals, need to be involved. Therefore, this paper suggests the use of informant design to increase the support of every stakeholder. The informant design framework provides an excellent design methodology for games, as it is very flexible in time, place, and number of participants in the co-design activities. The current study presents a case study indicating the usefulness of informant design when developing serious mini-games to increase advertising literacy among adolescents. (C) 2017 Elsevier Ltd. All rights reserved. Even as mobile devices have become increasingly powerful and popular among learners and instructors alike, research involving their comprehensive integration into educational laboratory activities remains largely unexplored. This paper discusses efforts to integrate vision-based measurement and control, augmented reality (AR), and multi-touch interaction on mobile devices in the development of Mixed-Reality Learning Environments (MRLE) that enhance interactions with laboratory test-beds for science and engineering education.
A learner points her device at a laboratory test-bed fitted with visual markers while a mobile application supplies a live view of the experiment augmented with interactive media that aid in the visualization of concepts and promote learner engagement. As the learner manipulates the augmented media, her gestures are mapped to commands that alter the behavior of the test-bed on the fly. Running in the background of the mobile application are algorithms performing vision-based estimation and wireless control of the test-bed. In this way, the sensing, storage, computation, and communication (SSCC) capabilities of mobile devices are leveraged to remove the need for laboratory-grade equipment, improving the cost-effectiveness and portability of platforms for conducting hands-on laboratories. We hypothesize that students using the MRLE platform demonstrate improvement in their knowledge of dynamic systems and control concepts and have generally favorable experiences using the platform. To validate the hypotheses concerning the educational effectiveness and user experience of the MRLEs, an evaluation was conducted with two classes of undergraduate students using an illustrative platform incorporating a tablet computer and motor test-bed to teach concepts of dynamic systems and control. Results of the evaluation validate the hypotheses. The benefits and drawbacks of the MRLEs observed throughout the study are discussed with respect to the traditional hands-on, virtual, and remote laboratory formats. (C) 2017 Elsevier Ltd. All rights reserved. Teachers' perceptions of the usefulness of digital games might be a reason for the limited application of digital games in education. However, participants in most studies of teaching with digital games are teachers who do not use digital games regularly in their teaching. This study examined the practice-based perceptions of teachers who do teach with digital games - either playing or creating games - in their classroom.
Semi-structured interviews were conducted with 43 secondary education teachers. Our findings showed that most teachers who actually use games in class perceived student engagement with a game and cognitive learning outcomes as effects of the use of games in formal teaching settings. Fewer teachers mentioned motivational effects of learning with digital games. The implications of these findings for the use of digital games in teachers' educational practice are discussed. (C) 2017 Published by Elsevier Ltd. The purpose of this study was to analyse how the interrelationships of interns, clients, and mentors lead to success in a project-based learning design virtual internship program. Interns from eleven different university programmes were asked to apply their academic experiences in constructing real projects for clients using a virtual environment while under the supervision of mentors. Data included completed intern projects, intern journals, and mentor and client evaluations. Data were collected over five cohorts, from forty-two cases, six of which are highlighted in this study. Programme design, mentor and client training, and intern performance are considered. Findings demonstrate that interrelated roles evolve during the virtual internship and that project success is related to the co-construction of knowledge between the intern, mentor, and client. The study of the functions of these roles leads to implications for the design, development, and implementation of a successful virtual internship programme. (C) 2017 Elsevier Ltd. All rights reserved. A study of undergraduate students in Saudi Arabia found that although they used technology for an average of 45 h per week and had positive attitudes towards it, they did not frequently use technology, in particular computers, in support of their learning.
Qualitative evidence suggests that the students were not routinely required to use computers at university, and that in some cases the universities did not provide computing facilities or actively prevented technology usage. Factors which influenced attitudes to computers included city of study, parental encouragement, and English language proficiency, but not gender. (C) 2017 Elsevier Ltd. All rights reserved. The Analects of Confucius is an important course in the curriculum of Asian Studies in the Chinese community and around the world. Students have to learn a collection of the thoughts of Confucius which have shaped world history and the soul of China. However, educators have indicated that most students fail to understand its abstract thoughts, or even to realize its spirit in their daily life experience. Meanwhile, with the advancements of technology, learning with computer games is currently a rapidly developing area of interest for researchers, teachers, material writers, and application developers in the educational field. Several studies have shown that by properly incorporating learning contents into game scenarios, an experiential game-based learning approach can foster students' motivation to learn through experience. In addition, the experiential game-based approach is a learning method with great potential for motivating students and stimulating their willingness to engage in continuous and constant learning. Thus, in this study, an experiential digital game was developed and presented to a fifth grade class learning the Analects of Confucius at an elementary school in Taipei city. An experiment was conducted to evaluate the effectiveness of the proposed approach by situating the experimental group in an experiential game-based learning scenario, while the control group learned with a conventional technology-enhanced learning system.
The experimental results showed that the proposed approach effectively enhanced the students' learning in terms of their learning motivation, deep learning strategy, and acceptance of the technology. As a consequence, it is concluded that an experiential digital game-based learning approach can help students understand the conception and meaning of learning, which is important for them to become life-long learners with positive learning attitudes. (C) 2017 Elsevier Ltd. All rights reserved. This study investigated the effects of feedback on performance with pairs of isomorphic items that were embedded within consecutive mathematics web-based practice tests. Participants were randomly assigned to different experimental testing conditions: (1) feedback type: knowledge of correct response (KCR) or KCR with elaborated feedback (EF) in the form of additional explanations of the correct answer; (2) feedback timing: immediately after answering each item or delayed until after completing the practice test; and (3) item format: multiple-choice or constructed response. The study specifically investigated the likelihood that participants would correctly answer the second version of the item, conditioned on their answer to the first version, across feedback type and timing conditions, and taking into account item format and participant initial ability. Results from 2445 participants showed a different pattern of results depending on initial item response correctness. With respect to feedback type, EF resulted in higher performance than KCR following incorrect first responses (suggesting initial lack of knowledge and understanding), but not following correct first responses.
With respect to feedback timing, immediate feedback with additional delayed review resulted in higher performance than delayed feedback following incorrect first responses, but resulted in lower performance following correct first responses (immediate feedback without delayed review resulted in lower performance in both cases). (C) 2017 Published by Elsevier Ltd. The rapid development of social media tools has increased interest in their pedagogical value. It has been suggested that social media tools such as wikis can promote online collaborative and interactive learning. This study investigated the value of wikis in supporting collaborative group writing quality among secondary school students in Hong Kong. Students from a local secondary school engaged in group writing projects using Pbworks, a popular wiki tool. Data were gathered from (1) the revision tracking history, (2) a questionnaire on the perceived pedagogical value of the wiki, and (3) group interviews with students. Findings showed that students who made more collaborative revisions on the wiki produced higher-quality writing output. In general, students reported a moderately positive attitude towards the pedagogical value of the wiki. The findings suggest that wikis promote collaborative writing, but teachers need to adopt pedagogical strategies that equip students to use wikis. (C) 2017 Published by Elsevier Ltd. The dynamic assessment (DA) approach has been shown to be a useful evaluation tool for understanding students' learning potential. In the present learning context of technology-mediated learning (TML), the DA approach has a significant effect on learning. The aim of this study was to address two gaps in the research on the effect of DA (in our case, computer-based graduated prompting assessment) on students' academic performance. First, the extant research has focused on DA that is based on pre- and post-test evaluation.
The influence of time is an important predictor of information technology use, and understanding the effect of computer-based DA on students' academic performance across time is thus necessary. Second, TML-based assessment has been designed so that students can receive help directly in isolated TML environments. As such, we developed a TML-based, computer-based graduated prompting assessment and conducted a longitudinal examination of computer-based graduated prompting assessments in graphing courses. Quasi-experiments involving 60 students in an experimental group and 60 students in a control group were conducted to test the growth model of hierarchical linear modeling. The results showed that this assessment statistically significantly influenced students' academic performance, as might be expected. However, the use of this assessment over time did not lead to a change in the growth rate. Recommendations for using computer-based graduated prompting assessments across a long timeframe to promote students' academic performance are also discussed. (C) 2017 Elsevier Ltd. All rights reserved. Research on natural neuron dynamics has shown that phase transition and chaos provide optimal behaviour for information processing. In artificial neural models, that behavior is also expected to maximize neuron learnability. By employing an open quantum systems approach, we propose a new quantum information extraction method for the quantum RAM node dynamics, in which complex values are iterated. We experimentally show the emergence of bifurcation and chaos by varying its parameters. (C) 2017 Elsevier B.V. All rights reserved. This paper addresses recommender systems that work with implicit feedback in dynamic scenarios providing online recommendations, such as news article and ad recommendation in Web portals.
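The bifurcation and chaos reported for the quantum RAM node above arise from repeatedly iterating a parameterized map. As a purely classical analogue (the logistic map, not the authors' quantum model), the same qualitative transition from a fixed point to a periodic orbit can be shown in a few lines:

```python
def logistic_orbit(r, x0=0.5, burn=500, keep=8):
    """Iterate x -> r*x*(1-x); return distinct orbit points after burn-in."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1.0 - x)
    pts = []
    for _ in range(keep):          # sample the attractor
        x = r * x * (1.0 - x)
        pts.append(round(x, 6))
    return sorted(set(pts))

# Below r = 3 the map settles on a single fixed point; past the first
# bifurcation (e.g. r = 3.2) it settles on a period-2 orbit.
print(len(logistic_orbit(2.8)))  # 1 distinct attractor point
print(len(logistic_orbit(3.2)))  # 2 distinct attractor points
```

Plotting the attractor points as r is swept produces the familiar bifurcation diagram; the abstract's claim is that an analogous parameter sweep of the quantum node dynamics exhibits the same phenomenology.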
In these dynamic scenarios, user feedback is given to the system through clicks, and this feedback needs to be quickly exploited to improve subsequent recommendations. In this scenario, we propose an algorithm named multi-objective ranked bandits, which, in contrast with current methods in the literature, is able to recommend lists of items that are accurate, diverse, and novel. The algorithm relies on four main components: a scalarization function, a set of recommendation quality metrics, a dynamic prioritization scheme for weighting these metrics, and a base multi-armed bandit algorithm. Results show that our algorithm provides improvements of 7.8% and 10.4% in click-through rate on two real-world large-scale datasets when compared to the single-objective state-of-the-art algorithm. (C) 2017 Elsevier B.V. All rights reserved. Predicting the properties of materials like concrete has proven a difficult task given the complex interactions among their components. Over the years, researchers have used statistics, machine learning, and evolutionary computation to build models in an attempt to accurately predict such properties. High-quality models are often non-linear, justifying the study of nonlinear regression tools. In this paper, we employ a traditional multiple linear regression method by ordinary least squares to solve the task. However, the model is built upon nonlinear features automatically engineered by Kaizen Programming, a recently proposed hybrid method. Experimental results show that Kaizen Programming can find low-correlated features in an acceptable computational time. Such features build high-quality models with better predictive quality than results reported in the literature. (C) 2017 Elsevier B.V. All rights reserved. The recent growth in the size of datasets requires scalability of data mining algorithms, such as clustering algorithms.
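The concrete-modeling approach described above, an ordinary least-squares fit over nonlinear features, can be sketched minimally. In this illustration the single engineered feature (x squared) is a hand-picked stand-in for the features Kaizen Programming would construct automatically, and the data are synthetic:

```python
def fit_ols(X, y):
    """Ordinary least squares via 2x2 normal equations: solve (X^T X) b = X^T y."""
    s00 = sum(r[0] * r[0] for r in X)
    s01 = sum(r[0] * r[1] for r in X)
    s11 = sum(r[1] * r[1] for r in X)
    t0 = sum(r[0] * yi for r, yi in zip(X, y))
    t1 = sum(r[1] * yi for r, yi in zip(X, y))
    det = s00 * s11 - s01 * s01
    return ((s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det)

# Engineered nonlinear feature: x -> x**2 (a stand-in for a feature a
# method like Kaizen Programming might discover automatically)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0, 5.0, 14.0, 29.0]           # y = 2 + 3*x**2, noise-free
X = [[1.0, x ** 2] for x in xs]       # intercept column + engineered feature
print(fit_ols(X, ys))  # recovers (2.0, 3.0)
```

The model itself stays linear in its coefficients, so the fit remains cheap and interpretable; all of the nonlinearity lives in the feature construction step.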
The MapReduce programming model provides the needed scalability, along with portability and automatic data safety and management. k-means is one of the most popular algorithms in data mining and can be easily adapted to the MapReduce model. Nevertheless, k-means has drawbacks, such as the need to provide the number of clusters (k) in advance and the sensitivity of the algorithm to the initial cluster prototypes. This paper presents two evolutionary scalable metaheuristics in MapReduce that automatically seek the solution with the optimal number of clusters and the best clustering structure for scalable datasets. The first is an algorithm able to iteratively enhance k-means clusterings through evolutionary operators designed to handle distributed data. The second applies evolutionary k-means to cluster each distributed portion of a dataset independently, combining the obtained results into an ensemble afterwards. The proposed techniques are compared asymptotically and experimentally with other state-of-the-art clustering algorithms also developed in MapReduce. The results are analyzed by statistical tests and show that the first proposed metaheuristic yielded results with the best quality, while the second achieved the best computing times. (C) 2017 Elsevier B.V. All rights reserved. The multiobjective unconstrained binary quadratic programming (mUBQP) problem is a combinatorial optimization problem that can represent several multiobjective optimization problems (MOPs). The problem can be characterized by the number of variables, the number of objectives and the objective correlation strength. Multiobjective evolutionary algorithms (MOEAs) are known as an efficient technique for solving MOPs. Moreover, several recent studies have shown the effectiveness of the MOEA/D framework applied to different MOPs. 
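The MapReduce adaptation of k-means mentioned earlier rests on a map phase that assigns points to their nearest prototypes and a reduce phase that recomputes each prototype as the cluster mean. A minimal single-machine sketch of one such iteration follows; the points and prototypes are made up, and this shows only the underlying k-means step, not the evolutionary metaheuristics themselves:

```python
from collections import defaultdict

def nearest(point, prototypes):
    """Index of the prototype closest to point (squared Euclidean distance)."""
    return min(range(len(prototypes)),
               key=lambda j: sum((p - q) ** 2 for p, q in zip(point, prototypes[j])))

def kmeans_iteration(points, prototypes):
    # Map phase: emit (cluster_id, point) pairs.
    pairs = [(nearest(p, prototypes), p) for p in points]
    # Shuffle phase: group emitted points by cluster id.
    groups = defaultdict(list)
    for cid, p in pairs:
        groups[cid].append(p)
    # Reduce phase: recompute each prototype as its cluster's mean.
    new_protos = []
    for j, proto in enumerate(prototypes):
        members = groups.get(j)
        if members:
            new_protos.append(tuple(sum(c) / len(members) for c in zip(*members)))
        else:
            new_protos.append(proto)  # keep prototypes of empty clusters
    return new_protos

points = [(0.0, 0.0), (0.0, 1.0), (9.0, 9.0), (10.0, 10.0)]
new_protos = kmeans_iteration(points, [(0.0, 0.0), (9.0, 9.0)])
# new_protos → [(0.0, 0.5), (9.5, 9.5)]
```

In an actual MapReduce job the map and reduce functions would run on distributed partitions of the data, with the framework performing the shuffle.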
Previously, we presented a preliminary study of an algorithm based on the MOEA/D framework and the bio-inspired metaheuristic called binary ant colony optimization (BACO). The metaheuristic uses a positive feedback mechanism based on the best solutions found so far to update a probabilistic model that maintains the learned information. This paper presents the improved MOEA/D-BACO framework for solving the mUBQP. Two components, (i) a mutation-like effect and (ii) a diversity-preserving method, are incorporated into the framework to enhance its search ability, avoiding premature convergence of the model and consequently maintaining a more diverse population of solutions. Experimental studies were conducted on a set of mUBQP instances. The results show that the proposed MOEA/D-BACO outperformed MOEA/D, which uses genetic operators, in most of the test instances. Moreover, the algorithm produced competitive results in comparison to the best approximated Pareto fronts from the literature. (C) 2017 Elsevier B.V. All rights reserved. This paper examines how teachers' knowledge relations and the profession's epistemic infrastructure shape collective autonomy. Professionals' autonomy derives partially from their responsibility for a specific knowledge base. This responsibility is currently challenged by educational policies and complex knowledge landscapes. Existing research has shown how epistemic policy instruments impact teachers' autonomy. However, less attention has been paid to how professional autonomy is informed by teachers' knowledge relations, and to collective, rather than individual, aspects of teachers' autonomy. Implications include how teachers can define the role of knowledge resources in professional work, and how the profession can navigate epistemic and political landscapes. (C) 2017 Elsevier Ltd. All rights reserved. This study documents the process of implementation of an adolescent reading intervention. 
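The positive feedback mechanism of the MOEA/D-BACO framework described above can be sketched as a bounded update of a probabilistic model over binary solutions; the update rule, learning rate, and probability bounds below are illustrative assumptions rather than the authors' exact formulation:

```python
def update_model(probs, best_solution, rho=0.2, p_min=0.05, p_max=0.95):
    """Pull each bit probability toward the best solution's bit value.

    rho is the learning (evaporation) rate; p_min/p_max clamp the
    probabilities so every bit keeps some chance of flipping, which is
    one simple way to counter the premature convergence that a
    diversity-preserving component targets.
    """
    updated = []
    for p, bit in zip(probs, best_solution):
        p = (1 - rho) * p + rho * bit   # positive feedback toward the best bit
        updated.append(min(p_max, max(p_min, p)))
    return updated

probs = update_model([0.5, 0.5, 0.99], [1, 0, 0])
# probs → approximately [0.6, 0.4, 0.792]
```

New candidate solutions would then be sampled bit-by-bit from these probabilities, evaluated on the mUBQP objectives, and fed back into the next update.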
Using data from observations of teachers (n = 17) during the 2013-14 school year, I conducted a nuanced descriptive analysis of fidelity of implementation (FoI). I also analyzed weekly logs completed by literacy coaches (n = 3) to examine variation in quantity and intensity of coaching. I then compared variation in coaching with variation in FoI, and finally compared FoI to outcomes for students (n = 287). FoI at observation 1 was found to predict coaching time, and FoI across both observations predicted student outcomes, emphasizing the critical role of implementation in intervention research. (C) 2017 Elsevier Ltd. All rights reserved. Although teacher education standards address preparing candidates to serve diverse learners, minimal guidance is available concerning specific program components and their influence on candidates' growth and development. Through constant comparative analysis of end-of-semester reflections, this exploratory, qualitative study investigated preservice special educators' developing perceptions about disability and cultural and linguistic diversity following field experiences aligned with courses. Participants reported a growing awareness of themselves, students they encountered, and the intersectionality between diversity and disability. Further, their insights reflected recognition of the combined influence of coursework, fieldwork, and opportunities for supported reflection. Implications for research and program development are offered. (C) 2017 Elsevier Ltd. All rights reserved. Teacher judgments of student achievement are increasingly used for high-stakes decision-making, making it imperative that judgments be as fair and reliable as possible. Using a large national database from New Zealand, we explored the relation between psychometrically designed standardized achievement results and teacher judgments in reading (N = 4771 students) and writing (N = 11,765 students) using hierarchical linear modelling. 
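The hierarchical linear (growth) models used in studies like these build on per-unit trajectories over repeated measures; the level-2 model then explains variation in those trajectories. A stdlib-only sketch of the first stage, an ordinary least-squares slope per student, using made-up scores:

```python
from statistics import mean

def ols_slope(times, scores):
    """Ordinary least-squares slope of scores regressed on times."""
    tbar, sbar = mean(times), mean(scores)
    num = sum((t - tbar) * (s - sbar) for t, s in zip(times, scores))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

# Three waves of hypothetical achievement scores for two students:
slopes = [ols_slope([0, 1, 2], s) for s in ([10, 12, 14], [10, 11, 15])]
# slopes → [2.0, 2.5]
```

A full multilevel model estimates these growth rates and their predictors jointly rather than in two stages, but the per-unit slope conveys the core idea.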
Our findings indicated that judgments were systematically lower for marginalized learners after controlling for standardized achievement differences. Additionally, classroom and school achievement composition were inversely related to teacher judgments. These discrepancies are concerning, with important implications for equitable educational opportunities. (C) 2017 The Authors. Published by Elsevier Ltd. This study explores the professional identity of Content and Language Integrated Learning (CLIL) teachers in Finnish primary education. It aims at explaining how CLIL teachers negotiate their pedagogical and relational identity, and how identity agency is exercised in negotiating a more encompassing professional identity. Thematic analysis of thirteen interviews outlines the bi-directional process of identity negotiation between personal and professional resources, and social contexts at work. The results highlight a connection between professional identity and agency, and suggest that identity negotiation is a process of working and sharing with others, but also individually. (C) 2017 Elsevier Ltd. All rights reserved. The implementation of educational innovations by teachers seems to benefit from a team approach and team learning. The study's goal is to examine to what extent transformational leadership is associated with team learning, and to investigate the mediating roles of participative decision-making, team commitment, task interdependence and teachers' proactivity in this association. Data were analysed using multilevel structural equation modeling (N = 992 teachers, 92 teams). Results show that transformational leadership had direct and indirect positive associations with team learning through all mediators. These results provide insights into how transformational leaders can have a positive influence on team learning. (C) 2017 Elsevier Ltd. All rights reserved. 
Students in teacher training programs are familiar with the use of e-mail, blogs, and instant messaging. However, few studies have investigated the use of technology in field experiences in teacher training programs. We investigated whether asynchronous computer-mediated communication (ACMC) influences teacher efficacy during a practicum. The results indicated that such devices were not as effective in enhancing teacher efficacy as many studies had hypothesized. This study revealed the importance of mentors and experienced teachers when adopting ACMC devices and suggested caution and evaluation regarding aspects such as learners' experience and the purpose of using the device as a learning tool. (C) 2017 Elsevier Ltd. All rights reserved. In response to globalization, leaders have called for more global education in K-12 schools. This study utilized a sequential exploratory mixed methods design to validate the construct of teaching for global readiness. After exploratory qualitative analysis of 24 expert teacher interviews, an instrument was developed and administered to K-12 U.S. classroom teachers. Based on exploratory and confirmatory factor analyses (EFA and CFA), four factors were interpreted: situated practice, integrated global learning, critical literacy instruction, and transactional experiences. The end product was a measurement model and scale of teacher practices related to global readiness instruction. (C) 2017 Elsevier Ltd. All rights reserved. Teachers' conceptions about assessment influence their classroom assessment practices. In this investigation, we examined 179 K-12 teachers' conceptions of the purposes of assessment from a person-centered perspective. An exploratory factor analysis of teachers' responses to the Conceptions of Assessment Instrument yielded a three-factor model: assessment as valid for accountability, as improving teaching and learning, and as irrelevant. 
Next, we used cluster analysis to identify belief profiles of teacher groups: Cluster-1: Moderate, Cluster-2: Irrelevant, Cluster-3: Teaching and Learning. Within and across cluster comparisons revealed significant differences indicating that these are distinct profiles: teachers can, and do, hold multiple beliefs about assessment simultaneously. (C) 2017 Elsevier Ltd. All rights reserved. Reporting on a study of school based playwriting pedagogy, this article explores the teaching and learning experience created by teachers for students writing a play for external assessment. It explores teachers' views of their role in teaching a creative task and their views on the form and content of input and feedback. It argues that the teachers' belief in intrinsic creativity encouraged passivity in pedagogy and a focus on reactive feedback and problematisation, which paradoxically hindered student progress and proficiency. This article suggests the need for a paradigm shift, repositioning the teacher from passive facilitator to an interventionist dramaturg. (C) 2017 Elsevier Ltd. All rights reserved. Growing numbers of young people are disclosing that they are trans or gender diverse, requiring affirming and informed responses from schools. This article reports on a survey examining attitudes towards inclusion, comfort, and confidence amongst 180 South Australian primary school teachers and pre-service teachers. The findings suggest that women held more positive attitudes and had greater comfort in working with trans and gender diverse students than men, and that awareness of programs designed to increase understanding was related to more positive attitudes, and greater comfort and confidence. The article discusses the need for further training alongside additional resourcing of initiatives aimed at facilitating inclusion. (C) 2017 Elsevier Ltd. All rights reserved. 
Mindfulness training may hold promise as a measure to prevent future burnout and turnover among early childhood education students. A lab-based pilot and a classroom-based feasibility study were designed to investigate an effective way to introduce mindfulness meditation. Results suggested that two types of audio-guided mindfulness meditation, a sitting meditation and a compassion meditation, reduced stress levels, and that the compassion meditation was perceived as more comfortable than the sitting meditation. As a tentative conclusion, the compassion meditation may be introduced first, before the sitting meditation. Additional studies should examine the effects of longer-term meditation practice on teaching career development. (C) 2017 Elsevier Ltd. All rights reserved. Comprehensive, school-based student support interventions are an approach to addressing out-of-school factors that may interfere with students' achievement and thriving. The effect of these approaches on teachers has not been extensively studied, although the literature points to potential benefits. This paper explores the impact of student support on teachers through a case study of City Connects, which collaborates with every teacher in a school to tailor services for students. A mixed-methods study finds that teachers report new awareness of students' out-of-school lives, develop classroom management strategies, and feel more supported. Implications for teacher education and holistic student support are discussed. (C) 2017 Elsevier Ltd. All rights reserved. Twenty-one preschool teachers in California participated in a Q methodology study exploring beliefs about linguistic diversity. 
Four perspectives emerged from the factor analysis: Aesthetic Caregivers emphasized the importance of effectively negotiating student differences, Bilingualism Advocates supported bilingualism to reinforce family ties, Diversity Accommodators focused on adapting teaching methods to meet English learners' individual needs, and an English Acquisition Supporter highlighted the need to learn English. All teachers agreed that linguistic diversity contributes positively to the classroom. Findings present a nuanced picture of these teachers' beliefs about linguistic diversity, illustrating the usefulness of Q methodology as a mixed-methods exploration of perspectives. (C) 2017 Elsevier Ltd. All rights reserved. This study evaluates changes in teachers' instructional skills after participating in an intensive data-based decision making (DBDM) intervention for grade 4 teachers. Teachers were recorded three times prior to the intervention, and three times after the intervention, and all recordings were rated by four raters. The data was analyzed by means of advanced item response theory (IRT) techniques, combined with a generalizability model. Teachers significantly improved their DBDM related skills. Teachers' initial basic teaching skills did not seem to matter for the extent to which teachers developed their DBDM related instructional skills. Suggestions for future research are presented. (C) 2017 Elsevier Ltd. All rights reserved. This paper presents the development and pilot of the Teacher Wellbeing Web-based Application (t*) to capture real-time teacher emotional states and triggers. In Phase 1, the t* was developed from literature as an innovative real-time measure of 11 teacher emotional states and five associated triggers. In Phase 2, the t* was piloted with 11 teachers in Australian high schools. Data were collected prior to and after teaching eight modules about emotion regulation to students. The t* appears to differentiate common teacher emotions and triggers. 
Student behavior and workload were associated with negative emotional states, while time in the staffroom had a positive effect. (C) 2017 Elsevier Ltd. All rights reserved. To learn science and demonstrate science learning, school students must bridge the gap between everyday use of language and image and the specialised use of language and image needed to achieve science curriculum outcomes. Pre-service teachers studying at a regional Australian university were shown how to help their future students bridge this gap. A transdisciplinary model was used to demonstrate the teaching of specialised science literacies integrated with the teaching of science in the middle school years, resulting in high levels of engagement, more effective use of learning time, and valuable opportunities for teacher educator professional learning. (C) 2017 Elsevier Ltd. All rights reserved. The purpose of this paper is to examine how authenticity influences students' discussions of socio-scientific issues (SSI). The students were found to bridge school knowledge and everyday knowledge, i.e. to enter a "third space", in their explorative discussions. When the SSI task changed into a decision-making discussion for communication with an authentic stakeholder, the students excluded many perspectives. In the process, authenticity caused a loss of relevance for one discourse and several figured worlds, including the students' emotional reasoning. While losing emotional aspects, students' reasoning became more precise when grounded in rational reasoning, supporting well-informed decisions. (C) 2017 The Authors. Published by Elsevier Ltd. 
As part of a two-year professional development program for in-service teachers leading to endorsement in English as a Second Language (ESL), this investigation examined teachers' perceptions of the role of professional learning communities in their professional development experience, particularly in their sharing and transfer of learning into their instructional practice. Results from this mixed-methods research indicated that teachers perceived their professional learning communities, especially the reflective dialogue enacted during their weekly Try It Out discussions based on peers' strategy implementations, as the component of their professional development program most relevant to transferring learning into their instructional practice. (C) 2017 Elsevier Ltd. All rights reserved. This study examined the trajectories of depressive and anxious symptoms among early-career teachers (N = 133) as they transitioned from their training programs into their first year of teaching. In addition, perceived school climate was explored as a moderator of these trajectories. Multilevel linear growth modeling revealed that depressive and anxious symptoms increased across the transition, and that negative perceived school climate was related to more sharply increasing symptoms. Results suggest that this career stage may be a time when teachers are particularly vulnerable to declines in mental health, and they speak to some within-school features that may be related to teachers' experiences. (C) 2017 Elsevier Ltd. All rights reserved. Teaching-related motivations are often assumed to influence teaching quality; however, empirical evidence regarding the directionality of such influences is scarce. The present study thus examined the reciprocal links between teaching-related motivations (self-efficacy and enthusiasm for teaching) and student-reported teaching quality (classroom management, learning support, and cognitive activation). 
Two-level cross-lagged panel analyses across three time points (with an initial sample of 165 secondary-level mathematics teachers and their 4273 students) revealed no significant cross-lagged effects when teachers' stable inter-individual differences were taken into account. Our findings suggest that teachers' motivations are remarkably stable over time. (C) 2017 Elsevier Ltd. All rights reserved. This program evaluation examines the effectiveness of a school-based dental clinic. A repeated-measures design was used to longitudinally examine secondary data from participants (N = 293). Encounter intensity was developed to normalize the data. Multivariate analysis of variance and the Kruskal-Wallis test were used to investigate the effect of encounter intensity on the change in decay, restorations, and treatment urgency. Pearson's correlations were used to measure the strengths of the associations. Encounter intensity had a statistically significant effect on change in decay (p = .005), restorations (p < .001), and treatment urgency (p = .001). As encounter intensity increased, there was a significant association with a decrease in decay (-.167), an increase in restorations (.221), and a reduction in referral urgency (-.188). Incorporating dental care into a school-based health center resulted in improved oral health in underserved children while overcoming barriers that typically restrict access. The collaboration of school nurses with the school-based dental clinic was an important element for maximizing student access to dental care. School-based asthma education offers an opportunity to reach low-income children at risk for poor asthma control. Iggy and the Inhalers (Iggy) is an asthma education program that was implemented in a Midwest metropolitan school district. The purpose of this study was to conduct a comprehensive evaluation of the program. 
Objectives included increasing children's asthma-related knowledge and families' awareness of asthma management, while cultivating collaboration between school nurses and asthma providers. A total of 173 students participated in Iggy education, with 147 completing both the initial and 1-month posttests. Thirty-one parents and seven school nurses provided qualitative feedback. Iggy was well received by children, parents, and school nurses. Asthma knowledge increased significantly (p < .001) between pretest and posttest, and this increase was retained at 1-month follow-up. This program evaluation suggests that our program had a significant, sustained impact on students' asthma knowledge. It also supports the value of collaboration between asthma providers and school nurses. A significant proportion of youth engage in health risk behaviors, which are of concern, as they are associated with adverse health consequences across development. Two factors associated with engagement in such behaviors are emotion dysregulation and impulsivity. Dialectical behavioral therapy (DBT) is an effective intervention that enhances emotion regulation skills to reduce problem behaviors among adolescent populations; however, limited research has been conducted implementing the program within school settings. The current study was a 9-week DBT skills group conducted among 80 middle school youth, with pre-posttest data for 53 students. Findings indicated the feasibility of implementing the program in schools and preliminary evidence of efficacy in decreasing youths' likelihood of engaging in risky behaviors, particularly among youth high on an emotion-based impulsivity trait. A brief DBT skills group may be an effective program for school nurses and health-care teams to use to reduce health risk behaviors among school-aged youth. 
Guided by social cognitive theory, this randomized controlled trial tested Make a Move, a provider-led intervention for Head Start parents aimed at producing changes in knowledge, attitudes, and behavior regarding physical activity and healthy eating. Participants were parents of children ages 3-5 years enrolled in a Head Start program. Participants completed a 57-item questionnaire at baseline and postintervention. The Wilcoxon rank-sum test revealed a statistically significant difference between the intervention and control groups in scores on knowledge of healthy eating (z = 1.99, p = .05), attitude toward physical activity (z = 2.71, p < .01), and behavior of physical activity (z = 2.03, p = .04). Ten participants (77%) completed all four intervention sessions. This study provided new insights into the relationship of a provider-led intervention with respect to knowledge, attitudes, and behaviors in healthy eating and physical activity. Children's use of the toilet at school, although rarely explored, is an important facet of school experience with consequences for physical and psychological health. A mixed methods study investigated the views of 25 children (4-5 years) regarding potential stressors in the first school year, including views of toileting, in Dublin, Ireland. Despite very positive responses to school, most responses to toileting (15 of 25) were mixed or negative. Although some liked to go, or noted the toilets were clean, most indicated delayed toilet use (bursting to go) and ambivalent or negative experiences such as fear of not identifying the right toilet, fear of being alone, lack of privacy, and potential bullying. Many children did not expect to receive help from the teacher. As delaying toilet use can have lasting health consequences, teacher-nurse collaboration could be used to develop whole-school policies to support children's early adjustment in this sensitive area of functioning. 
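The Wilcoxon rank-sum test reported above can be computed from first principles; a stdlib-only sketch of the large-sample z statistic follows, using midranks for ties, omitting the tie correction to the variance for brevity, and with illustrative data:

```python
import math

def rank_sum_z(x, y):
    """Normal-approximation z statistic for the Wilcoxon rank-sum test."""
    combined = sorted((v, g) for g, sample in ((0, x), (1, y)) for v in sample)
    # Assign midranks so tied values share the average of their positions.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        mid = (i + 1 + j) / 2          # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = mid
        i = j
    # Sum of ranks of the first sample, then standardize.
    w = sum(ranks[k] for k, (_, g) in enumerate(combined) if g == 0)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mu) / sigma

z = rank_sum_z([1, 2, 3], [4, 5, 6])   # ≈ -1.96
```

In practice a library routine (e.g. SciPy's rank-sum test) would also apply the tie correction and return a p-value.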
Students with physical symptoms and diseases may be at an increased risk of peer victimization. This study examined the associations of several medical conditions (obesity, asthma, allergy, epilepsy, and diabetes) with experience of physical, verbal, and relational victimization among children. A sample of 6,233 fourth-grade students from 314 elementary schools in Taiwan was recruited for the analysis. The mean age of the sample was 10.5 years, with an even gender distribution (50.3% male and 49.7% female). Children with asthma, allergy, and epilepsy reported higher frequencies of peer victimization. Those who took daily medications or received treatment were also at a higher risk of being victimized. Diabetes and obesity were not found to be associated with peer victimization. The findings highlight that children with physical conditions suffer maltreatment from peers. Sensitivity training should be provided to school health professionals so they can evaluate the risk of victimization among students with special needs during assessment. The purpose of this study was to describe college-aged females' human papillomavirus (HPV) knowledge and beliefs, perceptions and perceived benefits of the HPV vaccine, and to identify characteristics associated with vaccination status and support for HPV vaccine mandates. Data were collected from 1,105 females by an Internet-delivered questionnaire during February to March 2011. This descriptive study utilizes chi-square tests and t-tests to compare participant responses. Mean HPV-related knowledge scores were 8.08 out of 11 points. Those who initiated HPV vaccination were significantly younger, single, had engaged in sex, were sexually active, and had had a Pap test. Participants who had more friends receiving the vaccine were significantly more likely to support mandates for 9- to 11- and 12- to 17-year-olds and were more likely to complete the HPV vaccination cycle. 
Findings suggest the importance of educational programs adopted and delivered by school nurses, which aim to improve student knowledge and reduce misconceptions related to the HPV vaccine and vaccination mandates. The beneficiaries of a corporate defined benefit pension plan in financial distress care about the security of their promised pensions. We propose to value the pension obligations of a corporate defined benefit plan using a discount rate that reflects the funding ability of the pension plan and its sponsoring company, and therefore depends, in part, on the chosen asset allocation. An optimal valuation is determined by a strategic asset allocation that is optimal given the risk premium a representative pension plan member demands for being exposed to funding risk. We provide an empirical application using the General Motors pension plan. The article investigates whether the market concentration is associated with an insurer's financial stability in the U.S. property-liability insurance industry over the period 1992-2010. We employ two-stage least squares techniques with instrumental variables to address likely endogeneity problems. The results show that higher market concentration is associated with lower financial stability of insurance firms, consistent with the concentration-fragility view. Our results indicate that firm-specific characteristics including firm size, underwriting leverage, organizational form, product and geographical diversification, along with the exposure to natural catastrophes and macroeconomic conditions are important determinants in ensuring a safe and sound insurance system. Robustness tests using various estimation methods and alternative measures of financial stability present consistent results. This study investigates the impact of organization structure on corporate demand for reinsurance. 
Previous research has shown that the unique corporate groupings in Japan known as keiretsu have relatively low bankruptcy costs, low agency conflicts, low information asymmetry, and low effective taxes. These conditions should mitigate the benefits of reinsurance purchase. This conjecture is tested by examining the demand for reinsurance of Japanese non-life insurance companies during 1974-2010. Consistent with the prediction, keiretsu non-life insurers purchase less reinsurance than independent non-life insurance companies. The effects of the keiretsu structure also receded when the keiretsu groupings' power weakened after the asset bubble burst and the breakdown of the convoy system in the mid-1990s. Consistent with previous studies, Japanese mutual insurers also purchase more reinsurance than stock insurers. Empirical studies have found that high litigation costs often discourage small firms from investing in R&D, as they fear their patent will be infringed and they will not be able to afford litigation. As a solution, firms have been encouraged to purchase insurance policies that, by covering legal costs in the event of a trial, serve as a commitment to litigate, so that settlement terms are more favorable to the insured and potential infringement is less likely to occur. However, very few firms are purchasing insurance and the market remains poorly developed throughout the world. I show that firms might be discouraged from buying insurance because of information asymmetries, not only with insurance companies but also with their competitors. I study the situation of a patent holder who knows perfectly the validity and enforceability (strength) of her patent, which has been infringed by a competitor with less information about the patent. The patent holder can purchase insurance to have a credible threat to litigate and thereby increase the infringer's settlement offer. But the decision to buy insurance conveys information about the patent's strength to the infringer. 
As a result, the patent holder may prefer not to be insured rather than transmit this information. This signaling effect can yield different equilibria, in particular a pooling equilibrium with no insurance, in which no patent holder purchases insurance. I study whether this situation might be improved by imposing mandatory insurance or by giving the insurer a share of the litigation proceeds. The financial guarantees embedded in variable annuity contracts expose insurers to a wide range of risks, lapse risk being one of them. When policyholders' lapse behavior differs from the assumptions used to hedge variable annuity contracts, the effectiveness of dynamic hedging strategies can be significantly impaired. By studying how the fee structure and surrender charges affect surrender incentives, we obtain new theoretical results on the optimal surrender region and use them to design a marketable contract that is never optimal to lapse. Although proponents of tort reform argue that it will benefit consumers through lowered insurance premiums and increased insurance availability, to date there is limited empirical evidence linking tort law to consumer outlays. Using data from the Consumer Expenditure Survey and a differences-in-differences research design, this article examines whether any of several common state-level modifications to tort law affect consumer costs for auto insurance. Expenditures on auto insurance fall by 12 percent following no-fault repeal and 6 percent following relaxation of collateral source restrictions, but are not measurably affected by bad faith reform, modifications to joint and several liability, or noneconomic damage caps. None of the modifications to tort law generate measurable increases in auto insurance take-up. There is little variation in the impact of the reforms across income, education, and age groups, but no-fault repeal and collateral source reform do disproportionately benefit consumers with lower-cost policies. 
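The differences-in-differences design used in the tort-reform study compares pre/post changes in treated states against the same changes in control states. A minimal sketch of the canonical 2x2 estimator, with hypothetical expenditure figures (the regression version with state and time fixed effects reduces to this in the two-group, two-period case):

```python
from statistics import mean

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """2x2 differences-in-differences: treated pre/post change
    minus control pre/post change."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical average auto-insurance expenditures (dollars):
effect = did_estimate(
    treat_pre=[1000, 1040],  # reform states, before repeal
    treat_post=[900, 920],   # reform states, after repeal
    ctrl_pre=[980, 1020],    # non-reform states, same periods
    ctrl_post=[990, 1010],
)
# effect → -110
```

The control group's change nets out common time trends, so the estimate isolates the reform effect under the parallel-trends assumption.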
The prediction of insurance liabilities often requires aggregating loss payment experience from multiple insurers. The resulting data set of intercompany loss triangles displays a multilevel structure of claim development, in which a portfolio consists of a group of insurers, each insurer comprises several lines of business, and each line contains various cohorts of claims. In this article, we propose a Bayesian hierarchical model to analyze intercompany claim triangles. A copula regression is employed to join the multiple triangles of each insurer, and a hierarchical structure is specified on the major parameters to allow for information pooling across insurers. Numerical analysis is performed on an insurance portfolio of multivariate loss triangles from the National Association of Insurance Commissioners. We show that prediction is improved by borrowing strength within and between insurers, based on training and holdout observations. This article investigates scale economies and the optimal scale of pension funds, estimating different cost functions with varying assumptions about the shape of the underlying average cost function: U-shaped versus monotonically declining. Using unique data for Dutch pension funds over 1992-2009, we find that unused scale economies in administrative activities are indeed large and concave, that is, huge for small pension funds and decreasing with pension fund size. We observe a clear optimal scale of around 40,000 participants during 1992-2000 (pointing to a U-shaped average cost function), which increases in subsequent years to a size above that of the largest pension fund, pointing to monotonically decreasing average costs. These model-based outcomes are roughly in line with the results of a survivorship analysis. This article demonstrates the presence of adverse selection in the group insurance market. Conventional wisdom suggests that group insurance mitigates adverse selection because it minimizes individual choice.
We complement this conventional wisdom by analyzing a group insurance scenario in which individual choice is excluded, and we find that group insurance alone is not effective enough to eliminate adverse selection; that is, between-group adverse selection exists. Between-group adverse selection, however, disappears over time if the group renews with the same insurer for a certain period. Our results thus indicate that experience rating and underwriting based on information that insurers learn over time are important in addressing adverse selection. The present study meta-analyzed 45 experiments, with 959 subjects and 463 activation foci, reported in 43 published articles that investigated the neural mechanisms of moral functions by comparing neural activity between moral and non-moral task conditions using the Activation Likelihood Estimation method. The present study examined the common activation foci of morality-related task conditions. In addition, the study compared the neural correlates of moral sensitivity with those of moral judgment, the two functional components in the Neo-Kohlbergian model of moral functioning. The results showed that brain regions associated with the default mode network were significantly more active during morality-related task conditions than during non-morality task conditions. These brain regions were also commonly activated in both moral judgment and moral sensitivity task conditions. In contrast, the right temporoparietal junction and supramarginal gyrus were found to be more active only during conditions of moral judgment. These findings suggest that the neural correlates of moral sensitivity and moral judgment are perhaps commonly associated with brain circuitries of self-related psychological processes, but that the neural correlates of the two functional components are distinguishable from each other.
Coaches have the potential to influence athletes' moral development, especially at the collegiate level, a powerful period of growth in young adults' lives. Although coaches are central agents in athletes' moral education, their moral development and understanding of professionalism are currently unknown. The purpose of this study was to increase understanding of the ethical professional identity development of sport coaches. In-depth interviews based on moral exemplar and moral identity development theories were conducted with NCAA Division-I collegiate head coaches (n = 12) in the United States who were peer-nominated as moral exemplars. The interviews elicited themes of moral exemplarity, professionalism, and above-average ethical identity development. The results can inform and improve coach education for current and future members of the profession. One hundred and fifty-six adolescents, drawn from a high school in a Midwestern suburb, provided judgments of a hypothetical incident of homophobic harassment with either a male or a female victim. Participants also completed a revised version of the Macho Scale, measuring their endorsement of gender stereotypes (α = .75). Without the interaction term, victim gender was not predictive of judgments of the harassment; endorsement of gender stereotypes, however, decreased the odds of believing the behavior was completely wrong (χ²(1) = 9.18, p < .01). Once added, the interaction term was the only significant variable in the model, demonstrating that endorsement of gender stereotypes affects judgments of homophobic harassment of male victims, but not of female victims (χ²(1) = 4.78, p = .03). As more schools invest resources in anti-harassment initiatives, our findings suggest that discussion of gender and gender stereotypes is essential. The aim of the present study was to investigate how bullying incident participant roles and moral reasoning relate to each other in adolescents.
To do so, we examined sociomoral judgments about hypothetical bullying incidents and moral disengagement in adolescents identified as bullies, defenders of the victim, and passive bystanders. Six hundred and twenty-six high school students (13 to 15 years old) took part in this study, and 131 were assigned a specific bullying incident participant role through peer nomination. The findings reveal that defenders of the victim showed greater and more uniform moral sensibility than both bullies and passive bystanders did. Sociomoral reasoning helped differentiate the bully subtypes (bully-leaders and bully-followers) from passive bystanders, beyond their displaying greater moral disengagement than defenders did. This community-based research investigated the relationships among Holocaust knowledge, Holocaust education experiences, and citizenship values in adults residing in the US. This study contributes to the literature an inferential investigation that reports positive civic attitudes associated with Holocaust education. A moderate correlation was identified, with approximately 10% of the variance in citizenship scores explained by Holocaust knowledge. Multiple regression analyses revealed Holocaust knowledge as the strongest predictor of citizenship values, followed by gender, suburban/urban childhood community, and learning about the Holocaust in school, respectively. Of the eight unique Holocaust education experiences examined, learning about the Holocaust in school was the strongest predictor of citizenship values, followed by hearing a Holocaust survivor's testimony in person or via electronic media, and visiting a Holocaust museum, respectively. The findings can inform Holocaust education policy, research, and practice, including the potential role of Holocaust curricula in the larger context of moral and civic education. Success and failure in entrepreneurship affect not only entrepreneurs but also many participants in their entrepreneurial relationships.
Studies have led us to consider the social and moral dimensions within entrepreneurship education. Doubts arise, however, when one asks how moral principles can be included in entrepreneurship education in order to produce more socially responsible graduates. In the current debate, the role that religions may play in providing moral teachings for entrepreneurship is becoming increasingly important, and religious narratives are growing in significance as educational tools. In this article we present an integrative proposal, using as an example Christian narratives taken from the Epistle of St. James. In this mixed-methods study we investigated the development of a generalized ethics decision-making model that can be applied when considering ethical dilemmas related to student assessment. For the study, we developed five scenarios that describe ethical dilemmas associated with student assessment. Survey participants (i.e., educators) completed an online survey to express their decision-making processes when faced with ethical dilemmas relating to student assessment. Based on the literature and the educators' written responses to the scenarios, the elements to consider in an ethics decision-making model for student assessment include the following: (1) the critical incident giving rise to the ethical dilemma; (2) identification of the conflicting elements; (3) decisions about the ethicality of those elements; (4) justification of the decisions; (5) implications; and (6) alternative suggestions. This model offers guidance to educators in considering the dimensions of an ethical dilemma in assessment prior to making a decision. For this study, the researchers designed learning activities to enhance students' high-level cognitive processes. Students learned new information in a classroom setting and then applied and analyzed their new knowledge in familiar authentic contexts by taking pictures of objects found there, describing them, and sharing their homework with peers.
An experiment was carried out in which 58 junior high school students were divided into a control (n = 30) and an experimental (n = 28) group. The control group studied and completed learning activities with traditional textbooks, while the experimental group used electronic textbooks and a learning system, Virtual Pen for Tablet PC (VPenTPC), in order to gauge the feasibility of the proposed approach. The post-test results show a significant difference between the control and experimental groups. In our analysis of the various approaches students took to complete the task, we identified thirty cognitive and metacognitive strategies for using mobile technology, from which we selected the ten most frequently used. The results show that low-ability students make better use of these strategies than their high-ability peers, resulting in significant learning gains. The results also show that most students perceive VPenTPC positively. Based on these results, we suggest some implications, along with conclusions and directions for future research. This study explored the use of wikis in a science inquiry-based project conducted with Primary 6 students (aged 11-12). It used an online wiki-based platform called PBworks and addressed the following research questions: (1) What are students' attitudes toward learning with wikis? (2) What are students' interactions in online group collaboration with wikis? (3) What have students learned with wikis in a science inquiry-based project in a primary school context? Analyses of the quantitative and qualitative data showed that, with respect to the first research question, the students held positive attitudes toward the platform at the end of the study. With respect to the second research question, the students actively engaged in various forms of learning-related interactions using the platform, and these extended to more meaningful offline interactions.
With respect to the third research question, the students developed Internet search skills, collaborative problem-solving competencies, and critical inquiry abilities. It is concluded that a well-planned wiki-based learning experience, framed within an inquiry project-based approach and facilitated by students' online collaborative knowledge construction, is conducive to the learning and teaching of science inquiry-based projects in primary school. Difficulties in learning statistics, primarily at the college level, led to a reform movement in statistics education in the early 1990s. Although much work has been done, effective learning designs that facilitate active learning, conceptual understanding of statistics, and the use of real data in the classroom are needed. Guided by Merrill's First Principles of Instruction (First Principles), a blended, introductory college-level statistics course that incorporated real data was designed and implemented. A single descriptive case design was used to investigate how the course design facilitated learning and the development of statistical conceptual understanding (i.e., statistical literacy, reasoning, and thinking skills). Results from both quantitative and qualitative data analyses indicated that the course designed using First Principles as a guide was effective in promoting students' conceptual understanding in terms of statistical literacy, reasoning, and thinking. However, students' statistical literacy, specifically their understanding of statistical terminology, did not develop to the expected level. Adopting a two-phase explanatory sequential mixed-methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In phase 1, the quantitative investigation, two-level growth curve models were fitted using online repeated-measures survey data collected from 68 pre-service teachers during their student teaching.
The results revealed significant progress in readiness for technology integration during student teaching and significant variability in individual change trajectories of readiness for technology integration. Two dummy variables, prior teaching (0 = "having no prior teaching experience"; 1 = "having prior teaching experience") and grade level (0 = "elementary level"; 1 = "secondary level"), were identified as significant predictors of the shape of individual change trajectories of readiness for technology integration. In phase 2, the qualitative investigation, follow-up interview data were collected from 11 pre-service teachers among those who had participated in the online surveys. The interview data were analyzed both deductively and inductively, yielding clues and insights for interpreting and understanding the quantitative results from phase 1. Based on its quantitative and qualitative results, this study makes recommendations for future technology integration research and for improving pre-service teachers' technology use experience during student teaching. Challenges of broadening access, escalating cost, maintaining desirable quality, and enhancing meaningful learning experiences in African higher education (HE) have spurred debates on how to restructure higher education delivery to meet the diverse needs of heterogeneous learners and adapt pedagogical models to the educational realities of low-income African countries. In view of these complexities, Massive Open Online Courses (MOOCs) have been advanced by Western consortia, universities, and online platform providers as panaceas for disrupting/transforming existing education models in African universities. MOOCs have been touted as disruptive innovations with the potential to create new niche markets for HE courses, disrupt traditional models of instruction and content delivery, and create new revenue streams for higher education.
Yet academic elitism, which manifests in the exclusive selection of top American universities to develop, host, and deliver MOOCs; MOOC providers' use of university brand and reputation as benchmarks for charging recruitment fees to headhunters recruiting MOOC graduates; and their complex business models involving the sale of students' big data (e.g., learning analytics) for profit seem inconsistent with claims about the philanthropic and egalitarian drive of MOOCs. Drawing on disruptive innovation theory and a review of mainstream literature on MOOC adoption in the American and African tertiary sectors, this study argues that behind the MOOC rhetoric of disrupting and democratizing higher education lie the projection of top academic brands on the marketing pedestal, financial piggybacking on the hype, and the politics of academic exclusion. INTUITEL is a research project aiming to offer a personalized learning environment. The INTUITEL approach includes an Intelligent Tutoring System that gives students recommendations and feedback about the best learning path for them according to their profile, learning progress, context, and environmental influences. INTUITEL combines efficient pedagogically based recommendations with freedom of choice, and it introduces this tutoring support into different Learning Management Systems. During the INTUITEL project, various software and pedagogical testing procedures were defined to provide the development teams with both summative and formative feedback. The current paper describes the initial user test, which was conducted at the University of Valladolid for the course "Network Design". The experiment focused on real learners' reactions to INTUITEL recommendations received through an INTUITEL-enabled LMS.
Nineteen students participated in a two-phase testing procedure designed to analyze learners' behavior with INTUITEL, as well as to obtain information about how learners perceive the influence and usefulness of the tutoring system in online learning courses. Results show that students with INTUITEL follow learning paths that are more suitable for them. In addition, the general satisfaction level of participants is high. Most learners appreciate INTUITEL, would follow its recommendations, and consider the messages shown by INTUITEL useful and caring. This report concerns the use of a digital tutor to accelerate veterans' acquisition of expertise and improve their preparation for the civilian workforce. As background, it briefly discusses the need to improve veterans' employability, the technology of digital tutoring, its ability to produce advanced levels of technical expertise, and the design, development, and earlier assessment of a specific digital tutor, which motivated the use of this tutor to develop veterans' technical expertise and employability. The report describes the tutor's use in an experimental program for veterans, an assessment of its success in preparing veterans for employment, and the return on investment from its use compared with other investments in education and training. Sensory integration dysfunction (SID, also known as sensory processing disorder, SPD) is a condition in which a person's multisensory integration fails to process and respond adequately to the demands of the environment. Children with SID (CwSID) are also learners with disabilities with regard to responding adequately to the demands made by a learning environment, and usually have performance difficulties in one or more areas of life, such as productivity, leisure and play, or activities of daily living, which can reduce their learning motivation.
This study seeks to develop a motion-sensing digital game-based SID therapy to help such children become more engaged in physical training, in the hope that by improving their bodily-kinesthetic intelligence these children can be more confident in facing various learning challenges, such as those associated with social participation. This research applied the Microsoft Kinect system and a specially designed motion-sensing game related to SID, and used interviews to collect responses from the children and their parents. The Chinese version of the sensory profile and clinical observation were applied to evaluate the effects of the therapy, and the triangulation method applied in the data analysis reveals improvements for all participants on eight clinical observation items. The results imply that our approach was able to increase the learning motivation and actions of the CwSID who participated in this study, with better results than those obtained in our earlier work, which used the Nintendo Wii device and its commercially available games. Informal science learning has drawn the attention of researchers, educators, and museum administrators for a long time. However, the problem of how to better support visitors in engaging with exhibits and improving their informal science learning performance remains unsolved. Context-aware technologies have the advantages of fostering learning interest and providing real-time feedback. Previous studies have examined the effectiveness of the 5E Learning Cycle in science learning. To address the problem, this study develops a mobile label assisted system using the 5E Learning Cycle approach, based on iBeacon technology, in a science museum. A total of 43 college students participated in this study. Participants from different majors were assigned to two groups in an effort to make the groups relatively equivalent in terms of student majors.
One group was the experimental group (mobile label assisted visiting mode, n = 21), and the other was the control group (traditional visiting mode, n = 22). From the results of learning performance, stay time, behavioral pattern analysis, and interviews, it was found that the mobile label assisted system can effectively guide visitors to interact with exhibits, conduct thoughtful learning, and prolong their stay time, and that visitors are willing to visit the science museum with it. This was one of the very few studies focusing on the application of iBeacon to the design of a mobile label system in a science museum, and it shows that iBeacon technology has huge potential for application in future science museums. This study examines the major factors that may hinder or enable the adoption of e-learning systems by university students in a developing country (Qatar) and a developed country (USA). To this end, we used the extended Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) with trust as an external variable. By means of an online survey, data were collected from 833 university students, from one university in Qatar and another in the USA. Structural equation modelling was employed as the main method of analysis. The results show that performance expectancy, hedonic motivation, habit, and trust are significant predictors of behavioural intention (BI) in both samples. However, contrary to our expectation, the relationship between price value and BI is insignificant. Our results also show that effort expectancy and social influence increase students' adoption of e-learning systems in the developing country but not in the developed country, whereas facilitating conditions increase e-learning adoption in the developed country but not in the developing country. Overall, the proposed model achieves an acceptable fit and explains 68% of the variance in BI for the Qatari sample and 63% for the USA sample.
These results and their implications for both theory and practice are described. The One Laptop per Child (OLPC) initiative has been at the forefront of introducing low-cost computers in developing countries. We argue that the problem is not so much a focus on the provision of affordable technologies as the lack of consideration of deeply contextualized implementation design and the lack of understanding of the psychological mechanisms at the user level that influence learning impact. A longitudinal quasi-experimental design across nine rural Indian primary schools involved pre- and post-experiment measures conducted with both test (n = 126) and control (n = 79) groups. The study's objective was to prioritize local contexts during technology implementation design in order to attain educational impact in terms of improved learning outcomes for students. The Contextualized-OLPC education project utilized strategies identified by the Technology-Community-Management model to address contextually germane factors of teacher training, unbiased gender access, and local language use. A second objective was to assess the impact of technology introduction while countering extant techno-determinist approaches to impact assessment. We first demonstrated that technological knowledge was positively associated with functional literacy. We then situated the experiment in social cognitive theory to demonstrate that computer self-efficacy mediates the relationship between the technological literacy attained as a consequence of the Contextualized-OLPC education project and a specific learning outcome, functional literacy. Overall, the research illustrated that giving primacy to the mere deployment of OLPC laptops has limited relevance to children, in both use and outcome. In support, the results demonstrated the role of contextualized technology in rural classrooms, alongside an understanding of the user psychology that influences learning impact.
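The mediation claim in the OLPC study above (computer self-efficacy mediating the path from technological literacy to functional literacy) follows the standard product-of-coefficients logic, which can be sketched with two least-squares fits. The data and variable names below are synthetic stand-ins, not the study's measures:

```python
import numpy as np

# Toy mediation sketch: X -> M -> Y with a small direct X -> Y path.
# All data are simulated; coefficients 0.6, 0.5, 0.1 are assumptions.
rng = np.random.default_rng(0)
n = 200
tech_literacy = rng.normal(size=n)                                   # X
self_efficacy = 0.6 * tech_literacy + rng.normal(scale=0.5, size=n)  # M
func_literacy = (0.5 * self_efficacy + 0.1 * tech_literacy
                 + rng.normal(scale=0.5, size=n))                    # Y

def ols(y, *xs):
    """Least-squares slopes of y on predictors xs (intercept included, dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(self_efficacy, tech_literacy)[0]                # X -> M path
b, c_prime = ols(func_literacy, self_efficacy, tech_literacy)  # M -> Y, direct
indirect = a * b                                        # mediated effect
print(round(float(a), 2), round(float(b), 2), round(float(indirect), 2))
```

A full analysis would add bootstrap or Sobel-type inference on `indirect`; the sketch only shows where the mediated effect comes from.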
Age estimation based on the human face remains a significant problem in computer vision and pattern recognition. In order to estimate an accurate age or age group for a facial image, most existing algorithms require a huge face data set attached with age labels. This imposes a constraint on the utilization of the immense amount of unlabeled or weakly labeled training data, e.g., the huge number of human photos on social networks. These images may provide no age label, but it is easy to derive the age difference for an image pair of the same person. To improve age estimation accuracy, we propose a novel learning scheme to take advantage of these weakly labeled data through deep convolutional neural networks. For each image pair, the Kullback-Leibler divergence is employed to embed the age-difference information. An entropy loss and a cross-entropy loss are adaptively applied to each image to make the predicted distribution exhibit a single peak. The combination of these losses is designed to drive the neural network to understand age gradually from the age-difference information alone. We also contribute a data set including more than 100,000 face images attached with their taken dates. Each image is labeled with both its timestamp and the person's identity. Experimental results on two aging face databases show the advantages of the proposed age-difference learning system, and state-of-the-art performance is achieved. Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance in low-level vision tasks. The nonlocal prior is extracted from each group, which consists of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in the estimation of the true image. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization.
We exploit the local manifold structure inside a patch and group the patches according to a manifold-structure distance metric. With this manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms. Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose discriminative multiview interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions with multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated into a multi-view learning scheme to exploit their complementarity. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in the feature encoding via the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with other state-of-the-art re-ranking strategies.
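The weighted singular-value thresholding step mentioned in the low-rank model above is the proximal operator of a (weighted) nuclear norm, and its core is a few lines long. The weights and test matrix below are illustrative choices, not the paper's:

```python
import numpy as np

# Sketch of weighted singular-value thresholding, the proximal step used to
# solve nuclear-norm-regularized low-rank models. Weights and data here are
# illustrative assumptions, not the paper's learned graph weights.

def weighted_svt(Y, weights):
    """Soft-threshold each singular value s_i of Y by its weight w_i."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)   # soft-thresholding
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 matrix plus small noise: thresholding suppresses the small
# (noise-dominated) singular values while keeping the dominant one.
rng = np.random.default_rng(1)
low_rank = np.outer(rng.normal(size=20), rng.normal(size=20))
noisy = low_rank + 0.1 * rng.normal(size=(20, 20))
denoised = weighted_svt(noisy, weights=np.full(20, 1.0))

print(np.linalg.matrix_rank(denoised) <= np.linalg.matrix_rank(noisy))  # → True
```

Uniform weights recover plain singular-value thresholding; the weighted variant simply shrinks different singular values by different amounts, which is how the graph regularization enters the algorithm.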
Recently, feature fusion has demonstrated its effectiveness in image search. However, bad features and inappropriate parameters often produce false positive images, i.e., outliers, leading to inferior performance. A major challenge for a fusion scheme, therefore, is robustness to outliers. Toward this goal, this paper proposes a rank-level framework for robust feature fusion. First, we define Rank Distance to measure the relevance of images at the rank level. Based on it, Bayes similarity is introduced to evaluate the retrieval quality of individual features, through which true matches tend to obtain higher weights than outliers. Then, we construct a directed ImageGraph to encode the relationships among images. Each image is connected to its K nearest neighbors by an edge, and each edge is weighted by Bayes similarity. Multiple rank lists resulting from different methods are merged via the ImageGraph. Furthermore, on the fused ImageGraph, local ranking is performed to re-order the initial rank lists. It aims at local optimization and is thus more robust to global outliers. Extensive experiments on four benchmark data sets validate the effectiveness of our method. In addition, the proposed method outperforms two popular fusion schemes, and the results are competitive with the state-of-the-art. Discriminative model learning for image denoising has recently attracted considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost denoising performance.
Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that the DnCNN model not only exhibits high effectiveness on several general image denoising tasks but also can be implemented efficiently by benefiting from GPU computing. This paper advocates a novel video saliency detection method based on spatial-temporal saliency fusion and low-rank coherency guided saliency diffusion. In sharp contrast to conventional methods, which conduct saliency detection locally in a frame-by-frame way and can easily give rise to incorrect low-level saliency maps, this paper proposes to fuse the color saliency based on global motion clues in a batch-wise fashion in order to overcome these difficulties. We also propose low-rank coherency guided spatial-temporal saliency diffusion to guarantee the temporal smoothness of the saliency maps, and a series of saliency boosting strategies are designed to further improve saliency accuracy. First, the original long-term video sequence is equally segmented into many short-term frame batches, and the motion clues of each individual video batch are integrated and diffused temporally to facilitate the computation of color saliency. Then, based on the obtained saliency clues, inter-batch saliency priors are modeled to guide the low-level saliency fusion.
After that, both the raw color information and the fused low-level saliency are regarded as low-rank coherency clues, which are employed to guide the spatial-temporal saliency diffusion with the help of an additional permutation matrix serving as an alternative rank-selection strategy. This guarantees the robustness of the saliency map's temporal consistency and further boosts the accuracy of the computed saliency map. Moreover, we conduct extensive experiments on five publicly available benchmarks, making comprehensive, quantitative comparisons between our method and 16 state-of-the-art techniques. All the results demonstrate the superiority of our method in accuracy, reliability, robustness, and versatility.

Recovering an image corrupted by both additive white Gaussian noise (AWGN) and impulse noise is a challenging problem due to the difficulty of accurately modeling the distribution of the mixture noise. Many efforts have been made to first detect the locations of the impulse noise and then recover the clean image with image inpainting techniques from an incomplete image corrupted by AWGN. However, it is quite challenging to accurately detect the locations of the impulse noise when the mixture noise is strong. In this paper, we propose an effective mixture noise removal method based on Laplacian scale mixture (LSM) modeling and nonlocal low-rank regularization. The impulse noise is modeled with LSM distributions, and both the hidden scale parameters and the impulse noise are jointly estimated to adaptively characterize the real noise. To exploit the nonlocal self-similarity and low-rank nature of natural images, a nonlocal low-rank regularization is adopted to regularize the denoising process. Experimental results on synthetic noisy images show that the proposed method outperforms existing mixture noise removal methods.
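The mixed-noise setting described above (AWGN plus salt-and-pepper impulse noise) can be simulated in a few lines, together with a crude two-stage baseline: a 3x3 median filter to knock out impulses. This is only an illustration of the problem setup, not the paper's LSM / nonlocal low-rank method; all parameter values here are made up.

```python
import numpy as np

def add_mixed_noise(img, sigma=10.0, p=0.1, rng=None):
    """Corrupt img with Gaussian noise (std sigma) plus salt-and-pepper
    impulses at a fraction p of the pixels."""
    rng = rng or np.random.default_rng(0)
    noisy = img + rng.normal(0, sigma, img.shape)            # AWGN
    mask = rng.random(img.shape) < p                         # impulse sites
    noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())  # salt & pepper
    return np.clip(noisy, 0, 255)

def median3(img):
    """3x3 median filter (edge-padded), a simple impulse-noise remover."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

clean = np.full((32, 32), 128.0)        # flat gray test image
noisy = add_mixed_noise(clean)
restored = median3(noisy)
print(np.abs(restored - clean).mean() < np.abs(noisy - clean).mean())  # True
```

The baseline succeeds on a flat image precisely because the impulse locations are statistical outliers within each window; the paper's contribution is to model those outliers explicitly (via LSM) instead of relying on a fixed filter.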
Multi-source image acquisition has attracted increasing interest in many fields, such as multi-modal medical image segmentation. Such acquisition aims to exploit complementary information for image segmentation, since the same scene is observed by various types of images. However, strong dependence often exists between multi-source images, and this dependence should be taken into account when extracting joint information to make precise decisions. To statistically model the dependence between multiple sources, we propose a novel multi-source fusion method based on the Gaussian copula. The proposed fusion model is integrated into a statistical framework with hidden Markov field inference in order to delineate a target volume from multi-source images. Estimation of the model parameters and segmentation of the images are performed jointly by an iterative algorithm based on Gibbs sampling. Experiments are performed on multi-sequence MRI to segment tumors. The results show that the proposed method based on the Gaussian copula is effective for multi-source image segmentation.

With the goal of discovering the common and salient objects in a given image group, co-saliency detection has received tremendous research interest in recent years. However, as most existing co-saliency detection methods rest on the assumption that all the images in the group contain co-salient objects of only one category, they can hardly be applied in practice, particularly to large-scale image sets obtained from the Internet. To address this problem, this paper revisits the co-saliency detection task and advances its development into a new phase, where the problem setting is generalized to allow the image group to contain objects of an arbitrary number of categories and the algorithms must simultaneously detect multi-class co-salient objects from such complex data.
To solve this new challenge, we decompose it into two sub-problems, i.e., how to identify subgroups of relevant images and how to discover relevant co-salient objects within each subgroup, and propose a novel co-saliency detection framework that addresses the two sub-problems via two-stage multi-view spectral rotation co-clustering. Comprehensive experiments on two publicly available benchmarks demonstrate the effectiveness of the proposed approach. Notably, it can even outperform state-of-the-art co-saliency detection methods that rely on image subgroups carefully separated by human labor.

Pedestrian detection based on the combination of convolutional neural networks (CNNs) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV features are used to generate candidate proposals, which a CNN then classifies. Despite this success, there is still room for improvement. For example, the CNN classifies these proposals using only the fully connected layer features, while proposal scores and the features in the inner layers of the CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome this drawback. It first integrates HOG+LUV with each layer of the CNN into multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the cascade are learned from the image channels of the corresponding layer. Experiments are conducted on the Caltech, INRIA, ETH, TUD-Brussels, and KITTI data sets. With more abundant features, MCF achieves the state of the art on the Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, MCF achieves a 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, detection speed is accelerated by 1.43 times.
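The cascade speed-up mentioned above comes from early rejection: cheap early stages discard easy negative windows so that expensive later stages only run on a few candidates. The toy sketch below shows the mechanism; stage scores and thresholds are invented for illustration and windows are just numbers, not image regions.

```python
def cascade_classify(windows, stages):
    """stages: list of (score_fn, threshold) pairs, cheapest first.
    A window is rejected as soon as its accumulated score falls below
    a stage threshold, skipping all remaining (costlier) stages."""
    survivors, evaluations = [], 0
    for w in windows:
        total = 0.0
        for score_fn, thresh in stages:
            evaluations += 1
            total += score_fn(w)
            if total < thresh:
                break                 # early rejection
        else:
            survivors.append(w)       # passed every stage
    return survivors, evaluations

# Toy data: "pedestrians" are values above 0.5.
windows = [0.1, 0.2, 0.9, 0.05, 0.8]
stages = [(lambda w: w, 0.5), (lambda w: w, 1.0)]
kept, evals = cascade_classify(windows, stages)
print(kept, evals)  # two survivors; 7 stage evaluations instead of 10
```

Three of the five windows are rejected after a single stage evaluation, which is the same effect that lets MCF reject most non-pedestrian windows in its first few stages.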
By eliminating highly overlapped detection windows with lower scores after the first stage, MCF runs 4.07 times faster with negligible performance loss.

Scene images usually involve semantic correlations, particularly in large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally exploits the correlations among topics, which are seldom considered in conventional feature encodings, e.g., the Fisher vector, but do exist in scene images. It is expected that incorporating these correlations increases the discriminative capability of the learned generative model and consequently improves recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics are further exploited within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential for processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on deep CNN features and outperforms existing Fisher kernel-based features.

There are a variety of grand challenges for multi-orientation text detection in scene videos, where typical issues include skew distortion, low contrast, and arbitrary motion. Most conventional video text detection methods that use individual frames have limited performance. In this paper, we propose a novel tracking-based multi-orientation scene text detection method using multiple frames within a unified framework via dynamic programming.
First, a multi-information fusion-based multi-orientation text detection method is applied in each frame to extensively locate possible character candidates and extract text regions over multiple channels and scales. Second, an optimal tracking trajectory is learned and linked globally over consecutive frames by dynamic programming to refine the detection results with all detection, recognition, and prediction information. The effectiveness of our proposed system is demonstrated by state-of-the-art performance on several public data sets of multi-orientation scene text images and videos, including MSRA-TD500, USTB-SV1K, and the ICDAR 2015 Scene Videos.

In this paper, we present a complete change detection system named multimode background subtraction. The universal nature of the system allows it to robustly handle the multitude of challenges associated with video change detection, such as illumination changes, dynamic backgrounds, camera jitter, and moving cameras. The system comprises multiple innovative mechanisms in background modeling, model update, pixel classification, and the use of multiple color spaces. The system first creates multiple background models of the scene, followed by an initial foreground/background probability estimation for each pixel. Next, the image pixels are merged together to form mega-pixels, which are used to spatially denoise the initial probability estimates and generate binary masks for both the RGB and YCbCr color spaces. The masks generated from these input images are then combined to separate foreground pixels from the background. Comprehensive evaluation of the proposed approach on publicly available test sequences from the CDnet and ESI data sets shows the superiority of our system over other state-of-the-art algorithms.

We propose a systematic approach for registering cross-source point clouds that come from different kinds of sensors.
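The background-subtraction pipeline described above can be reduced to its simplest possible form: one background model maintained as a running average, with pixels far from the model flagged as foreground. This single-model sketch is a deliberately minimal stand-in; the multimode system uses multiple models, mega-pixel denoising, and RGB/YCbCr fusion on top of this basic loop. Threshold and learning rate below are arbitrary.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Flag pixels whose intensity differs from the model by > thresh."""
    return np.abs(frame - bg) > thresh

bg = np.full((4, 4), 100.0)       # learned background (a gray wall)
frame = bg.copy()
frame[1:3, 1:3] = 200.0           # a bright 2x2 object enters the scene
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame) # object slowly absorbed if it stays still
print(mask.sum())                 # 4 foreground pixels detected
```

The slow absorption of static objects into the background model is exactly the failure mode that motivates multiple background models and explicit model-update rules in the full system.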
This task is especially challenging due to significant missing data, large variations in point density, scale differences, and a large proportion of noise and outliers. The robustness of our method comes from the extraction of macro and micro structures. The macro structure is the overall structure that maintains a similar geometric layout across cross-source point clouds; micro structures are the elements (e.g., local segments) used to build the macro structure. We use a graph to organize these structures and convert registration into graph matching. With a novel descriptor, we conduct the graph matching in a discriminative feature space, and the matching problem is solved by an improved graph matching solution that considers global geometric constraints. Robust cross-source registration results are obtained by combining the graph matching outcome with RANSAC and ICP refinements. Compared with eight state-of-the-art registration algorithms, the proposed method consistently performs best on the Pisa Cathedral data and other challenging cases. For quantitative comparison, we propose two challenging cross-source data sets and conduct comparative experiments on more than 27 cases; the results show that we obtain much better performance than other methods. The proposed method also shows high accuracy on same-source data sets.

By transferring knowledge from the abundant labeled samples of known source classes, zero-shot learning (ZSL) makes it possible to train recognition models for novel target classes that have no labeled samples. Conventional ZSL approaches usually adopt a two-step recognition strategy, in which the test sample is first projected into an intermediary space, and recognition is then carried out by considering the similarity between the sample and the target classes in that intermediary space.
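The conventional two-step ZSL strategy just described can be sketched with synthetic data: (1) learn a projection from features to an attribute (intermediary) space using labeled source classes, then (2) classify a test sample by the nearest class attribute vector. The attribute vectors, ridge regularizer, and data below are all invented for illustration; real ZSL pipelines use richer semantic embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-class attribute vectors (rows): 2 source classes, 1 unseen target class.
attrs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
source_classes, target_classes = [0, 1], [2]

# Synthetic features: each sample is its class attribute vector plus noise.
X = np.vstack([attrs[c] + 0.1 * rng.normal(size=(20, 2))
               for c in source_classes])
A = np.repeat(attrs[source_classes], 20, axis=0)

# Step 1: ridge-regression projection W from feature to attribute space.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ A)

# Step 2: project a test sample from the unseen class, pick nearest attrs.
x_test = attrs[2] + 0.1 * rng.normal(size=2)
proj = x_test @ W
pred = min(target_classes + source_classes,
           key=lambda c: np.linalg.norm(proj - attrs[c]))
print(pred)  # 2, the unseen class
```

The intermediate projection is exactly where the abstract locates the information loss: the paper's one-step alternative instead trains classifiers directly in the original feature space using transferred pseudo-labeled samples.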
Due to this redundant intermediate transformation, information loss is unavoidable, degrading the performance of the overall system. Rather than adopting this two-step strategy, in this paper we propose a novel one-step recognition framework that performs recognition in the original feature space using directly trained classifiers. To address the lack of labeled samples for training supervised classifiers for the target classes, we propose to transfer samples from the source classes with pseudo labels assigned, where the transferred samples are selected based on their transferability and diversity. Moreover, to account for the unreliability of the pseudo labels of transferred samples, we modify the standard support vector machine formulation so that unreliable positive samples can be recognized and suppressed during training. The entire framework is fairly general, with possible extensions to several common ZSL settings. Extensive experiments on four benchmark data sets demonstrate the superiority of the proposed framework over state-of-the-art approaches in various settings.

Video coding focuses on reducing the data size of videos; video stabilization aims at removing shaky camera motion. In this paper, we enable video coding to serve video stabilization by reconstructing the camera motion from the motion vectors produced during coding. Existing stabilization methods rely heavily on image features to recover camera motion, but feature tracking is time-consuming and prone to errors. On the other hand, nearly all captured videos are compressed before any further processing, and this compression produces a rich set of block-based motion vectors that can be utilized for estimating the camera motion. More specifically, video stabilization requires camera motion between adjacent frames, whereas motion vectors extracted from video coding may refer to non-adjacent frames.
We first show that these non-adjacent motions can be transformed into adjacent motions, such that each coding block within a frame contains a motion vector referring to its adjacent previous frame. Then, we regularize these motion vectors to yield a spatially smoothed motion field at each frame, named CodingFlow, which is optimized for spatially variant motion compensation. Based on CodingFlow, we finally design a grid-based 2D method to accomplish video stabilization. Our method is evaluated in terms of efficiency and stabilization quality, both quantitatively and qualitatively, showing that it achieves high-quality results compared with state-of-the-art feature-based methods.

Apology is an important area of research in crisis communication. Scholars have largely explored apology from an organization-centric, dyadic approach. We argue that this type of research has made unrealistic assumptions about a much more complex social system and may be challenged by an increasingly interconnected social reality. This paper uses Structural Balance Theory and Stakeholder Network Management Theory to develop a model and several testable propositions to guide the way organizations respond to a crisis. (C) 2017 Elsevier Inc. All rights reserved.

This research revisits source credibility based upon the popular PESO (Paid, Earned, Shared and Owned) source classification. More specifically, this study examines source credibility and channel effectiveness in terms of moving consumers along the communication lifecycle model based upon their exposure to information embedded in paid (traditional advertising and native advertising), earned (traditional news story), shared (independent blogger) and owned (company blog) media. One thousand five hundred respondents recruited from a consumer panel participated in this 2 (level of involvement) x 5 (source) experimental design study.
When respondents were asked to self-report their levels of trust in various sources, they indicated the highest level of trust in consumer reviews and earned media and the lowest level of trust in native advertising. The experimental design study yielded no major differences among the sources for the communication lifecycle variables. Native advertising was viewed as less credible than traditional advertising in the experimental design. There were no differences in perceived credibility based upon exposure to traditional advertising versus a news story, confirming prior academic research. Suggestions are offered for public relations practitioners on selecting sources for messaging to drive behavior.

This study explores the roles of corporations and a monitoring group in building the corporate social responsibility (CSR) agenda in the news media. Using agenda-building theory as a theoretical framework, we content analyzed 7672 press releases from 223 U.S. corporations and 1064 news articles covering those corporations from the New York Times and the Wall Street Journal, and investigated the relationships among press releases, news articles related to the corporations, and the ratings of those corporations by KLD Research and Analytics, Inc., a corporate social performance monitoring group. The results showed stronger relationships between the KLD ratings and the news media coverage than between the press releases and the news media coverage at the first level of the CSR agenda, but there were variations at the second level of the CSR agenda. We discuss the theoretical and practical implications of these findings.

In this article, we investigate investor relations (IR) scholarship across business-centered and communication-centered fields of study from 1994 to 2016.
This investigation's aims were to examine the development of nearly a quarter-century of IR scholarship and to use that review to explore how to improve future IR practice and research. We found that scholarly research in IR and public relations (PR) displayed few signs of interaction, rarely intersected with other disciplines, and revealed considerable potential for developing synergies from existing differences. In conclusion, we argue that the current academic insularity also quarantines practitioners. To address these issues, we propose specific future collaborations to improve both IR's scholarly output and its practice.

The public relations industry expects graduates to be proficient at writing, yet industry professionals still complain that public relations graduates lack basic writing skills. By contrast, journalism graduates do not seem to face the same criticisms. Using a pedagogical framework of student attainment, this study investigates public relations and journalism writing courses at 30 universities to identify differences between the two disciplines and implications for public relations writing education. The findings suggest public relations writing courses should adopt a bridging curriculum that supports students in developing their writing skills in limited genres using authentic assessment. Strategic considerations should be covered in more advanced courses once the basic skills of public relations writing have been mastered.

The state of women's research in public relations is strong. However, the stories of different women, as well as of men who are not part of the standard White, heterosexual, American experience, are severely underrepresented in public relations practice and research.
This review of research from the past 11 years shows that the practice has significant room to grow in welcoming and providing a successful, equitable workplace environment for practitioners from marginalized groups. Specifically, research about the experiences of women of color, LGBT practitioners, practitioners with disabilities, practitioners aged 55 and older, and international practitioners is imperative to understand why public relations continues to be a "lily-white" field of women. To this point, research needs to engage seriously in intersectional work that links diverse practitioners' experiences with negative outcomes (e.g., salary gaps, relegation to technical positions) and positive effects (e.g., role modeling, entrepreneurship) for the field and individual practitioners alike. Directions for future research and practical application include examining eurocentrism and systemic racism in the academic and professional fields, overcoming issues of conducting quantitative research as well as issues of valuing qualitative research, linking diversity initiatives to core public relations concerns such as crises and corporate social responsibility, exploring other fields' responses to diversity issues, and obtaining external audits by advocacy groups.

Millennial public relations practitioners do not feel prepared to offer ethics counsel and do not expect to face ethical dilemmas at work. Through survey research with more than 200 young professionals, statistically significant differences were found in perceptions of readiness to offer ethics counsel based on the availability of a mentor, ethics training in college, and ethics training at work. Through the lens of social identity theory, significant differences were also found based on familiarity with, and likelihood of using, ethics resources provided by professional associations.
Finally, confidence in discussing ethical concerns with a mentor or direct supervisor did affect the likelihood of offering ethics counsel.

Practitioners of public diplomacy often need to deal with complex international issues and could learn from the issues management perspective. However, little effort has been made to link issues management with public diplomacy. This paper develops an analytical model that combines the relational concepts of issues management and public diplomacy, and applies that model to the case of the Chinese Ebola public diplomacy campaign in West Africa. The goal is to better understand how China mobilizes a wide range of resources (human, material, financial, etc.) and builds relationships with various actors to manage the Ebola issue. This study applied social network analysis and qualitative content analysis to offer a comprehensive understanding of the function and structure of Chinese Ebola public diplomacy networks. The study demonstrates the broad application value of issues management.

The Spanish Civil War occupies a very relevant place in the collective memory of Europeans. At the beginning of the war, Spanish filmmaker Luis Bunuel received instructions from the Spanish Ministry of Foreign Affairs to return to Paris and assist the Spanish embassy in various types of counterintelligence and propaganda work. As part of this, Bunuel organized and assembled footage for the Republicans. Unlike other productions compiled in Spain, the Republican propaganda films made in Paris were generally addressed to audiences in different European countries with the aim of breaking the doctrine of non-intervention in the conflict, and they reflect Bunuel's theories and conception of documentary film. These films represent good examples of ethical propaganda, and their aim was more informative than manipulative.
From this standpoint, and in accordance with other research on public relations discourse and film, we argue that the films supervised by Bunuel in this period, especially the most noteworthy documentary Espana leal en armas (1937), are examples of public relations films in wartime.

This article extends the scholarship in critical public relations by charting an alternative historiography of public relations. It opens up a radically different methodology for studying the history of PR by looking to contemporary works of historical fiction as compelling sources that speak to the interplay of dominance and resistance in the strategic communication interactions of colonial times. The nuanced critical reinterpretation of the past in novels, depicted through the eyes of fictional characters, provides a fresh perspective on the ways in which public relations was deployed by colonial political and business establishments and, more significantly, on how subaltern publics used their own communication strategies to fight back. The analysis illustrates alternative ways of looking at PR that remain relevant today.

The progressive version of public relations history presents it as a by-product of pluralist political systems or a democratic dividend. It has been claimed that public relations thrives within open media systems and market economies but struggles in highly controlled governmental systems (dictatorships, juntas, and closed economies). This paper considers how political history and political systems affected the formation of public relations practices in regions of Europe that, after 1945, were under military dictatorships (Spain and Portugal), a military junta (Greece), or were contained in the Soviet bloc.
Using comparative history methodology, the notion that public relations operates solely in democracies is challenged, although it is conceded that practice thrived in post-war Western Europe but struggled to develop in parts of southern and eastern Europe.

In most Western countries, where direct-to-consumer advertising of prescription drugs (DTCA) is banned, the pharma industry relies primarily on PR activities to promote its products. Despite the pharma industry's ever-increasing share in framing media coverage of health issues, the strategies used in its press materials have not yet been systematically examined. This study uses framing theory to explore the PR strategies and tactics employed by pharmaceutical companies to promote their products in Israel, where DTCA is banned. Using a combination of qualitative and quantitative content analysis, we examined 1548 pharmaceutical press releases. The Israeli example can serve as a case study for understanding how the pharma industry operates in the many countries around the world where DTCA is banned. Our findings show that strategies and tactics dubbed "disease mongering" dominate the pharma industry's press releases. The four main strategies identified in this study are the third-party technique, disease branding, drug branding, and astroturfing. Some of the common PR strategies and tactics we found are not only unethical but also contrary to Israeli regulations, and would never receive approval even in countries where DTCA is allowed.

This study investigates the interrelation of implicit frames in press releases by the two largest German banks (Deutsche Bank, Commerzbank) and the German financial media from 2007 to 2013. Findings suggest that an increase in the salience of certain frames in press releases by the German banks resulted in a decrease of the same frames in the financial media in the subsequent months.
Furthermore, time series analyses indicate that the banks adopted frames that had been present in the media the previous month. The results imply a resistance of the German financial media to the frames used by Deutsche Bank and Commerzbank.

This article reports the results of research conducted among a group of 55 CEOs of public relations and communication consulting agencies in Colombia. It aims to determine whether a strategic orientation predominates in the services for which these agencies are hired. Some agencies do focus their business on this perspective; nevertheless, there is a higher volume of technical services, such as free press and journalistic media relations. The increase in services in recent years has been due, among other reasons, to the strength of the Colombian economy. The reasons for hiring an agency are its prestige and reputation. However, this result contrasts with what the interviewed executives presented as the main impediment to their practice as consultants: clients' lack of knowledge of their work.

This study examined 41 embassy Twitter accounts representing Central-Eastern European (CEE) and Western countries. Western embassies were more likely to have Twitter accounts and had more followers on average, but a CEE account (the Polish embassy in the United States) had the highest influencer score. A content analysis of 482 tweets brought together relevant literature from public diplomacy and public relations scholarship. A significant association was found between diplomatic approaches and public relations message strategies, thus identifying a relationship between disciplines that are frequently considered separately. With regard to public diplomacy strategies, Western embassies engaged primarily in advocacy, whereas CEE embassies engaged primarily in cultural diplomacy.
Listening was the approach least likely to be taken by both Western and CEE embassy accounts. With regard to public relations strategies, Western and CEE embassy Twitter accounts primarily engaged in message strategies aimed at information sharing (versus facilitative, persuasive, or cooperative strategies). Overall, analyses indicated that embassy Twitter accounts primarily engaged in approaches that may lack strategy, despite their purpose being diplomatic communication. This research provides a basis for predictive, best-practices research and for recommendations that merge the disciplines.

This paper studies how stakeholder relationships change when an organization undergoes a crisis as compared to routine circumstances. During crises, stakeholder relationships come under pressure, and with them the organization's reputation, which in turn intensifies the crisis. The paper's purpose is to investigate how, during a crisis, pressure from both internal stakeholders (i.e., management and employees) and external stakeholders (i.e., news media and interested citizens) influences public relations professionals' communicative relationships with these stakeholders. A total of 444 European PR professionals who had experienced crises were surveyed about crisis and routine times, with special focus on the mediating role of time pressure and uncertainty. Structural equation models revealed that, in crisis, increased pressure from news media, citizens, and employees negatively affects the communicative relationship with these stakeholders, whereas management pressure was found to have a positive effect. This observation might point to organizational isolation at the managerial level in the initial crisis phase, partly as a result of stakeholder pressure.

People engage in communication on Facebook via three behaviors: like, comment, and share.
Facebook uses an algorithm that gives different weight to each behavior to determine what to show on a user's screen, suggesting that the strategic implications of each behavior may differ. This study investigates when each behavior can be encouraged by organizational messages, thereby drawing clearer distinctions among the three behaviors. A content analysis of organizational messages was conducted in which the researchers assessed message features and related them to each behavior separately. The findings indicated that different message features generated different behaviors: sensory and visual features led to likes, rational and interactive features to comments, and sensory, visual, and rational features to shares. This suggests that liking is an affectively driven behavior, commenting is a cognitively triggered behavior, and sharing is either affective or cognitive, or a combination of both.

Applying a multi-method approach, this paper analyzes the complex ways in which Belgian magazines deal with health information supplied by PR practitioners linked to the pharmaceutical industry. First, we conducted two waves of quantitative content analysis of health items published in 2013 and 2015 in a representative sample of magazines to obtain an overview of the sourcing practices of Belgian magazine journalists as visible in the news output. Second, we conducted 16 in-depth interviews with leading magazine health journalists and their editors-in-chief to confront the findings of the content analyses and to search for additional evidence of how the pharmaceutical industry directly and indirectly tries to influence health news. The findings confirm that academic and medical experts are the most important sources. They help to explain and contextualize often complex and technical health issues, and they lend authority and credibility to a journalist's story.
In contrast, we found very few explicit references to pharmaceutical industry sources in journalistic content. Nevertheless, the findings of the interviews suggest that pharmaceutical PR creeps into health coverage in a more indirect and much more sophisticated manner, for instance by offering additional services such as contacts with scientists or patients. In addition, editors-in-chief admit they try to anticipate the needs and preferences of advertisers in aligning editorial and commercial content. We conclude that the influence of pharmaceutical PR in magazine health news is stronger than would be expected based solely on quantitative analyses of editorial content. (C) 2017 Elsevier Inc. All rights reserved. Background: The disparate health outcomes of African American mothers living with HIV are considerable. Multidimensional approaches are needed to address the complex social and economic conditions of their lives, collectively known as the social determinants of health. Objectives: The purpose of this study was to explore the social determinants of health for African American mothers living with HIV by examining how mothers describe their social location at the intersection of gender, race, and class inequality; HIV-related stigma; and motherhood. How they frame the impact of their social location on their health experiences is also explored. Methods: This exploratory study included in-depth, semistructured interviews with 18 African American mothers living with HIV at three time points. We used an intersectional framework and frame analysis to explore the meaning of these constructs for participants. Results: Findings from 48 interviews include a description of the intersecting social determinants functioning as systems of inequality and of participants' heterogeneous social locations.
Three frames of social location were used to organize and explain how African American mothers living with HIV may understand their social determinants of health: (a) an emancipatory frame, marked by attempts to transcend the negative social connotations associated with HIV and socially constructed identities of race, gender, and class; (b) a maternal frame, marked by a desire to maintain a positive maternal identity and maternal-child relations; and (c) an internalized frame, marked by an emphasis on the deleterious and stigmatizing effects of HIV and of racial, gender, and class inequality. Discussion: The findings offer knowledge about the heterogeneity in how demographically similar individuals frame their social location as well as how the intersections of social determinants influence participants' health experiences. Potential health implications and interventions are suggested for the three frames of social location used to describe intersecting social determinants of health. The study offers an analytic approach for capturing the complexity inherent in intersectional methodologies examining the role of social determinants in producing health inequities. Background: Cognitive deficits are common, long-term sequelae in children and adolescents with congenital heart disease (CHD) who have undergone surgical palliation. However, there is a lack of a validated brief cognitive screening tool appropriate for the outpatient setting for adolescents with CHD. One candidate instrument is the Montreal Cognitive Assessment (MoCA) questionnaire. Objective: The purpose of the research was to validate scores from the MoCA against the General Memory Index (GMI) of the Wide Range Assessment of Memory and Learning, 2nd Edition (WRAML2), a widely accepted measure of cognition/memory, in adolescents and young adults with CHD.
Methods: We administered the MoCA and the WRAML2 to 156 adolescents and young adults ages 14-21 (80 youth with CHD and 76 healthy controls who were gender and age matched). Spearman's rank-order correlations were used to assess concurrent validity. To assess construct validity, the Mann-Whitney U test was used to compare differences in scores between youth with CHD and the healthy control group. Receiver operating characteristic curves were created, and area under the curve, sensitivity, specificity, positive predictive value, and negative predictive value were also calculated. Results: Median MoCA scores were 23 (range 15-29) in the CHD group versus 28 (range 22-30) in healthy controls (p < .001). With screening cutoff scores of <26 points for the MoCA and <85 for the GMI (<1 SD; M = 100, SD = 15), the CHD versus healthy control groups showed sensitivity of .96 and specificity of .67 versus sensitivity of .75 and specificity of .90, respectively, in the detection of cognitive deficits. A cutoff score of 26 on the MoCA was optimal in the CHD group; a cutoff of 25 had similar properties except for a lower negative predictive value. The area under the receiver operating characteristic curve was 0.84 (95% CI [0.75, 0.93], p < .001) and 0.84 (95% CI [0.62, 1.00], p = .02) for the CHD and control groups, respectively. Discussion: Scores on the MoCA were valid for screening to detect cognitive deficits in adolescents and young adults aged 14-21 with CHD when a cutoff score of 26 is used to differentiate youth with and without significant cognitive impairment. Future studies are needed in other adolescent disease groups with known cognitive deficits and in healthy populations to explore the generalizability of validity of MoCA scores in adolescents and young adults. Background: Parental stress, optimism, and health-promoting behaviors (HPBs) are important predictors of the quality of life (QoL) of mothers.
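The diagnostic accuracy statistics reported in the MoCA validation study above all follow directly from a 2x2 classification table. A minimal sketch, with invented counts (not the study's data); the function name is our own:

```python
# Screening-test metrics of the kind reported for a MoCA cutoff:
# sensitivity, specificity, positive and negative predictive value.
# The counts below are hypothetical, not the study's actual data.
def screening_metrics(tp, fp, fn, tn):
    """Compute standard diagnostic accuracy metrics from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

m = screening_metrics(tp=48, fp=10, fn=2, tn=20)
# sensitivity = 48/50 = .96, specificity = 20/30 ≈ .67
```

Moving the cutoff trades sensitivity against specificity, which is exactly what the receiver operating characteristic curve summarizes across all cutoffs.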
However, it is unclear how strongly these predictors affect the QoL of mothers. It is also unclear if the impact of these predictors on QoL differs between primiparous and multiparous mothers. In this study, we defined primiparous as "bearing young for the first time" and multiparous as "having experienced one or more previous childbirths." Objectives: The first objective of this study was to examine the relative effect of parental stress, optimism, and HPBs on the QoL of mothers. The second objective was to investigate if the effect of these predictors differed between primiparous and multiparous mothers. Methods: One hundred ninety-four Australian mothers (n = 87, 44.8% primiparous mothers) participated in an online survey that included the Parental Stress Scale, the Health-Promoting Lifestyle Profile II, the Revised Life Orientation Test, and the Quality of Life Enjoyment and Satisfaction Questionnaire. Results: All predictors (parental stress, optimism, and HPBs) significantly affected the QoL of mothers; higher levels of optimism, greater use of HPBs, and lower parental stress were associated with higher levels of QoL for all mothers. Parity did not affect the relationships. Discussion: This study sheds light on the nature and unique effect of parental stress, optimism, and HPBs on the QoL of mothers. Background: Understanding caregivers' perceptions of their family members' memory loss is a necessary step in planning nursing interventions to detect and address caregiver burden. Objective: The purpose of this study was to characterize caregivers' perceptions of their family members' memory loss and identify potential correlates within Leventhal's common sense model (CSM). Methods: This secondary analysis used baseline data from a larger randomized controlled trial. Patients with memory loss and their caregivers (N = 83 dyads) from the community were included. The adapted Brief Illness Perception Questionnaire (BIPQ) assessed caregivers' illness perceptions.
Eight additional instruments measured correlates within the CSM. Responses were described; multiple linear regression was used to predict BIPQ dimension scores, and logistic regression was used to predict dichotomized BIPQ scores. Results: Most caregivers were female, White, and spouses of the patients; they reported a range of perceptions on the nine BIPQ dimensions. Patients' cognitive function consistently emerged as a significant correlate of caregivers' illness perceptions, explaining the most variance in caregivers' perceived consequences, identity, and treatment control (p < .01). Caregivers' reactions to patients' behavioral symptoms and caregivers' trait anxiety were associated with perceived illness coherence (p < .01). Caregivers with higher severity of daily hassles and White caregivers perceived that their family members' memory loss would last longer (p < .001). Discussion: Caregivers' perceptions of family members' memory loss varied; distinct dimensions of caregivers' illness perception were associated with a range of clinical and psychosocial factors. This exploratory study demonstrates the complexity of applying the CSM to caregivers of persons with memory loss. Background: The National Institutes of Health Patient-Reported Outcomes Measurement Information System (PROMIS) has self-reported health measures available for both pediatric and adult populations, but no pediatric measures are currently available in the sleep domains. Objective: The purpose of this observational study was to perform preliminary validation studies on age-appropriate, self-reported sleep measures in healthy adolescents. Methods: This study examined 25 healthy adolescents' self-reported daytime sleepiness, sleep disturbance, sleep-related impairment, and sleep patterns. The healthy adolescents had completed a physical exam at the National Institutes of Health Clinical Center (Bethesda, MD), had no chronic medical conditions, and were not taking any chronic medications.
The Cleveland Adolescent Sleepiness Questionnaire (CASQ), PROMIS Sleep Disturbance (v. 1.0; 8a), and PROMIS Sleep-Related Impairment (v. 1.0; 8b) questionnaires were completed, and sleep patterns were assessed using actigraphy. Results: Total scores on the three sleep questionnaires were correlated (all Spearman's r > .70, p < .001). Total sleep time determined by actigraphy was negatively correlated with the CASQ (p = .01), PROMIS Sleep Disturbance (p = .02), and PROMIS Sleep-Related Impairment (p = .02). Discussion: The field of pediatric sleep is rapidly expanding, and researchers and clinicians will benefit from well-designed, psychometrically sound sleep questionnaires. Findings suggest the potential research and clinical utility of adult versions of PROMIS sleep measures in adolescents. Future studies should include larger, more diverse samples and explore additional psychometric properties of PROMIS sleep measures to provide age-appropriate, validated, and reliable measures of sleep in adolescents. Background: The Western Institute of Nursing (WIN) celebrated its 60th anniversary and the 50th Annual Communicating Nursing Research Conference in April 2017. Purpose: The purpose of this article is to provide a brief history of the origin, development, and accomplishments of WIN and its Communicating Nursing Research conferences. Approach: Historical documents and conference proceedings were reviewed. Summary: WIN was created in 1957 as the Western Council on Higher Education for Nursing under the auspices of the Western Interstate Commission for Higher Education. The bedrock and enduring value system of the organization is the interrelated nature of nursing education, practice, and research. There was a conviction that people in the Western region of the United States needed nursing services of excellent quality and that nursing education must prepare nurses capable of providing that care.
Shared goals were to increase the science of nursing through research and to produce nurses who could design, conduct, and supervise research, all to the end of improving the quality of nursing care. These goals could only be achieved through collaboration and resource sharing among the Western region's states and organizations. Consistent with the goals, the first research conferences were held between 1957 and 1962. Conference content focused on seminars for faculty teaching research, on the design and conduct of research in patient care settings, and on identification of priority areas for research. The annual Communicating Nursing Research conferences began in 1968 and grew over the years to a total of 465 podium and poster presentations on a wide array of research topics and an attendance of 926 in 2016. Conclusion: As WIN and its Communicating Nursing Research conferences face the next 50 years, the enduring values on which the organization was created will stand it in good stead as adaptability, adjustments, and collaborative effort are applied to inevitable change for the nursing profession. It is the Western way. Purpose: This paper celebrates the 60th anniversary of the Western Institute of Nursing, the nursing organization representing 13 states in the Western United States, and envisions a preferred future for nursing practice, research, and education. Background: Three landmark calls to action contribute to transforming nursing and healthcare: the Patient Protection and Affordable Care Act of 2010; the Institute of Medicine report Future of Nursing: Leading Change, Advancing Health; and the report Advancing Healthcare Transformation: A New Era for Academic Nursing. Challenges abound: U.S. healthcare remains expensive, with poorer outcomes than other developed countries; costs of higher education are high; our profession does not reflect the diversity of the population; and health disparities persist.
Pressing health issues, such as increases in chronic disease, mental health conditions, and substance abuse, coupled with the aging of the population, pose new priorities for nursing and healthcare. Discussion: Changes are needed in practice, research, and education. In practice, innovative, cocreated, evidence-based models of care can open new roles for registered nurses and advanced practice registered nurses who have the knowledge, leadership, and team skills to improve quality and address system change. In research, data can provide a foundation for clinical practice and expand our knowledge base in symptom science, wellness, self-management, and end-of-life/palliative care, as well as behavioral health, to demonstrate the value of nursing care and reduce health disparities. In education, personalized, integrative, and technology-enabled teaching and learning can foster creative and critical thinking and decision-making, ethical and culturally inclusive foundations for practice, team and communication skills, quality and system improvement, and lifelong learning. Conclusion: The role of the Western Institute of Nursing is more relevant than ever as we collectively advance nursing, health, and healthcare through education, clinical practice, and research. This meta-analysis synthesized recent research on strategy instruction (SI) effectiveness to estimate SI effects and their moderators for two domains: second/foreign language and self-regulated learning. A total of 37 studies (47 independent samples) for the language domain and 16 studies (17 independent samples) for the self-regulated learning domain contributed effect sizes to this meta-analysis. Findings indicate that the overall effects of SI were large: 0.78 and 0.87 for language and self-regulated learning, respectively. A number of context (e.g., educational level, script differences), treatment (e.g., delivery agent), and methodology (e.g., pretest) characteristics were found to moderate SI effectiveness.
Notably, the moderating effects varied by language versus self-regulated learning domains. The overall results identify SI as a viable instructional tool for second/foreign language classrooms, highlight more effective SI design features, and suggest a need for a greater emphasis on self-regulated learning in SI interventions and research. There has been a rapid growth of academic research and publishing in non-Western countries. However, academic journal articles in these peripheral countries suffer from low citation impact and limited global recognition. This critical review systematically analyzed 1,096 education research journal articles that were published in China in a 10-year span using a multistage stratified cluster and random sampling method and a validated rubric for assessing research quality. Our findings reveal that the vast majority of the articles lacked rigor, with insufficient or nonsystematic literature reviews, incomplete descriptions of research design, and inadequately grounded recommendations for translating research into practice. Acknowledging the differences in publishing cultures in the center-periphery divide, we argue that education research publications in non-Western countries should try to meet Western publishing standards in order to participate in global knowledge production and research vitality. Implications for emerging countries that strive to transform their research scholarship are discussed. The testing effect is a well-known concept referring to gains in learning and retention that can occur when students take a practice test on studied material before taking a final test on the same material. Research demonstrates that students who take practice tests often outperform students in nontesting learning conditions such as restudying, practice, filler activities, or no presentation of the material. 
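Effect sizes like those aggregated in the meta-analyses above (e.g., 0.78 and 0.87 for strategy instruction) are typically standardized mean differences between a treatment and a comparison condition. A minimal sketch of pooled-SD Cohen's d, with invented group statistics:

```python
import math

# Standardized mean difference (Cohen's d with a pooled SD), the kind of
# effect size that meta-analyses of strategy instruction or practice
# testing aggregate. The group statistics below are made up.
def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: difference in means divided by the pooled SD."""
    pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Treatment group scores 8 points higher; both SDs are 10, so d = 0.8,
# conventionally a "large" effect.
d = cohens_d(mean_t=78.0, sd_t=10.0, n_t=30, mean_c=70.0, sd_c=10.0, n_c=30)
```

A meta-analytic mean effect is then a (weighted) average of many such d values, with moderator analyses asking whether d varies systematically across study features.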
However, evidence-based meta-analysis is needed to develop a comprehensive understanding of the conditions under which practice tests enhance or inhibit learning. This meta-analysis fills this gap by examining the effects of practice tests versus nontesting learning conditions. Results reveal that practice tests are more beneficial for learning than restudying and all other comparison conditions. Mean effect sizes were moderated by the features of practice tests, participant and study characteristics, outcome constructs, and methodological features of the studies. Findings may guide the use of practice tests to advance student learning, and inform students, teachers, researchers, and policymakers. This article concludes with the theoretical and practical implications of the meta-analysis. Billions of images are shared worldwide every few days via social platforms such as Instagram, Pinterest, Snapchat and Twitter. The social web and mobile devices make it quicker and easier than ever before for young people to communicate emotions through digital images. There is a need for greater knowledge of how to educate children and young people formally in the sophisticated, multimodal language of emotions. This includes semiotic choices in visual composition, such as gaze, facial expression, posture, framing, actor-goal relations, camera angles, backgrounds, props, lighting, shadows and colour. In particular, enabling Indigenous students to interpret and communicate emotions in contemporary ways is vital because multimodal language skills are central to academic, behavioural and social outcomes. This paper reports original research on urban, Indigenous, upper primary students' visual imagery at school. A series of full-day, digital imagery workshops was conducted over several weeks with 56 students. The photography workshops formed part of a three-year participatory community research project with an Indigenous school in Southeast Queensland, Australia.
The archived student images were organised and analysed to identify attitudinal meanings from the appraisal framework, tracing types and subtypes of affect and their positive and negative forms. The research has significant implications for teaching students how to design high-quality visual and digital images to evoke a wide range of positive and negative emotions, with particular considerations for Australian Indigenous students. In this article we explore teachers' beliefs regarding effective text for Pasifika students, a group at risk of underachieving nationally. We report the features of texts that teachers consider important when selecting texts for their Pasifika students. Primary school teachers (N = 11) were purposively selected for their demonstrated effectiveness in supporting Pasifika students' achievement in literacy. Teacher nominations and explanations of effective and less effective texts for Pasifika students were presented at small focus group discussions and led to conversations about how teachers used those texts. Subsequently, a sample of text nominations was independently analysed and the results considered alongside reported beliefs. Findings suggest teachers draw on interactions between their knowledge of texts, their knowledge of students and curricular goals. Teachers' selections were largely instructional readers, most often narrative in structure. Teachers reported constraining the challenges of text for Pasifika students to create controlled conditions for a focus on the learning of target skills. We explore the implications of teachers' choices of texts for literacy development, including the unintended risks of those instructional choices. The possibilities for learning and the constraints created through the selection of text for immediate short-term goals are considered in terms of students' textual diet and their literacy development over time.
This article draws on a research project undertaken in a state secondary school that explored ways of engaging students in the content area of science. The paper argues that high school teachers teaching in specialist areas can better cater for student needs through attention to a pedagogy that is literacy focused. This is particularly relevant in content-area subjects in the secondary school, where many teachers have not had access to pre-service literacy training and, traditionally, teaching approaches have been content focused. Moreover, contemporary schools are now places characterised by linguistic, cultural and social diversity, and, coupled with Australia's push for STEM (science, technology, engineering and mathematics), it is helpful if science teaching incorporates productive (student engagement) and inclusive (student diversity) approaches. A discursive analysis of classroom talk excerpts from three science lessons is used to make comparisons: one lesson from early in the project, when the nature of science teaching was investigated, and two later lessons that resulted from the findings of investigating the first. The talk was coded using an IRE (initiation-response-evaluation) structure to show how student activity and engagement increased as a result of a pedagogical change. The findings of this research have implications for the way content areas are taught in some secondary schools. This paper investigates the language of examination reports for senior secondary English courses in New South Wales, Victoria, and South Australia. A combination of Legitimation Code Theory (LCT) and Systemic Functional Linguistics (SFL) is used to examine the types of knowledge and knower that are valued in examinations, and how language is used to describe successful and less successful writing, and the candidates who produce these texts.
The analysis suggests that subject English values an elite code (at least in examination settings), in which both an 'insightful' approach to texts and skilled writing justifying that analysis are valued, and that students who are unable to take up these discursive practices are imagined as lazy and callow. The paper concludes with implications for teachers and examiners, arguing that teachers must make students aware of the 'dual-sided' nature of subject English, and that examiners should be cognisant of potential bias in their view of responses and their writers. Regular engagement in recreational book reading remains beneficial beyond early childhood. While most of the research in reading motivation focuses on the early schooling years, regular recreational book reading remains a highly beneficial practice beyond childhood, as it continues to enhance literacy skills and may help to maintain cognitive stamina and health into old age. Understanding why some individuals are avid readers in adulthood can offer insight into how to foster greater frequency of reading through both early and later interventions. This paper reports on data collected in the 2015 International Study of Avid Book Readers, which posed the question 'Why do you read books?' in order to capture self-reported motivations for reading from an adult sample. Qualitative data collected from 1,022 adult participants are analysed in order to explore the diverse and often interrelated motivations of adult avid book readers. Recurring motivations included perspective-taking; knowledge; personal development; mental stimulation; habit, entertainment and pleasure; escapism and mental health; books as friends; imagination and creative inspiration; and writing, language and vocabulary.
Findings offer a greater understanding of the reading preferences and motivations of adult avid book readers, highlighting multiple potential points of engagement for fostering positive attitudes toward recreational book reading across the lifetime. As the population ages, continuity of care (CoC) has increasingly become a particularly important issue. Articles published from 1994 to 2014 were identified from electronic databases. Studies with a randomized controlled design and elderly adults with chronic illness were included if the Short Form-36 (SF-36) was used as an outcome indicator to evaluate the effect of CoC. Seven studies were included for analysis, with a total of 1,394 participants. The results showed that CoC interventions can significantly improve the physical function, physical role function, general health, social function, and vitality dimensions of QoL for elderly people with chronic disease. This study describes health care professionals' knowledge regarding patient safety. A quantitative study using questionnaires was conducted in three multi-disciplinary hospitals in Western Lithuania. Data were collected in 2014 from physicians, nurses, and nurse assistants. The overall results indicated quite a low level of safety knowledge, especially in regard to knowledge concerning general patient safety. The health care professionals' background factors, such as their profession, education, the information about patient safety they were given during their vocational and continuing education, as well as their experience in their primary speciality, seemed to be associated with several patient safety knowledge areas. Despite a wide variation in background factors, the knowledge level of respondents was generally found to be low. This requires that further research into health care professionals' safety knowledge related to specific issues such as medication, infection, falls, and pressure sore prevention be undertaken in Lithuania.
This cross-sectional study explored the level of patient needs and satisfaction in women undergoing day surgery. A consecutive sample of 233 women was recruited from a women's health care center in South Korea. Demographic and disease-specific characteristics, patient needs, and satisfaction were measured. Patient needs were evaluated based on a patient-centered care framework; the overall mean was 4.21 (.7) out of a possible 5. The mean score for overall patient satisfaction was 3.70 (.5) out of a possible 5. Among the five subdomains of patient needs, involvement of family and friends had the highest mean score. Day surgery care should respond to the shift of care from hospital to home by preparing family and friends to provide appropriate home care. This study reports high levels of patient needs and adds to the body of knowledge on perioperative nursing care interventions for women undergoing outpatient day surgery. Guided by relational cultural theory, we conducted a qualitative study to examine the relationship experiences of African American transgender women living in North Carolina. A convenience sample of 15 transgender women participated in the study. Semi-structured interviews, guided by an investigator-developed interview guide, were used to explore the personal experiences of transgender women on individual, family, and organizational levels. The findings provide a scheme for understanding the process through which transgender women's relationships hinder or enhance their ability to connect with individuals, family, and organizations. Nurses can use these findings to better understand the connectedness that occurs or does not occur in transgender women's relationships and provide culturally competent care to empower them to become resilient. Adolescence is an unpredictable stage of life with varied and rapid changes.
In Jordan, health-related quality of life (HRQoL) has been examined among diabetic and obese children and adolescents. The purpose of this study was to assess the HRQoL of healthy Jordanian adolescents. Three hundred fifty-four male and female adolescents aged 12 to 19 participated in the study. A descriptive comparative design was employed to investigate adolescents' HRQoL. The results revealed statistically significant differences in physical well-being, psychosocial well-being, and autonomy in favor of male adolescents. In addition, statistically significant differences were observed in favor of nonsmoking adolescents in psychosocial well-being, self-perception, parent relations and home life, financial resources, social relations and peers, and school environment. In conclusion, the creation of a school health nurse role in Jordanian schools is crucial for helping adolescents improve their health. Self-management of osteoarthritis (OA) of the knee is important for treating this chronic disease. This study developed and psychometrically tested a new instrument for measuring adult patients' self-management needs for knee osteoarthritis (SMNKOA). The theoretical framework of self-care guided the development of the 35-item SMNKOA scale. Participants (N = 372) were purposively sampled from orthopedic clinics at medical centers in Taiwan. The content validity index was 0.83. Principal components analysis identified a three-factor solution, accounting for 53.19% of the variance. The divergent validity was -0.67; convergent validity was -0.51. Cronbach's alpha was .95, Pearson's correlation coefficient was .88, and the intraclass correlation coefficient was .95. The scale's reliability and validity support the SMNKOA as a tool to measure the self-management needs of adults with knee OA.
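The internal-consistency statistic reported for the SMNKOA scale (Cronbach's alpha) can be computed directly from an item-response matrix. A minimal sketch with an invented five-respondent, four-item matrix (not the study's data):

```python
# Cronbach's alpha: internal consistency of a multi-item scale.
# rows = respondents, columns = items; the toy matrix is invented.
def cronbachs_alpha(rows):
    k = len(rows[0])                       # number of items
    def var(xs):                           # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 5, 4],
    [1, 2, 1, 2],
    [5, 4, 5, 5],
]
alpha = cronbachs_alpha(responses)  # high: items move together across respondents
```

Values near .95, as reported for the SMNKOA, indicate that responses to the items covary strongly, so the summed score behaves as a coherent scale.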
Nurses and other health care providers can use this instrument to evaluate knee OA patients and identify strategies for improving health-related outcomes and patient education. Rheumatoid arthritis (RA) is a chronic inflammatory autoimmune disease that greatly impacts one's physical and psychosocial well-being. The purpose of this study was to explore the experiences and support needs of adult patients living with RA. A descriptive qualitative study was conducted, and 16 adults with RA were interviewed from October 2013 to January 2014. The transcribed data were analyzed using thematic analysis. Five themes were identified: altered physical capacity and well-being, psychological and emotional challenges, changes in social life, coping strategies, and support received and further support needs. This study provided insights into the experiences and support needs of patients with RA in Singapore. Physical and psychosocial challenges experienced by patients affected their daily and social activities. Patients' needs for a variety of support should be addressed. In this paper we consider a predator-prey model with Leslie-Gower functional response. We present the necessary and sufficient conditions for the nonexistence of limit cycles by application of the generalized Dulac theorem. As a result, we give the necessary and sufficient conditions under which the local asymptotic stability of the positive equilibrium implies global stability for this model. Our results extend and improve the results presented by Aghajani and Moradifam (2006) and Hsu and Huang (1995). (C) 2017 Elsevier Ltd. All rights reserved. This paper revisits the previously known uniqueness result for a one-dimensional damped model of a suspension bridge. Using standard techniques, however with finer arguments, we provide a significant improvement and extension of the allowed interval for the stiffness parameter. (C) 2017 Elsevier Ltd. All rights reserved.
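The Leslie-Gower predator-prey abstract above concerns global stability of the positive equilibrium. That behaviour can be probed numerically; a forward-Euler sketch of the classical model with illustrative parameter values (our own choices, not the paper's analysis or its functional-response variant):

```python
# Euler-method sketch of the classical Leslie-Gower predator-prey model:
#   x' = x*(r1 - b1*x) - a1*x*y      (prey, logistic growth minus predation)
#   y' = y*(r2 - a2*y/x)             (predator, Leslie-Gower term)
# Parameters are illustrative; the abstract's analytical conditions are
# not implemented here.
def simulate(x0=1.0, y0=1.0, r1=1.0, b1=0.1, a1=0.2,
             r2=0.5, a2=0.5, dt=0.01, steps=20_000):
    x, y = x0, y0
    for _ in range(steps):
        dx = x * (r1 - b1 * x) - a1 * x * y
        dy = y * (r2 - a2 * y / x)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Positive equilibrium: y* = (r2/a2)*x*, and r1 - b1*x* - a1*y* = 0,
# giving x* = r1 / (b1 + a1*r2/a2) = 10/3 for these parameters.
x_star = 1.0 / (0.1 + 0.2 * (0.5 / 0.5))
x_end, y_end = simulate()  # trajectory settles near (x*, y*)
```

With these parameters the trajectory spirals into the positive equilibrium, which is the qualitative picture (local stability propagating to global stability) that the abstract's Dulac-type conditions formalize.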
On the basis of the dual variational formulation of a class of elliptic variational inequalities, a constitutive relation error is defined for the variational inequalities as an a posteriori error estimator, which is shown to guarantee strict upper bounds on the global energy-norm errors of kinematically admissible solutions. A numerical example is presented to validate the strictly bounding property of the constitutive relation error for the variational inequalities in question. (C) 2017 Elsevier Ltd. All rights reserved. We study the existence and uniqueness of positive periodic solutions for a class of integral equations of the form $\phi(x) = \int_{[x,x+\omega]\cap G} K(x,y)\,[f_1(y,\phi(y-\tau(y))) + f_2(y,\phi(y-\tau(y)))]\,dy$, $x \in G$, where $G$ is a closed subset of $\mathbb{R}^N$ with periodic structure. Our analysis relies on the fixed point theory for mixed monotone operators in Banach spaces. (C) 2017 Elsevier Ltd. All rights reserved. In this letter, we obtain sharp estimates on the growth rate of solutions to a nonlinear ODE with a nonautonomous forcing term. The equation is superlinear in the state variable, and hence solutions exhibit rapid growth and finite-time blow-up. The importance of ODEs of the type considered here stems from the key role they play in understanding the asymptotic behaviour of more complex systems involving delay and randomness. (C) 2017 Elsevier Ltd. All rights reserved. Let $a(x, y)$ be a nonnegative radial potential in a cylinder. In this paper, we study the solutions of the Dirichlet-Sch problem associated with a stationary Schrödinger operator. Existence of solutions for the Schrödinger equation and their asymptotic properties as $y \to +\infty$ and $y \to -\infty$ are also considered under suitable conditions. (C) 2017 Elsevier Ltd. All rights reserved. Inviscid traveling waves are ghost-like phenomena that do not appear in reality because of their instability. 
However, they are the reason for the complexity of the traveling wave theory of reaction-diffusion equations, and understanding them will help to resolve related puzzles. In this article, we obtain the existence, uniqueness and regularity of inviscid traveling waves under a general monostable nonlinearity that includes non-Lipschitz continuous reaction terms. Solution structures are obtained, such as the thickness of the tail and the free boundaries. (C) 2017 Elsevier Ltd. All rights reserved. In this paper, we investigate the existence of positive T-periodic solutions in a food web with one predator feeding on n prey species by Leray-Schauder degree theory, which plays a significant role in the further study of the uniqueness of limit cycles of this food web model. We start with a series of required spaces and operators and construct a bounded open set Omega over which the corresponding Leray-Schauder degree can be well-defined. Afterwards, by the invariance property of homotopy, we verify that the Leray-Schauder degree is not equal to 0 under certain conditions, thus implying the existence of positive T-periodic solutions. Finally, a numerical example is presented to illustrate our result. (C) 2017 Elsevier Ltd. All rights reserved. We consider a nonlinear Neumann problem driven by a nonhomogeneous differential operator and an indefinite potential. Using variational methods together with flow invariance arguments, we show that the problem has at least one nodal solution. The result presented in this paper gives an answer to the open question raised by Papageorgiou and Radulescu (2016). (C) 2017 Elsevier Ltd. All rights reserved. In this paper we study the well-posedness of some damped elastic systems in a Banach space. To illustrate our abstract results, we study the existence of classical solutions to some elastic systems with viscous and strong damping. (C) 2017 Elsevier Ltd. All rights reserved. 
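The finite-time blow-up mentioned in the superlinear-ODE abstract above has a classic closed-form illustration. The toy equation x' = x^2 below is an assumption for illustration (the paper treats a forced, more general superlinear equation): its solution x(t) = x0/(1 - x0*t) blows up at t* = 1/x0, and a naive Euler integration tracks the rapid growth up to just before t*.

```python
# Finite-time blow-up of the superlinear toy ODE x' = x^2, x(0) = x0 > 0.
# Exact solution: x(t) = x0 / (1 - x0*t), which diverges at t* = 1/x0.
# This specific equation is an illustrative assumption, not the paper's model.

def exact(x0, t):
    """Closed-form solution of x' = x^2, valid for t < 1/x0."""
    return x0 / (1.0 - x0 * t)

def euler(x0, t_end, dt=1e-6):
    """Forward-Euler integration of x' = x^2 up to t_end < 1/x0."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * x * x
    return x

x0 = 2.0             # blow-up time t* = 1/x0 = 0.5
t = 0.45             # evaluate just before blow-up
approx = euler(x0, t)
print(round(exact(x0, t), 6))  # 20.0: tenfold growth as t approaches t*
```

Closer to t* the Euler error grows quickly, which is one reason sharp analytic growth estimates of the kind the abstract describes are valuable.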
Content Based Image Retrieval (CBIR) has been widely studied in the last two decades. Unlike text-based image retrieval techniques, CBIR uses the visual properties of images to obtain high-level semantic information. There is a gap between low-level features and high-level semantic information; this is called the semantic gap, and it is the most important problem in CBIR. In the early days, visual properties were extracted from low-level features such as color, shape, texture and spatial information. Local Feature Descriptors (LFDs) are more successful at increasing the performance of CBIR systems. A semantic bridge is then built with high-level semantic information. Sparse Representations (SRs) have become popular for achieving this aim in recent years. In this study, CBIR models in the literature that use LFDs and SRs are investigated in detail. The SR and LFD extraction algorithms are tested and compared within a CBIR framework for different scenarios. Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Histograms of Oriented Gradients (HoG), Local Binary Pattern (LBP) and Local Ternary Pattern (LTP) are used to extract LFDs from images. Random Features, K-Means and K-Singular Value Decomposition (K-SVD) algorithms are used for dictionary learning, and Orthogonal Matching Pursuit (OMP), Homotopy, Lasso, Elastic Net, Parallel Coordinate Descent (PCD) and Separable Surrogate Function (SSF) are used for coefficient learning. Finally, three methods recently proposed in the literature (Online Dictionary Learning (ODL), Locality-constrained Linear Coding (LLC) and Feature-based Sparse Representation (FBSR)) are also tested and compared with our framework results. All test results are presented and discussed. In conclusion, the most successful approach in our framework is to use LLC for the Coil20 data set and FBSR for the Corel1000 data set. We obtain 89% and 58% Mean Average Precision (MAP) for Coil20 and Corel1000, respectively. (C) 2017 Elsevier Ltd. 
All rights reserved. The key to label propagation is capturing the manifold structure of the data, which is usually represented by a graph. In semi-supervised multi-modality classification, existing methods often optimize the linear relation of the multi-graph for label propagation. However, the intrinsic manifold structure is not completely revealed by the linear fusion of the multi-graph, because the label changes in each propagation iteration dynamically influence the fusion relation of the multi-graph. In other words, the fusion relation of the multi-graph should be nonlinear because of the label changes in the propagation process, and cannot be precisely described by the fixed linear relation used in existing methods. To model the influence of this nonlinear relationship on the classification performance of label propagation, we propose dynamic graph fusion label propagation (DGFLP) for semi-supervised multi-modality classification. DGFLP jointly considers the relation of the multi-graph and the unique distribution of each graph, and models the varying relevance of the multi-graph during the propagation process. Moreover, DGFLP alternates between traditional label propagation and the new model function to describe the interaction between the multi-graph and the labels. The DGFLP solution provides not only the classification labels but also the nonlinear relation that encodes the dynamic changes of the multi-graph relationship during label propagation. The experimental results demonstrate that DGFLP outperforms state-of-the-art methods on the ORL, AR, Scenes 15, Caltech 101, and Caltech 256 databases. (C) 2017 Elsevier Ltd. All rights reserved. This paper aims to propose a new hyperspectral target-detection method termed the matched subspace detector with interaction effects (MSDinter). 
The MSDinter introduces "interaction effects" terms into the popular matched subspace detector (MSD), drawing on regression analysis in multivariate statistics and the bilinear mixing model in hyperspectral unmixing. In this way, the interaction between the target and the surrounding background, which should have been, but has not yet been, considered by the MSD, is modelled and estimated, so that superior target-detection performance can be achieved. Besides deriving the MSDinter methodologically, we also demonstrate its superiority empirically using two hyperspectral imaging datasets. (C) 2017 The Authors. Published by Elsevier Ltd. The Multiple Kernel Learning (MKL) literature has mostly focused on learning weights for base kernel combiners. Recent works using instance-dependent weights have achieved better performance than fixed-weight MKL approaches. This may be attributed to the fact that different base kernels have varying discriminative capabilities in distinct local regions of the input space. We refer to the zones of classification expertise of base kernels as their "Regions of Success" (RoS). We propose to identify and model them (during training) through a set of instance-dependent success prediction functions (SPFs) having high values in the RoS (and low values otherwise). During operation, the use of these SPFs as instance-dependent weighting functions promotes locally discriminative base kernels while suppressing the others. We have experimented with 21 benchmark datasets from various domains having large variations in terms of dataset size, interclass imbalance and number of features. Our proposal has achieved higher classification rates and balanced performance (for both positive and negative classes) compared to other instance-dependent and fixed-weight approaches. (C) 2017 Elsevier Ltd. All rights reserved. Extreme Learning Machine (ELM) is a popular machine learning method which can flexibly simulate the relationships in real-world classification applications. 
When facing problems (i.e., data sets) with a small number of samples (i.e., instances), ELM often suffers from overfitting. In this paper, we propose a new Instance Cloned Extreme Learning Machine (IC-ELM for short) which can handle a wide range of classification problems. IC-ELM uses an instance cloning method to balance the input data's distribution and extend the training data set, which alleviates the overfitting issue and enhances the testing classification accuracy. Experiments and comparisons on 20 UCI data sets, and validations on image and text classification applications, demonstrate that IC-ELM achieves superior results compared to the original ELM algorithm and its variants, as well as several other classical machine learning algorithms. (C) 2017 Elsevier Ltd. All rights reserved. The recent proliferation of advertising (ad) videos has driven research in multiple applications, ranging from video analysis to video indexing and retrieval. Among them, classifying ad videos is a key task because it allows automatic organization of videos according to categories or genres, which in turn enables ad video indexing and retrieval. However, classifying ad videos is challenging compared to other types of video classification because of their unconstrained content. While many studies focus on embedding ads relevant to videos, to our knowledge, few focus on ad video classification. In order to classify ad videos, this paper proposes a novel ad video representation that aims to sufficiently capture the latent semantics of video content from multiple views in an unsupervised manner. In particular, we represent ad videos from four views: bag-of-features (BOF), vector of locally aggregated descriptors (VLAD), Fisher vector (FV) and object bank (OB). We then devise a multi-layer multi-view topic model, mlmv_LDA, which models the topics of videos from different views. 
A topical representation for videos, supporting category-related tasks, is finally achieved by the proposed method. Our empirical classification results on 10,111 real-world ad videos demonstrate that the proposed approach effectively differentiates ad videos. (C) 2017 Elsevier Ltd. All rights reserved. Many ellipse detection methods have been proposed for detecting ellipses in images. However, they are unsuitable for industrial images due to low signal-to-noise ratios (SNR). This paper presents an ellipse detection method that combines the advantages of Hough transform (HT) based methods with those of edge following methods; it is capable of detecting fragmented ellipses and is efficient in both computation and memory. Our method works in two steps. In the first step, an edge following method is proposed to quickly and accurately extract the majority of ellipses. For ellipses missed in the first step, candidate regions, each of which may contain one missed ellipse, are extracted in the second step using cluster analysis, and then an HT based method is performed on these regions to extract the missed ellipses. This not only guarantees the accuracy of the HT based method, but also saves memory and computation time. We test the performance of our method using both synthetic images and low-SNR industrial images. Experimental results demonstrate that the proposed method performs far better than existing methods in terms of recall, precision, F-measure, and reliability. In terms of reliability in particular, our method achieves a very high value close to 1, while the reliabilities of state-of-the-art methods are almost all less than 0.5. (C) 2017 Elsevier Ltd. All rights reserved. Recently, learning-based hashing methods, which are designed to preserve the semantic information, have shown promising results for approximate nearest neighbor (ANN) search problems. 
However, most of these methods require a large amount of labeled data, which is difficult to obtain in many real applications. With very limited labeled data available, in this paper we propose a semi-supervised hashing method that integrates manifold embedding, feature representation and classifier learning into a joint framework. Specifically, a semi-supervised manifold embedding is explored to simultaneously optimize feature representation and classifier learning so that the learned binary codes are optimal for classification. A two-stage hashing strategy is proposed to effectively address the corresponding optimization problem. In the first stage, an iterative algorithm is designed to obtain a relaxed solution. In the second stage, the hashing function is refined by introducing an orthogonal transformation to reduce the quantization error. Extensive experiments on three benchmark databases demonstrate the effectiveness of the proposed method in comparison with several state-of-the-art hashing methods. (C) 2017 Elsevier Ltd. All rights reserved. The evaluation of classification performance is crucial for algorithm and model selection. However, a performance measure for multiclass classification problems (i.e., more than two classes) has not yet been fully adopted in the pattern recognition and machine learning community. In this work, we introduce the multiclass performance score (MPS), a generic performance measure for multiclass problems. The MPS was designed to evaluate any multiclass classification algorithm under any arbitrary testing condition. This measure handles the case of unknown misclassification costs and imbalanced data, and provides confidence indicators for the performance estimation. We evaluated the MPS using real and synthetic data, and compared it against other frequently used performance measures. 
The results suggest that the proposed MPS captures the performance of a classifier with minimal influence from the training and testing conditions. This is demonstrated by its robustness towards imbalanced data and its sensitivity towards class separation in feature space. (C) 2017 Elsevier Ltd. All rights reserved. This paper presents a compact and efficient yet powerful binary framework based on image gradients for robust facial representation, termed Binary Gradient Patterns (BGP). To discover underlying local structures in the gradient domain, image gradients are computed along multiple directions and encoded into a set of binary strings. Certain types of these binary strings capture meaningful local structures and textures, as they detect micro oriented edges and retain strong local orientation, thus enabling strong discrimination. Face representations based on these structural BGP histograms exhibit profound robustness against various facial image variations, in particular illumination. The binary strategy, realized by local correlations, substantially simplifies the computational complexity and achieves extremely efficient processing, taking only 0.0032 s in Matlab for a typical image. Furthermore, the discrimination power of the BGP has been enhanced on a set of orientations of the image-gradient magnitudes. Extensive experimental results on various benchmarks demonstrate that the BGP-based representations significantly improve over existing local descriptors and state-of-the-art methods in terms of discrimination, robustness and complexity, and in many cases the improvements are substantial. Combined with deep networks, the proposed descriptors can further improve the performance of deep networks on real-world datasets. Matlab code for the BGP-based descriptors is available at: http://www.eee.manchester.ac.uk/our-researchfresearch-groupsisispiresearch-areasivipisoftwarei. (C) 2017 Elsevier Ltd. All rights reserved. 
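The gradient-sign encoding behind descriptors like BGP can be sketched in a few lines. The toy descriptor below (the four directions, the 4-bit codes, and the tiny test image are all simplifying assumptions; the published BGP encodes richer local structure) packs the signs of directional gradients into a per-pixel binary code and histograms the codes:

```python
# Toy gradient-sign binary descriptor in the spirit of the BGP abstract above.
# For each interior pixel, central differences are taken along four
# directions; each positive gradient sets one bit of a 4-bit code.

def binary_gradient_codes(img):
    """img: 2D list of grayscale values. Returns per-pixel 4-bit codes."""
    h, w = len(img), len(img[0])
    # direction offsets: horizontal, vertical, and the two diagonals
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for bit, (dy, dx) in enumerate(dirs):
                # central difference along this direction; keep only its sign
                grad = img[y + dy][x + dx] - img[y - dy][x - dx]
                if grad > 0:
                    code |= 1 << bit
            codes.append(code)
    return codes

def histogram(codes, bins=16):
    """Count occurrences of each 4-bit code; this is the face descriptor."""
    hist = [0] * bins
    for c in codes:
        hist[c] += 1
    return hist

# A tiny vertical-edge image: left half dark, right half bright.
img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
hist = histogram(binary_gradient_codes(img))
```

On this image only two codes occur: 0 in the flat regions and 5 (horizontal and diagonal bits set) along the edge, illustrating how the binary strings respond to oriented micro-edges.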
This paper proposes a novel fuzzy model-based unsupervised learning algorithm with boundary correction for image segmentation. We propose a fuzzy Generalized Gaussian Density (GGD) segmentation model and a GGD-based agglomerative fuzzy algorithm for grouping image pixels. The merits of the algorithm are that it is not sensitive to initial parameters and that the number of groups can be estimated via a validation technique. To minimize the objective function of the model, we define a dissimilarity measure based on the Kullback-Leibler divergence of the GGDs, which computes the discrepancy between GGDs in the space of generalized probability distributions. To effectively segment images with various textures, we propose a two-stage fuzzy GGD segmentation algorithm. The first stage adopts the proposed fuzzy algorithm to obtain an initial segmentation, and the second stage improves the initial segmentation by image boundary correction. Experimental results show that our proposed method has a promising performance compared with existing approaches. (C) 2017 Elsevier Ltd. All rights reserved. Text detection in mobile video is challenging due to poor quality, complex backgrounds, arbitrary orientation and text movement. In this work, we introduce fractals for text detection in video captured by mobile cameras. We first use fractal properties such as self-similarity in a novel way in the gradient domain for enhancing low-resolution mobile video. We then propose to use k-means clustering for separating text components from non-text ones. To make the method independent of font size, fractal expansion is further explored in the wavelet domain in a pyramid structure for text components in the text cluster to identify text candidates. Next, potential text candidates are obtained by studying the optical flow property of text candidates. Direction-guided boundary growing is finally proposed to extract multi-oriented texts. 
The method is tested on different datasets, including low-resolution video captured by mobile cameras, the benchmark ICDAR 2013 video, YouTube Video Text (YVT) data, ICDAR 2013, Microsoft, and MSRA arbitrary-orientation natural scene datasets, to evaluate the performance of the proposed method in terms of recall, precision, F-measure and misdetection rate. To show the effectiveness of the proposed method, the results are compared with state-of-the-art methods. (C) 2017 Elsevier Ltd. All rights reserved. In recent years, significant advances in visual tracking have been made, and numerous outstanding algorithms have been proposed. However, the trade-off between tracking accuracy and speed has not yet been comprehensively addressed. In this paper, to address the challenging aspects of visual tracking and, in particular, to achieve accurate real-time tracking, we propose a novel real-time kernel-based visual tracking algorithm based on superpixel clustering and hybrid hash analysis. By adopting superpixel clustering and segmentation, we reconstruct the appearance model of the target and its surrounding context in the initialization step. By introducing overlap and intensity analysis, we divide the reconstructed model into several superpixel blocks. Based on the theory of circulant matrices and Fourier analysis, we build a Gaussian kernel correlation filter to roughly locate the position of each candidate block. To further improve the kernel correlation filter method, we compute each block's maximal response value in the confidence map and estimate each block's scale variation based on a peak value comparison. Additionally, we propose a hybrid hash analysis strategy and integrate it with superpixel analysis for target block modification. By calculating a hybrid hash sequence based on L*a*b* color and the discrete cosine transform, we conduct superpixel block modification to accurately locate the target and estimate the target's scale variation. 
Extensive experiments on visual tracking benchmark datasets show that our tracking algorithm outperforms state-of-the-art algorithms and demonstrate its effectiveness and efficiency. (C) 2017 Elsevier Ltd. All rights reserved. There have been significant advances in deep learning based single-image super-resolution (SISR) recently. With the advantage of deep neural networks, deep learning based methods can learn the mapping from low-resolution (LR) space to high-resolution (HR) space in an end-to-end manner. However, most of them use only a single model to generate the HR result. This brings two drawbacks: (1) the risk of getting stuck in local optima and (2) the limited representational ability of a single model when handling various input LR images. To overcome these problems, we propose a general approach that introduces the idea of ensembles into the SR task. Furthermore, instead of simple averaging, we propose a back-projection method to determine the weights of the different models adaptively. In this paper, we focus on the sparse coding network and propose the ensemble based sparse coding network (ESCN). Through the combination of multiple models, our ESCN can generate more robust reconstructed results and achieves state-of-the-art performance. (C) 2017 Elsevier Ltd. All rights reserved. Feature noise, namely noise on the inputs, is a long-standing plague to the support vector machine (SVM). The conventional SVM with the hinge loss (C-SVM) is sparse but sensitive to feature noise, whereas the pinball loss SVM (pin-SVM) enjoys noise robustness but loses sparsity completely. To bridge the gap between C-SVM and pin-SVM, we propose the truncated pinball loss SVM ($\overline{\mathrm{pin}}$-SVM) in this paper. It provides a flexible framework for trading off sparsity against feature noise insensitivity. Theoretical properties, including the Bayes rule, a misclassification error bound, sparsity, and noise insensitivity, are discussed in depth. 
To train the $\overline{\mathrm{pin}}$-SVM, the concave-convex procedure (CCCP) is used to handle the non-convexity, and the decomposition method is used to deal with the subproblem of each CCCP iteration. Accordingly, we modify the popular solver LIBSVM to conduct experiments, and the numerical results validate the properties of the $\overline{\mathrm{pin}}$-SVM on synthetic and real-world data sets. (C) 2017 Elsevier Ltd. All rights reserved. Sparse representation based face recognition (SRC) has received much attention in recent years. Representative algorithms are deformable sparse-representation based classification (DSRC) and shape-constrained texture matching, which focus on misalignment and shape change, respectively. Although these algorithms achieve improved accuracy and robustness, their efficiency is unsatisfactory, particularly when applied in a large-scale system. The main difficulty is the expensive computation required to align the gallery images and the probe image. To solve these problems, this paper proposes a fast alignment strategy for sparse representation based algorithms. The key idea is to pre-compute the most expensive operation, the Hessian matrix, which previously had to be recalculated in each iteration. Subsequently, with the help of the proposed fast alignment strategy, the two algorithms, deformable SRC and shape-constrained texture matching, are extended to their fast versions, i.e., fast deformable SRC and fast shape-constrained texture matching. Experimental evaluations have been conducted on public datasets such as Multi-PIE, FERET, and Cohn-Kanade. We demonstrate that the proposed alignment strategy greatly improves the efficiency of the original algorithms without losing accuracy or robustness. (C) 2017 Elsevier Ltd. All rights reserved. We propose a robust method for the automatic identification of seed points for the segmentation of coronary arteries from coronary computed tomography angiography (CCTA). 
The detection of the aorta and the two ostia for use as seed points is required for the automatic segmentation of coronary arteries. Our method is based on a Bayesian framework combining anatomical and geometrical features. We demonstrate the robustness and accuracy of our method by comparison with two conventional methods on 130 CT cases. (C) 2017 Elsevier Ltd. All rights reserved. New methods for generating synthetic handwriting images for biometric applications have recently been developed. The temporal evolution of handwriting from childhood to adulthood is usually left unexplored in these works. This paper proposes a novel methodology for including temporal evolution in a handwriting synthesizer by simplifying the text trajectory plan and handwriting dynamics. This is achieved through a tailored version of the kinematic theory of rapid human movements and the neuromotor-inspired handwriting synthesizer. The realism of the proposed method has been evaluated by comparing the temporal evolution of real and synthetic samples both quantitatively and subjectively. The quantitative test is based on a visual perception algorithm that compares the letter variability and the number of strokes in the real and synthetic handwriting produced at different ages. In the subjective test, 30 people are asked to evaluate the perceived realism of the evolution of the synthetic handwriting. (C) 2017 Elsevier Ltd. All rights reserved. The fuzzy c-partition entropy has been widely adopted as a global optimization technique for finding the optimal thresholds when performing multilevel gray image segmentation. Nevertheless, existing fuzzy c-partition entropy approaches generally have two limitations, i.e., the partition number c needs to be manually tuned for different inputs, and the methods can process grayscale images only. To address these two limitations, an unsupervised multilevel segmentation algorithm is presented in this paper. 
The core step of our algorithm is a bi-level segmentation operator, which uses binary graph cuts to maximize both the fuzzy 2-partition entropy and segmentation smoothness. By iteratively applying this bi-level segmentation operator, multilevel image segmentation is achieved in a hierarchical manner: starting from the input color image, our algorithm first picks the color channel that best segments the image into two labels, and then iteratively selects channels to further split each label until convergence. The experimental results demonstrate that the presented hierarchical segmentation scheme can efficiently segment both grayscale and color images. Quantitative evaluations on classic gray images and the Berkeley Segmentation Database show that our method is comparable to state-of-the-art multi-scale segmentation methods, yet has the advantage of being unsupervised, efficient, and easy to implement. (C) 2017 Elsevier Ltd. All rights reserved. Face recognition under variable pose and lighting is still one of the most challenging problems, despite the great progress achieved in unconstrained face recognition in recent years. Pose variation is essentially a misalignment problem, together with an invisible region caused by self-occlusion. In this paper, we propose a lighting-aware face frontalization method that aims to generate both lighting-recovered and lighting-normalized frontalized images, based on only five fiducial landmarks. Basic frontalization is first performed by aligning a generic 3D face model to the input face and rendering it at frontal pose, with an accurate visible-region estimation based on face borderline detection. Then we apply the illumination-invariant quotient image, estimated from the visible region, as a facial symmetry feature to fill the invisible region. Lighting-recovered face frontalization (LRFF) is conducted by rendering the estimated lighting on the invisible region. 
By adjusting the combination parameters, lighting-normalized face frontalization (LNFF) is performed by rendering the canonical lighting on the face. Despite its simplicity, our LRFF method competes well with more sophisticated frontalization techniques in experiments on the LFW database. Moreover, combined with our recently proposed LRA-based classifier, the LNFF-based method outperforms deep learning based methods by about 6% in the challenging experiment on the Multi-PIE database under variable pose and lighting. (C) 2017 Elsevier Ltd. All rights reserved. We present an image-based 3D face shape reconstruction method which transfers shape cues inferred from source face images to guide the reconstruction of the target face. Specifically, a sparse face shape adaption mechanism is used to generate a target-specific reference shape by adaptively and selectively combining source face shapes. This reference shape can also facilitate the reconstruction optimization for the target shape. As an off-line process, each source shape has been derived from a set of sufficient source images (more than 9) based on a non-Lambertian reflectance model. Such a process allows for the existence of cast shadows and specularities, and infers the source shape more accurately. Guided by the target-specific reference shape, the shape of a target face can be estimated using a small number of images (even only one). The proposed reconstruction method involves a lighting estimation and an albedo estimation for the target face. No standard 3D shape (such as a high-precision scanned 3D face) is required in the reconstruction process. Compared to the state of the art, including Photometric Stereo, Tensor Spline, the single-reference based method, and the GEM algorithm, the proposed sparse transfer model can produce visually better facial details and obtain smaller reconstruction errors. (C) 2017 Elsevier Ltd. All rights reserved. 
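The lighting-estimation step mentioned in the reconstruction abstract above can be sketched in its simplest classical form. The snippet below is a heavily simplified Lambertian toy version (the paper uses a non-Lambertian reflectance model; the normals, intensities, and light direction here are made-up illustrative data): given per-pixel surface normals and intensities I = n . l, the albedo-scaled light vector l is recovered by linear least squares.

```python
# Toy Lambertian lighting estimation: recover the light vector l from
# observed intensities I_k = n_k . l via the normal equations (A^T A) l = A^T b.
# All data below are illustrative assumptions, not from the cited paper.

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def estimate_light(normals, intensities):
    """Least squares: minimize sum_k (n_k . l - I_k)^2 over the 3-vector l."""
    AtA = [[sum(n[i] * n[j] for n in normals) for j in range(3)]
           for i in range(3)]
    Atb = [sum(n[i] * I for n, I in zip(normals, intensities))
           for i in range(3)]
    return solve3(AtA, Atb)

# Synthetic scene: a known light direction and four surface normals.
true_l = (0.2, 0.3, 0.9)
normals = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.6, 0.8, 0.0)]
intensities = [sum(n[i] * true_l[i] for i in range(3)) for n in normals]
l = estimate_light(normals, intensities)
print([round(v, 3) for v in l])  # [0.2, 0.3, 0.9]
```

With noise-free synthetic intensities the light is recovered exactly; on real faces, shadows and specularities violate the Lambertian assumption, which motivates the non-Lambertian model used in the abstract.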
Two-dimensional principal component analysis (2DPCA) employs the squared F-norm as the distance metric for feature extraction and is widely used in the field of pattern analysis and recognition, especially face image analysis. However, it is sensitive to the presence of outliers because the squared F-norm greatly amplifies the role of outliers in the criterion function. To handle this problem, we propose a robust formulation of 2DPCA, namely optimal mean 2DPCA with F-norm minimization (OMF-2DPCA). In OMF-2DPCA, distance in the spatial dimensions (attribute dimensions) is measured by the F-norm, while the summation over different data points uses the 1-norm. Moreover, we center the data using the optimized mean rather than the fixed mean, which further improves the robustness of our method. To solve OMF-2DPCA, we propose a fast iterative algorithm, which has a closed-form solution in each iteration. Experimental results on face image databases illustrate its effectiveness and advantages. (C) 2017 Elsevier Ltd. All rights reserved. Complex activity recognition is challenging, since a complex activity can be performed in different ways, each with its own configuration of primitive events and their temporal dependencies. To address such temporal relational variabilities in complex activity recognition, we propose a Bayesian network based probabilistic generative framework that employs Allen's interval relation network to represent local temporal dependencies in a generative way. By employing the Chinese restaurant process and introducing relation generation constraints, our framework can characterize these unique internal configurations of a particular complex activity as a joint distribution. Three concrete models are implemented based on our framework. Specifically, in this paper we improve two of our previous models and provide an enhanced model to handle temporal relational variabilities in complex activities more efficiently. 
Empirical evaluations on three benchmark datasets demonstrate the competitiveness of our framework. In particular, our models are shown to be rather robust against errors caused by the low-level predictions from raw signals. (C) 2017 Elsevier Ltd. All rights reserved. Vast collections of documents available in image format need to be indexed for information retrieval purposes. In this framework, word spotting is an alternative to optical character recognition (OCR), which is rather inefficient for recognizing text of degraded quality and unknown fonts usually appearing in printed text, or writing style variations in handwritten documents. Over the past decade there has been growing interest in addressing document indexing using word spotting, reflected by the continuously increasing number of approaches. However, very few comprehensive studies exist that analyze the various aspects of a word spotting system. This work reviews the recent approaches and fills gaps in several topics with respect to the related literature. The nature of the texts and the inherent challenges addressed by word spotting methods are thoroughly examined. After presenting the core steps that compose a word spotting system, we investigate the use of retrieval enhancement techniques based on relevance feedback, which improve the retrieved results. Finally, we present the datasets widely used for word spotting, describe the evaluation standards and measures applied for performance assessment, and discuss the results achieved by the state of the art. (C) 2017 Elsevier Ltd. All rights reserved. Recently, attempts have been made to collect millions of videos to train Convolutional Neural Network (CNN) models for action recognition in videos. However, curating such large-scale video datasets requires immense human labor, and training CNNs on millions of videos demands huge computational resources. 
In contrast, collecting action images from the Web is much easier, and training on images requires much less computation. In addition, labeled web images tend to contain discriminative action poses, which highlight discriminative portions of a video's temporal progression. Through extensive experiments, we explore the question of whether we can utilize web action images to train better CNN models for action recognition in videos. We collect 23.8K manually filtered images from the Web that depict the 101 actions in the UCF101 action video dataset. We show that by utilizing web action images along with videos in training, significant performance boosts of CNN models can be achieved. We also investigate the scalability of the process by leveraging crawled (unfiltered) web images for UCF101 and ActivityNet. Using unfiltered images we can achieve performance improvements on par with using filtered images. This means we can further reduce annotation labor and easily scale up to larger problems. We also shed light on an artifact of fine-tuning CNN models that reduces the effective parameters of the CNN, and show that using web action images can significantly alleviate this problem. (C) 2017 Elsevier Ltd. All rights reserved. Human action recognition based on skeletons has wide applications in human-computer interaction and intelligent surveillance. However, view variations and noisy data bring challenges to this task. Moreover, effectively representing spatio-temporal skeleton sequences remains an open problem. To address these problems jointly, this work presents an enhanced skeleton visualization method for view-invariant human action recognition. Our method consists of three stages. First, a sequence-based view-invariant transform is developed to eliminate the effect of view variations on the spatio-temporal locations of skeleton joints. 
Second, the transformed skeletons are visualized as a series of color images, which implicitly encode the spatio-temporal information of skeleton joints. Furthermore, visual and motion enhancement methods are applied to the color images to enhance their local patterns. Third, a convolutional neural network based model is adopted to extract robust and discriminative features from the color images. The final action class scores are generated by decision-level fusion of deep features. Extensive experiments on four challenging datasets consistently demonstrate the superiority of our method. (C) 2017 Elsevier Ltd. All rights reserved. The paper presents an analysis of the ecological impact of combined natural and man-made contamination of near-Earth space on the environment around the Earth and on human space activities. Near-Earth space (NES) is treated here in an enlarged sense, uniting traditional cosmic space, conventionally starting from about 100 km, with the lower region filled by the atmosphere. This makes the analysis of some important characteristics (such as, for example, the astronomic transparency of the environment) more comprehensive and allows those characteristics to be evaluated from a single standpoint. Estimates are given of the potential consequences of collisions of dangerous space objects, both among themselves and with operating space systems. Research extrapolating existing objective estimates and observations of pollution in low Earth orbits shows the probability of catastrophic growth in the number of objects and orbital debris in low orbits, leading to the practical impossibility of further peaceful scientific exploration of space. The development of methodologies and technologies for the comprehensive investigation of shock impacts on space objects and systems thus becomes an urgent task in creating the conditions for the safe development of peaceful space exploration. 
This research involves collision processes, including their dynamics and associated consequences; the development of means and pathways of protection for spaceships; the protection of structures and systems from the influence of high-speed objects; means and methods of removing debris from space; and the use of dangerous objects. A measurement technology with applications in tensometry is considered, providing data acquisition on the shock impulse over a wide range of collision conditions and for a physically and structurally diverse set of impacting objects. Parameters of an elongated hypervelocity projectile formed by the detonation of a cumulative charge are determined. The projectile's mass, impact velocity and energy were determined. Penetration pressures are calculated by the deceleration method. The values obtained are compared with commonly known data on the phase states of the interacting materials - melting and partial evaporation. An elongated projectile's penetration through a screen protection into a target is analyzed. The experimental data obtained reveal the high efficiency of screen protection against an elongated hypervelocity projectile in the regime where the interacting materials evaporate. Various means of protecting optical devices arranged on orbital stations against space debris and probable terrorist acts in space are examined in the paper. The materials that are transparent in the visible spectrum and applied in the development of transparent armor are given. Variants and prospects for the creation of new materials intended to protect optical devices in space are considered. Experimental studies of the process of hypervelocity (up to 6 km/s) impact of a mm-size projectile on a thin aluminum plate are described. A numerical simulation of this process is presented. Data on the evolution, structure, and composition of the debris cloud formed as a result of the impact are reported. The basic specific features of the debris cloud formation are revealed. 
The formation of high-velocity compact elements by shaped charges with a liner of a combined hemisphere-cylinder shape has been analyzed by numerical simulation of a two-dimensional axisymmetric problem of continuum mechanics. This liner jet generator contains a part of a hemisphere, a truncated sphere or a slightly prolate ellipsoid, and a cut-off part in the form of a cylinder. The mass and velocity characteristics of the formed compact elements depend on changes in the geometric parameters of the combined cumulative liner. Variants combining the cumulative liner with an explosive device ensure the formation of gradient-free elements weighing from 5 g to 15 g at velocities of 7.5-10 km/s. This simple explosive device can be used to simulate the conditions of single and group impacts of micrometeorites and space debris objects on rocket and space technology near the Earth. The accumulation of microdamage as a result of intensive plastic deformation leads to a decrease in the average density of the high-velocity elements formed at the explosive collapse of specially shaped metal liners. For the compaction of such elements in tests of spacecraft meteoroid protection reliability, the use of magnetic-field action on the produced elements along their trajectory, before interaction with a target, is proposed. On the basis of numerical modeling within a one-dimensional axisymmetric problem of continuum mechanics and electrodynamics, the physical processes occurring in a porous conducting elastoplastic cylinder placed in a magnetic field are investigated. Using this model, the parameters of the magnetic-pulse action necessary for the compaction of steel and aluminum elements are determined. A study of an explosive-throwing device (ETD) was undertaken to simulate the hypervelocity impact of space debris fragments (SDF) and meteoroids with spacecraft. 
The principle of operation of an ETD is based on the cumulative effect in combination with cut-off of the head of the cumulative jet, which enables one to simulate a compact particle, such as a meteoroid or a fragment of space debris. Different ETD design schemes, with different explosive charge compositions and initiation schemes yielding notably low jet cut-off speeds, are explored, and a method to control the particle velocity is proposed. Numerical simulation of the device's operating modes and the basic technical characteristics of experimental testing are investigated. Laboratory modelling of active space experiments is described. To accelerate an impactor, a specially developed electromagnetic railgun capable of accelerating 10-mg cubic impactors to velocities of up to 5.5 km/s was used. As targets, different materials, such as ice and moon-like regolith, were employed. To simulate a collision with a comet, hypervelocity impacts on water ice targets with different densities were performed. To simulate an impact on the Moon's surface, a special method for preparing regolith targets with compositions corresponding to the Moon's surface materials was developed. Impact experiments with regoliths of different densities were carried out. Analytical and experimental studies conducted at the Semenov Institute of Chemical Physics are described, investigating the use of pyrotechnic compositions, i.e., thermites, to reduce the risk of thermally stable parts of deorbiting end-of-life LEO satellites falling to the Earth. The main idea is to use passive heating during uncontrolled re-entry to ignite a thermite composition fixed on the titanium surface, with the subsequent combustion energy release sufficient to perforate the titanium cover. It is supposed that satellite parts destroyed in this way will lose their streamlined shape and will burn out under aerodynamic heating during further descent in the atmosphere (patent FR2975080). 
On the basis of thermodynamic calculations, the most promising thermite compositions were selected for the experimental phase. Unique test facilities were developed for testing the efficiency of thermite charges in perforating a titanium TA6V cover of 0.8 mm thickness under temperature/pressure conditions duplicating the uncontrolled re-entry of a titanium tank after its mission in LEO. Experiments with programmed laser heating inside a vacuum chamber revealed that the only efficient thermite composition among those preliminarily selected was Al/Co3O4. An experimental search for the optimal aluminum powder, among spherical and flaked nano- and micron-sized variants, revealed the possibility of adjusting the necessary ignition delay time according to the dependence of the titanium cover temperature on deorbiting time. For the titanium tank, the maximum temperature is 1100 degrees C at an altitude of 68 km and a pressure of about 60 Pa. Under these conditions, Al/Co3O4 formulations with spherical nano-Al particles provide an ignition time of 13.3 s and an ignition temperature as low as 592 +/- 5 degrees C, whereas compositions with the micron-sized spherical Al powder yield much higher values, i.e., 26.3 s and 869 +/- 5 degrees C, respectively. The analytical and experimental studies described in this paper provide a portion of the basic information required for the development of a pyrotechnic device to reduce the risk of thermally stable parts of deorbiting end-of-life LEO satellites falling to the Earth. A fairly reliable method of fire detection at an early stage is suggested and proven. The method is based on monitoring the chemical composition of the air, which changes strongly due to thermal decomposition (pyrolysis) of overheated combustible materials that are starting to smolder. 
It is at this stage of incipient fire that adequate measures can be taken to extinguish the fire with simple tools or, in the case of overheating of electrical equipment, to switch off this equipment automatically using a signal from the fire protection system, thereby eliminating the fire-hazardous situation. Fire detectors and microcontrollers are designed. These gas fire detectors are particularly suitable for facilities with active ventilation, for example space vehicles. The facilities of the Vostochny spaceport and the innovative fire safety technologies to be implemented there are considered. The planned approaches and prospects for ensuring fire safety at the facilities of the Vostochny spaceport are presented herein, based on a study of emergency situations that have resulted in fires and explosions at facilities supporting space vehicle operation. (C) 2016 IAA. Published by Elsevier Ltd. All rights reserved. The article deals with innovative technical solutions that provide fire safety in the inhabited pressurized compartments of manned spacecraft: fireproofing of the compartments through engineering means of fire prevention and prevention of fire spreading, and lowering the fire load in an inhabited pressurized module to the point where the maximum possible levels of fire factors in an inhabited pressurized compartment are prevented. The presented technical solutions are currently used, in accordance with the stated recommendations, to provide fire safety for equipment created by a number of Russian organizations for the inhabited pressurized compartments of the spacecraft of the Russian segment of the International Space Station. 
An innovative fire-extinguishing technology has been implemented by equipping the inhabited pressurized modules of the Mir space station and the compartments of the Russian segment of the International Space Station with automatic fire-extinguishing systems for orbital flight. Fire in the inhabited pressurized compartments of spacecraft (hereafter InPC SC) became recognized as one of the most dangerous factors of orbital flight after a number of fire-hazardous situations occurred in different countries during the preparation and execution of spaceflights [1,2]. The high fire risk in the InPC of manned SC is determined by the following specific peculiarities of the arrangement and usage conditions of these items:
- the atmosphere of inhabited compartments is considerably enriched with oxygen, up to 25-40%;
- to lower the weight of InPC SC, many structural non-metal materials (hereafter, materials) are used, most of which are combustible at the given oxygen concentration (hereafter C-ox) in the atmosphere of InPC SC;
- the ventilation flow (hereafter V-uf) under normal operation of ventilation means in InPC SC considerably increases the possibility of fast fire spread in the InPC;
- the inhabited pressurized compartments of SC are filled with electrical equipment, whose elements become fire sources in an oxygen-rich atmosphere during failures, even in low-current circuits;
- the indoor spaces of inhabited pressurized compartments of SC, as a rule, have a complicated configuration, with zones containing elements of electrical devices that are isolated from the use of local fire extinguishing.
The synthesis of new highly energetic ionic liquids (ILs) is described, and their hypergolic ignition properties are tested. The synthesized ILs combine the advantages of conventional rocket propellants with the energy characteristics of acetylene derivatives. To this end, N-alkylated imidazoles (alkyl = ethyl, butyl) have been synthesized and alkylated with propargyl bromide. 
The desired ionic liquids have been produced by metathesis using Ag dicyanamide. Modified hypergolic drop tests with white fuming nitric acid have been performed for the N-ethyl (IL-1) and N-butyl propargylimidazolium (IL-2) ionic liquids. In the modified drop tests, high-speed shadowgraph imaging is used to visualize the process, and the temperature rise due to ignition is monitored with a two-color photodetector. It is shown that the ignition delay is shorter for IL-1 than for IL-2. The ignition of IL-1 occurs in two stages, whereas the combustion of IL-2 proceeds smoothly without secondary flashes. The aim of the present paper is to study detonation initiation due to focusing of a shock wave reflected inside a cone. Both numerical and experimental investigations were conducted. Comparison of the results made it possible to validate the developed 3-D transient mathematical model of chemically reacting gas mixture flows, including hydrogen-air mixtures. The results of theoretical and numerical experiments made it possible to improve the kinetic schemes and turbulence models. Several different flow scenarios were detected in the reflection of shock waves, all depending on the incident shock wave intensity: reflection of the shock wave with a lagging combustion zone, formation of a detonation wave on reflection and focusing, and intermediate transient regimes. Two-phase systems that involve gas-particle or gas-droplet flows are widely used in aerospace and power engineering. The problems of weakening and suppressing detonation by saturating a gas or liquid flow with an array of solid particles are considered. The tasks associated with the formation of particle arrays, dust lifting behind a travelling shock wave, and ignition of particles in high-speed, high-temperature gas flows are closely related to the safety of space flight. Mathematical models of shock wave interaction with an array of solid particles are discussed, and the numerical methods are briefly described. 
Numerical simulations of the interaction between sub- and supersonic flows and an array of particles that is motionless at the initial time are performed. The calculations take into account the influence of the particles on the flow of the carrier gas. The results obtained show that inert particles significantly weaken the shock waves, up to their suppression, which can be used to enhance the explosion safety of spacecraft. (C) 2016 IAA. Published by Elsevier Ltd. All rights reserved. A pintle injector is a movable injector capable of controlling the injection area and velocities. Although pintle injectors are not a new concept, they have become more notable due to new applications such as planetary landers and low-cost engines. However, there has been little consistent research on pintle injectors because they have many design variations and mechanisms. In particular, simulation studies are required for bipropellant applications. In this study, combustion simulation was conducted using methane and oxygen to determine the effects of injection conditions and geometries upon combustion characteristics. Steady, two-dimensional axisymmetric conditions were assumed, and a 6-step Jones-Lindstedt mechanism with an eddy-dissipation concept model was used for the turbulent kinetic reaction. Cases with wide flame angles showed good combustion performance, with a large recirculation zone under the pintle tip. Under lower mass flow-rate conditions, combustion performance worsened with lower flame angles. To solve this problem, decreasing the pintle opening distance was very effective, and the flame angle recovered. In addition, a specific recirculation zone was observed near the post, suggesting that proper design of the post could increase the combustion performance, while the geometry without a recirculation zone had poor performance. 
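Why decreasing the pintle opening distance helps at low mass flow rates follows from simple continuity: shrinking the annular gap raises the injection velocity and hence the injected momentum. A minimal sketch of this relation (the geometry formula and the numbers below are illustrative assumptions, not values or models from the study):

```python
import math

def annular_injection_velocity(mdot, rho, pintle_diameter, opening):
    """Injection velocity through an annular gap of height `opening`
    around a pintle of diameter `pintle_diameter` (area A ~ pi * D * h).

    Illustrative continuity estimate only, not the study's CFD model.
    mdot: mass flow rate [kg/s]; rho: propellant density [kg/m^3].
    """
    area = math.pi * pintle_diameter * opening
    return mdot / (rho * area)

# Halving the opening at a fixed mass flow rate doubles the injection
# velocity, which restores the momentum that sets the flame angle.
v_wide = annular_injection_velocity(0.1, 400.0, 0.02, 0.001)
v_narrow = annular_injection_velocity(0.1, 400.0, 0.02, 0.0005)
```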
Physical details regarding low-frequency oscillations of the base pressure between rocket nozzles, their excitation and their maintenance, are considered. The amplitude-frequency characteristics of these oscillations, as well as the sequence of their type changes, are studied. A single nozzle, a two-nozzle unit and a ring nozzle imitating a multi-nozzle unit are investigated in the present study. Metal particles are widely used in space engineering to increase specific impulse and to suppress acoustic instability of intra-chamber processes. A numerical analysis of internal injection-driven turbulent gas-particle flows is performed to improve the current understanding and modeling capabilities of the complex flow characteristics in the combustion chambers of solid rocket motors (SRMs) in the presence of forced pressure oscillations. The two-phase flow is simulated with a combined Eulerian-Lagrangian approach. The Reynolds-averaged Navier-Stokes equations and the transport equations of the k-epsilon model are solved numerically for the gas. The particulate phase is simulated through Lagrangian deterministic and stochastic tracking models to provide particle trajectories and particle concentration. The results obtained highlight the crucial significance of particle dispersion in the turbulent flowfield and the high potential of statistical methods. Strong coupling between acoustic oscillations, vortical motion, turbulent fluctuations and particle dynamics is observed. The evolution of the incident shock in plane and axisymmetric overexpanded jet flows is analyzed theoretically and compared over the whole range of governing flow parameters. The analytical results can be applied to avoid jet flow instability and self-oscillation effects at rocket launch, to improve launch safety, and to suppress shock-wave-induced noise harmful to the environment and personnel. 
The mathematical model of "differential conditions of dynamic compatibility" was applied to the curved shock in non-uniform plane or axisymmetric flow. It allowed us to study such features of the curved incident shock and the flow downstream of it as the shock's geometrical curvature, the jet boundary curvature, the local increase or decrease of the shock strength, the flow vorticity rate (local pressure gradient) in the vicinity of the nozzle lip, the static pressure gradient in the compressed layer downstream of the shock, and many others. All these quantities depend substantially on the flow parameters (flow Mach number, jet overexpansion rate, nozzle throat angle, and ratio of gas specific heats). These dependencies are sometimes unusual, especially at small Mach numbers. It was also surprising that there is no great difference among these flowfield features between the plane jet and the axisymmetric jet flowing out of a nozzle with a large throat angle, whereas all these parameters behave quite differently in an axisymmetric jet at small and moderate nozzle throat angles. Spacecraft onboard electronic devices are subjected to the effects of the space environment, in particular electromagnetic radiation. The weight limitations for spacecraft pose an important materials and structures problem: developing effective protection of onboard electronic devices from high-frequency electromagnetic radiation. In the present paper, the effect of an external high-frequency electromagnetic field on the shielding of electronic devices located on orbital platforms is investigated theoretically. It is demonstrated that the characteristic time of the unsteady stage of the process is negligibly small compared with the characteristic time of electromagnetic field diffusion into a conductor for the studied range of governing parameters. A system of governing material parameters that contribute to protecting electronic devices from induced electrical currents is distinguished. 
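The separation of time scales noted above can be illustrated with textbook estimates: a high-frequency field penetrates a conductor only to the skin depth delta = sqrt(2 / (mu * sigma * omega)), while the field diffuses into a conductor of thickness L on a time scale tau ~ mu * sigma * L^2. The material constants below (aluminum) are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

# Illustrative shielding estimates (standard electromagnetism, not the
# paper's model): skin depth and magnetic diffusion time for a conductor.
MU0 = 4e-7 * np.pi          # vacuum permeability, H/m
SIGMA_AL = 3.5e7            # conductivity of aluminum, S/m (approximate)

def skin_depth(freq_hz, sigma=SIGMA_AL, mu=MU0):
    """delta = sqrt(2 / (mu * sigma * omega)): penetration depth of a
    high-frequency field into a conducting shield."""
    omega = 2.0 * np.pi * freq_hz
    return np.sqrt(2.0 / (mu * sigma * omega))

def diffusion_time(thickness_m, sigma=SIGMA_AL, mu=MU0):
    """tau ~ mu * sigma * L^2: characteristic time for the field to
    diffuse through a shield of thickness L."""
    return mu * sigma * thickness_m ** 2
```

At GHz frequencies the skin depth in aluminum is a few micrometers, so even a thin metallic wall attenuates the external field strongly; this is the regime in which conductivity and permeability become the governing material parameters.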
During interplanetary flight, after large solar flares, astronauts are subject to the impact of relativistic solar protons. These particles produce an especially strong effect during extravehicular activity or (in the future) landing on Mars. The relativistic protons reach the orbits of the Earth and Mars with a delay of several hours relative to solar X-rays and UV radiation. In this paper, we discuss a new opportunity to predict the most dangerous events caused by Solar Cosmic Rays with protons of maximum (relativistic) energy, known in solar-terrestrial physics as Ground Level Enhancements or Ground Level Events (GLEs). This new capability is based on a close relationship between the dangerous events and a decrease of Total Solar Irradiance (TSI) preceding these events. This important relationship is revealed for the first time. This research paper illustrates how to use Spark Plasma Sintering (SPS) technology for powder materials in order to obtain lightweight ceramics (based on alumina), and describes the physical principles ensuring the efficiency of high heating rates for sintering high-temperature ceramics (pure silicon carbide). Optimization of SPS modes helps to produce Al2O3/ZrO2 ceramics with a grain size of less than 400 nm, microhardness H-v = 24 GPa, and crack resistance K-IC = 4.2 MPa m(1/2), and ceramics of pure SiC with a grain size of less than 50 nm, microhardness H-v = 21 GPa, and crack resistance coefficient K-IC = 3.5 MPa m(1/2). (C) 2016 IAA. Published by Elsevier Ltd. All rights reserved. Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. 
The possibilities of using GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is measured: the numerical schemes developed achieve a 20-50x speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD. A dual-tail approach was employed to design novel Carbonic Anhydrase (CA) IX inhibitors by simultaneously matching the hydrophobic and hydrophilic halves of the active site, which also contains a zinc ion as part of the catalytic center. The classic sulfanilamide moiety was used as the zinc binding group. An amino glucosamine fragment was chosen as the hydrophilic part and a cinnamamide fragment as the hydrophobic part in order to draw favorable interactions with the corresponding halves of the active site. In comparison with sulfanilamide, which is largely devoid of hydrophilic and hydrophobic interactions with the two halves of the active site, the compounds designed and synthesized in this study showed a 1000-fold improvement in binding affinity. Most of the compounds inhibited the CA effectively, with IC50 values in the range of 7-152 nM. Compound 14e (IC50: 7 nM) was more effective than the reference drug acetazolamide (IC50: 30 nM). 
The results proved that the dual-tail approach of simultaneously matching the hydrophobic and hydrophilic halves of the active site by linking hydrophobic and hydrophilic fragments is useful for designing novel CA inhibitors. The effectiveness of these compounds was elucidated by both the experimental data and molecular docking simulations. This work lays a solid foundation for the further development of novel CA IX inhibitors for cancer treatment. (C) 2017 Elsevier Masson SAS. All rights reserved. A series of novel chalcone derivatives were designed and synthesized as potential antitumor agents. The structures of the target molecules were confirmed by H-1 NMR, C-13 NMR and HR-MS, and the molecules were evaluated for their in vitro anti-proliferative activities using the MTT assay. Among them, compound 12k displayed potent activity against the tested tumor cell lines, including multidrug-resistant human cancer lines, with IC50 values ranging from 3.75 to 8.42 mu M. In addition, compound 12k was found to induce apoptosis in NCI-H460 cells via the mitochondrial pathway, including an increase of the ROS level, loss of mitochondrial membrane potential, release of cytochrome c, down-regulation of Bcl-2, up-regulation of Bax, and activation of caspase-9 and caspase-3. Moreover, cell cycle analysis indicated that 12k effectively caused cell cycle arrest at the G2/M phase. The results of a tubulin polymerization assay showed that 12k could inhibit tubulin polymerization in vitro. Furthermore, a molecular docking study indicated that 12k binds to the colchicine site of tubulin. (C) 2017 Elsevier Masson SAS. All rights reserved. Novel 1H-benzo[d]imidazole-4-carboxamide derivatives bearing five-membered or six-membered N-heterocyclic moieties at the 2-position were designed and synthesized as PARP-1 inhibitors. Structure-activity relationship studies were conducted and led to a number of potent PARP-1 inhibitors with IC50 values at the single- or double-digit nanomolar level. 
Some potent PARP-1 inhibitors also had similar inhibitory activities against PARP-2. Among all the synthesized compounds, compounds 10a and 11e displayed strong potentiation effects on temozolomide (TMZ) in MX-1 cells (PF50 = 7.10 and 4.17, respectively). In vivo tumor growth inhibition was investigated using compound 10a in combination with TMZ, and it was demonstrated that compound 10a could strongly potentiate the cytotoxicity of TMZ in the MX-1 xenograft tumor model. Two co-crystal structures of compounds 11b and 15e complexed with PARP-1 were obtained and demonstrated a unique binding mode of these benzimidazole derivatives. (C) 2017 Elsevier Masson SAS. All rights reserved. Histone deacetylases (HDACs) are attractive therapeutic targets for the treatment of cancer and other diseases. HDACs comprise four classes (I-IV); among them, the class I isozymes in particular are involved in promoting tumor cell proliferation, angiogenesis, differentiation, invasion and metastasis, and are viable targets for cancer therapeutics. A novel series of coumarin-based benzamides was designed and synthesized as HDAC inhibitors. The cytotoxic activity of the synthesized compounds (8a-u) was evaluated against six human cancer cell lines, including HCT116, A2780, MCF7, PC3, HL60 and A549, and a single normal cell line (HUVEC). We evaluated their inhibitory activities against pan-HDAC and the HDAC1 isoform. Four compounds (8f, 8q, 8r and 8u) showed significant cytotoxicity, with IC50 in the range of 0.53-57.59 mu M on cancer cells, potent pan-HDAC inhibitory activity (across the HDAC isoenzymes) (IC50 = 0.80-14.81 mu M) and HDAC1 inhibitory activity (IC50 = 0.47-0.87 mu M), and also had no effect on HUVEC (human normal cell line) viability (IC50 > 100 mu M). Among them, 8u displayed the highest potency for HDAC1 inhibition, with an IC50 value of 0.47 +/- 0.02 mu M, nearly equal to the reference drug Entinostat (IC50 = 0.41 +/- 0.06 mu M). 
Molecular docking studies and molecular dynamics simulation of compound 8a revealed its possible mode of interaction with the HDAC1 enzyme. © 2017 Elsevier Masson SAS. All rights reserved.

A library of over forty novel, structurally diverse phosphonate analogs of sulforaphane (P-ITCs) was designed, synthesized and fully characterized. All compounds were evaluated for antiproliferative activity in vitro on the LoVo and LoVo/DX colon cancer cell lines. All compounds exhibited high antiproliferative activity, comparable to or higher than that of the naturally occurring benzyl isothiocyanate and sulforaphane. Assessment of the mechanisms of action of selected compounds revealed their potential as inducers of G2/M cell cycle arrest and apoptosis. Further antiproliferative studies of selected compounds were performed using a set of cell lines derived from colon, lung, mammary gland and uterus, as well as normal murine fibroblasts. In vivo studies of the analyzed phosphonate analogs of sulforaphane showed lower activity in comparison with benzyl isothiocyanate. Our studies demonstrated that the newly synthesized P-ITCs can serve as a starting point for the synthesis of novel isothiocyanates with higher anticancer activity. © 2017 Elsevier Masson SAS. All rights reserved.

In order to develop novel long-acting GLP-1 derivatives, a peptide hybrid (1a) of human GLP-1 and Xenopus GLP-1 discovered in our previous research was selected as the lead compound. Exendin-4-inspired modification resulted in peptide 1b with enhanced glucose-lowering activity. Cysteine-mutated 1b derivatives with preserved bioactivity were further site-specifically conjugated with mPEG(2000)-MAL to provide conjugates 3a-h, among which 3d and 3e were found to have significantly better hypoglycemic activity and insulinotropic ability than GLP-1.
The hypoglycemic durations of 3d and 3e were remarkably prolonged to ~20 h in type 2 diabetic db/db mice, compared with the 5.3 h of exendin-4 in the same test. Finally, chronic in vivo studies revealed that once-daily treatment with 3d or 3e for five weeks restored the glucose-controlling ability of type 2 diabetic db/db mice, along with other benefits such as reduced body weight gain, food intake and HbA1c values. Collectively, our results suggest 3d and 3e as potential long-acting glucose-lowering agents for treating type 2 diabetes. © 2017 Elsevier Masson SAS. All rights reserved.

We have synthesized bioactive 1,4-disubstituted 1,2,3-triazole analogues containing 2H-1,4-benzoxazin-3(4H)-one derivatives via 1,3-dipolar cycloaddition in the presence of CuI. All the reactions proceeded smoothly and afforded the desired products in excellent yields. Among these analogues, 3y exhibited a better cytotoxic effect on human hepatocellular carcinoma (HCC) Hep 3B cells and displayed less cytotoxicity on normal human umbilical vein endothelial cells, compared with sorafenib, a targeted therapy for advanced HCC. 3y also induced stronger apoptosis and autophagy. Addition of curcumin enhanced 3y-induced cytotoxicity by further induction of autophagy. Using gene expression signatures of 3y to query the Connectivity Map, a glycogen synthase kinase-3 inhibitor (AR-A014418) was predicted to display a molecular action similar to that of 3y. Experiments further demonstrated that AR-A014418 acted like 3y, and vice versa. Overall, our data suggest the chemotherapeutic potential of 3y for HCC. © 2017 Elsevier Masson SAS. All rights reserved.

We designed new hypoxia-activated prodrugs by conjugating (1-methyl-2-nitro-1H-imidazol-5-yl)methanol with 7-ethyl-10-hydroxycamptothecin (SN-38).
Initially, we improved the method for multigram-scale synthesis of (1-methyl-2-nitro-1H-imidazol-5-yl)methanol, which increased the yield to 42% compared to 8% by the original synthesis method. The improved method was used to synthesize evofosfamide (TH-302) and hypoxia-activated prodrugs of SN-38. Two different linkages between (1-methyl-2-nitro-1H-imidazol-5-yl)methanol and SN-38 were evaluated that afforded different hypoxia-selectivity and toxicity. Compound 16 (IOS), containing an ether linkage, was considered to be a promising hypoxia-selective antitumor agent. © 2017 Elsevier Masson SAS. All rights reserved.

A multivalent phosphorus dendrimer 1G(3) and its corresponding Cu complex, 1G(3)-Cu, have recently been identified as agents retaining high antiproliferative potency. This antiproliferative capacity was preserved in cell lines overexpressing the efflux pump ABCB1, whereas cross-resistance was observed in ovarian cancer cell lines resistant to cisplatin. Theoretical 3D models were constructed: the dendrimers appear as irregularly shaped disk-like nano-objects of about 22 Å thickness and 49 Å diameter, which accumulated in cells after penetration by endocytosis. To gain insight into their mode of action, cell death pathways were examined in human cancer cell lines: early apoptosis was followed by secondary necrosis after exposure to the multivalent phosphorus dendrimers. The multivalent plain phosphorus dendrimer 1G(3) moderately activated caspase-3 activity, in contrast with the multivalent Cu-conjugated phosphorus dendrimer 1G(3)-Cu, which strikingly reduced the caspase-3 content and activity. This decrease in caspase activity is not related to the presence of copper, since inorganic copper has little or no effect on caspase-3.
Conversely, the potent apoptosis activation could be related to a noticeable translocation of Bax to the mitochondria, resulting in the release of AIF into the cytosol, its translocation to the nucleus and severe DNA fragmentation, without alteration of the cell cycle. The multivalent Cu-conjugated phosphorus dendrimer is more efficient than its non-complexed analog in activating this pathway, in close relationship with its higher antiproliferative potency. Therefore, this multivalent Cu-conjugated phosphorus dendrimer 1G(3)-Cu can be considered a new and promising first-in-class antiproliferative agent with a distinctive mode of action, inducing apoptotic tumor cell death through the Bax activation pathway. © 2017 Elsevier Masson SAS. All rights reserved.

The inhibition of CYP17 to block androgen biosynthesis is a well-validated strategy for the treatment of prostate cancer. Herein we report the design, synthesis and structure-activity relationship (SAR) study of a series of novel 1,2,3,4-tetrahydrobenzo[4,5]thieno[2,3-c]pyridine derivatives. Some analogs demonstrated potent inhibition of both rat and human CYP17 protein and reduced testosterone production in the human H295R cell line. Some analogs also showed high selectivity against other CYP enzymes such as 3A4, 1A2, 2C9, 2C19 and 2D6, which may limit side effects due to drug-drug interactions. Among these analogs, the most potent compound, 9c, was 1.5-fold more potent against rat and human CYP17 protein than abiraterone (IC50 = 16 nM and 20 nM vs. 25 nM and 36 nM, respectively). In NCI-H295R cells, the inhibitory effect of compound 9c on testosterone production (52 ± 2%) was also more potent than that of abiraterone (74 ± 15%) at the same micromolar concentration. Further, 9c was shown to reduce the plasma testosterone level in a dose-dependent manner in Sprague-Dawley rats. Thus, analog 9c may be a potential agent for the treatment of prostate cancer. © 2017 Elsevier Masson SAS.
All rights reserved.

A novel class of NO-donating protoberberine derivatives was synthesized and initially evaluated for anti-hepatocellular carcinoma activity. Most of the compounds exhibited more potent activity against HepG2 cells than the parent compounds berberine and palmatine. In particular, compound 15a exerted the strongest activity, with an IC50 value of 1.36 μM. Moreover, most compounds released moderate levels of NO in vitro, and the antitumor activity of 15a in HepG2 cells was remarkably diminished by an NO scavenger. Interestingly, compound 15a displayed broad-spectrum antitumor efficacy and possessed good selectivity between tumor cells (HepG2, SMMC-7721, HCT-116, HL-60) and normal liver LO-2 cells. Mechanistic studies revealed that 15a blocked the G2 phase of the cell cycle and induced apoptosis of HepG2 cells via mitochondrial depolarization. Furthermore, 15a inhibited tumor growth in an H22 liver cancer xenograft mouse model by 62.5% (w/w), which was significantly superior to the parent compound palmatine (41.6%, w/w). Overall, the current study may provide a new approach for the discovery of novel antitumor agents. © 2017 Elsevier Masson SAS. All rights reserved.

Ureido-substituted benzenesulfonamides (USBs) show great promise as selective and potent inhibitors of human carbonic anhydrase hCA IX and XII, with one such compound (SLC-0111/U-F) currently in clinical trials (clinicaltrials.gov, NCT02215850). In this study, the crystal structures of both hCA II (off-target) and an hCA IX-mimic (target) in complex with selected USBs (U-CH3, U-F, and U-NO2), at resolutions of 1.9 Å or better, are presented, and demonstrate differences in the binding modes within the two isoforms.
The presence of residue Phe 131 in hCA II causes steric hindrance (U-CH3, 1765 nM; U-F, 960 nM; U-NO2, 15 nM), whereas in hCA IX (U-CH3, 7 nM; U-F, 45 nM; U-NO2, 1 nM) and hCA XII (U-CH3, 6 nM; U-F, 4 nM; U-NO2, 6 nM), where residue 131 is a Val and an Ala, respectively, more favorable binding is allowed. Our results provide insight into the mechanism of USB selective inhibition and useful information for structural design and drug development, including the synthesis of hybrid USB compounds with improved physicochemical properties. © 2017 Elsevier Masson SAS. All rights reserved.

The growing incidence of cryptococcosis in immunocompromised patients has created a need for novel drug therapies capable of eradicating the disease. Peptide-based drug therapy offers many advantages over traditional therapeutic agents, which was exploited in the present study by synthesizing a series of hexapeptides that exhibit promising activity against a panel of Gram-negative and Gram-positive bacteria and various pathogenic fungal strains; the most exemplary activity was observed against Cryptococcus neoformans. Peptides 3, 24, 32 and 36 displayed potent anticryptococcal activity (IC50 = 0.4-0.46 μg/mL, MIC = 0.63-1.25 μg/mL, MFC = 0.63-1.25 μg/mL) and stability under proteolytic conditions. Besides this, several other peptides displayed promising inhibition of pathogenic bacteria. The prominent ones include peptides 18-20 and 26, which exhibited IC50 values ranging between 2.1 and 3.6 μg/mL, MICs of 5-20 μg/mL and MBCs of 10-20 μg/mL against Staphylococcus aureus and methicillin-resistant S. aureus. A detailed mechanistic study of selected peptides demonstrated absolute selectivity towards bacterial membranes and fungal cells through perturbation of the cell membranes, as confirmed by scanning electron microscopy and transmission electron microscopy studies. © 2017 Elsevier Masson SAS. All rights reserved.
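As an aside, the isoform selectivity implied by the USB inhibition values reported in the carbonic anhydrase abstract above can be made explicit by taking simple ratios of the hCA II and hCA IX values. The snippet below is only an illustrative sketch of that arithmetic (it is not part of the original study; the dictionary names are ours):

```python
# Illustrative sketch: isoform selectivity of the USB inhibitors,
# computed from the inhibition values (nM) quoted in the abstract above.
# A ratio > 1 means the compound inhibits the tumor-associated hCA IX
# more potently than the off-target hCA II.
hca_ii = {"U-CH3": 1765, "U-F": 960, "U-NO2": 15}  # hCA II values, nM
hca_ix = {"U-CH3": 7, "U-F": 45, "U-NO2": 1}       # hCA IX values, nM

selectivity = {c: hca_ii[c] / hca_ix[c] for c in hca_ii}
for compound, ratio in selectivity.items():
    print(f"{compound}: {ratio:.1f}-fold selective for hCA IX over hCA II")
```

On these numbers, U-CH3 is the most hCA IX-selective of the three (roughly 250-fold), consistent with the steric clash with Phe 131 described for hCA II.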
A series of seventeen piperazine derivatives has been synthesized and biologically evaluated for the management of andropause-associated prostatic disorders and depression. Five compounds (16, 19, 20, 21 and 22) significantly inhibited proliferation of the androgen-sensitive LNCaP prostatic cell line, with EC50 values of 12.4, 15.6, 11.8, 10.4 and 12.2 μM, respectively, and decreased Ca2+ entry through α1A-adrenergic receptor blocking activity. The anti-androgenic behaviour of compounds 19 and 22 was evident from decreased luciferase activity. The high EC50 values in the AR-negative cell lines PC3 and DU145 suggested that the cytotoxicity of the compounds was due to AR down-regulation. Compound 19 reduced the prostate weight of rats by 53.8%. Further, forced-swimming and tail-suspension tests revealed antidepressant-like activity of compound 19, with no effect on neuromuscular co-ordination. In silico ADMET predictions indicated that compound 19 has good oral absorption and aqueous solubility, is non-hepatotoxic, and has good affinity for plasma protein binding. Pharmacokinetic and tissue-uptake studies of 19 at 10 mg/kg demonstrated an oral bioavailability of 35.4%. In silico docking studies predicted a binding pattern of compound 19 on the androgen receptor similar to that of hydroxyflutamide. Compound 19 thus appears to be a unique scaffold with promising activities against androgen-associated prostatic disorders in males, such as prostate cancer and BPH, and the associated depression. © 2017 Elsevier Masson SAS. All rights reserved.

5-(2-(4-Methoxyphenyl)ethyl)-2-amino-3-methylcarboxylate thiophene (TR560) is the prototype drug of a recently discovered novel class of tumor-selective compounds that preferentially inhibit the proliferation of specific tumor cell types (e.g. leukemia/lymphoma). Here, we further increased tumor selectivity by simplification of the molecule, replacing the 4-methoxyphenyl moiety with an alkyl chain.
Several 2-amino-3-methylcarboxylate thiophene derivatives containing at C-5 an alkyl group consisting of at least 6 (hexyl) to 9 (nonyl) carbon units showed pronounced anti-proliferative activity in the mid-nanomolar range, with 500- to 1000-fold tumor cell selectivity. The compounds preferentially inhibited the proliferation of T-lymphoma CEM and Molt/4, prostate PC-3, kidney Caki-1 and hepatoma Huh-7 tumor cells, but were virtually inactive against other tumor cell lines, including B-lymphoma Raji and cervix carcinoma HeLa cells. The novel prototype drug 3j (containing a 5-heptyl chain) elicited a cytotoxic, rather than cytostatic, activity already after 4 h of exposure. The unusual tumor selectivity could not be explained by a differential uptake (or efflux) of the drug by sensitive versus resistant tumor cells. Exposure to a fluorescent derivative of 3j revealed pronounced uptake of the drug in the cytoplasm, no visible appearance in the nucleus, and a predominant localization in the endoplasmic reticulum. These observations may help narrow down the intracellular localization and identify the molecular target of the 5-substituted thiophene derivatives. © 2017 The Authors. Published by Elsevier Masson SAS.

A series of novel 2-benzylthio-4-chloro-5-(5-substituted 1,3,4-oxadiazol-2-yl)benzenesulfonamides (4-27) has been synthesized as potential anticancer agents. An MTT assay was carried out to determine the cytotoxic activity against three human cancer cell lines, colon cancer HCT-116, breast cancer MCF-7 and cervical cancer HeLa, as well as to determine the influence on the human keratinocyte cell line HaCaT. Relatively high cytostatic activity (IC50: 7-17 μM) and selectivity against the HeLa cell line were found for compounds 6, 7, 9-11 and 16, while compounds 23-27, bearing styryl moieties attached to the 1,3,4-oxadiazole ring at position 5, exhibited significant activity against two and/or three cancer cell lines with IC50: 11-29 μM.
Further quantitative structure-activity relationships, based on molecular descriptors calculated by the DRAGON software, were investigated by the Orthogonal Projections to Latent Structures (OPLS) technique and Variable Influence on Projection (VIP) analysis. Considering the molecular descriptors with the highest influence on projection (highest VIP values), the lipophilicity of the tested compounds was identified as the main factor affecting activity towards the HCT-116 cell line, while structural parameters associated with the presence of a styryl substituent at position 5 of the 1,3,4-oxadiazole ring were identified as essential for activity towards MCF-7 breast cancer cells. In vitro tests for metabolic stability in the presence of pooled human liver microsomes and NADPH showed that the most active compounds, 26 and 27, presented favorable metabolic stability with t1/2 in the range of 28.1-36.0 min. © 2017 Elsevier Masson SAS. All rights reserved.

Chagas disease is one of the most important neglected parasitic diseases, afflicting developed and undeveloped countries. There are currently limited options for inexpensive and secure pharmacological treatment. In this study, we employed a structure-based virtual screening protocol on 3180 FDA-approved drugs to reposition them as potential trans-sialidase inhibitors. In vitro and in vivo evaluations of the selected drugs were performed against trypomastigotes from the INC-5 and NINOA strains of T. cruzi. Also, inhibition of sialylation in the trans-sialidase enzyme reaction was evaluated using high-performance anion-exchange chromatography with pulsed amperometric detection to confirm the mechanism of action. Results from the computational study showed 38 top drugs with the best binding energies.
Four compounds with antihistaminic, anti-hypertensive, and antibiotic properties showed better trypanocidal effects (LC50 range = 4.5-25.8 μg/mL) than the reference drugs nifurtimox and benznidazole (LC50 range = 36.1-46.8 μg/mL) in both strains in the in vitro model. The anti-inflammatory drug sulfasalazine showed moderate inhibition (37.6%) of sialylation in a trans-sialidase enzyme inhibition reaction. Sulfasalazine also showed the best trypanocidal effects in short-term in vivo experiments on infected mice. This study suggests for the first time that the anti-inflammatory sulfasalazine could be used as a lead compound to develop new trans-sialidase inhibitors. © 2017 Elsevier Masson SAS. All rights reserved.

Several anthranilamide-based 2-phenylcyclopropane-1-carboxamides 13a-f, 1,1'-biphenyl-4-carboxamides 14a-f and 1,1'-biphenyl-2-carboxamides 17a-f were obtained by a multistep procedure starting from (1S,2S)-2-phenylcyclopropane-1-carbonyl chloride 11, 1,1'-biphenyl-4-carbonyl chloride 12 or 1,1'-biphenyl-2-carbonyl chloride 16 with the appropriate anthranilamide derivative 10a-f. Derivatives 13a-f, 14a-f and 17a-f showed antiproliferative activity against human leukemia K562 cells. Among these derivatives, 13b, 14b and 17b exerted a particularly marked cytotoxic effect on tumor cells, and derivative 17b showed a better antitumoral effect on K562 cells than 13b and 14b. Analyses performed to explore the mode of action of 17b revealed that it induced an arrest in the G2/M phase of the cell cycle, consequent to DNA lesions, as demonstrated by the increase in phospho-ATM and γH2AX, two known markers of the DNA damage repair response. The effect of 17b was also related to ROS generation, activation of JNK and induction of caspase-3-dependent apoptosis. © 2017 Elsevier Masson SAS. All rights reserved.

A series of copper(II) complexes with tripodal polypyridylamine ligands (derived from the parent ligand tris(2-pyridylmethyl)amine, tmpa) has been synthesized.
Crystallographic characterization was possible for all complexes obtained. The copper(II) chloride complexes were investigated for their in vitro anticancer potential using human tumor cell lines including examples of cervical, colon and ovarian cancers and melanoma. Some compounds showed activity similar to that of cisplatin; however, no systematic behavior could be observed. © 2017 Elsevier Masson SAS. All rights reserved.

Ru(II)-arene complexes are attracting increasing attention due to their considerable antitumoral activity. However, it is difficult to clearly establish a direct relationship between their structure and antiproliferative activity, as substantial structural changes might not only affect their anticancer activity but also tightly control their activation site(s) and/or their biological target(s). Herein, we describe the synthesis and characterization of four ruthenium(II) arene complexes bearing bidentate N,O-donor Schiff-base ligands ([Ru(η6-benzene)(N-O)Cl]) that display significantly distinct antiproliferative activities against cancer cells, despite their close structural similarity. Furthermore, we suggest a link between their respective antiproliferative activity and their lipophilicity, as the latter affects their ability to accumulate in cancer cells. This lipophilicity-cytotoxicity relationship was exploited to design another structurally related ruthenium complex with a much higher antiproliferative activity (IC50 > 25.0 μM) against three different human cancer cell lines. Whereas this complex shows a slightly lower activity than that of the clinically approved cisplatin against the same human cancer cell lines, it displays a lower toxicity in zebrafish (Danio rerio) embryos at concentrations up to 20 μM. © 2017 Elsevier Masson SAS. All rights reserved.

The enzyme NQO1 is a potential target for selective cancer therapy due to its overexpression in certain hypoxic tumors.
A series of prodrugs bearing a variety of cytotoxic diterpenoids (oridonin and its analogues) as leaving groups activated by NQO1 was synthesized by functionalization of 3-(hydroxymethyl)indolequinone, which is a good substrate of NQO1. The target compounds (29a-m) exhibited relatively high antiproliferative activities against NQO1-rich human colon carcinoma (HT-29) and human lung carcinoma (A549) cells (IC50 = 0.263-904 μM), while NQO1-deficient lung adenosquamous carcinoma (H596) cells were less sensitive to these compounds. Among them, compound 29h exhibited the most potent antiproliferative activity against both A549 and HT-29 cells, with IC50 values of 0.386 and 0.263 μM, respectively. Further HPLC and docking studies demonstrated that 29h is a good substrate of NQO1. Moreover, investigation of the anticancer mechanism showed that the representative compound 29h affected the cell cycle and induced NQO1-dependent apoptosis through an oxidative-stress-triggered mitochondria-related pathway in A549 cells. In addition, the antitumor activity of 29h was verified in a liver cancer xenograft mouse model. Biological evaluation of these compounds indicates a strong correlation between the NQO1 enzyme and the induction of cancer cell death, suggesting that some of the target compounds activated by NQO1 are novel prodrug candidates with potential for selective anticancer therapy. © 2017 Elsevier Masson SAS. All rights reserved.

Despite the fact that Leishmania spp. are pteridine auxotrophs, dihydrofolate reductase-thymidylate synthase (DHFR-TS) inhibitors are ineffective against Leishmania major. On the other hand, pteridine reductase 1 (PTR1) inhibitors have proved to be lethal to the parasite.
Aiming to identify hits that lie outside the chemical space of known PTR1 inhibitors, pharmacophore models that differentiate true binders from decoys and explain the structure-activity relationships of known inhibitors were employed to virtually screen the lead-like subset of the ZINC database. This approach led to the identification of Z80393 (IC50 = 32.31 ± 1.18 μM), whose inhibition mechanism was investigated by thermal shift assays. This experimental result supports a competitive mechanism and was crucial to establish the docking search space as well as to select the best pose, which was then investigated by molecular dynamics studies that corroborate the hit's putative binding profile towards LmPTR1. The information gathered from these studies should be useful for designing more potent non-nucleoside LmPTR1 inhibitors. © 2017 Elsevier Masson SAS. All rights reserved.

In the present study, a series of 4-methyl-2-aryl-5-(2-aryl/benzylthiazol-4-yl)oxazoles (4a-v) has been synthesized and evaluated for preliminary antitubercular, antimicrobial and cytotoxicity activity. Among all the synthesized compounds, 4v showed activity against dormant M. tuberculosis H37Ra and M. bovis BCG strains comparable to the standard drug rifampicin. The active compounds from the antitubercular study were further tested for anti-proliferative activity against HeLa, A549 and PANC-1 cell lines using the MTT assay and showed no significant cytotoxic activity at the maximum concentration evaluated. Further, the synthesized compounds were found to have potent antibacterial activities, with a MIC range of 2.1-26.8 μg/mL. High potency, low cytotoxicity and promising antimycobacterial activity suggest that these compounds could serve as good leads for further optimisation and development. © 2017 Elsevier Masson SAS. All rights reserved.

Enantiomers account for quite a large percentage of compounds in natural products.
Our team is interested in the separation and biological activity of racemic compounds. In this report, four pairs of prenylated flavan enantiomers [(±)-1-(±)-4], including five new compounds, were isolated from the stem and root bark of Daphne giraldii and separated successfully using a chiral chromatographic column. Their planar structures and absolute configurations were established by comprehensive spectroscopic analyses as well as circular dichroism (CD) spectroscopy. The isolates showed selective cytotoxicity towards hepatic carcinoma cell lines. Among them, the new compound (+)-4 showed a more potent inhibitory effect on Hep3B cells, with an IC50 value of 30.3 μM, compared with its racemic mixture 4. Therefore, the in vitro mechanism of action of (+)-4 was subsequently investigated. Morphological observation and Western blot analysis demonstrated that (+)-4 could markedly induce apoptosis through the intrinsic and extrinsic pathways, and also cause autophagy by increasing the phosphorylation of AMP-activated protein kinase (AMPK) in Hep3B cells. After treatment with the autophagy inhibitor bafilomycin A1 (Baf A1), (+)-4-induced apoptosis increased significantly, suggesting that the autophagy induced by (+)-4 plays a protective role against apoptotic cell death. © 2017 Elsevier Masson SAS. All rights reserved.

Inhibition of histone deacetylase (HDAC) has been regarded as a potential therapeutic approach for the treatment of multiple diseases including cancer. Based on a pharmacophore model of HDAC inhibitors, a series of quinoline-based N-hydroxycinnamamides and N-hydroxybenzamides was designed and synthesized as potent HDAC inhibitors. All target compounds were evaluated for their in vitro HDAC inhibitory and anti-proliferative activities, and the best compound, 4a, surpassed Vorinostat in both enzymatic inhibitory activity and cellular anti-proliferative activity.
In terms of HDAC isoform selectivity, compound 4a exhibited preferential inhibition of class I HDACs, especially HDAC8, with an IC50 value (442 nM) much lower than that of Vorinostat (7468 nM). Subsequently, we performed class I and IIa HDAC whole-cell enzyme assays to evaluate inhibitory activity in a whole-cell context. Compounds 4a and 4e displayed much better cellular activity against class I HDACs than against class IIa HDACs, which indicated that 4a and 4e might be potent class I HDAC inhibitors. Meanwhile, flow cytometry analysis showed that compounds 4a and 4e can promote cell apoptosis in vitro. © 2017 Elsevier Masson SAS. All rights reserved.

Taking into account the structure-activity relationship information from our previous studies, we designed and synthesized a small library of pyrazolylureas and imidazopyrazolecarboxamides fluorinated on the urea moiety and differently decorated on the pyrazole nucleus. All compounds were preliminarily screened by Western blotting to evaluate their activity on the MAPK and PI3K pathways by monitoring ERK1/2, p38MAPK and Akt phosphorylation, and also screened with a wound healing assay to assess their capacity to inhibit endothelial cell migration, using human umbilical vein endothelial cells stimulated with VEGF. Pyrazoles and imidazopyrazoles did not show the same activity profile. SAR considerations showed that specific substituents and their position on the pyrazole nucleus, as well as the type of substituent on the phenylurea moiety, play a pivotal role in determining the increase or decrease of kinase phosphorylation. On the other hand, the loss of flexibility in the imidazopyrazole derivatives is responsible for activity potentiation. Screening of the compound library for inhibition of endothelial cell migration, a function required for angiogenesis, showed significant activity for compound 3. This compound might interfere with cell migration by modulating the activity of different upstream target kinases.
Therefore, compound 3 represents a potential inhibitor of angiogenesis. Furthermore, it may be used as a tool to identify unknown mediators of endothelial migration and thereby unveil new therapeutic targets for controlling pathological angiogenesis in diseases such as cancer. © 2017 Elsevier Masson SAS. All rights reserved.

A novel series of 2-(1H-pyrazol-4-yl)-1H-imidazo[4,5-f][1,10]phenanthrolines was designed, synthesized and evaluated for antitumor activity against lung adenocarcinoma by CCK-8 assay, electrophoretic mobility shift assay (EMSA), UV-melting study, wound healing assay and docking study. These compounds showed good inhibitory activities against lung adenocarcinoma. In particular, compound 12c exhibited potent antiproliferative activity against the A549 cell line, with a half-maximal inhibitory concentration (IC50) value of 1.48 μM, making it a more potent inhibitor than cisplatin (IC50 = 12.08 μM) and lead compound 2 (IC50 = 1.69 μM), with a maximum cell inhibitory rate of up to 98.40%. Moreover, further experiments demonstrated that compounds 12a-d can strongly interact with telomeric DNA to stabilize G-quadruplex DNA, with increased ΔTm values from 12.44 to 20.54 °C at a DNA-to-compound ratio of 1:10. These results implied that the growth inhibition of A549 cells mediated by these phenanthroline derivatives possibly correlates positively with their interaction with telomeric G-quadruplexes. © 2017 Elsevier Masson SAS. All rights reserved.

Thirty-four xanthones were synthesized by a microwave-assisted technique. Their in vitro inhibitory activities against the growth of five cell lines were evaluated, and the SAR has been thoroughly discussed. 7-Bromo-1,3-dihydroxy-9H-xanthen-9-one (3-1) was confirmed as the most active agent against MDA-MB-231 cell line growth, with an IC50 of 0.46 ± 0.03 μM. The combination of 3-1 and 5,6-dimethylxanthone-4-acetic acid (DMXAA) showed the best synergistic effect.
Apoptosis analysis indicated different contributions of early/late apoptosis and necrosis to cell death for the monomers and the combination. Western blotting implied that the combination regulated p53/MDM2 toward a healthier state. Furthermore, 3-1 and DMXAA arrested more cells in the G2/M phase, while the combination arrested more cells in the S phase. All the evidence supports the 3-1/DMXAA combination as a better anti-cancer therapy. © 2017 Elsevier Masson SAS. All rights reserved.

Tyrosyl-tRNA synthetase (TyrRS) is an aminoacyl-tRNA synthetase family protein that plays an essential role in bacterial protein synthesis. The synthesis, structure-activity relationship, and evolution of a novel series of adenosine-containing 3-arylfuran-2(5H)-ones as TyrRS inhibitors are described. Advanced compound d3 from this series exhibited excellent affinity for TyrRS, with an IC50 of 0.61 ± 0.04 μM. Bacterial growth inhibition assays demonstrated that d3 showed submicromolar antibacterial potency against Escherichia coli and Pseudomonas aeruginosa, comparable to the marketed antibiotic ciprofloxacin. © 2017 Elsevier Masson SAS. All rights reserved.

With the aim of developing novel antiproliferative agents, a new series of eighteen dihydroxylated 2,6-diphenyl-4-chlorophenylpyridines was systematically designed, prepared, and investigated for topoisomerase (topo) I and IIα inhibitory properties and antiproliferative effects in three different human cancer cell lines (HCT15, T47D, and HeLa). Compounds 22-30, which possess a meta- or para-phenol at the 2- or 6-position of the central pyridine ring, showed significant dual topo I and topo IIα inhibitory activities with strong antiproliferative activities against all the tested human cancer cell lines.
However, compounds 13-21, which possess an ortho-phenol at the 2- or 6-position of the central pyridine ring, did not show significant topo I and topo II alpha inhibitory activities but displayed moderate antiproliferative activities against all the tested human cancer cell lines. Compound 23 exhibited the highest antiproliferative potency, up to 348.5 and 105 times that of etoposide and camptothecin, respectively, in the T47D cancer cell line. The structure-activity relationship study revealed that a hydroxyl group at the para position of the 2- and 6-phenyl rings and a chlorine atom at the para position of the 4-phenyl ring of the central pyridine produced the most significant topo I and topo II alpha inhibition, which might indicate that introduction of the chlorine atom on the 4-phenyl ring has an important role in dual inhibition of topo I and topo II alpha. Compound 30, which showed the most potent dual topo I and topo II alpha inhibition with strong antiproliferative activity in the T47D cell line, was selected for further study of the mechanism of action, which revealed that compound 30 functions as a potent DNA non-intercalative catalytic topo I and II alpha dual inhibitor. (C) 2017 Elsevier Masson SAS. All rights reserved. Human immunodeficiency virus (HIV) reverse transcriptase (RT) associated ribonuclease H (RNase H) remains the only virally encoded enzymatic function not clinically validated as an antiviral target. 2-Hydroxyisoquinoline-1,3-dione (HID) is known to confer active site directed inhibition of divalent metal-dependent enzymatic functions, such as HIV RNase H, integrase (IN) and hepatitis C virus (HCV) NS5B polymerase. We report herein the synthesis and biochemical evaluation of a few C-5, C-6 or C-7 substituted HID subtypes as HIV RNase H inhibitors.
Our data indicate that while some of these subtypes inhibited both the RNase H and polymerase (pol) functions of RT, potent and selective RNase H inhibition was achieved with subtypes 8-9, as exemplified by compounds 8c and 9c. (C) 2017 Elsevier Masson SAS. All rights reserved. A series of 7-azaindole derivatives bearing the dihydropyridazine scaffold were synthesized and evaluated in vitro for their c-Met kinase inhibitory and antiproliferative activities against four cancer cell lines (HT29, A549, H460, U87MG). Most compounds showed moderate to excellent potency. Compared to foretinib, a multitarget tyrosine kinase inhibitor, the most promising analog 34 (c-Met IC50: 1.06 nM) showed a 6.4-, 7.8-, and 3.2-fold increase in activity against the HT29, A549, and H460 cell lines, respectively. Structure-activity relationship studies indicated that mono-electron-withdrawing groups (such as R-2 = F) at the 4-position of moiety D were a key factor in improving the antitumor activity. (C) 2017 Elsevier Masson SAS. All rights reserved. The reactivity of Morita-Baylis-Hillman allyl acetates was employed to introduce phosphorus-containing functionalities onto the side chain of the cinnamic acid conjugated system by nucleophilic displacement. The proximity of two acidic groups, the carboxylate and the phosphonate/phosphinate, was necessary to form interactions in the active site of urease by recently described inhibitor frameworks. Several organophosphorus scaffolds were obtained and screened for inhibition of bacterial urease, an enzyme that is essential for the survival of urinary and gastrointestinal tract pathogens. alpha-Substituted phosphonomethyl- and 2-phosphonoethyl-cinnamates appeared to be the most potent and were further optimized. As a result, one of the most potent organophosphorus inhibitors of urease, alpha-phosphonomethyl-p-methylcinnamic acid, was identified, with K-i = 0.6 mu M for Sporosarcina pasteurii urease.
High complementarity to the enzyme active site was achieved with this structure, as any further modification significantly decreased its affinity. Finally, this work describes the challenges faced in developing ligands for urease. (C) 2017 Elsevier Masson SAS. All rights reserved. In a continuing effort to discover new potential anti-inflammatory agents, we systematically designed and synthesized sixty-one 2-benzylidene-1-indanone derivatives through structural modification of chalcone, and evaluated their inhibitory activity on LPS-stimulated ROS production in RAW 264.7 macrophages. A systematic structure-activity relationship study revealed that a hydroxyl group at the C-5, C-6, or C-7 position of the indanone moiety, and ortho-, meta-, or para-fluorine, trifluoromethyl, trifluoromethoxy, and bromine functionalities in the phenyl ring, are important for inhibition of ROS production in LPS-stimulated RAW 264.7 macrophages. Among all the tested compounds, 6-hydroxy-2-(2-(trifluoromethoxy)benzylidene)-2,3-dihydro-1H-inden-1-one (compound 44) showed the strongest inhibitory activity against ROS production. Further studies on the mode of action revealed that compound 44 potently suppressed LPS-stimulated ROS production via modulation of NADPH oxidase. The findings of this work could be useful for designing 2-benzylidene-indanone based lead compounds as novel anti-inflammatory agents. (C) 2017 Elsevier Masson SAS. All rights reserved. Tuberculosis is caused by Mycobacterium tuberculosis, an intracellular pathogen that can survive in host cells, mainly in macrophages. The rise of multidrug-resistant tuberculosis makes this infectious disease a major public health problem worldwide. The cellular uptake of antimycobacterial agents by infected host cells is limited. Our approach is to enhance the cellular uptake of antituberculars by target cell-directed delivery using drug-peptide conjugates, in order to achieve increased intracellular efficacy.
In this study, salicylanilide derivatives (2-hydroxy-N-phenylbenzamides) with remarkable antimycobacterial activity were conjugated to macrophage receptor specific tuftsin based peptide carriers through an oxime bond, either directly or via insertion of a GFLG tetrapeptide spacer. We found that the in vitro antimycobacterial activity of the salicylanilides against M. tuberculosis H(37)Rv is preserved in the conjugates. While the free drug was ineffective in an infected macrophage model, the conjugates were active against the intracellular bacteria. The fluorescently labelled peptide carriers that were modified with different fatty acid side chains showed outstanding cellular uptake rates into the macrophage model cells. The conjugation of the salicylanilides to tuftsin based carriers reduced or abolished the in vitro cytostatic activity of the free drugs, with the exception of the palmitoylated conjugates. The conjugates degraded in the presence of rat liver lysosomal homogenate, leading to the formation of an oxime bond linked salicylanilide-amino acid fragment as the smallest active metabolite. (C) 2017 Published by Elsevier Masson SAS. NEDD8 activating enzyme (NAE) plays a critical role in various cellular functions in cancers. In this study, target-based virtual screening was applied to discover benzothiazoles as potent non-covalent NAE inhibitors. Two further rounds of optimization established a preliminary structure-activity relationship (SAR) for their derivatives. Three compounds (6k, 7b, ZM223) exhibited antitumor activities in the nanomolar range. ZM223 showed excellent anticancer activity against HCT116 colon cancer cells, with an IC50 value of 100 nM. Mechanistically, compounds 6k, 7b, and ZM223 caused a dose-dependent decrease in the level of NEDD8 and an increase in the downstream UBC12 protein. This scaffold represents a promising lead for developing non-sulfamide NAE inhibitors. (C) 2017 Elsevier Masson SAS. All rights reserved.
A series of new donepezil derivatives were designed, synthesized and evaluated as multifunctional cholinesterase inhibitors against Alzheimer's disease (AD). In vitro studies showed that most of them exhibited significant potency in inhibiting acetylcholinesterase and self-induced beta-amyloid (A beta) aggregation, along with moderate antioxidant activity. In particular, compound 5b presented the greatest ability to inhibit cholinesterase (IC50: 1.9 nM for eeAChE and 0.8 nM for hAChE), good inhibition of A beta aggregation (53.7% at 20 mu M) and good antioxidant activity (0.54 trolox equivalents). Kinetic and molecular modeling studies indicated that compound 5b was a mixed-type inhibitor, binding simultaneously to the catalytic active site (CAS) and the peripheral anionic site (PAS) of AChE. In addition, compound 5b could reduce PC12 cell death induced by oxidative stress and A beta (1-42). Moreover, in vivo experiments showed that compound 5b was nontoxic and well tolerated at doses up to 2000 mg/kg. These results suggest that compound 5b might be an excellent multifunctional agent for AD treatment. (C) 2017 Elsevier Masson SAS. All rights reserved. Quinone methide (QM) formation induced by endogenously generated H2O2 is attractive for biological and biomedical applications. To overcome current limitations due to the low biological activity of H2O2-activated QM precursors, we introduce herein several new arylboronates with electron donating substituents at different positions of the benzene ring and/or different neutral leaving groups. The reaction rate of the arylboronate esters with H2O2, and the subsequent bis-quinone methide formation and DNA cross-linking, was accelerated by using Br as a leaving group instead of acetoxy groups. Additionally, a donating group placed meta to the nascent exo-methylene group of the quinone methide greatly improves H2O2-induced DNA interstrand cross-link formation as well as enhancing the cellular activity.
Multiple donating groups decrease the stability and DNA cross-linking capability, leading to low cellular activity. A cell-based screen demonstrated that compounds 2a and 5a, bearing an OMe or OH group, dramatically inhibited the growth of various tissue-derived cancer cells while normal cells were less affected. Induction of H2AX phosphorylation by these compounds in CLL lymphocytes provides evidence for a correlation between cell death and DNA damage. The compounds presented herein showed potent anticancer activities and selectivity, and represent a novel scaffold for anticancer drug development. (C) 2017 Elsevier Masson SAS. All rights reserved. To systematically investigate the structure-activity relationships of 1,7-diarylhepta-1,4,6-trien-3-ones in three human prostate cancer cell models and one human prostate non-neoplastic epithelial cell model, thirty-five 1,7-diarylhepta-1,4,6-trien-3-ones with different terminal heteroaromatic rings were designed for evaluation of their anti-proliferative potency in vitro. These target compounds were successfully synthesized through two sequential Horner-Wadsworth-Emmons reactions starting from the appropriate aldehydes and tetraethyl (2-oxopropane-1,3-diyl)bis(phosphonate). Their anti-proliferative potency against the PC-3, DU-145 and LNCaP human prostate cancer cell lines can be significantly enhanced by manipulation of the terminal heteroaromatic rings, further demonstrating the utility of the 1,7-diarylhepta-1,4,6-trien-3-one scaffold for the development of anti-prostate-cancer agents. The optimal analog 40 is 82-, 67-, and 39-fold more potent than curcumin toward the three prostate cancer cell lines, respectively.
The experimental data also reveal that the trienones with two different terminal aromatic rings not only possess greater potency toward the three prostate cancer cell lines, but also have a greater capability of suppressing the proliferation of PWR-1E benign human prostate epithelial cells, as compared with the corresponding counterparts bearing two identical terminal rings and with curcumin. The terminal aromatic rings also affect the perturbation of cell apoptosis. (C) 2017 Elsevier Masson SAS. All rights reserved. A double Claisen rearrangement synthetic strategy was established for the total synthesis of 4,4'-dimethyl medicagenin (compound 6c). A series of its analogs were also prepared, including two novel 3',5'-diprenylated chalcones in which ring B was replaced by an azaheterocycle. The structures of the twenty-two newly synthesized compounds were confirmed by H-1 NMR, C-13 NMR and ESI-MS. The in vitro cytotoxicity of the target compounds was evaluated using cancer cells. Notably, compound 10 exhibited broad-spectrum cytotoxicity against PC3 prostate cancer cells, MDA-MB-231 breast cancer cells (MDA), and HEL and K562 erythroleukemia cells, with IC50 values of 2.92, 3.14, 1.85 and 2.64 mu M, respectively. Further studies indicated that compound 10 induced apoptosis and arrested the cell cycle in the four cancer cell lines mentioned above. By contrast, compound 6g selectively displayed potent inhibitory activity against the proliferation of HEL cells, with an IC50 value of 4.35 mu M. Compound 6g slightly induced apoptosis and arrested the cell cycle in HEL cells. Preliminary structure-activity relationship studies indicated that, in all cancer cell lines evaluated, the 3-pyridinyl group was essential for cytotoxicity. (C) 2017 Published by Elsevier Masson SAS. Negative allosteric modulators of metabotropic glutamate receptor 5 (mGlu(5)) have shown efficacy in a number of animal models of different CNS diseases, including anxiety and depression.
Virtually all of the compounds that reached the clinic belong to the same chemotype, having an acetylenic linker that connects (hetero)cyclic moieties. Searching for new chemotypes, we identified a morpholino-sulfoquinoline derivative (1) by screening our corporate compound deck. The HTS hit showed reasonable affinity and selectivity towards mGlu(5) receptors; however, its inferior metabolic stability prevented its testing in vivo. In a chemical program we aimed to improve the affinity, physicochemical properties and metabolic stability by exploring three regions of the hit. Systematic variation of different amines at position 4 (region I) led to the identification of 4-methyl-piperidinyl analogues. Substituents of the quinoline core (region II) and the phenylsulfonyl moiety (region III) were mapped by parallel synthesis. Evaluation of both the morpholino- and 4-methyl-piperidinyl-sulfoquinoline libraries of about 270 derivatives revealed beneficial substituent combinations in regions II and III. Blood levels of the optimized 4-methyl-piperidinyl-sulfoquinolines, however, were still insufficient for robust in vivo efficacy. Finally, introducing a 4-hydroxymethyl-piperidinyl substituent in region I resulted in new sulfoquinolines with greatly improved solubility and reasonable affinity, coupled with acceptable metabolic stability. The most promising analogues (24 and 25) showed high blood levels and demonstrated significant efficacy in an experimental model of anxiety. (C) 2017 Elsevier Masson SAS. All rights reserved. Glucokinase activators (GKAs) are among the emerging drug candidates for the treatment of type 2 diabetes (T2D). Despite effective blood glucose lowering in clinical trials, many pan-GKAs, acting in both the pancreas and the liver, have been discontinued from clinical development, mainly because of their potential to cause hypoglycemia.
Pan-GKAs oversensitize pancreatic GK, resulting in insulin secretion even at sub-normoglycemic levels, which might be a possible explanation for hypoglycemia. An alternative approach to minimize the risk of hypoglycemia is to use liver-directed GKAs, which are reported to be advancing well in clinical development. Here, we report the discovery and structure-activity relationship (SAR) studies of a novel 2-phenoxy-acetamide series with the aim of identifying a liver-directed GKA. Incorporation of a carboxylic acid moiety, as an element recognized for active hepatocyte uptake, at the appropriate position of the 2-phenoxy-acetamide core led to the identification of 26, a potent GKA with predominantly liver-directed pharmacokinetics in mice. On oral administration, compound 26 significantly reduced blood glucose levels during an oral glucose tolerance test (oGTT) performed in diet-induced obese (DIO) mice, while showing no sign of hypoglycemia in normal C57 mice over a 10-fold dose range, even when dosed in the fasted condition. Together, these data demonstrate that a liver-directed GKA has a beneficial effect on glucose homeostasis with a reduced risk of hypoglycemia. (C) 2017 Elsevier Masson SAS. All rights reserved. Fluconazole (FLC) is the drug of choice for treating fungal infections such as invasive candidiasis in humans. However, the widespread use of FLC has resulted in the development of resistance to this drug in various fungal strains and, simultaneously, has occasioned the need for new antifungal agents. Herein, we report the synthesis of 27 new FLC derivatives along with their antifungal activity against a panel of 13 clinically relevant fungal strains. We also explore their toxicity against mammalian cells, their hemolytic activity, as well as their mechanism of action. Overall, many of our FLC derivatives exhibited broad-spectrum antifungal activity, and all compounds displayed an MIC value of <0.03 mu g/mL against at least one of the fungal strains tested.
We also found them to be less hemolytic and less cytotoxic to mammalian cells than the FDA-approved antifungal agent amphotericin B. Finally, we demonstrated with our best derivative that the mechanism of action of our compounds is inhibition of the sterol 14 alpha-demethylase enzyme involved in ergosterol biosynthesis. (C) 2017 Elsevier Masson SAS. All rights reserved. Fourteen bergenin/cinnamic acid hybrids were synthesized, characterized and evaluated for their anti-tumour activity both in vitro and in vivo. The most potent compound, 5c, arrested HepG2 cells (IC50 = 4.23 +/- 0.79 mu M) in the G2/M phase and induced cellular apoptosis. Moreover, compound 5c was also found to suppress tumour growth in Heps xenograft-bearing mice with low toxicity. In the mechanistic study, 5c administration triggered a mitochondria-mediated apoptosis pathway of HepG2 cell death. Furthermore, 5c acted on Akt-dependent pathways and further decreased the expression of the Bcl-2 family of proteins. Downstream mitochondrial p53 translocation was also significantly activated, accompanied by an increase in caspase-9 and caspase-3 activation. These data imply that bergenin/cinnamic acid hybrids could serve as novel Akt/Bcl-2 inhibitors for further preclinical studies. (C) 2017 Elsevier Masson SAS. All rights reserved. Potential new EGFR(T790M) inhibitors comprising structurally modified diphenylpyrimidine derivatives bearing a morpholine functionality (Mor-DPPYs) were developed to improve the activity and selectivity of treatment for gefitinib-resistant non-small cell lung cancer (NSCLC). This led to the identification of inhibitor 10c, which displayed high activity against EGFR(T790M/L858R) kinase (IC50 = 0.71 nM) and repressed the replication of H1975 cells harboring EGFR(T790M) mutations at a concentration of 0.037 mu M. Inhibitor 10c demonstrated high selectivity (SI = 631.9) for T790M-containing EGFR mutants over wild-type EGFR, suggesting that it will cause fewer side effects.
Moreover, this compound also shows promising antitumor efficacy in a murine EGFR(T790M/L858R)-driven H1975 xenograft model without affecting body weight. This study provides new potential lead compounds for the further development of anti-NSCLC drugs. (C) 2017 Elsevier Masson SAS. All rights reserved. Two thiazolidinedione scaffolds, differing in the position of the thiazolidinedione ring in the molecule, were tested for in vitro cytotoxic activity in a panel of human cancer cell lines, namely prostate cancer PC-3 cells, breast carcinoma MDA-MB-231 cells, and fibrosarcoma HT-1080 cells. Some of the target compounds of the A-series, where the thiazolidinedione ring is terminal, displayed cytotoxic activity in the low micromolar range in the cell lines tested. Target thiazolidinediones of the B-series, where the thiazolidinedione ring is located in the middle of the molecule, showed cytotoxic activity comparable to that of their A-series counterparts. Our mechanistic studies indicated that the most cytotoxic compounds in this study have pro-apoptotic capacity. Key signaling mechanisms were investigated and found to vary depending on the target cell context, in line with previous observations regarding thiazolidinediones. (C) 2017 Elsevier Masson SAS. All rights reserved. The aim of this study was to investigate the lipophilicity and cellular accumulation of rationally designed azithromycin and clarithromycin derivatives at the molecular level. The effect of substitution site and substituent properties on the global physico-chemical profile and cellular accumulation of the investigated compounds was studied using calculated structural parameters as well as experimentally determined lipophilicity. In silico models based on the 3D structure of the molecules were generated to investigate conformational effects on the studied properties and to enable prediction of lipophilicity and cellular accumulation for this class of molecules based on non-empirical parameters.
The applicability of the developed models was explored on validation and test sets and compared with previously developed empirical models. (C) 2017 Elsevier Masson SAS. All rights reserved. The present study reports the synthesis and anticancer activity evaluation of twelve novel silybin analogues designed using a ring disjunctive-based natural product lead (RDNPL) optimization approach. All twelve compounds were tested against a panel of cancer cells (i.e., breast, prostate, pancreatic, and ovarian) and compared with normal cells. While all of the compounds had significantly greater efficacy than silybin, derivative 15k was found to be highly potent (IC50 < 1 mu M) and selective against ovarian cancer cell lines, as well as other cancer cell lines, compared to normal cells. Preliminary mechanistic studies indicated that the antiproliferative efficacy of 15k was mediated by its induction of apoptosis, loss of mitochondrial membrane potential and cell cycle arrest at the sub-G1 phase. Furthermore, 15k inhibited cellular microtubule dynamics and assembly by binding to tubulin and inhibiting its expression and function. Overall, the results of the study establish 15k as a novel tubulin inhibitor with significant activity against ovarian cancer cells. (C) 2017 Elsevier Masson SAS. All rights reserved. This paper investigates the effects of multinationality on firm productivity, and contributes to the literature in two respects. First, we argue that multinationality affects productivity both directly and indirectly through higher incentives to invest in R&D. Second, we maintain that multinational depth and breadth have different direct effects on productivity and R&D. Using data on the top R&D investors in the world, we propose an econometric model with an R&D equation and a productivity equation that both depend on multinationality.
We find that: (i) multinational depth has a positive effect on productivity, while the effect of multinational breadth is negative; (ii) multinationality (along both dimensions) has a positive effect on R&D intensity, translating into an indirect positive effect on productivity; (iii) the positive indirect effect is, however, not large enough to compensate for the negative direct effect of multinational breadth. (C) 2016 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). While the literature on capital adequacy and bank recapitalization agrees on the importance of a minimum capital requirement, recurring financial crises across the world do little to suggest that capital adequacy is enough protection for banks, even when they fully comply. By examining the case of regulation-compelled banking recapitalizations in a cross-country context (during the period 1990Q1-2016Q2), we scrutinize the effectiveness of banking recapitalization on the economies of recently recapitalized countries. We provide implications for international business research, practice and policy by highlighting the need for countries adopting the Basel capital adequacy framework to pay attention to the peculiarities of their economies, the supporting regulatory mechanisms and their comparative spare capacities. (C) 2016 Elsevier Ltd. All rights reserved. International alliances have been studied in considerable depth, but almost entirely as host market entry options. And while much global value production is done through international alliances, the organizational forms used to control dispersed value chains are often reduced to "make or buy", that is, captive operations versus market-based outsourcing. We examine how strategic purpose (vertical, or offshore production, versus horizontal, or production for local market entry) affects the choice of cooperative governance form.
We contend that an offshore production role, as opposed to a market entry strategy, makes an alliance more likely to be governed as a contractual alliance than as a joint venture. Data on 261 cross-border alliances in the major appliances industry largely support our hypotheses. Further, strategic purpose moderates the effects of alliance activities and of the institutional environment of the host country on the choice of governance form. (C) 2016 Elsevier Ltd. All rights reserved. Are Born Globals really different from firms with other start-up histories? We address this question based on a unique longitudinal data set that tracks all Danish manufacturing start-ups founded between 1994 and 2008 (23,201 firms). This novel application of register data allows us to provide the first detailed account of Born Globals compared to proper control groups of other start-ups. Chiefly we investigate firm performance, which in turn permits inference on socioeconomic impact. We find that the occurrence of Born Globals is not specific to certain sectors, nor does their frequency change in light of rapid ICT progress. However, we find that Born Globals have significantly higher turnover and employment levels as well as job growth rates. Moreover, they show a considerably wider market reach, but little to no productivity advantage compared to firms with less or later internationalization. Thus, Born Globals are special in some but not all respects. (C) 2016 Elsevier Ltd. All rights reserved. Recognizing that cross-border mergers and acquisitions (M&As) are not alike in terms of how the payment method is structured, this paper investigates the role of stock payment that results in ownership sharing with foreign targets. Based on our empirical analysis using a sample of 4720 cross-border M&A deals during 1997-2012, we find that stock payment in cross-border M&As has a detrimental effect on shareholder value because of the negative signaling effect.
We further show that stock payment can be beneficial when the foreign target is located in a weaker institutional environment and when the cultural distance is larger. This implies that stock payment can be beneficial in deals with particularly large information asymmetry and agency problems. Upon the completion of a cross-border M&A, the acquirer and foreign target form a new principal-agent relationship, and our findings suggest that stock payment can serve as an effective incentive mechanism that aligns the goals of the acquirer and the target. (C) 2016 Elsevier Ltd. All rights reserved. This paper uses firm-level data to investigate the effects of monetary uncertainty and political instability on the extensive and intensive margins of trade (exports and imports) in Eastern Europe and Central Asia. The former is related to exchange rate volatility, currency regimes and internal hedging, whereas the latter is related to the political regime in question. We consider 26 countries, most of them small, with poorly-developed financial markets, and in which exchange rate fluctuations and political instability are relatively high. The main results are threefold. First, we find a robust negative effect of exchange rate volatility on firms' probability of exporting (extensive margin) and on their export intensity (intensive margin). Second, we find a significant positive impact of binding currency agreements in the form of euro or ERM II membership, mainly on the extensive margin of trade. Moreover, we show that being party to those agreements allows firms to engage in international trade with a lower degree of internal hedging. Third, we make the case that political stability is related to an increase in exports. (C) 2016 Elsevier Ltd. All rights reserved. This study investigates the antecedents of trust in International Joint Ventures (IJVs).
Building on social exchange theory (SET) and transaction cost theory (TCT), we develop an integrated framework in which trust development requires two sets of antecedents: (1) social antecedents (prior alliance experience with the partner, the partner's cultural sensitivity and reputation, inter-partner communication, and expected longevity of the IJV), and (2) structural antecedents (interdependence, ownership share, and resource complementarity). The developed framework is tested using web-survey data collected from 89 IJVs established by Nordic firms in Asia, Europe and America. Empirical data analysis based on structural equation modelling suggests that a partner's cultural sensitivity and reputation, inter-partner communication, and expected longevity are the social antecedents from SET that enhance trust. From TCT, the structural antecedent of resource complementarity develops trust, while balanced interdependence and balanced ownership are unrelated to trust. These findings have important implications for managers planning to form and manage IJVs. (C) 2017 Elsevier Ltd. All rights reserved. Innovation productivity differs across economies, and latecomer countries are working hard to close the gap with developed countries. An investigation of 80 countries over the years 1981-2010 shows that international patenting activities vary across countries. We also find that both high-tech-related international exports and inward foreign direct investment significantly contribute to emerging countries' ability to produce cutting-edge technologies, but this effect does not exist for leading innovator countries. Moreover, although this study shows that strong intellectual property rights (IPR) protection is highly correlated with international patenting activities in leading innovator countries, it has a negative impact on emerging innovator countries' national innovative capacity.
The findings thus help to better understand the role of international economic activities and IPR in enhancing national innovative capacity, and facilitate emerging countries' efforts to catch up with leading innovator countries. (C) 2016 Elsevier Ltd. All rights reserved. From the perspective of the resource-based view, this paper analyses the inter-connection between technology adoption and creation in affiliates of multinational enterprises (MNEs) in an emerging economy. Operating below the international technological frontier, multinational affiliates are more motivated to adopt technologies already existing in their MNEs than to create new technologies, as the former already gives them competitive advantages over local firms. When technology creation is required, multinational affiliates will adopt further technology-based resources from their MNEs, as these are unavailable in an emerging economy. As a result, technology adoption is a necessary but not sufficient condition for multinational affiliates to conduct technology creation. Given that networks are particularly important for working around institutional voids in the context of an emerging economy, this paper also investigates the different roles of R&D support from the internal and external networks of multinational affiliates in technology adoption and creation. Hypotheses are tested, and partially supported, based on unique data from 465 multinational affiliates in China. (C) 2016 Elsevier Ltd. All rights reserved. Research in the field of international entrepreneurship is still in its infancy, and innovation is particularly emphasized in the pursuit of opportunities. The literature exhibits considerable gaps related to how newly born international firms, known as born globals, can overcome their asset-constrained conditions to enhance performance. In this regard, numerous scholars have proposed controversial arguments, such as that born globals should leverage the highest levels of innovation for superior performance.
However, innovation is asset-consuming and born global firms are asset-constrained. Drawing on competitive advantage theory, we develop and test an original framework exploring the role of a balanced innovation approach: the ambidextrous innovation of born global firms. Our study reveals how the moderating effect of ambidextrous innovation can strengthen the link between marketing capabilities and positional advantage. Also, our findings suggest that positional advantage has an important mediating role in the relationship between marketing capabilities and export venture performance. Likewise, positional advantage mediates the relationship between competitive strategy and export venture performance. Additionally, we propose that competitive strategy mediates the relationship between marketing capabilities and positional advantage. This study presents valuable managerial implications for closely monitoring ambidextrous innovation as a key decision-making input for the performance enhancement of born global firms. The empirical context for the study is a sample of high-technology born global firms from Mexico, an important yet relatively understudied emerging market. (C) 2016 Elsevier Ltd. All rights reserved. This study examines whether local conditions (i.e., location-bound advantages and local density) are significantly related to foreign subsidiary performance in an emerging market. We also explore the moderating effect of entry timing on the relationship between local conditions and foreign subsidiary performance. Analysis of a longitudinal dataset of 357 foreign subsidiaries in an emerging market (the People's Republic of China) from 2006 to 2012 provides evidence that location-bound advantage is positively related to foreign subsidiary performance and local density is negatively related to foreign subsidiary performance. Furthermore, these relationships are significantly contingent on the timing of entry. 
Our findings highlight the importance of local conditions and entry timing in mitigating the liability of foreignness and enhancing foreign subsidiary performance. (C) 2016 Elsevier Ltd. All rights reserved. This paper examines the impact of sub-national institutions on the performance of foreign firms in China. Building on institutional theory, we envisage that the negative effect of sub-national institutional constraints is moderated by firm size and age, entry mode, and market orientation. Our hypotheses are tested on a large firm-level dataset of about 29,000 foreign firms in 120 cities in China over the period 1999-2005. We find that firm size and age both have a diminishing positive impact on foreign firm performance; moreover, there is a U-shaped relationship between firm age and foreign firm performance in cities with higher levels of institutional constraints. We also find that joint ventures help mitigate the negative impact of sub-national institutional constraints on foreign firm performance when the level of institutional constraints is higher. (C) 2016 Elsevier Ltd. All rights reserved. The marketing literature has provided only a limited examination of the concept of liking, and even this has mainly occurred within business-to-business or advertising contexts. In this paper, the authors propose a model of the intervening role of liking in the customer-service provider relationship in two countries, China and Greece. The antecedents of liking include three key service constructs, namely customer education, customer participation, and service quality. The proposed outcomes of liking are affective trust and affective commitment, which in turn influence (behavioral) loyalty. The research model is tested using samples from China (N = 277) and from Greece (N = 306). The model is largely supported in both samples. Therefore, the authors suggest that liking in financial services has an important role in the customer-service provider relationship. 
Implications for international businesses are discussed. (C) 2016 Published by Elsevier Ltd. This paper examines how Chinese firms acquire knowledge and experience in international markets by attracting returnees, using an original firm-level survey from Guangdong province. It finds that there is a strong and positive relation between a firm's choice of hiring returnees and its propensity to embark on FDI. Moreover, it shows that not all returnees contribute equally to firms' internationalization. It is mainly individuals in the most strategic functions, such as management and sales, who determine both the propensity and the level of overseas direct investment. Finally, it finds that the presence of returnees is particularly effective for less experienced firms, since it can help reduce the time taken to build capabilities and provide direct access to the knowledge necessary to invest abroad. (C) 2016 Elsevier Ltd. All rights reserved. Culture likely affects the choice of negotiation strategies significantly, and culture-dependent preferences for negotiation strategies can lead to conflict when negotiations cross borders. Negotiators often regard some degree of adaptation to the culture of their negotiation partner as a solution to such conflicts. The authors test this suggested solution in an asymmetric setting, in which a solo (outnumbered) negotiator faces a team. Two studies that employ web-based negotiation simulations show that only solo negotiators adapt to the negotiation strategies of their team counterpart. In a third study that uses a symmetric (solo-solo) setting, the adaptation effect disappears. These studies thus illustrate the greater social impact of teams versus solo negotiators. For outnumbered negotiators, adaptation is particularly beneficial (i.e., increases negotiation profit) if it involves an increased use of integrative strategies. 
The degree to which negotiators succeed in intercultural negotiations thus depends on their counterpart's (team's) culture. (C) 2016 Elsevier Ltd. All rights reserved. State-of-the-art performance in human action recognition is achieved by the use of dense trajectories, which are extracted by optical flow algorithms. However, optical flow algorithms are far from perfect in low-resolution (LR) videos. In addition, the spatial and temporal layout of features is a powerful cue for action discrimination, yet most existing methods encode the layout by first segmenting body parts, which is not feasible in LR videos. To address these problems, we adopt the Layered Elastic Motion Tracking (LEMT) method to extract a set of long-term motion trajectories and a long-term common shape from each video sequence, where the extracted trajectories are much denser than those of sparse interest points (SIPs); we then present a hybrid feature representation to integrate both shape and motion features; and finally we propose a Region-based Mixture Model (RMM) to be utilized for action classification. The RMM encodes the spatial layout of features without any need for body-part segmentation. Experimental results show that the approach is effective and, more importantly, more general for LR recognition tasks. (C) 2017 Published by Elsevier B.V. In this paper, a chaotic multiple instance learning tracker based on chaos theory is introduced for robust and efficient online tracking. In this method, chaotic characteristics are utilized for representing the target as well as for updating the appearance model, which has not previously been done for the tracking task. The computational architecture of the method is organized as follows. (1) Chaotic representation: a chaotic model captures the complex dynamics of the target region to train the weak classifiers. 
Our representation can balance global and local features to handle fast motion, partial occlusion, and illumination changes. (2) Instance importance: the fractal dimension of the dynamic model can be adjusted as the instance weight for efficient online learning. (3) Chaotic approximation: a robust chaotic approximation to update the appearance model is introduced, which is crucial for selecting discriminative and robust features. Chaotic online learning quickly explores the feature space to update the appearance model of the target by means of a chaotic map. The experimental results reveal that the proposed method is more effective and robust than state-of-the-art trackers on various challenging sequences. Indeed, the efficiency of the proposed method is attributed to its strong online updating of the chaotic policy as well as the desirable target representation of the chaotic model. (C) 2017 Elsevier B.V. All rights reserved. Accurate and timely traffic flow forecasting is critical for the successful deployment of intelligent transportation systems. However, it is quite challenging to develop an efficient and robust forecasting model due to the inherent randomness and large variations of traffic flow. Recently, the stacked autoencoder has been proven promising for traffic flow forecasting but still has some drawbacks under certain conditions. In this paper, a training-sample replication strategy is introduced to train a series of stacked autoencoders, and an adaptive boosting scheme is proposed to ensemble the trained stacked autoencoders to improve the accuracy of traffic flow forecasting. Furthermore, sufficient experiments have been conducted to demonstrate the superior performance of the proposed approach. (C) 2017 Elsevier B.V. All rights reserved. Wavelet Neural Networks (WNNs) are complex artificial neural systems and their training can be a challenge. 
In the past, most common training schemes for WNNs, such as gradient descent, have been restricted to training only a subset of differentiable parameters. In this paper, we propose an evolutionary method to train both differentiable and non-differentiable parameters using the concept of Cartesian Genetic Programming (CGP). The approach was evaluated on the two-spiral task and on real-world datasets for the detection of breast cancer and Parkinson's disease. In our experiments, the performance of the proposed method was comparable to several standard methods of classification. On the breast cancer dataset, the performance was better than that of other non-ensemble and multistep processing methods. The experimental results show how the performance of WNNs depends on the number of wavelons used. The presented case studies demonstrate that the proposed WNNs perform competitively in comparison to several other methods and results reported in the literature. (C) 2017 Elsevier B.V. All rights reserved. This letter briefly presents an FPGA implementation method for the hyperbolic tangent and sigmoid activation functions for artificial neural networks. A direct implementation of the functions is proposed. The implementation results show that the obtained accuracy of the method is relatively high compared to other published solutions. (C) 2017 Elsevier B.V. All rights reserved. This paper focuses on the adaptive control design for a class of high-order Markovian jump nonlinear systems with unmodeled dynamics and unknown dead-zone inputs. The unknown parameter vector, the dynamic uncertainties, the unknown nonlinear functions and the actuator dead-zone nonlinearities are all allowed to vary randomly with the Markovian modes. By introducing the bound estimation approach, the effects of the randomly jumping unknown parameters and the varying dead-zone nonlinearities are tackled. 
Moreover, to handle the unmodeled dynamics and completely unknown nonlinear functions with Markovian jumping features, several two-layer neural networks (NNs) are introduced for each mode and the adaptive backstepping control law is finally established. The stochastic stability analysis for the closed-loop system is also performed. Finally, a numerical example is provided to illustrate the efficiency and advantages of the proposed method. (C) 2017 Elsevier B.V. All rights reserved. Recently, nonlinear expectile regression has become popular because it can not only explore nonlinear relationships among variables, but also describe the complete distribution of a response variable conditional on covariate information. However, traditional nonlinear expectile regression has two main shortcomings. First, it is difficult to select an appropriate form of nonlinear function. Second, it ignores the interaction effects among covariates. In this paper, we develop a new expectile regression neural network (ERNN) model by adding a neural network structure to the expectile regression approach. The proposed ERNN model is flexible and can be used to explore potential nonlinear effects of covariates on expectiles of the response. The ERNN model can be easily estimated through standard gradient-based optimization algorithms and outputs conditional expectile functions directly. The advantage of the ERNN model is illustrated by Monte Carlo simulation studies. The numerical results show that the ERNN model outperforms the conventional expectile regression and support vector machine models in terms of predictive ability in both in-sample and out-of-sample tests. We also apply the ERNN model to the prediction of concrete compressive strength and housing prices. It turns out that the marginal contribution of each predictor to the conditional expectile of a response is useful for decision-making. (C) 2017 Elsevier B.V. All rights reserved. 
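The expectile regression described above replaces the symmetric squared error of ordinary least squares with an asymmetric squared loss. As a minimal, illustrative sketch (not the authors' implementation; the function name and parameterization are assumptions), the loss can be written as:

```python
import numpy as np

def expectile_loss(y_true, y_pred, tau=0.5):
    """Asymmetric squared loss: residuals above the prediction are
    weighted by tau, residuals below by (1 - tau). tau = 0.5 recovers
    ordinary least squares; tau near 1 targets the upper tail of the
    conditional distribution."""
    r = y_true - y_pred
    w = np.where(r >= 0, tau, 1.0 - tau)
    return np.mean(w * r ** 2)
```

In an ERNN-style model, this loss would be minimized over the network weights for each chosen tau, so that the fitted network outputs the conditional tau-expectile directly.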
In the presence of outliers, the performance of fuzzy time series prediction methods is inevitably influenced adversely. Current prediction methods therefore cannot provide satisfactory accuracy for defuzzified outputs (predictions) when the data contain outliers. In this study, not only to address this problem but also to improve forecasting accuracy, we propose a combined robust approach for fuzzy time series by assessing how the prediction performance of the methods is affected by outliers. In the proposed model, unlike current models, both crisp values and membership values are used as inputs, and real time series observations are taken as outputs. The proposed model therefore does not require a defuzzification step; it uses a single multiplicative neuron model to determine the fuzzy relations and a robust fitness function in its training process. The model is trained by particle swarm optimization within a single combined optimization process, and using crisp values and membership values together yields successful results by exploiting additional information. Various implementations are presented to show that the proposed model obtains more accurate and robust forecasting results. (C) 2017 Elsevier B.V. All rights reserved. In typical real-world clustering problems, the set of features extracted from the data has two problems which prevent methods from clustering accurately. First, the features extracted from the samples provide poor information for clustering purposes. 
Second, the feature vector usually has a high-dimensional, multi-source nature, which results in a complex cluster structure in the feature space. In this paper, we propose to use a combination of multi-task clustering and fuzzy co-clustering techniques to overcome these two problems. In addition, the Bregman divergence is used as the concept of dissimilarity in the proposed algorithm, in order to create a general framework which enables us to use any kind of Bregman distance function that is consistent with the data distribution and the structure of the clusters. The experimental results indicate that the proposed algorithm can overcome the two mentioned problems and manages the complexity and weakness of the features, which results in appropriate clustering performance. (C) 2017 Elsevier B.V. All rights reserved. Although many algorithms have been proposed, no single algorithm is better than the others on all types of problems. Therefore, the search characteristics of different algorithms that show complementary behavior can be combined through portfolio structures to improve the performance on a wider set of problems. In this work, a portfolio of the Artificial Bee Colony, Differential Evolution and Particle Swarm Optimization algorithms was constructed, and the first parallel implementation of the population-based algorithm portfolio was carried out by means of a Message Passing Interface environment. The parallel implementation of an algorithm or a portfolio can be performed by different models such as master-slave, coarse-grained, or a hybrid of both, as used in this study. Hence, the efficiency and running time of various parallel implementations with different parameter values and combinations were investigated on benchmark problems. The performance of the parallel portfolio was compared to those of the single constituent algorithms. 
The results showed that the proposed models reduced the running time, and the portfolio delivered a robust performance compared to each constituent algorithm. It is observed that the speedup gained over the sequential counterpart changed significantly depending on the structure of the portfolio. The portfolio was also applied to the training of neural networks used for time series prediction. Results demonstrate that the portfolio is able to produce good prediction accuracy. (C) 2017 Elsevier B.V. All rights reserved. Cosegmentation is the task of simultaneously segmenting multiple images that contain common or similar foreground objects. The assumption that common objects appear in multiple images provides a weak form of supervised prior information; thus cosegmentation usually performs better than unsupervised segmentation methods. Recently, a multi-task learning based cosegmentation method was proposed which can simultaneously segment more than two images and easily incorporate different types of priors. However, it has the shortcoming of information loss in the initialization of the multi-task classification model. To ameliorate this problem, in this paper we propose a novel multi-task ranking SVM model which incorporates multi-task learning and learning to rank into a unified framework. The proposed model is trained using the relative order information between the cosaliency scores of pixel pairs. In addition, an optimization algorithm is proposed to optimize the multi-task ranking SVM model based on the alternating direction method of multipliers (ADMM), which ensures that the proposed method is faster than most state-of-the-art cosegmentation approaches. Finally, the proposed method is evaluated on two widely used benchmark datasets, i.e. CMU iCoseg and MSRC. The experimental results show that the proposed approach is effective and performs better than most state-of-the-art works. (C) 2017 Elsevier B.V. All rights reserved. 
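The ranking-SVM idea above learns a scoring function from pairwise order information (here, relative cosaliency of pixel pairs). The following is a minimal single-task sketch of that pairwise hinge objective, trained with plain subgradient descent rather than the authors' ADMM solver; all names, features, and hyperparameters are illustrative assumptions:

```python
import numpy as np

def rank_svm_subgradient(X, pairs, lam=0.1, lr=0.01, epochs=200, seed=0):
    """Learn a weight vector w so that score(X[i]) exceeds score(X[j])
    by a margin of 1 for each (i, j) in `pairs`, where i should outrank j.
    Objective: hinge loss on pairwise score margins + L2 regularization,
    minimized by simple subgradient descent."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        grad = lam * w                      # gradient of the L2 term
        for i, j in pairs:
            margin = X[i] @ w - X[j] @ w
            if margin < 1.0:                # violated or active pair
                grad -= X[i] - X[j]         # hinge subgradient
        w -= lr * grad
    return w
```

For example, with three items whose first feature encodes the desired order and pairs [(0, 1), (1, 2)], the learned w produces scores that respect that order.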
This paper proposes a dissipativity-based state estimation methodology for static neural networks with time-varying delay. An Arcak-type observer is used to construct the estimation error system. To reduce the conservatism of observer design, a Lyapunov-Krasovskii functional (LKF) is adopted to fully exploit the available characteristics of the activation function. In addition, a relaxed constraint condition is put forward to keep the whole LKF positive without requiring parts of the involved matrices to be positive. By adopting the LKF and constraint condition, estimation conditions with a strict dissipative performance are obtained, which ensures the asymptotic stability of the estimation error system. The computation of the observer gain matrices can be transformed into a convex optimization problem. Two examples are given to illustrate the validity and advantage of the provided methodology. (C) 2017 Elsevier B.V. All rights reserved. The performance of many machine learning algorithms depends crucially on the hyperparameter settings, especially in Deep Learning. Manually tuning the hyperparameters is laborious and time consuming. To address this issue, Bayesian optimization (BO) methods and their extensions have been proposed to optimize the hyperparameters automatically. However, they still suffer from high computational expense when applied to deep generative models (DGMs) due to their black-box function optimization strategy. This paper provides a new hyperparameter optimization procedure for the pre-training phase of DGMs, where we avoid combining all layers into one black-box function by taking advantage of the layer-by-layer learning strategy. Following this procedure, we are able to optimize multiple hyperparameters in an adaptive way using Gaussian processes. In contrast to the traditional BO methods, which mainly focus on supervised models, the pre-training procedure is unsupervised, so there is no validation error that can be used. 
To alleviate this problem, this paper proposes a new holdout loss, the free energy gap, which takes into account both model fitting and over-fitting. The empirical evaluations demonstrate that our method not only speeds up the process of hyperparameter optimization, but also improves the performance of DGMs significantly in both supervised and unsupervised learning tasks. (C) 2017 Elsevier B.V. All rights reserved. This paper focuses on the global mean square exponential stability of stochastic neural networks with retarded and advanced argument. By employing the theory of differential equations with piecewise constant argument of generalized type, several sufficient conditions in the form of algebraic inequalities are proposed to ensure the existence and uniqueness of the solution. Considering that the piecewise alternately retarded and advanced argument exists, we estimate the dynamic effect of the system state at the current time and in the deviating function. Theoretical analysis of global mean square exponential stability is carried out using the stability theory of stochastic differential equations. Finally, numerical examples are given to illustrate the effectiveness of the established results. (C) 2017 Elsevier B.V. All rights reserved. Gold immunochromatographic strip (GICS) assay provides a quick, convenient, single-copy and on-site approach to determine the presence or absence of the target analyte when applied to an extensive variety of point-of-care tests. It is always desirable to quantitatively detect the concentration of a trace substance in the specimen so as to uncover more useful information compared with the traditional qualitative (or semi-quantitative) strip assay. For this purpose, this paper is concerned with the GICS image denoising and deblurring problems caused by the complicated environment of the intestine and the intrinsic restrictions of the strip characteristics and the equipment in terms of image acquisition and transmission. 
The gradient projection approach is used, together with the total variation minimization approach, to denoise and deblur the GICS images. Experimental results and quantitative evaluation are presented, by means of the peak signal-to-noise ratio, to demonstrate the performance of the combined algorithm. The experimental results show that the gradient projection method provides robust performance for denoising and deblurring the GICS images, and therefore serves as an effective image processing methodology capable of providing more accurate information for the interpretation of the GICS images. (C) 2017 Elsevier B.V. All rights reserved. This paper proposes a niching evolutionary algorithm with adaptive negative correlation learning, denoted as NEA_ANCL, for training a neural network ensemble. In the proposed NEA_ANCL, an adaptive negative correlation learning scheme, in which the penalty coefficient lambda changes dynamically during training, is developed. The adaptation strategy is based on a novel population diversity measure, with the purpose of appropriately controlling the trade-off between diversity and accuracy in the ensemble. Further, a modified dynamical fitness sharing method is applied to preserve the diversity of the population during training. The proposed NEA_ANCL has been evaluated on a number of benchmark problems and compared with related ensemble learning algorithms. The results show that our method can be used to design a satisfactory NN ensemble and outperforms related works. (C) 2017 Elsevier B.V. All rights reserved. Visual saliency has been increasingly studied in relation to image quality assessment. Incorporating saliency potentially leads to an improved ability of image quality metrics to predict perceived quality. However, challenges in optimising the combination of saliency and image quality metrics remain. 
Previous psychophysical studies have shown that distortion occurring in an image causes visual distractions and alters gaze patterns relative to those of the image without distortion. From this, it can be inferred that the measurable changes in gaze patterns driven by distortion may be used as a proxy for the likely variation in perceived quality of natural images. In this paper, rather than using saliency as an add-on to image quality metrics, we investigate the plausibility of approximating picture quality by measuring the deviation of saliency induced by distortion. First, we designed and conducted a large-scale eye-tracking experiment to clarify the relationship between the deviation of saliency and the variability of image quality. We then used the results to devise an algorithm which predicts perceived image quality based on visual distraction. Experimental results demonstrate that this approach yields good image quality predictions. (C) 2017 Elsevier B.V. All rights reserved. This paper investigates the adaptive neural network optimal output feedback control design problem for nonlinear continuous-time systems with actuator saturation. The system dynamics and states of the controlled system are unknown. A neural network state observer is constructed to estimate the system states. This paper uses two neural networks: one is used to construct the neural network state observer, and the other (the critic neural network) is used to approximate the cost functions, which together comprise an observer-critic architecture. In this architecture, the critic neural network weights are tuned based on both the current data and the previous data; thus the persistent excitation conditions of the previous literature are relaxed. By utilizing the adaptive dynamic programming approach, a new observer-based optimal control scheme is developed. 
It is proved that the proposed adaptive neural network output feedback optimal control scheme can ensure that the whole closed-loop system is stable. Moreover, the estimation errors of the critic neural network weights are asymptotically stable. A simulation example is given to validate the effectiveness of the proposed method. (C) 2017 Elsevier B.V. All rights reserved. In this paper, quaternion-valued neural networks (QVNNs) with non-differentiable time-varying delays are considered. Firstly, using the method of plural decomposition, we decompose the QVNNs into two complex-valued neural networks. Some sufficient criteria in linear matrix inequality (LMI) form are derived to guarantee the existence and uniqueness of the equilibrium point for the considered QVNNs, using the homeomorphism mapping principle of the complex domain. Secondly, by applying the free weighting matrix method and constructing an appropriate Lyapunov-Krasovskii functional, several conditions are established in LMIs to ensure the global mu-stability of the QVNNs. Finally, employing the predictor-corrector approach, two numerical examples are provided to show the feasibility and effectiveness of the obtained results. (C) 2017 Elsevier B.V. All rights reserved. Various techniques have been developed in recent years to simulate crowds, and most of them focus on collision avoidance while ignoring basic statistical spatiotemporal properties that crowds should possess. In order to improve the quality of crowd simulations, in this paper, we investigate some statistical characteristics of pedestrians in unstructured scenes using captured motion trajectories. Each trajectory is first represented as a four-dimensional vector, following which trajectories with the same entrance/exit areas are clustered to form motion patterns using the fuzzy c-means (FCM) algorithm. 
Since errors arise during tracking, outliers are eliminated using the local outlier factor (LOF) algorithm, and the refined velocity field can then be obtained. Finally, for each motion pattern, we find and confirm the following three spatiotemporal statistical properties of pedestrians: 1. The distribution of path length obeys a power law. 2. Pedestrians' speeds follow a Gaussian distribution. 3. Pedestrians tend to maintain a lower speed in entrance/exit areas and a higher one in the middle of a given path. (C) 2017 Elsevier B.V. All rights reserved. Aim: This study aimed to test the effectiveness of a bundle combining the best available evidence to reduce the incidence of incontinence-associated dermatitis in critically ill patients. Methods: The study used a before-and-after design and was conducted in an adult intensive care unit of an Australian quaternary referral hospital. Data, collected by trained research nurses, included demographic and clinical variables, skin assessment, and incontinence-associated dermatitis presence and severity. Data were analysed using descriptive and inferential statistics. Results: Of the 207 patients enrolled, 146 were mechanically ventilated and incontinent and thus eligible for analysis: 80 with 768 days of observation in the after/intervention group and 66 with 733 days of observation in the before group. Most patients were men, with a mean age of 53 years. Groups were similar on demographic variables. Incontinence-associated dermatitis incidence was lower in the intervention group (15%; 12/80) compared to the control group (32%; 21/66) (p = 0.016). Incontinence-associated dermatitis events developed later in the intensive care unit stay in the intervention group (log-rank = 5.2, p = 0.022). Conclusion: This study demonstrated that the use of a bundle combining the best available evidence reduced the incidence and delayed the development of incontinence-associated dermatitis in critically ill patients. 
Systematic ongoing patient assessments, combined with tailored prevention measures, are central to preventing incontinence-associated dermatitis in this vulnerable patient group. (C) 2016 Elsevier Ltd. All rights reserved. Background: The nursing profession is struggling to return to basic nursing care to maintain patients' safety. "Interventional patient hygiene" (IPH) is a measurement model for reducing the bioburden of both the patient and the health care worker; its components are hand hygiene, oral care, skin care/antisepsis, and catheter site care. Objectives: To identify the level of nurses' practice and knowledge of interventional patient hygiene and to identify barriers to implementing interventional patient hygiene in critical care units. Methodology: A descriptive research design was used and three tools were applied in this study: "The Interventional Patient Hygiene Observational Checklist", "The Interventional Patient Hygiene Knowledge Questionnaire" and "The Barriers for Implementing Interventional Patient Hygiene in Critical Care Units". Results: The mean percentage nurses' knowledge score is higher than the mean percentage practice score for all items: hand hygiene (71.28 +/- 25.46, compared with 46.15 +/- 17.87), oral care (100.0 +/- 0.0, compared with 25.32 +/- 24.25), catheter care (75.76 +/- 9.40, compared with 8.97 +/- 24.14) and skin care (47.80 +/- 6.79, compared with 26.28 +/- 16.57). Barriers to implementing hand hygiene are workload (71.79%), insufficient resources (61.53%), and lack of knowledge (10.25%). Conclusion: The mean percentage IPH knowledge score is higher than the mean percentage IPH practice score for all IPH items. Barriers to implementing IPH include workload, insufficient resources, and lack of knowledge/training. (C) 2016 Elsevier Ltd. All rights reserved. Background: Workplace stress can affect nurse satisfaction. Aromatherapy, the therapeutic use of essential oils, can be beneficial in reducing stress. 
Purpose: To assess perceived stress before and after the introduction of Essential Oil Lavender among registered nurses, charge nurses, and patient care technicians in a trauma intensive care unit (TICU), a surgical specialty care unit (SSC) and an orthopedic trauma unit (TOU). Methods: Pre-post intervention with a quasi-experimental design. After a pre-survey, Essential Oil Lavender was diffused 24 h per day over 30 days in a designated nursing area on each unit that nurses were not required to enter. Results: A dependent-sample t-test for "how often do nurses feel stressed at work in a typical week" revealed a pre-survey mean of 2.97 (SD = 0.99), significantly higher than the post-survey mean of 2.70 (SD = 0.92), t(69) = 2.36, p = 0.021, suggesting a difference in how often staff felt stressed at work in a typical week, trending down from "feeling stressed half of the time" to "once in a while". There were no statistically significant differences in pre-post survey scores for the TICU, TOU, or SSC as separate units. Relevance: Use of essential oils to decrease work-related stress among nursing staff may improve retention and the workplace environment, and increase nurse satisfaction. (C) 2017 Elsevier Ltd. All rights reserved. Background: Healthcare-associated infections from indwelling urinary catheters lead to increased patient morbidity and mortality. Aim: The purpose of this study was to determine if direct observation of the urinary catheter insertion procedure, as compared to the standard process, decreased catheter utilisation and urinary tract infection rates. Methods: This case-control study was conducted in a medical intensive care unit. During phase I, a retrospective data review was conducted on utilisation and urinary catheter infection rates when practitioners followed the institution's standard insertion algorithm. During phase II, an intervention of direct observation was added to the standard insertion procedure. 
Results: The results demonstrated no change in utilisation rates; however, catheter-associated urinary tract infection (CAUTI) rates decreased from 2.24 to 0 per 1000 catheter days. Conclusion: The findings from this study may promote changes in clinical practice guidelines, leading to a reduction in urinary catheter utilisation and infection rates and improved patient outcomes. (C) 2016 Elsevier Ltd. All rights reserved. Background: Trauma patient management is complex and challenging for nurses in the Intensive Care Unit. One strategy to promote quality and evidence-based care may be to utilise specialty nursing experts, both internal and external to the Intensive Care Unit, in the form of a nursing round. Inter-Specialty Trauma Nursing Rounds have the potential to improve patient care, collaboration and nurses' knowledge. Objectives: The purpose of this quality improvement project was to improve trauma patient care and evaluate nurses' perception of improvement. Methods: The project included structured, weekly rounds conducted at the bedside. Nursing experts and others collaborated to assess and make changes to trauma patients' care. The rounds were evaluated to assess nurses' perception of improvement. Results: There were 132 trauma patients assessed. A total of 452 changes to patient care occurred, an average of three changes per patient. Changes included nursing management, medical management and wound care. Nursing staff reported an overall improvement in trauma patient care, trauma knowledge, and collaboration with colleagues. Conclusions: Inter-Specialty Trauma Nursing Rounds utilise expert nursing knowledge. They are suggested as an innovative way to address the clinical challenges of caring for trauma patients and are perceived to enhance patient care and nursing knowledge. (C) 2017 Elsevier Ltd. All rights reserved. 
Objectives: To increase adherence to intensive care unit mobility by developing and implementing a mobility training program that addresses nursing barriers to early mobilisation. Design: An intensive care unit mobility training program was developed, implemented and evaluated with a pre-test, an immediate post-test and an eight-week post-test. Patient mobility was tracked before and after training. Setting: A ten-bed cardiac intensive care unit. Main outcome measures: The training program's efficacy was measured by comparing pre-test, immediate post-test and 8-week post-test scores. Patient mobilisation rates before and after training were compared. Protocol compliance was measured in the post-training group. Results: Nursing knowledge increased from pre-test to immediate post-test (p < 0.0001) and from pre-test to 8-week post-test (p < 0.0001). Mean test scores decreased by seven points from immediate post-test (80 +/- 12) to 8-week post-test (73 +/- 14). Fear significantly decreased from pre-test to immediate post-test (p = 0.03), but not from pre-test to 8-week post-test (p = 0.06) or from immediate post-test to 8-week post-test (p = 0.46). Post-training patient mobility rates increased, although not significantly (p = 0.07). Post-training protocol compliance was 78%. Conclusion: The project successfully increased adherence to intensive care unit mobility and indicates that a training program could improve adoption of early mobility. (C) 2016 Elsevier Ltd. All rights reserved. Objective: This study evaluates rural hospital staff perceptions of a telemedicine ICU (Tele-ICU) before and after implementation. Methods: We conducted a longitudinal qualitative study utilising semi-structured group or individual interviews with staff from three rural ICU facilities in the upper Midwest of the United States that received Tele-ICU support. Interviews occurred pre-implementation and at two time points post-implementation. 
Interviews were conducted with ICU administrators (n = 6), physicians (n = 3), nurses (n = 9), respiratory therapists (n = 5) and others (n = 1) from July 2011 to May 2013. Transcripts were analysed for thematic content. Findings: Overall, rural ICU staff viewed Tele-ICU as a welcome benefit for their facility. Major themes included: (1) beneficial where recruitment and retention of staff can be challenging; (2) extra support for day shifts and evening, night and weekend shifts; (3) reduction in the number of transfers to larger tertiary hospitals in the community; (4) improvement in standardisation of care; and (5) organisational culture of rural ICUs may lead to under-utilisation. Conclusions: ICU staff at rural facilities view Tele-ICU as a positive, useful tool to provide extra support and assistance. However, more research is needed regarding organisational culture to maximise the potential benefits of Tele-ICU in rural hospitals. Published by Elsevier Ltd. Objectives: Assessment and management of symptoms exhibited by infants can be challenging, especially at the end of life, because of immature physiology, non-verbal status, and the limited symptom assessment tools available to staff nurses. This study explored how nurses observed and managed infant symptoms at the end of life in a neonatal intensive care unit. Methods: This was a qualitative, exploratory study utilizing semi-structured face-to-face interviews, which were tape-recorded, transcribed verbatim, and then analyzed using the Framework Approach. Setting: The sample included 14 staff nurses who cared for 20 infants who died at a large children's hospital in the Midwestern United States. Main outcome measures: Nurses had difficulty recalling and identifying infant symptoms. Barriers to symptom identification were discovered based on the nursing tasks associated with the level of care provided. 
Results: Three core concepts emerged from analyses of the transcripts: Uncertainty, Discomfort, and Chaos. Nurses struggled with difficulties related to infant prognosis, the time of transition to end-of-life care, symptom recognition and treatment, lack of knowledge of various cultural and religious customs, and limited formal end-of-life education. Conclusion: Continued research is needed to improve symptom assessment of infants and increase nurse comfort with the provision of end-of-life care in the neonatal intensive care unit. (C) 2016 Elsevier Ltd. All rights reserved. Background: There are few reports of the effectiveness of, or satisfaction with, simulation for learning cardiac surgical resuscitation skills. Objectives: To test the effect of simulation on nurses' self-confidence to perform cardiac surgical resuscitation and on nurses' satisfaction with the simulation experience. Methods: A convenience sample of sixty nurses rated their self-confidence to perform cardiac surgical resuscitation skills before and after two simulations. Simulation performance was assessed. Subjects completed the Satisfaction with Simulation Experience scale and demographics. Results: Self-confidence scores to perform all cardiac surgical skills, as measured by paired t-tests, were significantly increased after the simulation (d = -0.50 to 1.78). Self-confidence and cardiac surgical work experience were not correlated with time to performance. Total satisfaction scores were high (mean 80.2, SD 1.06), indicating satisfaction with the simulation. There was no correlation of satisfaction scores with cardiac surgical work experience (tau = -0.05, ns). Conclusion: Self-confidence scores to perform cardiac surgical resuscitation procedures were higher after the simulation. Nurses were highly satisfied with the simulation experience. (C) 2016 Elsevier Ltd. All rights reserved. 
Background: Writing a diary for intensive care patients has been shown to facilitate patient recovery and prevent post-traumatic stress following hospitalisation. Aim: This study aimed to describe the experiences of critical care nurses (CCNs) in writing personal diaries for ICU patients. Method: The study was conducted with a qualitative design. Ten CCNs from two hospitals participated. Data were collected with semi-structured interviews and analysed using qualitative thematic content analysis. Findings: The results consist of one theme - Patient diary: a complex nursing intervention in all its simplicity - and four categories: Writing informatively and with awareness shows respect and consideration; The diary is important for both patient and CCN; To jointly create an organisation that facilitates and develops the writing; Relatives' involvement in the diary is a matter of course. Conclusion: CCNs are aware of the diary's importance for the patient and relatives, but experience difficulties in deciding which patients should receive this intervention and how to prioritise it. Writing a personal diary for an ICU patient is a nursing intervention that is complicated in its simplicity. (C) 2016 Elsevier Ltd. All rights reserved. Background: Family members could play an important role in preventing and reducing the development of delirium in Intensive Care Unit (ICU) patients. This study sought to assess the feasibility of the design and recruitment, and the acceptability for family members and nurses, of a family-delivered intervention to reduce delirium in ICU patients. Method: A single-centre randomised controlled trial was conducted in an Australian medical/surgical ICU. Sixty-one family members were randomised (29 to the intervention and 32 to the non-intervention group). Following instructions, the family members delivered the intervention by providing orientation or memory clues (family photographs, orientation to surroundings) to their relative each day. 
In addition, family members conducted daily sensory checks (vision and hearing, with glasses and hearing aids) and therapeutic or cognitive stimulation (discussing family life, reminiscing). Eleven ICU nurses were interviewed to gain insight into the feasibility and acceptability of implementing the intervention from their perspective. Results: The recruitment rate was 28% of eligible patients (recruited n = 90, attrition n = 1). Following instruction by the research nurse, the family members delivered the intervention, which was assessed to be feasible and acceptable by family members and nurses. Protocol adherence could be improved with alternative data collection methods. Nurses considered the activities acceptable. Conclusion: The study was able to recruit, randomise and retain family member participants. Further strategies are required to assess intervention fidelity and improve data collection. (C) 2017 Elsevier Ltd. All rights reserved. Introduction: Breathlessness is a prevalent and distressing symptom in intensive care, underestimated by nurses and physicians. To develop a more comprehensive understanding of this problem, the study had two aims: to compare patients' self-reported scores of breathlessness obtained during mechanical ventilation (MV) with experiences of breathlessness later recalled by patients, and to explore the lived experience of breathing during and after MV. Method: A qualitatively driven sequential mixed-methods design combining prospective observational breathlessness data at the end of a spontaneous breathing trial (SBT) with follow-up data from 11 post-discharge interviews. Findings: Four of six patients who reported breathlessness at the end of an SBT did not remember being breathless in retrospect. Experiences of breathing intertwined with the whole illness experience and were described in four themes: existential threat; the tough time; an amorphous and boundless body; and getting through. 
Conclusion: Breathing was not always a clearly separate experience, but was intertwined with the whole illness experience. This may explain the poor correspondence between patients' and clinicians' assessments of breathlessness. The results suggest that patients' own reports of breathing should form part of nursing interventions and follow-up to support patients' quest for meaning. (C) 2017 Elsevier Ltd. All rights reserved. Objective: To explore nurses' experiences and perceptions of delirium, managing delirious patients, and screening for delirium, five years after the introduction of the Confusion Assessment Method for the Intensive Care Unit into standard practice. Research design and setting: Twelve nurses from a medical-surgical intensive care unit in a large teaching hospital attended two focus group sessions. The collected qualitative data were thematically analysed using Braun and Clarke's framework (2006). Findings: The analysis identified seven themes: (1) Delirium as a Secondary Matter, (2) Unpleasant Nature of Delirium, (3) Scepticism About Delirium Assessment, (4) Distrust in Delirium Management, (5) Value of Communication, (6) Non-pharmacological Therapy, and (7) Need for a Reviewed Delirium Policy. Conclusion: Nurses described perceiving delirium as a low-priority matter and linked this to the work culture within the intensive care specialty. At the same time, they expressed their readiness to challenge this culture and to promote the provision of high-quality delirium care. Nurses discussed frustrations related to a lack of confidence in assessing delirium, as well as the lack of effective therapies for managing this group of patients. They expressed appreciation for non-pharmacological interventions in the treatment of delirium, suggested improvements to the current approach to delirium, and proposed introducing psychological support for nurses dealing with delirious patients. (C) 2017 Elsevier Ltd. All rights reserved. 
Polarimetric satellite-borne synthetic aperture radar (PolSAR) is expected to provide land-usage information globally and precisely. In this paper, we propose an unsupervised double-stage learning system for land state classification using a self-organizing map (SOM) that utilizes ensemble variation vectors. We find that the Poincare sphere parameters representing the polarization state of the scattered wave have specific features of the land state, in particular in their ensemble variation rather than their spatial variation. Experiments demonstrate that the proposed PolSAR double-stage SOM system generates new classes appropriately, resulting in successful fine land classification and/or appropriate new class generation. (C) 2017 Elsevier B.V. All rights reserved. This paper proposes a method to estimate the expected value of the Euclidean distance between two possibly incomplete feature vectors. Under the Missing at Random assumption, we show that the Euclidean distance can be modeled by a Nakagami distribution, whose parameters we express as a function of the moments of the unknown data distribution. In our formulation the data distribution is modeled using a mixture of Gaussians. The proposed method, named Expected Euclidean Distance (EED), is validated through a series of experiments using synthetic and real-world data. Additionally, we show the application of EED to the Minimal Learning Machine (MLM), a distance-based supervised learning method. Experimental results show that EED outperforms existing methods that estimate Euclidean distances in an indirect manner. We also observe that the application of EED to the MLM provides promising results. (C) 2017 Elsevier B.V. All rights reserved. A dynamic binary neural network is a simple two-layer network with delayed feedback that is able to generate various binary periodic orbits. The network is characterized by the signum activation function, ternary connection parameters, and integer threshold parameters. 
The ternary connection brings benefits for network hardware and for computation costs in numerical analysis. In order to stabilize a desired binary periodic orbit, a simple evolutionary algorithm is presented. The algorithm uses individuals corresponding to the ternary connection parameters, and one zero element is inserted into each individual. Each individual is evaluated by two feature quantities that characterize the stability of the periodic orbit. The zero-insertion is able to reinforce stability and is convenient for reducing power consumption in hardware. Applying the algorithm to a class of periodic orbits, the stabilization capability is investigated. Some of the periodic orbits are applicable to control signals of switching power converters. (C) 2017 Elsevier B.V. All rights reserved. The task of segmenting nuclei and cytoplasm in Papanicolaou smear images is one of the most challenging tasks in automated cervical cytological analysis, owing to the high degree of overlap, the multiform shapes of the cells, and their complex structures resulting from inconsistent staining, poor contrast, and the presence of inflammatory cells. This article presents a robust variational segmentation framework based on a superpixelwise convolutional neural network and a learned shape prior, enabling an accurate analysis of overlapping cervical masses. The cellular components of the Pap image are first classified by an automatic feature learning and classification model. Then, a learned shape prior model is employed to delineate the actual contour of each individual cytoplasm inside the overlapping mass. The shape prior is dynamically modeled during the segmentation process as a weighted linear combination of shape templates from an over-complete shape dictionary under sparsity constraints. 
We provide a quantitative and qualitative assessment of the proposed method using two databases of 153 cervical cytology images, with 870 cells in total, synthesised by accumulating real isolated cervical cells to generate overlapping cellular masses with a varying number of cells and degree of overlap. The experimental results demonstrate that our methodology can successfully segment nuclei and cytoplasm from highly overlapping masses. Our segmentation is also competitive when compared to state-of-the-art methods. (C) 2017 Elsevier B.V. All rights reserved. In this paper we aim to tackle the problem of searching for novel and high-performing product designs. Generally speaking, conventional schemes optimize a (multi-)objective function on a dynamic model/simulation, then perform a number of representative real-world experiments to validate and test the accuracy of some product performance metric. However, in a number of scenarios involving complex product configuration, e.g. optimum vehicle design and large-scale spacecraft layout design, conventional schemes using simulations and experiments are restrictive, inaccurate and expensive. In this paper, in order to guide and complement conventional schemes, we propose a new approach to search for novel and high-performing product designs by optimizing not only a proposed novelty metric, but also a performance function learned from historical data. Rigorous computational experiments using more than twenty thousand vehicle models from the last thirty years and a relevant set of well-known gradient-free optimization algorithms show the feasibility and usefulness of obtaining novel and high-performing vehicle layouts under tight and relaxed search scenarios. The promising results of the proposed method open new possibilities for building unique and high-performing systems in a wider set of design engineering problems. (C) 2017 Elsevier B.V. All rights reserved. 
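The search strategy described in the abstract above (jointly optimizing a novelty metric and a performance function learned from historical data, using gradient-free optimization) can be sketched roughly as follows. Everything in this sketch is an illustrative assumption rather than the authors' actual setup: a 2-D toy design space, three hand-picked "historical designs", a quadratic stand-in for the learned performance model, and plain random search standing in for the gradient-free optimizers used in the paper.

```python
import random
import math

random.seed(0)

# Hypothetical 2-D "design space"; these points stand in for the
# historical vehicle-model database used in the paper (illustrative only).
historical = [(0.2, 0.3), (0.8, 0.7), (0.5, 0.5)]

def novelty(x):
    # Novelty metric: distance to the nearest known design.
    return min(math.dist(x, h) for h in historical)

def performance(x):
    # Stand-in for a performance function learned from historical data;
    # the paper learns this from real records, here it is a toy quadratic.
    return -((x[0] - 0.6) ** 2 + (x[1] - 0.4) ** 2)

def search(iters=2000, w=0.5):
    # Gradient-free (random) search over a weighted novelty/performance score.
    best, best_score = None, -float("inf")
    for _ in range(iters):
        x = (random.random(), random.random())
        score = w * novelty(x) + (1 - w) * performance(x)
        if score > best_score:
            best, best_score = x, score
    return best, best_score

best, score = search()
print(best, score)
```

The weight `w` trades off exploration of unseen regions against predicted performance; with `w = 0` the search collapses to ordinary surrogate-based optimization, while `w = 1` ignores performance entirely.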
The binary classification problem, in which an input is classified as belonging or not to a certain class, the so-called Target Class (TC), is approached here. This problem can be stated as a basic hypothesis test: X is from the TC (H0) vs. X is not from the TC (H1), where X is the classifier input. When probabilistic models are used (e.g., Hidden Markov Models or Gaussian Mixture Models), the likelihood ratio, p(X|H0)/p(X|H1), is a widely used alternative to improve the classification. However, as far as we know, this ratio is not usually applied with distance-based classifiers (e.g., Dynamic Time Warping). Following that idea, here we propose making the decision based not only on the score (the classifier output) assuming X to be from the TC (H0), but also on the score assuming X is not from the TC (H1), by means of the ratio between both scores: the score ratio. The proposal is tested in biometric person authentication using handwritten signatures, with three different state-of-the-art systems based on distance classifiers. Different alternatives for applying the proposal are shown in order to reduce the computational load, should it prove necessary. Using the score ratio has led to improvements in most of the tests performed. The best verification results were achieved using our proposal, improving the best results obtained without the score ratio by an average of 22%. (C) 2017 Elsevier B.V. All rights reserved. Modern airplanes and ships are equipped with radars emitting specific patterns of electromagnetic signals. Radar antennas detect these patterns, which are required to identify the types of emitters. A conventional way of emitter identification is for human experts to categorize the radar patterns according to the sequences of radar frequencies, differences in times of arrival, and pulse widths of the emitted signals. 
In this respect, this paper proposes a method of classifying radar patterns automatically using a network that calculates the p-values for testing the hypotheses of the types of emitters, referred to as the class probability output network (CPON). The proposed method also provides a new way of identifying trained and untrained emitter types. Through simulations of radar pattern classification, the effectiveness of the proposed approach has been demonstrated. (C) 2017 Elsevier B.V. All rights reserved. Although deep learning shows high performance in pattern recognition and machine learning, the reasons remain unclear. To tackle this problem, we calculated the information-theoretic variables of the representations in the hidden layers and analyzed their relationship to the performance. We found that entropy and mutual information, both of which decrease in different ways as the layers deepen, are related to the generalization errors after fine-tuning. This suggests that information-theoretic variables might be a criterion for determining the number of layers in deep learning without fine-tuning, which requires high computational loads. (C) 2017 Elsevier B.V. All rights reserved. Eye movement data collection is very expensive and laborious. Moreover, there are usually missing values. Assuming that we are collecting eye movement data from a set of images viewed by different users, there is a possibility that we will not be able to collect the data of every user from every image: one or more views may not be represented for the image. We assume that the relationships among the views can be learnt from the whole collection of views (or items). The task is then to reproduce the missing part of the incomplete items from the relationships derived from the complete items and the known part of these items. Using certain properties of tensor algebra, we show that this problem can be formulated consistently as a regression-type learning task. 
Furthermore, there is a maximum-margin-based optimisation framework in which this problem can be solved in a tractable way. This problem is similar to learning to predict where a person is looking in an image. We therefore propose in this paper an algorithm called "Tensor-based Multi-View Learning" (TMVL). Furthermore, we also propose a technique for improving prediction by introducing a new feature set obtained from the Kronecker decomposition of the image fused with the user's eye movement data. Using this new feature set can improve prediction performance markedly. The proposed approach proved to be more effective than two well-known saliency detection techniques. (C) 2017 Elsevier B.V. All rights reserved. We present in this paper a multi-shot human re-identification system for video sequences based on interest point (IP) matching. Our contribution is to take advantage of the complementarity of a person's appearance and the style of their movement, which leads to a more robust description with respect to various complexity factors. The proposed contributions cover person description and feature matching. For person description, we propose a fusion strategy over two complementary features provided by appearance and motion description. We describe motion using spatiotemporal IPs, and use spatial IPs for describing appearance. For feature matching, we use Sparse Representation (SR) as a local matching method between IPs. The fusion strategy is based on the weighted sum of matched-IP votes, followed by application of the majority-vote rule. This approach is evaluated on a large public dataset, PRID 2011. The experimental results show that our approach clearly outperforms the current state of the art. (C) 2017 Elsevier B.V. All rights reserved. This paper implements a real-time, directional counting algorithm using Graphics Processing Unit (GPU) programming for the purpose of detecting and counting people. 
We use the Compute Unified Device Architecture (CUDA) as the GPU programming environment. The proposed algorithm is implemented for detecting and counting people using a single virtual line and two virtual lines, respectively, on three video streams and two GPU graphics cards, a GeForce GT 630 and a GeForce GTX 550 Ti. We first test the video streams with the algorithm on the GeForce GT 630, applying the single virtual line and two virtual lines, respectively. Then, we repeat the same procedures on the GeForce GTX 550 Ti. The experimental results show that our proposed algorithm running on the GPU can be successfully programmed and implemented for people detection and counting problems. (C) 2017 Elsevier B.V. All rights reserved. We have developed a cellular neural network formed by simplified processing elements composed of thin-film transistors. First, we simplified the neuron circuit into a two-inverter, two-switch circuit and the synapse device into a single transistor. Next, we composed the processing elements of thin-film transistors, which are promising for giant-microelectronics applications, and formed a cellular neural network from the processing elements. Finally, we confirmed that the cellular neural network can learn multiple logic functions even at a small network scale. Moreover, we verified that the cellular neural network can simultaneously recognize multiple simple alphabet letters. These results should serve as a theoretical basis for realizing ultra-large-scale integration for brain-type integrated circuits. (C) 2017 Elsevier B.V. All rights reserved. A boosting-based method of learning a feed-forward artificial neural network (ANN) with a single layer of hidden neurons and a single output neuron is presented. Initially, an algorithm called Boostron is described that learns a single-layer perceptron using AdaBoost and decision stumps. 
It is then extended to learn the weights of a neural network with a single hidden layer of linear neurons. Finally, a novel method is introduced to incorporate non-linear activation functions into artificial neural network learning. The proposed method uses a series representation to approximate the non-linearity of activation functions and learns the coefficients of the non-linear terms by AdaBoost. It adapts the network parameters by a layer-wise iterative traversal of neurons and an appropriate reduction of the problem. A detailed performance comparison of neural network models learned with the proposed methods against those learned using least-mean-squares (LMS) learning and resilient back-propagation (RPROP) is provided in this paper. Several favorable results are reported for 17 synthetic and real-world datasets with different degrees of difficulty, for both binary and multi-class problems. (C) 2017 Published by Elsevier B.V. This article explores the changing character and consequences of state authorities' evolving relationships with universities in the United States, Germany, and Norway, typical cases for different national worlds of higher education. It argues that across the three OECD countries, welfare states have strengthened market principles in university governance, yet shaped competition in different ways. This conceptualization of institutional changes makes two seemingly conflicting perspectives compatible: one diagnosing national convergence on academic capitalism and one arguing for lasting divergence across national political-economic regimes. Upon proposing ideal-typical trajectories of market-making institutional liberalization, the article explores path-dependent movement toward varieties of academic capitalism in the three countries. The findings on the socio-economic effects of this transformation suggest the need to moderate expectations about the ability of reformed higher education systems to contain contemporary societies' centrifugal forces. 
This paper directs attention to important changes in the role and funding of elite private universities in the USA. At the center of these changes is the private endowment, an institution that has for much of its history been a pivotal element of innovation and autonomy, but which has recently been tilting towards the production and reproduction of oligarchic institutional conditions. In the context of an explosion of wealth inequality in winner-take-all markets, where elite higher education provides coveted access to rare positional goods, the in-perpetuity endowment, as currently configured, allows a small group of globally leading institutions to become rentiers who can support themselves almost exclusively through the returns on their endowed capital. With that, a century-old dynamic of innovation and change in American higher education is at risk of collapsing. Where the elite private universities used to act as the head of Riesman's snake-like procession, pulling the majority of American universities along in a process of isomorphic emulation, the emerging gulf between a handful of academic rentiers and the rest of the academic body (including many world-renowned, but not super-rich, universities) threatens to cut that head off from the body, leaving the majority of the remaining institutions scrambling for survival at the mercy of the dictates of academic capitalism. We review policy options capable of taming the runaway endowment and place the issue in the larger context of the tension between Madisonian and Jeffersonian democratic imperatives. This article begins with a brief review of research on the development of ideas about the knowledge-based economy (analysed here as 'economic imaginaries') and their influence on how social forces within and beyond the academy have attempted to reorganize higher education and research in response to real and perceived challenges and crises in the capitalist order since the mid-1970s. 
This provides the historical context for three 'thought experiments' about other aspects of the development of academic capitalism. The first involves a reductio ad absurdum argument about different potential steps in the economization, marketization and financialization of education and research and is illustrated with recent changes in higher education. The second maps actual strategies of the entrepreneurial university and their role in shaping academic capitalism. The third speculates on possible forms of 'political' academic capitalism and their changing places in the interstices of the other trends posited in these thought experiments. The article ends with suggestions for a research agendum that goes beyond thought experiments to substantive empirical investigations. The article offers a socio-economic explanation of the much-discussed proliferation of evaluations, performance indicators, rankings and ratings in higher education and research. The aim is to show that these social technologies not only restructure the world of knowledge via status competitions but also serve to align academic stratification with socio-economic inequality. The theoretical framework is derived from critical analyses of the knowledge economy and from the credentialist theory of Randall Collins. Both accounts are further elaborated. With regard to the knowledge economy, the argument is that status hierarchies enable a privileged and profitable use of knowledge even where it is not feasible to establish intellectual property rights. In order to establish this argument, credentialism is extended from a theory about the labour market privileges of graduates to a theory about the social valuation of knowledge producers, knowledge products and knowledge institutions in general. 
Three main propositions are developed and defended: (1) A capitalist knowledge economy can only work as a status economy where income levels of qualified work and the exploitation of intellectual assets depend on accepted entitlements; (2) basic infrastructures of assessing the status of knowledge and knowledge workers are cultivated in higher education and research; (3) by codifying trust in knowledge, these academic (e)valuations facilitate its private appropriation in reputational capitalism. In this article, we apply Max Weber's ideal types of fief and benefice feudalism to elite and non-elite chemistry departments in the USA. We develop a theoretical analogy of academic feudalism in regard to three dimensions: power relations, engagement with companies, and the impact of structural changes on the autonomy of scholars. We use a mixed methods approach to track changes in productivity and industrial collaboration on the departmental level and researchers' understanding of research autonomy on the individual level. On the departmental level, our findings suggest that scholars located at elite departments are able to utilize federal and industrial resources to increase publications over time. On the individual level, we establish that researchers in both segments perceive their autonomy as being very high, whereas practical autonomy differs according to department. While scholars at elite departments remain relatively autonomous in practice, scholars at non-elite departments often tend to tailor their research to specific requirements to receive funding. From the 1990s onwards, economics departments in Europe have changed toward a culture of "excellence." Strong academic hierarchies and new forms of academic organization replace "institutes" and "colleges" with fully equipped "economics departments." This article seeks to demonstrate how and why hierarchization, discourses of excellence and organizational change take place in European economics departments. 
The concept of "elitism dispositif" will be developed in order to understand these changes as a discursive as well as power-related phenomenon based on rankings, on the formation of new academic classes as well as on the construction of an elite myth. An elitism dispositif is defined as a discursive power apparatus that transforms symbolic differences among researchers, constructed by rankings, into material inequalities, based on an unequal distribution of academic capital between departments and researchers. Based on an empirical study, the article will focus on a selection of economics departments in Germany and in the UK, in order to study the emergence of an "elite class" as well as the functioning of an "excellence culture" that is based on discourses of power and inequality. This article seeks to shed light on current dynamics of stratification in changing higher education and proposes an analytical perspective to account for these dynamics based on Martin Trow's work on "the analysis of status." In research on higher education, the term "stratification" is generally understood as a metaphor that describes a stable vertical order. In sectors that are experiencing considerable change, such an order is still in the making. In following Trow, we propose to look at stratification as an open ordering process that constructs verticality. We distinguish between sector and field stratification, i.e., between stratification through coercive regulation by the state and through status judgements by a wide range of stakeholders. Within the last decade, field stratification has grown in importance as governments in continental Europe have provided universities with more leeway. Specific devices (rankings, etc.) channel such judgements and construct images of how a field appears. By applying this concept to two empirical cases from German higher education, we will show how devices redefine verticality in higher education through specific field images. 
First, master rankings in business administration/economics expand the topological boundaries to include degree programs outside national sectors, raise the importance of alumni and increase the recruitment of female students. Second, the Excellence Initiative triggers the construction of a new unregulated sector of doctoral education; excellent graduate schools model themselves along the scales of the field image as selective, interdisciplinary, international, and part of a holistic university image. Before the 2000s and the buzz surrounding global rankings, many countries witnessed the emergence and development, starting in the 1970s, of academic media rankings produced primarily by press organisations. This domestic, media-based production, despite the relative lack of attention paid by the social sciences, has been progressively integrated into the functioning of higher education institutions. Examining the emergence and production of academic media rankings in two French magazines between 1976 and 1989, this paper analyses how the media has become a legitimate producer of academic rankings. A micro, sociotechnical history of this production, inspired by the theory of academic capitalism, by communication and media studies and by valuation studies, highlights three principal ideas: First, the production of academic media rankings in France relies on the ability of media organisations to involve the state and the academic institutions themselves. Second, a multidimensional market is instituted by the production of academic media rankings. Third, the concept of "configuration of values" is proposed, with three configurations identified: the configuration of value of opinion, the configuration of value of productivity and the configuration of value of activity. Academic careers are social processes which involve many members of large populations over long periods of time. 
This paper outlines a discursive perspective which looks into how academics are categorized in academic systems. From a discursive view, academic careers are organized by categories which can define who academics are (subjectivation) and what they are worth (valuation). The question of this paper is what institutional categorizations such as status and salaries can tell us about academic subject positions and their valuation. By comparing formal status systems and salary scales in France with those in the U.S., Great Britain and Germany, this paper reveals the constraints of institutional categorization systems on academic careers. Special attention is given to the French system of status categories which is relatively homogeneous and restricts the competitive valuation of academics between institutions. The comparison shows that academic systems such as the U.S. which are characterized by a high level of heterogeneity typically present more negotiation opportunities for the valuation of academics. From a discursive perspective, institutional categories, therefore, can reflect the ways in which academics are valuated in the inter-institutional job market, by national bureaucracies or in professional oligarchies. Academic mobility has existed since ancient times. Recently, however, academic mobility-the crossing of international borders by academics who then work 'overseas'-has increased. Academics and the careers of academics have been affected by governments and institutions that have an interest in coordinating and accelerating knowledge production. This article reflects on the relations between academic mobility and knowledge and identity capital and their mutual entanglement as academics move, internationally. It argues that the contemporary movement of academics takes place within old hierarchies among nation states, but such old hierarchies intersect with new academic stratifications which will be described and analysed. 
These analytical themes in the article are supplemented by excerpts from interviews of mobile academics in the UK, USA, New Zealand, Korea and Hong Kong as selected examples of different locales of academic capitalism. In the Austrian business cycle theory, monetary expansion lowers the interest rate and sends misleading relative price signals to investors, who then make investments that turn out to be unprofitable. One criticism of the theory is that if malinvestment is predictable, investors should understand their businesses well enough to see and avoid the temptation to be lured into unprofitable investments. A broader understanding of the Austrian school's framework explains why malinvestment takes place. The economy is a complex order, and while the theory explains that malinvestment will rise during the expansionary phase, it cannot identify which investment projects will eventually become unprofitable, nor can investors themselves tell ahead of time. Furthermore, applying the fallacy of composition, it may be that one investor could profitably invest based on those price signals, but all investors cannot. Monetary expansion lowers the informational content of prices, making it more likely that unprofitable investments will take place. Even if investors become more cautious, the percentage of investment projects that eventually will prove unprofitable will rise. We review the post-crisis literature that engages Austrian business cycle theory and we discuss what is being said that is correct, what is being said that is incorrect, and what is not being said that ought to be said. This last category is important due to the fact that the post-crisis literature engaging Austrian business cycle theory has not addressed advances in the theory made since the days of Mises and Hayek. We also highlight three key areas of contemporary economics where Austrian business cycle theory has the potential to do significant work. 
Advocates of digital privacy law believe it is necessary to correct failures in the market for digital privacy. Though legislators allegedly craft digital privacy regulation to protect consumers, some advocates have understated the dangers that digital privacy law may engender. This paper provides evidence for Kirzner's "perils of regulation" in the digital privacy arena. The regulatory process fails to simulate the market process, stifles entrepreneurial discovery, and creates opportunities for superfluous discovery. My research suggests that policy-makers should consider a more holistic accounting of the costs before imposing additional digital privacy regulation. Title II, the Controlled Substances Act (CSA), of the Comprehensive Drug Abuse Prevention and Control Act of 1970 (CDAPCA) created the present system of drug scheduling and regulation. This paper illustrates how the CSA created the incentives for induced 'malnovation' (innovation intended to circumvent legislation, and thus foil policymakers' intended ends) into drug markets, namely "designer drugs." As a result of this induced malnovation, drug markets have not only increased in the variance of products available that are often sold under similar street names, but there is also a tendency towards creating more dangerous drugs in an attempt to stay outside of the regulation. The "subsistence fund" was once an integral part of Austrian business cycle theory to indicate the resource constraint on the ability to complete investments. Early agrarian and industrial economies were constrained by resource availability in a manner consistent with that alluded to by the subsistence fund. This link became more tenuous as the growth of the financial economy in the twentieth century removed the apparent importance of pre-saved goods to complete investments. At this point the subsistence fund came to be used only as a metaphor and was jettisoned from Austrian business cycle theory. 
The present paper points to the merits of the subsistence fund in explaining the turning point of the business cycle as compared to alternative explanations. It also works out the deficiencies in historical expositions of the Austrian theory based on the subsistence fund, and traces the evolution of the resource constraint at the core of Austrian economists' treatment of the business cycle. In 1997, two commercial geostationary satellites experienced a new phenomenon: sustained solar array arcing. Although arcing on solar arrays in space had been expected from ground tests and space flight experiments, it was heretofore unknown that arcs into the space plasma could turn into arcs between adjacent solar array strings at high interstring voltages. Experiments validated the concept, and design changes were made to succeeding satellites that have prevented similar occurrences. Here, the dates and times of 32 sustained arcing string failures from 1997 to 2002 are reported, and NASCAP-2k charging simulations are used to try to determine thresholds necessary for the arcs to occur. Surprisingly, in some cases, conditions for sustained arcs last for a longer time when the environment is relatively benign. A plausible scenario for primary arcs in sustained arcing conditions is presented that is based on charging time histories from NASCAP-2k simulations. The charging scenario involves solar array voltage turn-on after eclipse or upon unshunting the array, and it hypothesizes lower than usual arc voltage thresholds immediately after differential string voltages are activated. Laboratory testing under both geosynchronous-Earth-orbit and low-Earth-orbit conditions is cited to show plausibility of this argument. Improved design and testing rules for spacecraft to avoid sustained arcing with a minimum of overdesign are presented. 
This paper presents a partial reconstruction of the rotational dynamics of the Philae spacecraft upon landing on comet 67P/Churyumov-Gerasimenko as part of the European Space Agency's Rosetta mission. The motion is analyzed, as are the events triggered by the failure to fix the spacecraft to the comet surface at the time of the first touchdown. Dynamic trajectories obtained by numerical simulation of a seven-degree-of-freedom mechanical model of the spacecraft are fitted to directions of incoming solar radiation inferred from in situ measurements of the electric power provided by the solar panels. The present results include estimations of the amplitude of precessional motion and a lower bound of the angular velocity of the lander immediately after its first touchdown. The present study also gives insight into the effect of the programmed turnoff of the stabilizing gyroscope after touchdown; the important dynamical consequences of a small collision during Philae's journey; and the probability that a similar landing scenario harms the operability of this type of spacecraft. Excess power degradation on global positioning system (GPS) solar arrays has been ascribed to arc-induced contamination. Arcs have also been proposed as one source of spurious signals in Los Alamos National Laboratory's radiofrequency detectors on GPS satellites. Both ideas may be confirmed by detecting such arcs with large ground-based optical and radio telescopes. Correlation of these signals with each other and/or the event times onboard would cement both hypotheses. In this paper, preliminary positive results of a coordinated campaign of large optical and radio telescope observations on tracked GPS satellites are presented. Coordinated observations were carried out with the Arecibo 305 m and Long Wavelength Array radio telescopes and a 3.5 m optical telescope. Correlations of event rates, with predictions based on the U.S. 
Air Force, Aerospace, National Reconnaissance Office AE9/AP9/SPM empirical standard trapped radiation climatology model, show that daily variations in the undispersed event rates track variations in the charging electron flux with a delay appropriate for solar cell cover-glass conduction times. Additionally, Arecibo observations at 327 MHz show narrow autocorrelation features (approximately 140 μs wide) for one GPS satellite on two different days. Ground testing of a GPS-like solar array reveals arc voltage thresholds and contamination rates sufficient to produce the observed power degradation. Implications are discussed for GPS spacecraft and other satellites. The way to solve the problem of satellite constellation design was outlined in the 1960s, recognizing the importance of the satellite coverage (continuous or periodic) function and allowing interpretation of the operation of different types of space systems. Because Earth periodic coverage optimization is extremely complex, for many years solutions to this problem have been sought among a priori fixed constellation types successfully implemented before for continuous coverage, with continuous coverage seeming to be much easier than periodic coverage. In this study, it is shown that a technological advance in satellite constellation design for periodic coverage could be achieved by considering it as a unique and separate problem. The route theory of satellite constellation design for Earth periodic coverage is introduced; it aims at creating methods for optimization of arbitrary constellations, as an alternative to the traditional approach that considers only narrow classes of constellations. The so-called route constellation is presented as a mathematical abstraction for approximation of an arbitrary satellite constellation. The elements of the theory's optimization procedure in the infinite domain of route constellations are introduced. 
Previously unknown regularities in Earth periodic coverage and in localization of optimal low-Earth-orbit satellite constellation parameters are presented and illustrated. As interest for disaggregated satellite architectures increases, better analytic tools to evaluate the architectures are necessary. This research fills a void for launch vehicle selection capable of determining the optimal assignment for a heterogeneous, multiorbit, multifunction disaggregated satellite constellation. The novel approach successfully assigns a mix of satellites to launch vehicles and simultaneously assigns launch locations and orbits using a binary integer linear program optimally solved with CPLEX. Assumptions for each vehicle are captured. A superior level of precision is achieved by interpolation for the lift capacities of each vehicle. The formulation is applied to two different mission areas: weather and navigation. For the weather mission, the formulation was applied to a previously determined multiorbit, multifunction disaggregated design. The algorithm successfully assigned 12 satellites with two different masses in four different orbits to four launch vehicles: ridesharing heterogeneous satellites going to the same orbit. The navigation mission application successfully demonstrated optimal launch manifests for 11 variations in configuration, mass, and orbit of homogeneous disaggregated navigation designs. Multimode spacecraft micropropulsion systems that include a high-thrust chemical mode and high-specific impulse electric mode are assessed with specific reference to CubeSat-sized satellite applications. Both cold-gas butane propellant and ionic liquid chemical monopropellant modes are investigated alongside pulsed plasma, electrospray, and ion electric thruster modes. These systems are studied by varying electric propulsion usage percent and calculating the payload mass fraction and thruster burn time for missions requiring 250, 500, and 1000m/s delta-V. 
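The delta-V split between chemical and electric modes in the multimode propulsion study above can be illustrated with the Tsiolkovsky rocket equation applied to each mode in sequence. The sketch below is a minimal model; the specific impulses and the single fixed inert mass fraction are illustrative assumptions, not values from the paper:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def payload_mass_fraction(dv_total, ep_fraction, isp_chem=180.0, isp_ep=800.0,
                          inert_fraction=0.3):
    """Payload mass fraction when a fraction of the total delta-V is
    performed by the high-Isp electric mode; numbers are illustrative."""
    dv_ep = ep_fraction * dv_total
    dv_chem = dv_total - dv_ep
    # Tsiolkovsky rocket equation applied to each propulsive mode in turn
    mass_ratio = (math.exp(-dv_chem / (G0 * isp_chem))
                  * math.exp(-dv_ep / (G0 * isp_ep)))
    return max(mass_ratio - inert_fraction, 0.0)

# Reference mission from the study: 500 m/s total delta-V
low_ep = payload_mass_fraction(500.0, ep_fraction=0.2)
high_ep = payload_mass_fraction(500.0, ep_fraction=0.9)
```

In this simplified form, the payload fraction always grows with electric usage because the electric mode's higher specific impulse consumes less propellant; the crossover near 70% reported in the study arises when each system's distinct inert mass is modeled, which this sketch folds into one fixed inert fraction.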
Systems involving chemical monopropellants have the highest payload mass fractions for a reference mission of 500m/s delta-V and 6U-sized CubeSat, where 1U is a 10 cm x 10 cm x 10 cm volume, for electric propulsion usage below 70% of total delta-V; whereas for higher electric propulsion usage, cold-gas thrusters deliver a higher payload mass fraction due to lower system inert mass. Due to the combination of a shared propellant for both propulsive modes, low inert mass, high electric thrust, and specific impulse near optimum for the system, the monopropellant/electrospray system has the highest mission capability in terms of delta-V for missions lasting less than 150 days. Pyroshock events in space vehicles lead to high-magnitude short-duration structural transient events that can cause malfunction or degradation of electronic components, cracks or fractures in brittle materials, local plastic deformation, or materials to experience accelerated fatigue life. As a result, analysts in the launch vehicle industry commonly define environments creating criteria for component testing or load factor development to ensure that designs are capable of surviving these events. With the conservatism in these environment definitions and the inherent high-magnitude response of the excitation, the need for knockdown factors exists. In the 1960s, Martin Marietta performed testing to develop a set of knockdown factors looking at shock attenuation with distance. With the advancement of data acquisition technology, however, the need to validate their work exists. This work looks at full-scale separation pyroshock test data collected by Orbital ATK during NASA's Ares I-X program and compares it to the historical standard provided by Martin Marietta. The available data generally suggest that the historical standard is not a conservative estimate of the attenuation with distance for shock loading. 
The implication of this occurs when using the attenuation standard for load reduction; hardware could be put at risk. Further testing to validate or update the standard should be conducted. Atmospheric density, pressure, and temperature are reconstructed along the entry trajectory of the 2012 Mars Science Laboratory mission using forebody pressure data from a single pressure sensor that was located near the stagnation point. Atmospheric reconstruction covers an altitude range of 64-9.5 km, which for Mars Science Laboratory corresponds to Mach numbers between 30 and 1.85. Stagnation pressure coefficients are modeled with numerical solutions of one-dimensional normal shock-wave relations, which ignore viscosity and heat transfer. High-temperature effects of dissociation and vibrational excitation are considered while assuming thermochemical equilibrium. This atmospheric reconstruction based on stagnation pressure data and equilibrium flow modeling is in good agreement with previous studies that used Mars Science Laboratory pressure data from multiple (seven) forebody sensors and three-dimensional nonequilibrium Navier-Stokes modeling. The combined accuracy of the present reconstruction method and the flow model is found to be 1% in hypersonic flight at Mach 28-4 and below 2% during supersonic flight. The method is independent of vehicle geometry, and stagnation pressure coefficients computed along the Mars Science Laboratory entry trajectory are shown to be valid for a wide range of Mars entry missions. In this study, the feasibility and utility of using a maneuverable nanosatellite laser guide star from a geostationary equatorial orbit have been assessed to enable ground-based, adaptive optics imaging of geosynchronous satellites with next-generation extremely large telescopes. 
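The stagnation-pressure modeling in the Mars Science Laboratory reconstruction above rests on one-dimensional normal shock-wave relations. A minimal calorically-perfect-gas sketch of that idea is the Rayleigh pitot formula, which relates the measured stagnation pressure behind the bow shock to the freestream static pressure; the paper itself uses equilibrium high-temperature chemistry, which this sketch ignores, and the γ = 1.33 default is only a rough CO2 value:

```python
def rayleigh_pitot_ratio(mach, gamma=1.33):
    """Post-shock stagnation pressure over freestream static pressure
    for a calorically perfect gas (Rayleigh pitot formula)."""
    if mach < 1.0:
        raise ValueError("normal-shock relation requires supersonic flow")
    a = ((gamma + 1.0) ** 2 * mach ** 2
         / (4.0 * gamma * mach ** 2 - 2.0 * (gamma - 1.0))) ** (gamma / (gamma - 1.0))
    b = (1.0 - gamma + 2.0 * gamma * mach ** 2) / (gamma + 1.0)
    return a * b

def freestream_pressure(p_stag, mach, gamma=1.33):
    """Invert the ratio: infer static pressure from a stagnation measurement."""
    return p_stag / rayleigh_pitot_ratio(mach, gamma)
```

At Mach 1 the formula reduces to the isentropic stagnation ratio (about 1.893 for γ = 1.4), a convenient sanity check; density and temperature then follow from the equation of state and the reconstructed trajectory, as in the paper.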
The concept for a satellite guide star was first discussed in the literature by Greenaway and Clark in the early 1990s (PHAROS: An Agile Satellite-Borne Laser Guidestar, Proceedings of SPIE, Vol. 2120, 1994, pp. 206-210), and expanded upon by Albert in 2012 (Satellite-Mounted Light Sources as Photometric Calibration Standards for Ground-Based Telescopes, Astronomical Journal, Vol. 143, No. 1, 2012, p. 8). With a satellite-based laser as an adaptive optics guide star, the source laser does not need to scatter, and is well above atmospheric turbulence. When viewed from the ground through a turbulent atmosphere, the angular size of the satellite guide star is much smaller than a backscattered source. Advances in small-satellite technology and capability allowed the revisiting of the concept on a 6U CubeSat, measuring 10 x 20 x 30 cm. It is shown that a system that uses a satellite-based laser transmitter can be relatively low power (approximately 1 W transmit power) and operated intermittently. Although the preliminary analysis indicates that a single satellite guide star cannot be used for observing multiple astronomical targets, it requires only a little propellant to relocate within the geosynchronous belt. Results of a design study on the feasibility of a small-satellite guide star have been presented, and the potential benefits to astronomical imaging and to the larger space situational awareness community have been highlighted. In this paper, an extensive investigation of the separation process of the first two stages of a carrier rocket that employs solid rocket motors for the lower stage is presented. As the reference vehicle, the VEGA rocket is used. The effect of the plumes of first-stage retrorockets on upper-stage aerodynamics and aerothermal loads is analyzed by means of wind-tunnel testing in the hypersonic wind tunnel H2K of DLR, German Aerospace Center. Aerodynamic coefficients are determined by force measurements. 
In addition, pressure distributions on the upper-stage surface and schlieren images for flow visualization are recorded. Infrared thermography measurements are conducted to determine the effect on aerothermal loads. Different flow conditions are achieved by variation of Reynolds number, retrorocket injection pressure ratio, and angle of attack. Results showed that the retrorocket plumes caused extensive flow separation around almost the entire upper stage even at low injection pressure ratios. During angle-of-attack sweeps, sudden changes in the flow structure occurred, accompanied by strong changes in aerodynamic forces, at angles of attack of approximately ±2 deg. This behavior was found to be influenced by hysteresis effects. Pose tracking control of cooperative spacecraft rendezvous and docking systems subject to coupled and uncertain dynamics is studied in this paper. An adaptive sliding mode trajectory tracking controller is proposed to ensure asymptotic convergence of pose tracking errors and adaptive estimations in a Lyapunov framework. An adaptation scheme of the controller gain is derived via the Lyapunov redesign method to improve the control performance of the closed-loop system. Sliding mode control guarantees robustness against parametric uncertainty and external disturbance. The boundary-layer technique is employed to overcome the control chattering phenomenon. A numerical example demonstrates the effectiveness of the proposed controller. Drag reduction and thermal protection of the nose tip of a hypersonic reentry vehicle are of profound importance due to severe aerodynamic drag and heating. The combinational forward-facing cavity and opposing jet configuration is an effective concept, and its performance could be partially improved when a maximum thrust nozzle contour is employed in place of the conventional cavity configuration. 
However, the novel concept would not necessarily generate higher drag and heat reduction efficiency than the conventional one. Therefore, based on the verification of a numerical method, multiobjective design optimization of the combinational novel cavity and opposing jet concept is conducted to minimize both the drag force coefficient and heat load in the current study. The jet total pressure ratio and geometric dimensions are selected as design variables, and the sampling points are obtained numerically by using an optimal Latin hypercube design method. The multi-island genetic algorithm coupled with the kriging surrogate model integrated in Isight 5.5 has been employed to establish the approximate model and solve the Pareto-optimal front. The operating conditions located on the front are proved accurate by a computational fluid dynamics method, and higher drag and heat reduction efficiency can be realized than the conventional configuration at a relatively lower jet total pressure. The roll damping dynamic stability derivative for a modified basic finner missile at transonic and supersonic speeds is calculated using quasi-stationary and unsteady computational fluid dynamics models based on the Reynolds-averaged Navier-Stokes equations. The quasi-steady solution is time independent in a noninertial body-fixed steadily rotating reference frame including additional acceleration terms because of the transformation from the steady to the moving reference frame. In the unsteady approach, forced sinusoidal oscillations about the longitudinal body axis are modeled by employing a dynamic grid with a rigid-body option and a sliding interface between the moving and stationary parts of the grid. The numerical results are compared with experimental ones from the Arnold Engineering Development Center and T-38 wind tunnels. 
It is shown that the quasi-steady approach provides a good approximation for the derivative if the spin rate is large enough, whereas the computational time is substantially shorter than for the transient model. In space surveillance, long-term orbital propagation is often required to track space objects when the measurement updates are scarce. It is a challenging uncertainty propagation problem and can be addressed by many different numerical integration rules, such as the unscented transformation, the Gauss-Hermite quadrature rule, and the cubature rule. The conventional cubature rule and the unscented transformation are only third-degree numerical rules and may not be precise enough for long-term uncertainty propagation. The Gauss-Hermite quadrature rule is accurate to arbitrary degrees. However, it suffers from the curse-of-dimensionality problem. To balance the computational complexity and accuracy, the high-degree sparse-grid quadrature rule and the high-degree spherical-radial cubature rule have been proposed in recent years. Unfortunately, the weights of these two rules may become negative, which can lead to a non-positive-definite covariance matrix and degrade the performance of uncertainty propagation. In this paper, two new compact quadrature rules together with an adaptive Gaussian-mixture model are proposed to propagate the uncertainty through the nonlinear orbital dynamics, which can achieve high degrees of propagation accuracy while maintaining all positive weights. High-Earth-orbit propagation and low-Earth-orbit propagation examples are used to demonstrate the superb performance of the proposed compact quadrature rules. An effective performance-matching design framework for solid rocket motors, tailored toward satisfying various thrust-performance requirements, is presented in this paper through an innovative and specialized general-design approach developed to evaluate the general-design parameters. 
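The third-degree unscented transformation mentioned in the uncertainty-propagation study above can be sketched compactly: a Gaussian is represented by 2n+1 weighted sigma points, which are pushed through the nonlinear dynamics and re-averaged. The sketch below is a generic textbook form with an assumed scaling parameter, not the paper's compact rules (whose point is to keep all weights positive at higher degrees as well):

```python
import numpy as np

def unscented_transform(mu, P, f, lam=1.0):
    """Third-degree unscented transformation: propagate a Gaussian (mu, P)
    through a nonlinear function f using 2n+1 sigma points."""
    n = mu.size
    S = np.linalg.cholesky((n + lam) * P)  # square root of scaled covariance
    sigma = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    w0 = lam / (n + lam)
    wi = 1.0 / (2.0 * (n + lam))
    weights = np.array([w0] + [wi] * (2 * n))
    ys = np.array([f(s) for s in sigma])
    mean = weights @ ys
    diffs = ys - mean
    cov = (weights[:, None] * diffs).T @ diffs  # sum_i w_i d_i d_i^T
    return mean, cov
```

For a linear map the transform is exact in mean and covariance, which makes a convenient correctness check; for the long propagation arcs discussed in the paper, third-degree accuracy is precisely what may fall short.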
During the general-design stage, a combination of grain web and area ratio is selected as the design variables to be adjusted to obtain the general parameters. Based on the general parameters obtained, a grain-design stage incorporates the level-set method and simulates solid-propellant evolution and internal ballistic analysis, thereby obtaining the thrust performance. Grain-design effectiveness is determined by how closely the designed solid-rocket-motor performance matches a prespecified thrust curve. An efficient sequential-field-approximate-optimization algorithm is proposed and used to minimize the average rms error between the desired and designed thrusts. Validation of the proposed design framework is carried out by evaluating motor cases possessing different thrust requirements, and the results obtained highlight the proposed framework as a practical and efficient strategy for solid-rocket-motor design. Nanosatellite clusters are one of the current and future trends in space technology. To maintain a satellite cluster over a long period of time, the nanosatellites need to mitigate the along-track drift created by the initial orbit injection. In the mass range of 1-10 kg, CubeSats have strict constraints on allowed mass, volume, and electrical power, and they are equipped with only limited sensor and actuator capabilities. A state-of-the-art miniaturized electric propulsion system is one option to realize the required orbit control capability. Due to the low thrust provided by an electric propulsion system, long orbit control maneuvers are required. Therefore, mission design is highly affected by long-term attitude and power constraints. Assuming a cyclic thrust controller, four methods are developed to allocate the satellite's power and attitude resources for orbit control. Two time division methods are presented that allocate dedicated time slots for each of the satellite's main tasks. 
Alternatively, two cosine methods are presented that use an error cone around the required attitude as a criterion for selecting when to operate the thruster. It is demonstrated that the proposed methods are capable of maintaining a realistic CubeSat formation-flying mission using low-power electric propulsion. For a CubeSat with deployable solar panels, the double cosine method shows the best performance. This paper presents an investigation of the water-impact characteristics of a space capsule model with various initial pitching angles using an in-house smoothed particle hydrodynamics solver. By solving the governing equations coupled with six-degree-of-freedom equations, the strong interaction between the space capsule and the water is properly modeled, with the kinematics and hydrodynamics accurately captured. The ditching events of the capsule model with different initial pitching angles were simulated; the computations were set up based on a time-step-size refinement study, and the results agree well with documented experimental data. The results indicate that the initial pitching angle had a noticeable effect on both the normal and longitudinal loads, due to the specific geometric characteristics and the appearance of a negative pressure region, and that the accelerations reached and maintained a steady value when the initial pitching angle exceeded 33 deg. In this case, the pressure distribution on the belly of the capsule was also noticeably influenced during water impact, as demonstrated by two different trends in the loads: the smaller the initial angle, the higher the pressure to which the probe points were subjected, and the maximum pressure first increased and later decreased. A noticeable nose-down pitching motion and a significant splashing jet caused by the impact of the capsule leading edge on the water were also observed. 
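The error-cone (cosine) criterion for CubeSat thruster scheduling described earlier can be sketched as a simple pointing test: the thruster fires only while the angle between the current thrust axis and the required thrust direction stays inside a cone of prescribed half-angle. The function name, vector representation, and threshold below are illustrative assumptions, not the paper's implementation.

```python
import math

def thruster_enabled(attitude_dir, required_dir, cone_half_angle_deg):
    """Error-cone criterion (illustrative sketch): return True while the
    angle between the current thrust axis and the required direction is
    within the given cone half-angle."""
    ax, ay, az = attitude_dir
    rx, ry, rz = required_dir
    na = math.sqrt(ax * ax + ay * ay + az * az)
    nr = math.sqrt(rx * rx + ry * ry + rz * rz)
    # Inside the cone iff the angle between the unit vectors is at most
    # the half-angle, i.e. their dot product is at least cos(half-angle).
    cos_err = (ax * rx + ay * ry + az * rz) / (na * nr)
    return cos_err >= math.cos(math.radians(cone_half_angle_deg))

# Perfect alignment lies inside any cone; a 45 deg pointing error lies
# outside a 10 deg cone.
print(thruster_enabled((1, 0, 0), (1, 0, 0), 10.0))  # True
print(thruster_enabled((1, 1, 0), (1, 0, 0), 10.0))  # False
```

The appeal of this criterion is that thrusting resumes automatically as the attitude controller drives the pointing error back inside the cone, without a fixed time-slot schedule.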
As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid-body dynamic simulations are carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations are compared to analyses based on the industry-standard Baker-type explosion model, and they are used to formulate an improved version of the model. A validation against experiment is carried out for one configuration. The standard model, which neglects an external fluid, is found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high. An improved model introduces terms that accommodate an external fluid and better account for the pressure at the fragment wall. A physics-based analysis is critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines. Practical autonomous air refueling from an unmanned air system tanker aircraft to an unmanned air system receiver aircraft will require an integrated relative navigation system and controller that is tolerant to faults. This paper develops and demonstrates a fault-tolerant structured-adaptive-model-inversion controller integrated with a reliable relative-position sensor for this autonomous air-refueling scenario using the probe-and-drogue method. The structured-adaptive-model-inversion controller does not depend on fault-detection information, yet reconfigures and provides smooth trajectory tracking and probe docking in the presence of control-effector failure. The controller also handles parameter uncertainty in the receiver-aircraft model. 
In this paper, the controller is integrated with a vision-based relative-position sensor, which tracks the relative position of the drogue, and a reference-trajectory generator. The feasibility and performance of the controller and integrated system are demonstrated with simulated docking maneuvers with a nonstationary drogue, in the presence of system uncertainties and control-effector failures. The results presented in the paper demonstrate that the integrated controller/sensor system can provide successful docking in the presence of system uncertainties for a specified class of control-effector failures. This paper presents an approach for velocity control of tilt-wing aircraft over their entire flight envelope, ranging from hovering flight to wing-borne flight. With their capability for vertical takeoff and landing operation in combination with efficient cruise flight, tilt-wing aircraft offer multiple benefits in unmanned aerial vehicle applications. Control of tilt-wing aircraft, in particular for fully automated flight, is challenging because along with their versatility come significant variations in flight mechanics characteristics. Known approaches to this problem subdivide tilt-wing flight into discrete aircraft configurations. The presented control concept omits discrete configurations and instead allows for continuous flight state transitions over a unified flight configuration space. Actuation that is unique to tilt-wing aircraft, for example, tilt angle control, is used not only as an aircraft configuration parameter but also as a full-fledged motion control device. The concept includes a map-based feedforward controller to maintain trimmed straight-line flight and uses virtual control devices for motion control independent of the flight state. Flight state transitions are conducted along state trajectories that are defined deterministically and safely with respect to flight envelope limitations. 
The controller is implemented for an unmanned tilt-wing demonstrator and evaluated in flight. Compared to prior results, the velocity response is considerably improved. Recently, there has been immense interest in using unmanned aerial vehicles for civilian operations. As a result, unmanned aerial systems traffic management is needed to ensure the safety and goal satisfaction of potentially thousands of unmanned aerial vehicles flying simultaneously. Currently, the analysis of large multi-agent systems cannot tractably provide these guarantees if the agents' set of maneuvers is unrestricted. In this paper, the use of platoons of unmanned aerial vehicles flying on air highways is proposed to impose an airspace structure that allows for tractable analysis. For the air highway placement problem, the fast marching method is used to produce a sequence of air highways that minimizes the cost of flying from an origin to any destination. The placement of air highways can be updated in real time to accommodate sudden airspace changes. Within platoons traveling on air highways, each vehicle is modeled as a hybrid system. Using Hamilton-Jacobi reachability, safety and goal satisfaction are guaranteed for all mode transitions. For a single altitude range, the proposed approach guarantees safety for one safety breach per vehicle; in the unlikely event of multiple safety breaches, safety can be guaranteed over multiple altitude ranges. This paper demonstrates the platooning concept through simulations of three representative scenarios. This work investigates a Hamiltonian structure-preserving control that uses the acceleration of solar radiation pressure for the stabilization of unstable periodic orbits in the circular restricted three-body problem. This control aims to stabilize the libration-point orbits in the sense of Lyapunov by achieving simple stability. It also preserves the Hamiltonian nature of the controlled system. 
The Hamiltonian structure-preserving control is then extended to a general case in which complex conjugate eigenvalues occur at high-amplitude orbits. High-amplitude orbits are currently of interest to the European Space Agency for future libration-point orbit missions because they require a lower insertion Delta v compared to low-amplitude orbits. Based on the design of the feedback control, the purpose of this work is to verify when the use of solar radiation pressure is feasible and to determine the structural requirements and the spacecraft's pointing accuracy. Remote sensing instrumentation onboard missions to asteroids is paramount to address many of the fundamental questions in modern planetary science. Yet in situ surface measurements provide the "ground truth" necessary to validate and enhance the science return of these missions. Nevertheless, because of the dynamical uncertainties associated with the environment near these objects, most missions spend long periods of time stationed afar. Small landers can be used much more daringly, however, and thus have already been identified as valuable assets for in situ exploration. This paper explores the potential for ballistic landing opportunities enabled by the natural dynamics found in binary asteroid systems. The dynamics near a binary asteroid are modeled by means of the circular restricted three-body problem, which provides a reasonable representation of a standard binary system. Natural landing trajectories are then sought that allow for a deployment from the exterior region and touchdown with minimum local-vertical velocity. The results show that, although landing on the main body of the system would require an effective landing system capable of dissipating excess energy and avoiding bouncing off the asteroid, the smaller companion offers the prospect of simple ballistic landing opportunities. 
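The circular restricted three-body problem invoked above (both for the libration-point orbits and for the binary asteroid model) has a standard textbook form. In a rotating, nondimensionalized frame with mass parameter \(\mu\), the equations of motion are conventionally written as

```latex
\begin{aligned}
\ddot{x} - 2\dot{y} &= \frac{\partial U}{\partial x}, \qquad
\ddot{y} + 2\dot{x} = \frac{\partial U}{\partial y}, \qquad
\ddot{z} = \frac{\partial U}{\partial z},\\[4pt]
U(x,y,z) &= \tfrac{1}{2}\left(x^{2}+y^{2}\right) + \frac{1-\mu}{r_{1}} + \frac{\mu}{r_{2}},\\[4pt]
r_{1} &= \sqrt{(x+\mu)^{2}+y^{2}+z^{2}}, \qquad
r_{2} = \sqrt{(x-1+\mu)^{2}+y^{2}+z^{2}},
\end{aligned}
```

where \(r_{1}\) and \(r_{2}\) are the distances to the primary and secondary bodies. This is the generic formulation from the literature, not a derivation specific to either paper above.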
Conformance monitoring is a key technique used to check whether an aircraft is likely to deviate or has deviated significantly from its assigned flight plan or clearance. Because of inherent uncertainties and disturbances, it is difficult to define a reasonable range for inevitable deviations during flight. The solution framework in this paper is proposed to detect nonconformance based on a comparison between the state values measured by a surveillance system and a range of expected state values. The probabilistic framework of Gaussian processes is adopted to formulate these uncertainties and to model the behavior of conforming flights. A specified interval of the probabilistic distribution of state values in the proposed model is used as the acceptable range of expected state values. Practical examples in two common level-flight scenarios are presented to demonstrate the performance of this approach in terms of detection time and false alarm rate. This paper proposes a systematic analysis and synthesis method for the optimal design of filters for L-1 adaptive output feedback controllers. In the L-1 adaptive feedback structure, the low-pass filter is the key to the tradeoff between the performance and robustness of the closed-loop system. Although the filter design for the L-1 adaptive architecture has been studied in previous papers, the need for a numerically tractable synthesis method is yet to be fulfilled. In this paper, the L-1 adaptive controller with an optimized filter is used for precision trajectory tracking control of a small quadrotor (Crazyflie) in an experimental setup. The controller demonstrates robustness to time delay, noise, disturbances, radio transmission, and uncertainties in the modeling of the quadrotor. Spacecraft formation flying and satellite cluster flight have seen growing interest in the last decade. 
However, the problem of finding the optimal debris collision avoidance maneuver for a satellite in a cluster has received little attention. This paper develops a method for choosing the timing for conducting minimum-fuel avoidance maneuvers without violating the cluster intersatellite maximal distance limits. The mean semimajor axis difference between the maneuvering satellite and the other satellites is monitored for the assessment of a maneuver possibility. In addition, three techniques for finding optimal maneuvers under the constraints of cluster keeping are developed. The first is an execution of an additional cluster-keeping maneuver at the debris time of closest approach, the second is a global all-cluster maneuver, and the third is a fuel-optimal maneuver, which incorporates the cluster-keeping constraints into the calculation of the evasive maneuver. The methods are demonstrated and compared. The first methodology proves to be the most fuel-efficient. The global maneuver guarantees boundedness of the intersatellite distances as well as fuel and mass balance. However, it is rather fuel-expensive. The last method proves to be useful at certain timings and is a compromise between fuel consumption and the number of maneuvers. This paper proposes a cooperative guidance law that attempts to reduce the variability of the relative look angle between two missiles that pursue the same maneuvering target, while not interfering with the intercept mission. The relative look angle is defined as the angle between the velocity vector of a missile and the line of sight to the other missile. The motivation for the stated problem is to assure that communication between missiles in the salvo can be performed using directional antennas, which would improve the communication performance. The guidance law design is performed using a linear quadratic optimal control approach. Three cooperative guidance laws are proposed. 
Their performance is studied and analyzed using the numerical simulation results of a planar nonlinear intercept. The authors evaluated whether a psychiatric clerkship reduces stigmatizing attitudes towards people with mental illness among medical students. A 56-item questionnaire was used to assess the attitudes of medical students towards patients with mental illness and their beliefs about its causes before and after participation in their psychiatric clerkship at a major medical school in Rio de Janeiro. Exploratory factor analysis identified four factors, reflecting "social acceptance of people with mental illness," "normalizing roles for people with mental illness in society," "non-belief in supernatural causes for mental illness," and "belief in bio-psychosocial causes for mental illness." Analysis of variance was used to evaluate changes in these factors before and after the clerkship. One significant difference was identified, with a higher score on the factor representing social acceptance after as compared to before the clerkship (p = 0.0074). No significant differences were observed on the other factors. Participation in a psychiatric clerkship was associated with greater social acceptance but not with improvement on other attitudinal factors. This may reflect ceiling effects in responses before the clerkship concerning supernatural and bio-psychosocial beliefs about causes of mental illness that left little room for change. Stigma among health care providers toward people with mental illness is a worldwide problem. This study at a large US university examined medical student attitudes toward mental illness and its causes, and whether student attitudes change as they progress in their education. An electronic questionnaire focusing on attitudes toward people with mental illness, causes of mental illness, and treatment efficacy was used to survey medical students at all levels of training. 
Exploratory factor analysis was used to establish attitudinal factors, and analysis of variance was used to identify differences in student attitudes among these factors. Independent-samples t tests were used to assess attitudes toward the efficacy of treatments for six common psychiatric and medical conditions. The study response rate was 42.6% (n = 289). Exploratory factor analysis identified three factors reflecting social acceptance of mental illness, belief in supernatural causes, and belief in biopsychosocial causes. Attitudes did not differ across stages of student education on these factors. Students who had completed the psychiatry clerkship were more likely to believe that anxiety disorders and diabetes could be treated effectively. Students reporting personal experiences with mental illness showed significantly more social acceptance, and people born outside the USA were more likely to endorse supernatural causes of mental illness. Sociocultural influences and personal experience with mental illness have a greater effect than medical education on attitudes toward people with mental illness. Psychiatric education appears to have a small but significant effect on student attitudes regarding treatment efficacy. Challenges in pursuing research during residency may contribute to the shortage of clinician-scientists. Although the importance of mentorship in facilitating academic research careers has been described, little is understood about early career research mentorship for residents. The aim of this study was to better understand the mentorship process in the context of psychiatry residency. Semi-structured interviews were conducted with experienced faculty mentors in a psychiatry department at a large academic medical center. Interviews were analyzed using inductive thematic analysis. Results from faculty interviews identified several key themes that were explored with an additional sample of resident mentees. 
Five themes emerged in our study: (1) being compatible: shared interests, methods, and working styles; (2) understanding level of development and research career goals in the context of residency training; (3) establishing a shared sense of expectations about time commitment, research skills, and autonomy; (4) residents' identity as a researcher; and (5) the diverse needs of a resident mentee. There was considerable congruence between mentor and mentee responses. There is an opportunity to improve research mentoring practice by providing guidance to both mentors and mentees that facilitates a more structured approach to the mentorship relationship. The authors compared the current knowledge and attitudes of psychiatrists, psychiatry residents, and psychiatric nurses towards the pharmacological management of acute agitation. Questionnaires were electronically distributed to all attending psychiatrists, psychiatry residents, and psychiatric nurses who were either employed by the University Department of Psychiatry and Behavioral Sciences or were staff at a 250-bed affiliated Psychiatric Hospital. Where possible, Fisher's exact test was used to compare responses to questions based on designation. Of the 250 questionnaires distributed, 112 were returned (response rate of 44.8%), of which 64 (57.1%) were psychiatric nurses, 27 (24.1%) were attending psychiatrists, and 21 (18.8%) were psychiatry residents. A significantly higher percentage of attending psychiatrists and psychiatric nurses compared to psychiatry residents thought that newer second-generation antipsychotics (SGAs) are not as effective as older first-generation antipsychotics (FGAs) for managing acute agitation (55.6, 48.4, and 9.5%, respectively; p = 0.008). The combination of intramuscular haloperidol, lorazepam, and diphenhydramine was the most preferred option chosen by all designations for the psychopharmacological management of severe agitation. 
Furthermore, a larger percentage of the psychiatric nurses, in comparison to attending psychiatrists, also chose the combination of intramuscular chlorpromazine, lorazepam, and diphenhydramine as an option for managing severe agitation; no psychiatry resident chose this option. Knowledge of evidence-based psychopharmacological management of agitation differs among attending psychiatrists, psychiatry residents, and psychiatric nurses. Although the management of agitation should be individualized and context specific, monotherapy should be considered first where applicable. The authors examined changes in attitudes and intention to work with mentally ill patients (treat, specialize, or work in the field) among nursing students after a planned intervention consisting of a mental health course. Data were collected before and after the planned intervention. The intervention was educational in nature, for third-year undergraduate nursing students. The core intervention included lectures on mental illness, encounters with people coping with mental illness, simulations, and a film on coping with mental illness. Behavioral intention to work with mentally ill patients and three dimensions of nursing students' attitudes (perceived functional characteristics, perceived danger, and value diminution of mentally ill patients) were measured before and after the intervention. The impact of the intervention on participants' attitudes and behavioral intention was measured post-intervention. One hundred and one undergraduate third-year nursing students studying at four nursing schools in Israel participated in the study. The planned intervention improved the students' attitudes towards mentally ill patients but did not improve their intention of working with them. Post-intervention, older and less religious students reported greater intention to work with mentally ill patients. 
Moreover, older and Jewish students held better attitudes towards the functional characteristics of mentally ill patients. Being older was also correlated with the perception of mentally ill patients as less dangerous, and male students ascribed more value diminution to them. Students' attitudes towards mentally ill patients and their behavioral intention to work in the psychiatry field should be addressed during initial training and in continuing education. Teaching methods should include theoretical learning on multicultural mental health practice concurrently with clinical placements. Within 10 years, the Association of American Medical Colleges envisions that graduating medical students will be entrusted by their school to perform 13 core entrustable professional activities (EPAs) without direct supervision. The authors focused on eight EPAs that appear most relevant to clinical training during the psychiatry clerkship at their institution to evaluate whether students assess themselves as making progress in EPAs during this clerkship, to see how students' self-assessments compare with the clerkship director's assessments, and to see if weaknesses in the curriculum could be found. An EPA-assessment scale was designed (ratings 1 to 5) to assess progress toward entrustment in each EPA. Medical students completed pre- and post-psychiatry-clerkship self-assessments. The clerkship director independently assessed each student's progress in EPAs utilizing assessment methods already present in the curriculum. Seventy of 116 students (60.3%) completed both pre- and post-clerkship self-assessments. These ratings increased significantly from pre- to post-clerkship, representing large effect sizes from 0.83 to 1.13. The largest mean rating increase was observed for EPA 2, Prioritize a differential diagnosis following a clinical encounter. Mean post-clerkship self-assessment ratings were significantly higher than mean post-clerkship instructor ratings for seven of the eight EPAs. 
The results suggest training during the psychiatry clerkship can contribute to the professional development of medical students in the eight EPAs studied but that student self-assessments tend to be higher than those of the clerkship director. Further study is needed of the relative value and role of student self-assessments versus faculty assessments of progress in EPAs. We aimed to determine whether residents' confidence initiating medications increased with the number of times they prescribed individual medications and to quantify the relationship between prescription frequency and gains in confidence. From July 2011 to June 2014, PGY-3 residents completed a survey of confidence levels at their psychopharmacology clinic orientation and then again 12 months later. The Emory Healthcare electronic medical record was used to identify all medications prescribed by each resident during their 12-month rotation and the frequency of these prescriptions. Confidence in initiating treatment with all medicines/medication classes increased over the 12-month period. For three of the medication classes for which residents indicated they were least confident at orientation, the number of prescriptions written during the year was significantly associated with an increase in confidence. Measuring resident confidence is a relevant and achievable outcome and provides data for educators regarding the amount of experience needed to increase confidence. A practical, reliable, and valid instrument is needed to measure the impact of the learning environment on medical students' well-being and educational experience and to meet medical school accreditation requirements. From 2012 to 2015, medical students were surveyed at the end of their first, second, and third year of studies at four medical schools. 
The survey assessed students' perceptions of the following nine dimensions of the school culture: vitality, self-efficacy, institutional support, relationships/inclusion, values alignment, ethical/moral distress, work-life integration, gender equity, and ethnic minority equity. The internal reliability of each of the nine dimensions was measured. Construct validity was evaluated by assessing relationships predicted by our conceptual model and prior research. The instrument was also assessed for sensitivity to differences over time and across institutions. Six hundred and eighty-six students completed the survey (49% women; 9% underrepresented minorities), with a response rate of 89% (range over the student cohorts, 72-100%). Internal consistency of each dimension was high (Cronbach's alpha 0.71-0.86). The instrument was able to detect significant differences in the learning environment across institutions and over time. Construct validity was supported by demonstrating several relationships predicted by our conceptual model. The C-Change Medical Student Survey is a practical, reliable, and valid instrument for assessing the learning environment of medical students. Because it is sensitive to changes over time and differences across institutions, its results could potentially be used to facilitate and monitor improvements in the learning environment of medical students. There are no studies investigating physicians' knowledge of catatonia. The authors aimed to assess and increase physicians' awareness of catatonia. A survey with clinical questions about catatonia was administered, followed by a brief online teaching module about catatonia and a post-education survey. Twenty-one psychiatry residents (response rate, 70%) and 36 internal medicine residents (response rate, 34%) participated in the pre-education survey. 
Psychiatry residents identified 75% of the correct answers about catatonia, compared to 32% correct by internal medicine residents (p < 0.001). Twenty participants (response rate, 35%) completed the online education module and second survey, which resulted in a significant improvement in correct response rates from 60 to 83% across all participants (p < 0.001). Residents' baseline knowledge of catatonia is low, particularly among internal medicine residents. A brief online module improved resident physicians' knowledge of catatonia. Educational strategies to improve recognition of catatonia should be implemented. While standardized patients (SPs) remain the gold standard for assessing clinical competence in a standardized setting, clinical case vignettes that allow free-text, open-ended written responses are more resource- and time-efficient assessment tools. It remains unknown, however, whether this is a valid method for assessing competence in the management of agitation. Twenty-six psychiatry residents participated in a randomized controlled study evaluating a simulation-based teaching intervention on the management of agitated patients. Competence in the management of agitation was assessed using three separate modalities: simulation with SPs, open-ended clinical vignettes, and self-report questionnaires. Performance on clinical vignettes correlated significantly with SP-based assessments (r = 0.59, p = 0.002); self-report questionnaires that assessed one's own ability to manage agitation did not correlate with SP-based assessments (r = -0.06, p = 0.77). Standardized clinical vignettes may be a simple, time-efficient, and valid tool for assessing residents' competence in the management of agitation. Integration of basic and clinical science is a key component of medical education reform, yet best practices have not been identified. The authors compared two methods of basic and clinical science integration in the psychiatry clerkship. 
Two interventions aimed at integrating basic and clinical science were implemented and compared in a dementia conference: a flipped curriculum and co-teaching by a clinician and a physician-scientist. The authors surveyed students following each intervention. Likert-scale responses were compared. Participants in both groups responded favorably to the integration format and would recommend that integration be implemented elsewhere in the curriculum. Survey response rates differed significantly between the groups, and student engagement with the flipped curriculum video was limited. A flipped curriculum and co-teaching by a clinician and a physician-scientist are two methods of integrating basic and clinical science in the psychiatry clerkship. Student learning preferences may influence engagement with a particular teaching format. The aim of this study was to determine the perceived educational impact of a resident-led psychiatry research newsletter ('Research Watch') on the psychiatry residents at the authors' residency program. An anonymous, voluntary paper questionnaire was distributed to all psychiatry residents at the program. The survey inquired about the degree of exposure (quantified as an 'exposure index') and contribution to the newsletter. A set of questions asked residents to estimate how much of their improvement they attributed to the influence of the newsletter, rating the attribution between 0 and 100%, in the areas of interest in scholarly activities/research, knowledge of current psychiatric research, and participation in scholarly activities/research. The survey also inquired whether the newsletter had any impact on their clinical practice. Of the 29 residents in the program who received the survey, 27 (93%) responded. 
The percentage of residents reporting a perceived non-zero impact of the newsletter on specific areas of improvement was as follows: interest in scholarly activities/research (44%), knowledge of current psychiatric research (48%), participation in scholarly activities/research (40%), and clinical practice (40%). The exposure index correlated significantly and positively with self-reported percentage attribution for knowledge (r = 0.422, p = 0.028) and with self-reported impact on clinical practice (r = 0.660, p < 0.001), and degree of contribution correlated significantly and positively with self-reported percentage attribution for knowledge (r = 0.488, p = 0.010). A resident-led research newsletter can have a positive perceived impact on residents' interest, knowledge, and participation in research, as well as a positive perceived impact on clinical practice. Quality improvement to optimize workflow has the potential to mitigate resident burnout and enhance patient care. This study applied mixed methods to identify factors that enhance or impede workflow for residents performing emergency psychiatric consultations. The study population consisted of all psychiatry program residents (55 eligible, 42 participating) at the Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles. The authors developed a survey through iterative piloting, surveyed all residents, and then conducted a focus group. The survey included elements hypothesized to enhance or impede workflow, and measures pertaining to self-rated efficiency and stress. Distributional and bivariate analyses were performed. Survey findings were clarified in focus group discussion. This study identified several factors subjectively associated with enhanced or impeded workflow, including difficulty with documentation, the value of personal organization systems, and struggles to communicate with patients' families. 
Implications for resident education are discussed. This study examined physician residents' and fellows' knowledge of eating disorders and their attitudes toward patients with eating disorders. Eighty physicians across disciplines completed a survey, for an overall response rate of 64.5%. Participants demonstrated limited knowledge of eating disorders and reported minimal comfort levels treating patients with eating disorders. Psychiatry discipline (p = 0.002), eating disorder experience (p = 0.010), and having >= 4 eating-disorder continuing medical education credits (p = 0.037) predicted better knowledge of anorexia nervosa but not bulimia nervosa. Psychiatry residents (p = 0.041) and those who had treated at least one eating disorder patient (p = 0.006) reported significantly greater comfort treating patients with eating disorders. These results suggest that residents and fellows from this sample may benefit from training to increase the awareness and confidence necessary to treat patients with eating disorders. Sufficient knowledge and comfort are critical since physicians are often the first health care providers to have contact with patients who have undiagnosed eating disorders. This paper is devoted to Laurentius Eichstadt, a Baltic astronomer of the generation between Tycho and Hevelius. As a calendar-maker, Eichstadt used and tested the astronomical tables and the planetary theories of his elder contemporaries, Longomontanus and Kepler; as a town physician and gymnasium professor, he taught mathematics and astronomy alongside medicine and natural philosophy in Stettin and Gdansk. Eichstadt's indefatigable engagement with theory, practice, and teaching is marked by his continuous reassessment, adjustment, and revision of views in astronomy, physics, and metaphysics, aimed at bringing these fields into better agreement with each other and with empirical observation. 
Eichstadt's critical attitude did not prevent him from remaining committed to his scholastic legacy. Indeed, his creative reworking and teaching of astronomy and philosophy bear witness to the long vitality of the northern European scientific tradition rooted in Melanchthonian literacy and Aristotelian philosophy. The work and conceptions of this participant in the astronomical debates of the early seventeenth century offer us an insight into the complex interplay of technical astronomy and metaphysical discourse in a time of transition from a geometrical approach to planetary theory resting on Aristotelian metaphysics to a post-Keplerian physical-mathematical science unifying heavens and earth. This paper distinguishes four perspectives in the process of reception of Copernicanism in colonial Rio de la Plata: (1) the discussion of the systems of the world in the University of Cordoba by the Jesuits until 1767, (2) the treatment of this topic by the Franciscans in Cordoba and in their convent school in Buenos Aires, (3) the teaching by the secular clergy in the Colegio de San Carlos in the same city, and (4) the celebration of Copernicus by the enlightened naval engineer Pedro Cervino in the Nautical School of the Consulado de Buenos Aires. The examination of these cases on the basis of manuscript sources and colonial printings shows that the reception of Copernican theory was an erratic process rich in incident. Historical data play a crucial role in fields such as astronomy, providing accounts of past celestial phenomena that may not be predictable, such as the appearance of novae, meteors, and comets. Astronomers have made use of such information in their efforts to reconstruct past events; however, the reliability of these data and the willingness of astronomers to accept them have created difficulties. 
The expected comet of 1848 provides an example of the tendency of astronomers to utilize only the medieval data that fit their preexisting theories while discarding observations that challenged their assumptions. The resulting failure of the predicted comet to appear demonstrates the limitations of this selective use of historical data. Analysis of fifteenth- and sixteenth-century solar tables that have been used in navigation, and of declination tables derived from them, reveals that much of the accepted history needs revision. In particular, the celebrated astronomical tables of Abraham Zacut contain systematic and accidental errors by which later tables may be identified as derived from them. The tables of Pedro Nunez and Martin Cortes are not their own, but were copied from those of Philipp Imsser, which are usually, but incorrectly, attributed to his predecessor Johannes Stoffler. William Bourne used an incorrect conversion table to calculate his declinations and had to manipulate his data to conceal the error. Marten Everaert's Dutch translation of Pedro de Medina has declination tables that fit neither the Alfonsine nor the Copernican scheme and are internally inconsistent. Also, the declination tables of Lucas Jansz. Waghenaer need amendment, and those of Willem Jansz Blaeu are older than he himself suggests. The absolute accuracy of the declination tables is assessed in retrospect by comparison with modern celestial mechanics programmes. The asterism of Veronica's Veil reported by Rheita in 1645 is argued to be real, though inconspicuous; not a star cluster; and not correlated with Johannes Zahn's picture published in 1686 and widely redistributed. The asterism is most likely the rectangle bounded by rho Leonis, beta Sextantis, omicron Leonis, and iota Hydrae, with some scattered interior stars interpreted as a face. 
The aim of this article is to propose a method for developing a concept map in a web-based environment to identify the concepts in which a student is deficient after learning by traditional methods. The Direct Hashing and Pruning algorithm was used to construct the concept map. Redundancies within the concept map were removed to generate a learning sequence. A prototype learning system was developed from this learning sequence using the Android Emulator. For the analysis, 42 learners were asked to learn the course Java Programming, taught at the graduation level. A posttest was conducted after learning for evaluation purposes. Multiple regression analysis was applied to these results to develop regression equations for the proposed method of learning. The Statistical Package for the Social Sciences software was used for statistical analysis. It was found that posttest results are directly proportional to the quality of traditional learning. Stronger students required less time in constructing the prototype system. Further, concept mapping was found to have a positive impact on the proposed method of learning. When the number of concepts is large, a learning sequence among them can be generated using the proposed method. This learning sequence can be used to identify the concepts for which a student needs additional learning. This case study examines preservice teachers' integration of technology in teaching various subject domains. It aims to gain an in-depth understanding of preservice teachers' pedagogical patterns for teaching through the theoretical lens of technological pedagogical and content knowledge. Multiple data sources were collected in a teacher education institution in Hong Kong. The teachers' pedagogical patterns vary depending on their instructional decisions, which are affected by individual preferences, various subject cultures, and individual school settings. 
The patterns reflected various forms of technological pedagogical and content knowledge development in teaching different subjects. Implications for the preparation of preservice teachers' pedagogy, teacher preparation, and development are also discussed. Observational tutoring has been found to be an effective method for teaching a variety of subjects by reusing dialogue from previous successful tutoring sessions. While it has been shown that content can be learned through observational tutoring, it has yet to be examined whether a secondary behavior such as goal setting can be influenced. The present study investigated whether observing virtual humans engaging in a tutoring session on a difficult learning topic, rotational kinematics, with embedded positive goal-oriented dialogue would increase knowledge of the material and produce a shift in an observer's goal orientation from performance avoidance goal orientation (PAVGO) to learning goal orientation (LGO). Learning gains were observed from pretest to posttest on knowledge retention tests. Significant negative changes from pretest to posttest occurred across conditions for LGO. Additionally, significant increases from PAVGO pretest to posttest were observed in the control condition; however, PAVGO did not significantly change in the experimental condition. This study examined the relationship between Internet dependence in university students and forms of coping with stress and self-efficacy and investigated whether Internet dependence varies according to such variables as sex roles, gender, and duration of Internet use. The study was performed with 632 university students. The Internet Addiction Test, the Coping With Stress Scale, the General Self-Efficacy Scale, the Bem Sex Role Inventory Test, and a Personal Information Form were used in the collection of data. Results revealed a significant negative correlation between Internet dependence and the seeking-social-support form of coping with stress. 
A significant negative correlation was also determined between Internet dependence and self-efficacy. In addition, university students' Internet addiction scores varied significantly depending on sex roles. An emerging body of research examines language learning of young children from experiences with digital storybooks, but little is known about the ways in which specific components of digital storybooks, including interactive elements, may influence language learning. The purpose of the study was to examine the incidental word learning and story comprehension of preschool children after interactions with interactive and noninteractive versions of a digital storybook. Thirty preschool children were randomly assigned to one of two experimental conditions: interactive, in which the story text was presented aloud and interactive features were present, and noninteractive, in which the story text was presented aloud with no interactive features. After three sessions with the digital storybook, no group differences were observed between conditions on measures of word learning or story comprehension. Children in both groups demonstrated some learning of new words; however, gains were minimal, approximately one new word per child. This study contributes preliminary data indicating that interactive components of digital storybooks may not be sufficient to facilitate language learning. Instruction, rather than incidental exposure, is likely necessary for meaningful language learning from digital storybooks. The aim of this study was to evaluate the effects of various realism levels of a talking head on students' emotions in pronunciation learning. Four talking-head characters with varying levels of realism were developed and tested: a nonrealistic three-dimensional character, a realistic three-dimensional character, a two-dimensional character, and an actual human character. 
Students' emotional levels were measured with the learning component of the Achievement Emotions Questionnaire following their self-paced exploration of the instructional material. The research method was a quasi-experimental design, and the data were analyzed using ANCOVA. The sample consisted of 150 Semester 1 students from four community colleges. The findings revealed significant differences (p < .05) in emotion test outcomes among groups who received different levels of realism for the talking head in the pronunciation learning app. In conclusion, this study recommends a nonrealistic three-dimensional, two-dimensional, or actual human character as the most suitable design for the talking head in instructional materials. This article examines how student movements between traditional public schools (TPSs) and charters, both brick-and-mortar and cyber, may be associated with both racial isolation and poverty concentration. Using student-level data from the universe of Pennsylvania public schools, this study builds upon previous research by specifically examining student transfers into charter schools, disaggregating findings by geography. We find that, on average, the transfers of African American and Latino students from TPSs to charter schools were segregative. White students transferring within urban areas transferred to more racially segregated schools. Students from all three racial groups attended urban charters with lower poverty concentration. Two federal campus-based financial aid programs, the Supplemental Educational Opportunity Grant (SEOG) and the Federal Work-Study Program (FWS), combine to provide nearly US$2 billion in funding to students with financial need. However, the allocation formulas have changed little since 1965, resulting in community colleges and newer institutions getting much smaller awards than long-standing private colleges with high costs of attendance. 
I document the trends in campus-level allocations over the past two decades and explore several different methods to reallocate funds based on current financial need while limiting the influence of high-tuition colleges. In 2009, a seldom-used policy lever emerged in the form of a competitive grant program, Race to the Top (RTTT), and sparked a flurry of state-led initiatives as states vied for federal dollars. The current study examines the policymaking context that surrounded these events and propelled Tennessee to the top of the race among the states. Through interviews with legislators and bureaucrats, I analyze the state-level processes instigated by a federal program in which all but four states participated but fewer than half were winners. My examination details the parallels between the RTTT guidelines, Tennessee's efforts, including the Special Session in the General Assembly, and the state's plan for improving education as outlined in its RTTT application. In this cross-sectional study, we examined a matched sample of 924 educators' perceptions of severity of bullying and harassment and school climate prior to (Wave 1, n = 435) and following (Wave 2, n = 489) the implementation of New York's anti-bullying and harassment legislation, the Dignity for All Students Act (DASA). Alignment with DASA mandates predicted educator perceptions of (a) less severe bullying and harassment, (b) positive school climate, and (c) less need for improvement in school anti-bullying practices. The relations did not differ before and after the implementation of DASA, suggesting that implementing practices aligned with the legislation was associated with positive outcomes, although the relations may not be due to the mandate itself. A bipyridinium derivative bearing a benzocrown ether, in which the phenyl unit of the benzocrown ether is directly bound to the N-position of the bipyridinium unit, has been synthesized. 
The compound showed a yellow color associated with an intramolecular charge transfer (CT), which was affected by the presence of alkali and alkaline earth metal ions. An unusual CT response to K+ for 1 was observed and could be applicable for K+ sensing. (C) 2017 Elsevier Ltd. All rights reserved. The voltage-gated sodium channel Na(v)1.7 is a genetically validated target for the treatment of pain, with gain-of-function mutations in man eliciting a variety of painful disorders and loss-of-function mutations affording insensitivity to pain. Unfortunately, drugs thought to garner efficacy via Na(v)1.7 inhibition have undesirable side effect profiles due to their lack of selectivity over channel isoforms. Herein we report the discovery of a novel series of orally bioavailable arylsulfonamide Na(v)1.7 inhibitors with high levels of selectivity over Na(v)1.5, the Na(v) isoform responsible for cardiovascular side effects, through judicious use of parallel medicinal chemistry and physicochemical property optimization. This effort produced inhibitors such as compound 5 with excellent potency, selectivity, behavioral efficacy in a rodent pain model, and efficacy in a mouse itch model suggestive of target modulation. The potent and selective prostanoid EP4 receptor antagonist CJ-042794 was radiolabeled with F-18 and evaluated for imaging EP4 receptor expression in cancer with positron emission tomography (PET). The fluorination precursor, arylboronic acid pinacol ester 4, was prepared in 4 steps with 42% overall yield. F-18-CJ-042794 was synthesized via a copper-mediated F-18-fluorination reaction followed by base hydrolysis, and was obtained in 1.5 +/- 1.1% (n = 2) decay-corrected radiochemical yield. PET/CT imaging and biodistribution studies in mice showed that F-18-CJ-042794 was excreted through both renal and hepatobiliary pathways with significant retention in blood. 
The EP4-receptor-expressing LNCaP prostate cancer xenografts were clearly visualized in PET images, with a 1.12 +/- 0.08 %ID/g (n = 5) uptake value and a moderate tumour-to-muscle contrast ratio (2.73 +/- 0.22) at 1 h post-injection. However, the tumour uptake was nonspecific, as it could not be blocked by co-injection of the cold standard, precluding the application of F-18-CJ-042794 for PET imaging of EP4 receptor expression in cancer. We report a series of tranylcypromine analogues containing a fluorine in the cyclopropyl ring. A number of compounds with additional m- or p-substitution of the aryl ring were micromolar inhibitors of the LSD1 enzyme. In cellular assays, the compounds inhibited the proliferation of acute myeloid leukemia cell lines. Increased levels of the biomarkers H3K4me2 and CD86 were consistent with LSD1 target engagement. Three potential chromogenic enzymatic probes, each possessing a self-immolative spacer unit, were synthesised for the purpose of detecting L-alanylaminopeptidase activity in microorganisms. An Alizarin-based probe was the most effective, allowing several species to generate strongly coloured colonies in the presence of metal ions. Quaternary ammonium compounds (QACs) are ubiquitous antiseptics whose chemical stability is both an aid to prolonged antibacterial activity and a liability to the environment. Soft antimicrobials, such as QACs designed to decompose in relatively short times, show promise to kill bacteria effectively without leaving a lasting footprint. We have designed and prepared 40 soft QAC compounds based on both ester and amide linkages, in a systematic study of mono-, bis-, and tris-cationic QAC species. Antimicrobial activity, red blood cell lysis, and chemical stability were assessed. 
Antiseptic activity was strong against a panel of six bacteria including two MRSA strains, with low micromolar activity seen for many compounds; amide analogs showed activity superior to that of ester analogs, with one bisQAC displaying an average MIC of approximately 1 μM. For a small subset of highly bioactive compounds, hydrolysis rates in pure water as well as in buffers of pH 4, 7, and 10 were tracked by LCMS, and indicated good stability for the amides, while rapid hydrolysis was observed for all compounds under acidic conditions. This is the first report of the production of biofilm inhibitory compound(s) (BIC) from Bacillus subtilis BR4 against Pseudomonas aeruginosa (ATCC 27853), coupled with production optimization. In order to achieve this, combinations of media components were formulated by employing statistical tools such as Plackett-Burman analysis and central composite rotatable design (CCRD). It was evident that at 35 ml L-1 glycerol and 3.8 g L-1 casamino acid, anti-biofilm activity and production of extracellular protein significantly increased, by 1.5-fold and 1.2-fold, respectively. These results corroborate that the combination of glycerol and casamino acid plays a key role in the production of BIC. Further, metabolic profiling of BIC was carried out using liquid chromatography/tandem mass spectrometry (LC-MS/MS) based on m/z values. The presence of Stigmatellin Y was predicted, with a monoisotopic neutral mass of 484.2825 Da. In support of the optimization study, higher production of BIC was confirmed in the optimized-media-grown BR4 (OPT-BR4) than in the ideal-media-grown BR4 (ID-BR4) by LC-MS/MS analysis. PqsR in P. aeruginosa is a potential target for anti-virulence therapy. A molecular docking study revealed that Stigmatellin Y interacts with PqsR in an orientation similar to that of the cognate signal (PQS) and a synthetic inhibitor. 
In addition, Stigmatellin Y was found to interact with four additional amino acid residues of PqsR, establishing strong affinity. Stigmatellin Y might thus act as a competitor of PQS, disrupting PQS-PqsR-mediated communication in P. aeruginosa. The present investigation thus opens new avenues for developing anti-virulence therapy against Pseudomonas. It is well known that parental and community-based support are each related to healthy development in lesbian, gay, bisexual, transgender, queer, and questioning (LGBTQ) youth, but little research has explored the ways these contexts interact and overlap. Through go-along interviews (a method in which participants guide the interviewer around the community) with 66 youth in British Columbia, Massachusetts, and Minnesota, adolescents (aged 14-19 years) reported varying extents of overlap between their LGBTQ experiences and their parent-youth experiences; parents and youth each contributed to the extent of overlap. Youth who reported high overlap reported little need for resources outside their families but found resources easy to access if wanted. Youth who reported little overlap found it difficult to access resources. Findings suggest that in both research and practice, considering the extent to which youth feel they can express their authentic identity in multiple contexts may be more useful than simply evaluating parental acceptance or access to resources. The onset of acute and chronic illness in children frequently triggers episodes of stress and posttraumatic stress symptoms (PTSS) in mothers. Mothers of children with type 1 diabetes (T1D) consistently report high levels of stress and PTSS. The purpose of this integrative review was to review and synthesize the published empirical research. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to conduct this integrative literature review. 
A total of 19 studies were identified from a sample of 128. Stress and PTSS were prevalent in mothers of youth with T1D. While PTSS was most severe at disease onset, symptoms often persisted 1 to 5 years after diagnosis. The diagnosis of T1D in a child was traumatic for mothers. Stress and PTSS in mothers adversely affected children's health. Management of stress symptoms in mothers may lead to improved behavioral and metabolic outcomes in children. Spina bifida (SB) is the second most common birth defect worldwide. Mothers of children with SB face extraordinary challenges due to the complicated conditions and disability of their children. Little is known about the impact of these challenges on the mothers' well-being, particularly in Middle Eastern culture, where chronic illness and disability are perceived as a stigma, and care of disabled children has traditionally been the responsibility of the mother. The aim of this study was to illuminate mothers' lived experience of having a child with SB in Palestine. Twenty Arab-Muslim mothers living in Palestine were purposefully recruited from several rehabilitation centers in Palestine and were interviewed in 2014. The transcribed interviews were analyzed according to phenomenological hermeneutics. The mothers' experiences were described in the main theme: From feeling broken to looking beyond broken. Four themes were interwoven: living with constant anxiety, living with uncertainty, living with a burden, and living with a difficult life situation. These findings highlight the burden and resilience of the Arab-Muslim Palestinian mothers while striving to maintain the well-being of the whole family as well as facilitating the child's welfare. Family satisfaction is an important outcome of palliative care and is a critical measure for health care professionals to address when assessing quality of care. 
The FAMCARE-2 is a widely used measure of family satisfaction with the health care received by both patient and family in palliative care. In this study, a team of Italian researchers culturally adapted the FAMCARE-2 to the Italian language and psychometrically tested the instrument by measuring the satisfaction of 185 family caregivers of patients admitted to two palliative care services. The FAMCARE-2 showed excellent levels of internal consistency (Cronbach's alpha coefficient = .96) and test-retest reliability (r = .98, p < .01). A confirmatory factor analysis showed a single-factor structure with good fit. Satisfaction levels were significantly associated with female caregiver gender, lower caregiver education, length of patient care, and place of assistance and death. This scale can help health care professionals identify which aspects of care need improvement and enable family caregivers to manage their challenging role. Family caregivers of patients with moderate-to-severe traumatic brain injury (TBI) regularly visit the patient during the hospital stay and are involved in their care. As impairments caused by the TBI often preclude the patient from stating preferences for visitors, family caregivers often make decisions about visitors on the patient's behalf during the hospital stay. However, limited literature investigates this process. The purpose of this study was to describe family caregivers' experience of visitors while the patient with moderate-to-severe TBI is hospitalized. The authors used grounded theory to conduct 24 interviews with 16 family caregivers. Findings showed family caregivers manage welcome and unwelcome visitors throughout the hospital stay to protect the patient's physical and emotional safety and to conserve their own energy. Staff had limited involvement in the management of unwelcome visitors. 
These findings have practice implications for educating hospital staff about providing family nursing and assisting families to manage unwelcome visitors, and policy implications for improving hospital visiting policies. BACKGROUND: Psychological vulnerability is related to cognitive beliefs that reflect dependence on one's sense of self-worth and to maladaptive functioning. It is a disadvantage that leaves people less protected when facing negative life experiences. OBJECTIVE: The purpose of this study was to adapt and test the psychometric properties of the Psychological Vulnerability Scale in a sample of 267 Portuguese higher education students. DESIGN: A psychometric study of the Psychological Vulnerability Scale, after translation into Portuguese, was performed with a convenience sample of higher education students. Participants were asked to fill in a sociodemographic questionnaire, the Psychological Vulnerability Scale, the Brief Symptom Inventory, and a one-item question about the Perception of Vulnerability. RESULTS: The mean age of the participants was 20.5 years (SD = 3.3). A factor analysis confirmed the original one-factor structure, explaining 42.9% of the total variance. The Psychological Vulnerability Scale showed adequate internal consistency and excellent test-retest stability. Convergent validity was confirmed by positive correlations with the Brief Symptom Inventory and the Perception of Vulnerability. CONCLUSIONS: Overall, the Psychological Vulnerability Scale showed good validity, reliability, and stability over time. The Psychological Vulnerability Scale is now ready to be used by practitioners and researchers to measure psychological vulnerability among Portuguese higher education students. These data add to the body of knowledge of psychiatric and mental health nursing and provide support for the use of the Psychological Vulnerability Scale in higher education students. 
BACKGROUND: This case illustrates previously undiagnosed dissociative identity disorder (DID) in a middle-aged female with extensive childhood trauma, who was high functioning prior to a trigger that caused a reemergence of her symptoms. The trigger sparked a dissociative state, a suicide attempt, and subsequent inpatient psychiatric hospitalization. OBJECTIVE: Practitioners should include undiagnosed DID in their differential diagnosis and screen for it in patients with episodic psychiatric hospitalizations refractory to the standard treatments for previously diagnosed mental illnesses. DESIGN: Case study. RESULTS: During hospitalization, the diagnosis of DID became apparent, and treatment included low-dose risperidone, mirtazapine, sertraline, unconditional positive regard, normalization of her dissociative states in an attempt to decrease her anxiety during treatment, and documentation for the patient via written notes following interviews. CONCLUSION: These methods helped her come to terms with the diagnosis and allowed the treatment team to teach her coping skills to lessen the impact of dissociative states following discharge. Background: Relatively little is known about the effects of mode of delivery on long-term health-related quality-of-life outcomes. Furthermore, no previous study has expressed these outcomes in preference-based (utility) metrics. Methods: The study population comprised 2,161 mothers recruited from a prospective population-based study in the East Midlands of England encompassing live births and stillbirths between 32(+0) and 36(+6) weeks' gestation and a sample of term-born controls. Perinatal data were extracted from the mothers' maternity records. Health-related quality-of-life outcomes were assessed at 12 months postpartum, using the EuroQol Five Dimensions (EQ-5D) measure, with responses to the EQ-5D descriptive system converted into health utility scores. 
Descriptive statistics and multivariable analyses were used to estimate the relationship between the mode of delivery and health-related quality-of-life outcomes. Results: The overall health-related quality-of-life profile of the women in the study cohort mirrored that of the English adult population as revealed by national health surveys. A significantly higher proportion of women delivering by cesarean delivery reported some, moderate, severe, or extreme pain or discomfort at 12 months postpartum than women undergoing spontaneous vaginal delivery. Multivariable analyses, using the ordinary least squares estimator, revealed that, after controlling for maternal sociodemographic characteristics, cesarean delivery without maternal or fetal compromise was associated with a significant EQ-5D utility decrement in comparison to spontaneous vaginal delivery among all women (-0.026; p = 0.038) and among mothers of term-born infants (-0.062; p < 0.001). Among mothers of term-born infants, this result was replicated in models that controlled for all maternal and infant characteristics (utility decrement of -0.061; p < 0.001). The results were confirmed by sensitivity analyses that varied the categorization of the main exposure variable (mode of delivery) and the econometric strategy. Conclusions: Among mothers of term-born infants, cesarean delivery without maternal or fetal compromise is associated with poorer long-term health-related quality of life in comparison to spontaneous vaginal delivery. Further longitudinal studies are needed to understand the magnitude, trajectory, and underpinning mechanisms of health-related quality-of-life outcomes following different modes of delivery. Background: Given increased public reporting of the wide variation in hospital obstetric quality, we sought to understand how women incorporate quality measures into their selection of an obstetric hospital. 
Methods: We surveyed 6,141 women through Ovia Pregnancy, an application used by women to track their pregnancy. We used t tests and chi-square tests to compare response patterns by age, parity, and risk status. Results: Most respondents (73.2%) emphasized their choice of obstetrician/midwife over their choice of hospital. Over half of respondents (55.1%) did not believe that their choice of hospital would affect their likelihood of having a cesarean delivery. While most respondents (74.9%) understood that quality of care varied across hospitals, few prioritized reported hospital quality metrics. Younger women and nulliparous women were more likely to be unfamiliar with quality metrics. When offered a choice, only 43.6% of respondents reported that they would be willing to travel 20 additional miles from their home to deliver at a hospital with a 20 percentage point lower cesarean delivery rate. Discussion: Women's lack of interest in available quality metrics is driven by differences in how women and clinicians/researchers conceptualize obstetric quality. Quality metrics are reported at the hospital level, but women care more about their choice of obstetrician and the quality of their outpatient prenatal care. Additionally, many women do not believe that a hospital's quality score influences the care they will receive. Presentations of hospital quality data should more clearly convey how hospital-level characteristics can affect women's experiences, including the fact that their chosen obstetrician/midwife may not deliver their baby. Background: Friedman, the United Kingdom's National Institute for Health and Care Excellence (NICE), and the American College of Obstetricians and Gynecologists/Society for Maternal-Fetal Medicine (ACOG/SMFM) support different active labor diagnostic guidelines. 
Our aims were to compare likelihoods for cesarean delivery among women admitted before vs in active labor by diagnostic guideline (within-guideline comparisons) and between women admitted in active labor per one or more of the guidelines (between-guideline comparisons). Design: Active labor diagnostic guidelines were retrospectively applied to cervical examination data from nulliparous women with spontaneous labor onset (n = 2,573). Generalized linear models were used to determine outcome likelihoods within- and between-guideline groups. Results: At admission, 15.7%, 48.3%, and 10.1% of nulliparous women were in active labor per the Friedman, NICE, and ACOG/SMFM diagnostic guidelines, respectively. Cesarean delivery was more likely among women admitted before vs in active labor per the Friedman (AOR 1.75 [95% CI 1.08-2.82]) or NICE (AOR 2.55 [95% CI 1.84-3.53]) guideline. Between guidelines, cesarean delivery was less likely among women admitted in active labor per the NICE guideline, as compared with the ACOG/SMFM guideline (AOR 0.55 [95% CI 0.35-0.88]). Conclusion: Many nulliparous women are admitted to the hospital before active labor onset. These women are significantly more likely to have a cesarean delivery. Diagnosing active labor before admission or before intervention to speed labor may be one component of a multi-faceted approach to decreasing the primary cesarean rate in the United States. The NICE diagnostic guideline is more inclusive than the Friedman or ACOG/SMFM guidelines, and its use may be the most clinically useful for safely lowering cesarean rates. Background: The United Kingdom's National Institute for Health and Care Excellence (NICE) recently published recommendations that support planned home birth for low-risk women. The American College of Obstetricians and Gynecologists (ACOG) remains wary of planned home birth, asserting that hospitals and birthing centers are the safest birth settings. 
Our objective was to examine opinions of obstetricians in Salt Lake City, Utah, about home birth in the context of rising home birth rates and conflicting guidelines. Methods: Participants were recruited through online searches of Salt Lake City obstetricians and through snowball sampling. We conducted individual interviews exploring experiences with and attitudes toward planned home birth and the ACOG/NICE guidelines. Results: Fifteen obstetricians who varied according to years of experience, location of medical training, sex, and subspecialty (resident, OB/GYN, maternal-fetal medicine specialist) were interviewed. Participants did not recommend home birth but supported a woman's right to choose her birth setting. Obstetrician opinions about planned home birth were shaped by misconceptions of home birth benefits, confusion surrounding the scope of care at home and among home birth providers, and negative transfer experiences. Participants were unfamiliar with the literature on planned home birth and/or viewed the evidence as unreliable. Support for ACOG guidelines was high, particularly in the context of the United States health care setting. Conclusion: Physician objectivity may be limited by biases against home birth, which stem from limited familiarity with published evidence, negative experiences with home-to-hospital transfers, and distrust of home birth providers in a health care system not designed to support home birth. Background: Refugee women experience a higher incidence of childbirth complications and poor pregnancy outcomes. Resettled refugee women often face multiple barriers accessing pregnancy care and navigating health systems in high-income countries. Methods: A community-based model of group pregnancy care for Karen women from Burma was co-designed by health services in consultation with Karen families in Melbourne, Australia. 
Focus groups were conducted with women who had participated to explore their experiences of using the program, and whether it had helped them feel prepared for childbirth and going home with a new baby. Results: Nineteen women (average time in Australia 4.3 years) participated in two focus groups. Women reported feeling empowered and confident through learning about pregnancy and childbirth in the group setting. The collective sharing of stories in the facilitated environment allowed women to feel prepared, confident, and reassured, with the greatest benefits coming from storytelling with peers and developing trusting relationships with a team of professionals with whom women were able to communicate in their own language. Women also discussed the pivotal role of the bicultural worker in the multidisciplinary care team. Challenges in the hospital during labor and birth were reported and included lack of professional interpreters and a lack of privacy. Conclusion: Group pregnancy care has the potential to increase refugee-background women's access to pregnancy care and information, sense of belonging, cultural safety using services, preparation for labor and birth, and care of a newborn. Background: Repeat cesarean delivery is the single largest contributor to the escalating cesarean rate worldwide. Approximately 80 percent of women with a past cesarean are candidates for vaginal birth after a cesarean (VBAC), but in Canada less than one-third plan VBAC. Emerging evidence suggests that these trends may be due in part to nonclinical factors, including care provider practice patterns and delays in access to surgical and anesthesia services. This study sought to explore maternity care providers' and decision makers' attitudes toward and experiences with providing and planning services for women with a previous cesarean. 
Methods: In-depth, semi-structured interviews were conducted with family physicians, midwives, obstetricians, nurses, anesthetists, and health service decision makers recruited from three rural and two urban Canadian communities. Constructivist grounded theory informed iterative data collection and analysis. Results: Analysis of interviews (n = 35) revealed that the factors influencing decisions resulted from interactions between the clinical, organizational, and policy levels of the health care system. Physicians acted as information providers of clinical risks and benefits, with limited discussion of patient preferences. Decision makers serving large hospitals revealed concerns related to liability and patient safety, which stemmed from competing demands for access to surgical resources. Conclusions: To facilitate women's increased access to planned VBAC, it is necessary to address the barriers perceived by care providers and decision makers. Strategies to mitigate concerns include initiating decision support immediately after the primary cesarean, addressing the social risks that influence women's preferences, and managing perceptions of patient and litigation risks through shared decision making. Background: Our aim was to study whether midwife experience affects the rate of severe perineal tears (3rd and 4th degree). Methods: A retrospective cohort study of all women with term vertex singleton pregnancies, who underwent normal vaginal deliveries in a single tertiary hospital between 2011 and 2015, was performed. Exclusion criteria were instrumental deliveries and stillbirth. All midwives used a hands-on technique for protecting the perineum. The midwife experience at each delivery was calculated as the time interval between her first delivery and the current delivery. A comparison was performed between deliveries in which midwife experience was less than 2 years (inexperienced), between 2 and 10 years (moderately experienced), and more than 10 years (highly experienced). 
A multivariate regression analysis was performed to assess the association between midwife experience and the incidence of severe perineal tears, after controlling for confounders. Results: Overall, 15,146 deliveries were included. Severe perineal tears were diagnosed in 51 (0.33%) deliveries. Women delivered by inexperienced midwives had a higher rate of severe perineal tears compared with women delivered by highly experienced midwives (0.5% vs 0.2%, respectively, P=.024). On multivariate regression analysis, midwife experience was independently associated with a lower rate of severe perineal tears, after controlling for confounding factors. Each additional year of experience was associated with a 4.7% decrease in the risk of severe perineal tears (adjusted OR 0.95 [95% CI 0.91-0.99], P=.03). Conclusion: More experienced midwives had a lower rate of severe perineal tears, and may be preferred for managing deliveries of women at high risk for such tears. Background: Despite the evidence of multiple benefits of early skin-to-skin contact, it does not always happen, and infants are separated from their parents because of different hospital practices. The aim of this study was to explore parent-infant closeness and separation, and which factors promote closeness or result in separation in the birthing unit in the first 2 hours after birth, from the point of view of staff members. Methods: This qualitative descriptive pilot study was conducted in one university hospital in Finland in December 2014. Midwives and auxiliary nurses working in the birthing unit were eligible for the study. The data were collected with a new application downloaded on a smartphone. The participants were asked to record all the closeness and separation events they observed between the infants and parents using the application. Results: The application was used during 20 work shifts by 14 midwives or auxiliary nurses. The participants described more closeness than separation events. 
Our findings indicated that the staff of the birthing unit aimed for mother-infant closeness, and father-infant closeness was a secondary goal. Closeness was mostly skin-to-skin contact and justified as a normal routine care practice. Infants were separated from their parents for routine measurements and because of infants' compromised health. Conclusion: Routines and normal care practices both promoted parent-infant closeness and caused separation. Parent-infant closeness and separation were controlled by staff members of the birthing unit. Background: Poor sleep during pregnancy has been associated with poorer birth outcomes. High body mass index (BMI) is often associated with poor sleep, but little is known about the relationship between gestational weight gain and sleep in late pregnancy. The purpose of this study was to evaluate the relationships of both gestational weight gain and pre-pregnancy BMI to objective and subjective measures of sleep during late pregnancy. Methods: Pregnant women (n = 128) were recruited from prenatal clinics and childbirth classes primarily serving low-income women. Their sleep (disruption and duration) was objectively assessed in their last month of pregnancy with 72 hours of wrist actigraphy monitoring. Their perceived sleep quality was assessed with the Pittsburgh Sleep Quality Index. Pre-pregnancy and late pregnancy height and weight were assessed by self-report and used to calculate BMI and gestational weight gain, which were then grouped into standardized categories. Results: The mean Pittsburgh Sleep Quality Index score was 6.8 ± 3.1 (range 2-16). Sixty percent had excess gestational weight gain, which was associated with poorer perceived sleep quality but was unrelated to objective measures of sleep duration and disruption. Pre-pregnancy BMI was unrelated to all sleep parameters. 
However, analyses of the interaction of pre-pregnancy BMI and gestational weight gain indicated that excess weight gain was associated with shorter sleep duration and more sleep disruption, but only among women who were overweight before pregnancy. Conclusion: Pregnancy is an opportunity to promote long-term women's health with a better understanding of the relationship between weight management and healthy sleep habits. Background: Early recognition and management of low maternal iron status is associated with improved maternal, fetal, and neonatal outcomes. However, existing international guidelines for the testing and management of maternal iron-deficiency anemia are variable, with no national guideline for New Zealand midwives. Clinical management is complicated by normal physiological hemodilution, and complicated further by the effects of inflammation on iron metabolism, especially in populations with a high prevalence of obesity or infection. This study describes how midwives in one New Zealand area diagnose and treat anemia and iron deficiency, in the absence of established guidelines. Methods: Data on demographics, laboratory results, and documented clinical management were retrospectively collected from midwives (n = 21) and women (n = 189), from September to December 2013. Analysis was predominantly descriptive. A secondary analysis of iron status and body mass index (BMI) was undertaken. Results: A total of 46% of the 186 women with hemoglobin testing at booking did not have ferritin tested; 86% of the 385 ferritin tests were not concurrently tested with C-reactive protein. Despite midwives prescribing iron for 48.7% of second trimester women, 47.1% still had low iron status before birth. Only 22.8% of women had hemoglobin testing postpartum. There was a significant difference between third trimester median ferritin levels in women with BMI ≥25.00 (14 µg/L) and BMI <25.00 (18 µg/L) (P=.05). Discussion: There was a wide range in the midwives' practice. 
Maternal iron status was difficult to categorize because of inconsistent testing. This study indicates the need for an evidence-based clinical guideline for New Zealand midwives and maternity care providers. This paper examines the source of the documented empirical link between measures of accruals quality and a firm's cost of capital. First, we argue that when regressions include accruals quality and operating volatility as determinants, these highly correlated measures capture different underlying constructs. Second, we find that in such regressions, the accruals quality measure displays inconsistent associations, while operating volatility variables display robust associations, with various cost of capital measures. Third, we provide research design suggestions to disentangle the effect of accruals quality from operating volatility, and we show how this method leads to less noisy coefficient estimates. These findings should be useful in designing empirical tests of the hypothesized associations involving accruals quality, operating volatility, and cost of capital. We compare the portfolio choices of Humans (prospect theory investors) to the portfolio choices of Econs (power utility and mean-variance (MV) investors). In a numerical example, prospect theory portfolios are decidedly unreasonable. In an in-sample asset allocation setting, the prospect theory results are consistent with myopic loss aversion. However, the portfolios are extremely unstable. The power utility and MV results are consistent with traditional finance theory, where the portfolios are stable across decision horizons. In an out-of-sample asset allocation setting, the power utility and MV portfolios outperform the prospect theory portfolios. Nonetheless, the prospect theory portfolios with loss aversion coefficients of 2.25 and 2 perform well. 
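The loss aversion coefficients cited in the portfolio-choice abstract (2.25 and 2) are parameters of the Tversky-Kahneman prospect theory value function. A minimal sketch of that function follows, assuming the standard 1992 parameterization with curvature exponent 0.88, which the abstract itself does not specify:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function.

    Concave over gains (x >= 0), convex and steeper over losses,
    where lam is the loss aversion coefficient: lam > 1 makes a
    loss loom larger than an equal-sized gain.
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha
```

With lam = 1 the function is symmetric around zero; setting lam to 2.25 or 2 penalizes losses more heavily than it rewards gains, which is the mechanism behind the myopic loss aversion results the abstract reports.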
Using data from the Lipper TASS hedge fund database over the period 1994-2012, we examine the role of liquidity risk in explaining the relation between asset size and hedge fund performance. While a significant negative size-performance relation exists for all hedge funds, once we stratify our sample by liquidity risk, we find that such a relationship only exists among funds with the highest liquidity risk. Liquidity risk is found to be another important source of diseconomies of scale in the hedge fund industry. Evidently, for high liquidity risk funds, large funds are less able to recover from the relatively more significant losses incurred during market-wide liquidity crises, resulting in lower performance for large funds relative to small funds. This paper studies the determinants of trading volume and liquidity of corporate bonds. Using transactions data from a comprehensive dataset of insurance company trades, our analysis covers more than 17,000 US corporate bonds of 4,151 companies over a five-year period prior to the introduction of TRACE. Our transactions data show that a variety of issue- and issuer-specific characteristics impact corporate bond liquidity. Among these, the most economically important determinants of bond trading volume are the bond's issue size and age: trading volume declines substantially as bonds become seasoned and are absorbed into less active portfolios. Stock-level activity also impacts bond trading volume. Bonds of companies with publicly traded equity are more likely to trade than those with private equity. Further, public companies with more active stocks have more actively traded bonds. Finally, we show that while the liquidity of high-yield bonds is more affected by credit risk, interest-rate risk is more important in determining the liquidity of investment-grade bonds. I examine whether changes in CEO status affect risk-related business decisions. I use prestigious awards as shocks to CEO status relative to other CEOs. 
Firms with award-winning CEOs decrease their idiosyncratic volatility, and their industry betas converge towards one. These firms also reduce their spending on research and development, while increasing investment in fixed assets relative to a matched sample of firms with non-winning CEOs. The evidence suggests that CEOs who reach higher status become more concerned about poor relative performance. By conforming to other firms in their industry, CEOs with the highest reputation can lock in their relative advantage. Patient safety is compromised by medical errors and adverse events related to miscommunications among healthcare providers. Communication among healthcare providers is affected by human factors, such as interpersonal relations. Yet, discussions of interpersonal relations and communication are lacking in the healthcare team literature. This paper proposes a theoretical framework that explains how interpersonal relations among healthcare team members affect communication and team performance, such as patient safety. We synthesized studies from health and social science disciplines to construct a theoretical framework that explicates the links among these constructs. From our synthesis, we identified two relevant theories: the framework on interpersonal processes, based on the social relations model, and the theory of relational coordination. The former involves three steps: perception, evaluation, and feedback; the latter captures relational communicative behavior. We propose that manifestations of provider relations are embedded in the third step of the framework on interpersonal processes: feedback. Thus, varying team-member relationships lead to varying collaborative behavior, which affects patient-safety outcomes via a change in team communication. The proposed framework offers new perspectives for understanding how workplace relations affect healthcare team performance. 
The framework can be used by nurses, administrators, and educators to improve patient safety and team communication, or to resolve conflicts. Background: Cultivating hospital environments that support older people's care is a national priority. Evidence on geriatric nursing practice environments, obtained from studies of registered nurses (RNs) in American teaching hospitals, may have limited applicability to Canada, where RNs and registered practical nurses (RPNs) care for older people in predominantly nonteaching hospitals. Purpose: This study describes nurses' perceptions of the overall quality of care for older people and the geriatric nursing practice environment (geriatric resources, interprofessional collaboration, and organizational value of older people's care) and examines if these perceptions differ by professional designation and hospital teaching status. Methods: A cross-sectional survey, using Dillman's tailored design and including Geriatric Institutional Assessment Profile subscales, was completed by 2,005 Ontario RNs and RPNs to assess their perceptions of the quality of care and the geriatric nursing practice environment. Results: Scores on the Geriatric Institutional Assessment Profile subscales averaged slightly above the midpoint, except for geriatric resources, which was slightly below it. RPNs rated the quality of care and geriatric nursing practice environment higher than RNs; no significant differences were found by hospital teaching status. Conclusions: Nurses' perceptions of older people's care and the geriatric nursing practice environment differ by professional designation but not hospital teaching status. Teaching and nonteaching hospitals should both be targeted for geriatric nursing practice environment improvement initiatives. Bacterial serine dipeptide lipids are known to promote inflammatory processes and are detected in human tissues associated with periodontal disease or atherosclerosis. 
Accurate quantification of bacterial serine lipid, specifically lipid 654 [((S)-15-methyl-3-((13-methyltetradecanoyl)oxy)hexadecanoyl)glycyl-l-serine, (3S)-l-serine] isolated from Porphyromonas gingivalis, in biological samples requires the preparation of a stable isotope internal standard for sample supplementation and subsequent mass spectrometric analysis. This report describes the convergent synthesis of a deuterium-substituted serine dipeptide lipid, an isotopically labeled homologue that represents a dominant form of serine dipeptide lipid recovered in bacteria. Fibrin deposition is observed in several diseases such as atherosclerosis, deep vein thrombosis, and also tumors, where it contributes to the formation of mature tumor stroma. The aim of this study was to develop a gallium-labeled peptide tracer on the basis of the fibrin-targeting peptide Epep for PET imaging of fibrin deposition. For this purpose, the peptide Epep was modified with a NOTA moiety for radiolabeling with Ga-67 and Ga-68 and compared with the earlier validated In-111-DOTA-Epep tracer. In vitro binding assays of Ga-67-NOTA-Epep displayed enhanced retention as compared to previously published data showing binding of In-111-DOTA-Epep to human-derived (84.0 +/- 0.6 vs 66.6 +/- 1.4 %Dose) and mouse-derived fibrin clots (83.5 +/- 1.7 vs 74.2 +/- 2.4 %Dose). In vivo blood kinetics displayed a biphasic elimination profile (half-lives of 2.6 +/- 1.0 minutes and 15.8 +/- 1.3 minutes), and ex vivo biodistribution showed low blood values at 4 hours post injection and low uptake in nontarget tissue (<0.2 %ID/g; kidneys, 1.9 %ID/g). In conclusion, taking into account the ease of radiolabeling and the promising in vitro and in vivo studies, gallium-labeled Epep displays the potential for further development towards a PET tracer for fibrin deposition. In support of the development of a new treatment for COPD, two C-14-labeled compounds were required for in vitro animal studies. 
The synthesis of nitrile [C-14]-1 was completed in 3 steps from C-14-labeled 4-bromobenzonitrile in accord with the previously developed medicinal chemistry route. The second compound, 2, did not possess an arylnitrile as did 1, which made the synthetic design more complex. An advanced, unlabeled benzotriazole-containing intermediate, 10, was synthesized in low yield over 3 steps and was subsequently reacted with C-14-labeled KCN to give a mixture of diastereomers 12. Separation of the diastereomers followed by deprotection afforded [C-14]-2 in a 13% radiochemical yield. A procedure for the directed synthesis of N-vinylpyrrolidone-N-vinylformamide (VP-VFA) copolymers with grafted iminodiacetate (IDA) chelating units is presented. Methods for labelling the resulting conjugates with indium-113m were developed. The metal-copolymer conjugates were characterized by different physicochemical methods, including IR and NMR spectroscopy, viscometry, light scattering, and exclusion high-performance liquid chromatography. Parameters of the radiochemical synthesis of the conjugates labelled with indium-113m were optimized. It was shown that the VP-VFA-IDA copolymer firmly binds indium-113m in both acid and alkaline solutions, with the pH of the reaction mixture having almost no effect on the complexation. VP-VFA-IDA-In conjugates were found to be unstable in the histidine challenge reaction. Drawing on assets-oriented, sociocultural theories of imagination and learning, the authors argue that the improvisational qualities and expanded resources of dramatic approaches to teaching make a positive difference in the quality of and persistence in students' story writing. The authors describe findings from a controlled quasi-experimental study examining the outcomes of an 8-week story-writing and drama-based program, Literacy to Life, implemented in 29 third-grade classrooms in elementary schools with and without Title I funding located within the same urban school district in Texas. 
Pre- and post-measures of writing self-efficacy, story building, and generating and revising ideas showed significant positive results, especially for students in schools that receive Title I funding. Research findings and the sociocultural theoretical framework argue for increased resources in support of opportunities for students to practice combinatorial imagination and use cultural knowledge for creative writing, as was made possible through the Literacy to Life program. We conducted a formative experiment investigating how an intervention that engaged students in constructing multimodal arguments could be integrated into high school English instruction to improve students' argumentative writing. The intervention entailed three essential components: (a) construction of arguments defined as claims, evidence, and warrants; (b) digital tools that enabled the construction of multimodal arguments; and (c) a process approach to writing. The intervention was implemented for 11 weeks in high school English classrooms. Data included classroom observations; interviews with the teacher, students, and administrators; student reflections; and the products students created. These data, analyzed using grounded-theory coding and constant-comparison analysis, informed iterative modifications of the intervention. A retrospective analysis led to several assertions contributing to an emerging pedagogical theory that may guide efforts to promote high school students' ability to construct arguments using digital tools. Using qualitative methodology, this research examines how graduates of a K-5 dual language immersion program have experienced multiple and competing social, cultural, institutional, and political forces at play in complex processes that ultimately affect one's mobilities of language, literacy, and learning. 
These students have now grown into adulthood, and the extent to which their past experiences as dual language students have affected their current language and literacy ideologies and practices is examined. As graduates experienced and internalized notions of Spanish as social, cultural, economic, and literacy capital, these notions likely contributed to current ideologies that greatly esteem bilingualism and biliteracy. The findings highlight that ideologies of language and literacy are neither static nor fixed; over time, they have been molded and reshaped in a very fluid and lively process. This article analyzes data from a summer literacy program for intermediate and middle-level children of migrant farmworkers. The program was grounded in a sociocultural perspective on literacy, stressing the importance of interaction and collaboration within socioculturally responsive pedagogy, using enabling literature to empower students. Adaptations of readers' and writers' workshop methods, emphasizing the significance of valuing students' individual responses, were used throughout. The students were presented with a documentary, young adult novels, and more than two dozen children's picture storybooks representing the lives of migrant farmworkers. Then, using their own responses to these enabling mentor texts as scaffolding, the students collaborated to create illustrated narratives about growing up as migrants. The program provided a safe space that encouraged migrant students to express their experiences and concerns, normally silenced in classrooms, during literacy tasks and empowered them to ask for support. The program demonstrated the benefits of combining socioculturally responsive critical literacy pedagogy with enabling instructional materials in the development of emergent conscientization among the students. 
Finally, this article shows how the migrant students' perspectives and experiences can inform and challenge teachers, citizens, and policy makers to address the systemic injustices in the lives of migrant children. A qualitative think-aloud study, informed by social literacies and holistic bilingual perspectives, was conducted to examine how six emergent bilingual, Mexican American, fourth graders approached, interacted with, and comprehended narrative and expository texts in Spanish and English. The children had strong Spanish reading test scores, but differed in their English reading and oral proficiency test scores. All but one of them varied their cognitive and bilingual strategy use according to the demands and genre of the text and their oral English proficiency. The most frequent bilingual strategies demonstrated were translating and code-mixing. Only two children used cognates. The children often employed one language to explain their reading in the other language. They displayed a wider range of strategies across two languages compared with a single language, supporting the use of a holistic bilingual perspective to assess their reading rather than a parallel monolingual perspective. Their reading profiles in the two languages were similar, suggesting cross-linguistic transfer, although the think-aloud procedures could not determine strategy transference. The findings supported a translanguaging interpretation of their bilingual reading practices. Future research on how emergent bilingual children of different ages develop translanguaging and use it to comprehend texts was recommended. In working life, ageing and retiring staff and managers are being replaced by younger generations which come from different working life cultures. This may give rise to different management expectations. As a result, this creates a need to assess how the concept of appreciative management is implemented in health care. 
The aim was to develop a valid and reliable instrument to assess appreciative management. A multi-phase, mixed-method and psychometric evaluation of the Appreciative Management Scale (AMS) was conducted. A concept analysis and systematic literature review were carried out. The instrument's development employed a two-phase Delphi study approach including essays, survey iteration rounds and expert panel evaluation. The instrument was pre-tested and tested empirically in a survey completed by staff respondents and managers. AMS 1.0 has 83 items that are categorised into Systematic Management, Equality, Appreciation of Know-How, and the Promotion of Wellbeing at Work. The instrument was found to be valid and reliable. The AMS 1.0 scale needs to be tested internationally in order to conduct evaluative surveys of appreciative management in other countries. By using the AMS 1.0 instrument to assess managers' management practices, managers receive valuable feedback on their own management skills and also the skills of workers. Clinical commissioning groups were set up under the Health & Social Care Act (2012) in England to commission healthcare services for local communities. Governing body nurses provide nursing leadership to commissioning services on clinical commissioning groups. Little is known about how nurses function on clinical commissioning groups. We conducted observations of seven formal meetings, three informal observation sessions and seven interviews from January 2015 to July 2015 in two clinical commissioning groups in the South of England. Implicit in the governing body nurse role is the enduring and contested assumption that nurses embody the values of caring, perception and compassion. This assumption undermines the authority of nurses in multidisciplinary teams where authority is traditionally clinically based. 
Emerging roles within clinical commissioning groups are not based on clinical expertise, but on well-established new public management concepts which promote governance over clinically-based authority. While governing body nurses claim an authority located in clinical and managerial expertise, this is contested by members of the clinical commissioning group and external stakeholders irrespective of whether it is aligned with clinical knowledge and practice or with new forms of management, as both disregard the type of expertise nurses in commissioning embody. The drive to establish clinical academic careers in nursing in the United Kingdom has gained momentum in recent years, spearheaded by opportunities presented by the Higher Education England/National Institute for Health Research integrated clinical academic pathway. However, embedding clinical academic careers within a healthcare organisation is challenging. This paper outlines the approach that one large NHS Trust has taken to developing a framework for clinical academic careers in nursing. The internal and external resources that are drawn upon to support the implementation of the framework are outlined and some of the practical challenges of making the framework a reality are discussed. The development, implementation and sustainability of the framework are dependent on professional, managerial and research leadership together with close collaboration between the healthcare organisation and higher education institutions. This paper describes the leadership and management competencies of head nurses and directors of nursing in social and health care. In the nursing profession, studies have tended to describe the role of the nurse manager, or to provide lists of competencies, talents and traits which can be found in successful managers. However, in-depth research knowledge of nursing managers' leadership and management competencies is lacking. Data were gathered by electronic questionnaire. 
Respondents (n=1025) were head nurses and directors of nursing. The data were statistically analysed. Both groups evaluated their leadership and management competencies as quite good and their general competence as better than their special competence. Overall, directors of nursing rated their general competence and special competence better than head nurses. However, the head nurses reported stronger expertise than the directors of nursing in the general competence areas of professional competence and credibility, and also in the special competence area of substance knowledge. While the overall leadership and management competencies were good for both groups, each group identified areas that can be further developed. We use returns of actively managed mutual funds to document the link between accrual quality (AQ) and systematic (priced) risk. Despite compelling theoretical arguments, prior research finds no evidence that poor AQ commands a risk premium in the cross-section of realized stock returns. We argue that the previously obtained premium estimates are biased downward because, for a large portion of poor AQ stocks, higher expected returns are offset by the news of deteriorating fundamentals. We suggest that skilled mutual fund managers should be able to either avoid investing in stocks with deteriorating fundamentals or assign them lower portfolio weights. As a consequence, returns on their portfolios should better reflect the expected AQ risk premium. Our empirical evidence is consistent with these predictions. The internet is an enormous and growing source of information for investors about the opinions of others. Virtually any individual with internet access can express opinions about firms and editorialize about company news. However, to date we know very little about the impact these nontraditional internet intermediaries have on markets. 
We develop a framework wherein internet information intermediaries fall along a spectrum of professionalism and document a nuanced relationship between coverage by these intermediaries and capital market effects. Using a novel dataset that tracks coverage of companies by individuals posting on thousands of websites, we find that coverage by professional and semi-professional intermediaries is associated with positive capital market effects but coverage by nonprofessional internet intermediaries has the opposite effect, hindering price formation. The detrimental effects of nonprofessional coverage are observed most strongly when the intermediaries have larger audiences. This study uses state tax amnesties to examine how firms respond to forgiveness (particularly repeated forgiveness) by a taxing authority. We posit that tax forgiveness programs alter taxpayer perceptions of the probability of detection by enforcers or the probability of future forgiveness programs, either of which could affect future tax aggressiveness. We find that firms headquartered in an amnesty-granting state increase state income tax aggressiveness following the first instance of tax amnesty, relative to control firms in other states. Moreover, we find evidence that tax aggressiveness incrementally increases with each additional repetition of a tax amnesty. Finally, we find that the effect of amnesties on tax aggressiveness is more prominent for small firms, which face less scrutiny and for which the tax aggressiveness measures are less confounded. Our findings suggest that repeated programs of tax forgiveness have increasingly negative implications for corporate tax collections. Prior research on the determinants of credit ratings has focused on rating agencies' use of quantitative accounting information, but there is scant evidence on the impact of textual attributes. This study examines the impact of financial disclosure narrative on bond market outcomes. 
We find that less readable financial disclosures are associated with less favorable ratings, greater bond rating agency disagreement, and a higher cost of debt. We improve causal identification by exploiting the 1998 Plain English Mandate, which required a subset of firms to exogenously improve the readability of their filings. Using a difference-in-differences design, we find that the firms required to improve the readability of their filings experience more favorable ratings, lower bond rating disagreement, and lower cost of debt. Collectively, our evidence suggests that textual financial disclosure attributes influence not only bond market intermediaries' opinions but also firms' cost of debt. We study lease accounting in an international panel data set to examine how accounting outcomes vary with two features of accounting standards: the emphasis on using professional judgement to apply principles, and the presence or absence of bright-line tests. We study four countries (Australia, Canada, the UK, and the US) and companies in two lease-intensive industries (retail and transportation). Our primary study period spans the time when Australia and the UK switched from domestic to international accounting standards, and in one test, we also consider Canada's transition to international standards. We find that neither an explicit requirement to apply a principle nor omitting bright-line tests materially increases the use of capital lease treatment among these firms. Overall, we conclude that this financial reporting outcome is relatively insensitive to these standard-setting tools. Studies comparing IFRS with U.S. GAAP generally focus on differences in the attributes and consequences of the recognized financial items. We, in contrast, focus on voluntary disclosure resulting from arguably the most significant difference between IFRS and GAAP: the capitalization of development costs (the "D" of R&D), required by IFRS but prohibited by GAAP. 
Using a sample of Israeli high-technology and science-based firms, some using IFRS and others U.S. GAAP, we document a significant externality of IFRS development cost capitalization in the form of extensive voluntary disclosure of forward-looking information on product pipeline development and its expected consequences. We show that this disclosure is value-relevant over and above the mandated financial information, including the capitalized R&D asset. We also show that the capitalized development costs (an asset) are highly significant in relation to stock prices and enhance the relevance of the voluntary disclosures. The practice of providing quarterly earnings guidance has been criticized for encouraging investors to fixate on short-term earnings and encouraging managerial myopia. Using data from the post-Regulation Fair Disclosure period, we examine whether the cessation of quarterly earnings guidance reduces short-termism among investors. We show that, after guidance cessation, investors in firms that stop quarterly guidance are composed of a larger (smaller) proportion of long-term (short-term) institutions, put more (less) weight on long-term (short-term) earnings in firm valuation, become more (less) sensitive to analysts' long-term (short-term) earnings forecast revisions, and are less likely to dismiss chief executive officers for missing quarterly earnings targets by small amounts, relative to investors in firms that continue to issue quarterly earnings guidance. Our study provides new evidence of the benefit of stopping quarterly earnings guidance, that is, the reduction of short-termism among investors. This study examines how financial reporting quality affects corporate dividend policy. We find that higher quality reporting is associated with higher dividends. This positive association is more pronounced among firms with more severe free cash flow problems and among firms with higher ownership by monitoring-type institutional investors. 
Further analysis of the relation between reporting quality and under-/over-payment of dividends suggests that reporting quality largely mitigates underpayment of dividends. Additionally, both a Granger causality test and a difference-in-differences analysis of dividend changes around a quasi-exogenous reporting event yield evidence consistent with the direction of causality going from financial reporting to dividends. Overall, these findings are consistent with financial reporting quality acting as a governance mechanism that induces managers to pay dividends by disciplining free cash flow problems. Our findings support the view that dividends are the result of enhanced monitoring (Jensen 1986; La Porta, Lopez-de-Silanes, Shleifer, and Vishny 2000). We examine international differences in the effect of management forecasts (which we use to proxy for voluntary disclosure) on the cost of equity capital (COC) across 31 countries. We find that the issuance of management forecasts is associated with a lower COC worldwide but that the effect of management forecasts on the COC depends on country-level institutional factors. Specifically, management forecasts have a stronger effect on the COC in countries with stronger investor protection and better information dissemination and a weaker effect in countries with higher mandatory disclosure requirements. Further analyses reveal that these relations are more pronounced when management forecasts are more frequent, more precise, and more disaggregated. Overall, our findings suggest that the ability of management forecasts to reduce firms' COC derives not only from country-level factors that enhance the credibility of their forecasts but also from factors that reflect the quality of the information environment in terms of the distribution of news and the availability and quality of alternative information. 
Thus, investor protection, media penetration, and mandatory disclosure requirements have an important effect on the ability of management forecasts to lower the COC. International Financial Reporting Standards (IFRS) allow managers flexibility in classifying interest paid, interest received, and dividends received within operating, investing, or financing activities within the statement of cash flows. In contrast, U.S. Generally Accepted Accounting Principles (GAAP) requires these items to be classified as operating cash flows (OCF). Studying IFRS-reporting firms in 13 European countries, we document that firms' cash-flow classification choices vary, with about 76%, 60%, and 57% of our sample classifying interest paid, interest received, and dividends received, respectively, in OCF. Reported OCF under IFRS tends to exceed what would be reported under U.S. GAAP. We find that the main determinants of OCF-enhancing classification choices are capital market incentives and other firm characteristics, including greater likelihood of financial distress, higher leverage, and accessing equity markets more frequently. In analyzing the consequences of reporting flexibility, we find some evidence that the market's assessment of the persistence of operating cash flows and accruals varies with the firm's classification choices and that the results of certain OCF prediction models are sensitive to classification choices. Exit theory predicts a governance role for outside blockholders' exit threats, but this role could be ineffective if managers' potential private benefits exceed their loss in stock-price declines caused by the blockholders' exits. We test this prediction using the Split-Share Structure Reform (SSSR) in China, which provided a large exogenous and permanent shock to the cost for outside blockholders to exit. We find that firms whose outside blockholders experience an increase in exit threats improve performance more than those whose outside blockholders experience no increase. 
The governance effect of exit threats also is ineffective in the group of firms with the highest concern for private benefits of control. Finally, a battery of theory-motivated tests shows that the documented effects are unlikely to be explained by outside blockholder intervention or some well-known intended effects of SSSR. We examine the effect of increased book-tax conformity on corporate capital structure. Prior studies document a decrease in the informativeness of accounting earnings for equity markets resulting from higher book-tax conformity. We argue that the decrease in earnings informativeness impacts equity holders more than debt holders because of the differences in payoff structures between debt and equity investments such that increases in book-tax conformity lead to increases in firms' reliance on debt capital. We exploit a natural experiment in the U.S. and find that firms facing an increase in required book-tax conformity increase leverage relative to other firms. We also provide evidence of an increase in the cost of equity (but not of debt) capital for firms facing an increase in required book-tax conformity, relative to control firms, and show that these increases in cost of equity capital are positively associated with an increase in leverage. Our findings are consistent with firms substituting away from equity and toward more debt in the presence of higher book-tax conformity. We study two-stage, multi-division budgeting mechanisms that allocate scarce resources among divisions using capital charge rates. Each divisional manager observes private sequential project information and competes for scarce resources over two stages. The optimal capital charge rates in our two-stage setting can be quite different from those that arise in a single-stage setting. If the firm faces a resource constraint at only the second stage, a less severe constraint can imply more first-stage project initiation, which can lead to higher second-stage capital charge rates. 
If resources are constrained at both stages, a decrease in the severity of the constraint at just one stage decreases the capital charge rate at that stage but increases the capital charge rate at the other stage because each constraint affects the intensity of competition at both stages. This result holds regardless of whether the scarce resources are fungible or non-fungible across stages. Prior to SFAS 142, goodwill was subject to periodic amortization and a recoverability-based impairment test. SFAS 142 eliminates periodic amortization and imposes a fair-value-based impairment test. We examine the impact of this standard on the accounting for and valuation of goodwill. Our results indicate that the new standard has resulted in relatively inflated goodwill balances and untimely impairments. We also find that investors do not appear to fully anticipate the untimely nature of post-SFAS 142 goodwill impairments. Overall, our results suggest that, in practice, some managers have exploited the discretion afforded by SFAS 142 to delay goodwill impairments, thus temporarily inflating earnings and stock prices. Background: Ethics and dignity in prehospital emergency care are important due to vulnerability and suffering. Patients can lose control of their body and encounter unfamiliar faces in an emergency situation. Objective: To describe what specialist ambulance nurse students experienced as preserved and humiliated dignity in prehospital emergency care. Research design: The study had a qualitative approach. Method: Data were collected by Flanagan's critical incident technique. The participants were 26 specialist ambulance nurse students who described two critical incidents of preserved and humiliated dignity, from prehospital emergency care. Data consist of 52 critical incidents and were analyzed with interpretive content analysis. Ethical considerations: The study followed the ethical principles in accordance with the Declaration of Helsinki. 
Findings: The results showed how human dignity in prehospital emergency care can be preserved by the ambulance nurse being there for the patient. The ambulance nurses meet the patient in the patient's world and make professional decisions. The ambulance nurse respects the patient's will and protects the patient's body from the gaze of others. Humiliated dignity was described through the ambulance nurse abandoning the patient and by healthcare professionals failing, disrespecting, and ignoring the patient. Discussion: It is a unique situation when a nurse meets a patient face to face in a critical life or death moment. The discussion describes courage and the ethical vision to see another human. Conclusion: Dignity was preserved when the ambulance nurse showed respect and protected the patient in prehospital emergency care. The ambulance nurse students' ethical obligation results in the courage to see when a patient's dignity is in jeopardy of being humiliated. Humiliated dignity occurs when patients are ignored and left unprotected. This ethical dilemma affects the ambulance nurse students deeply because the morals and attitudes of ambulance nurses are reflected in their actions toward the patient. Background: Ethics consultation is the traditional way of resolving challenging ethical questions raised about patient care in the United States. Little research has been published on the resolution process used during ethics consultations and on how this experience affects healthcare professionals who participate in them. Objectives: The purpose of this qualitative research was to uncover the basic process that occurs in consultation services through study of the perceptions of healthcare professionals. Design and Method: The researchers in this study used a constructivist grounded theory approach that represents how one group of professionals experienced ethics consultations in their hospital in the United States. 
Results: The results were sufficient to develop an initial theory that has been named after the core concept: Moving It Along. Three process stages emerged from data interpretation: moral questioning, seeing the big picture, and coming together. It is hoped that this initial work stimulates additional research in describing and understanding the complex social process that occurs for healthcare professionals as they address the difficult moral issues that arise in clinical practice. Modern American nursing has an extensive ethical heritage literature that extends from the 1870s to 1965 when the American Nurses Association issued a policy paper that called for moving nursing education out of hospital diploma programs and into colleges and universities. One consequence of this move was the dispersion of nursing libraries and the loss of nursing ethics textbooks, as they were largely not brought over into the college libraries. In addition to approximately 100 nursing ethics textbooks, the nursing ethics heritage literature also includes hundreds of journal articles that are often made less accessible in modern databases that concentrate on the past 20 or 30 years. A second consequence of nursing's movement into colleges and universities is that ethics was no longer taught by nursing faculty, but becomes separated and placed as a discrete ethics (later bioethics) course in departments of philosophy or theology. These courses were medically identified and rarely incorporated authentic nursing content. This shift in nursing education occurs contemporaneously with the rise of the field of bioethics. Bioethics is rapidly embraced by nursing, and as it develops within nursing, it fails to incorporate the rich ethical heritage, history, and literature of nursing prior to the development of the field of bioethics. 
This creates a radical disjunction in nursing's ethics; a failure to more adequately explore the moral identity of nursing; the development of an ethics with a lack of fit with nursing's ethical history, literature, and theory; a neglect of nursing's ideal of service; a diminution of the scope and richness of nursing ethics as social ethics; and a loss of nursing's ethical heritage of social justice activism and education. We must reclaim nursing's rich and capacious ethics heritage literature; the history of nursing ethics matters profoundly. Background: The role of nurses as patient advocates is one which is well recognised, supported and the subject of a broad body of literature. One of the key impediments to the role of the nurse as patient advocate is the lack of support and legislative frameworks. Within a broad range of activities constituting advocacy, whistleblowing is currently the subject of much discussion in the light of the Mid Staffordshire inquiry in the United Kingdom (UK) and other instances of patient mistreatment. As a result, steps are underway to amend existing whistleblowing legislation where it exists or to introduce it where it does not. Objective: This paper traces the development of legislation for advocacy. Conclusion: The authors argue that while any legislation supporting advocacy is welcome, legislation on its own will not encourage or enable nurses to whistleblow. Indonesia is recognized as a nurse exporting country, with policies that encourage nursing professionals to emigrate abroad. This includes the country's adoption of international principles attempting to protect Indonesian nurses that emigrate as well as the country's own participation in a bilateral trade and investment agreement, known as the Indonesia-Japan Economic Partnership Agreement that facilitates Indonesian nurse migration to Japan. 
Despite the potential trade and employment benefits from sending nurses abroad under the Indonesia-Japan Economic Partnership Agreement, Indonesia itself is suffering from a crisis in nursing capacity and ensuring adequate healthcare access for its own populations. This represents a distinct challenge for Indonesia in appropriately balancing domestic health workforce needs, employment, and training opportunities for Indonesian nurses, and the need to acknowledge the rights of nurses to freely migrate abroad. Hence, this article reviews the complex operational and ethical issues associated with Indonesian health worker migration under the Indonesia-Japan Economic Partnership Agreement. It also introduces a policy proposal to improve performance of the Indonesia-Japan Economic Partnership Agreement and better align it with international principles focused on equitable health worker migration. Background: With the number of young people with medical complexity increasing, an increasing number must navigate the transition to adulthood. This transition, in part, involves a situational transition in which young people and their families must access new services in the adult system. Objectives: To explore how societal ideologies, communities, and organizations represent the foundation of barriers to access to services. Research Design: The discussion in this paper, framed within a social justice perspective, outlines barriers to access to services at the societal and community levels including societal ideologies, differences in philosophies of care in pediatric and adult care, physical environments, and availability of services. Ethical Considerations: Since this is an exploratory discussion paper, no ethical approval was required. 
Findings and Conclusion: Based on analysis of the literature from a social justice perspective, it is suggested that the adult health care and social service systems do not provide the supports and services necessary to empower young people and their families to achieve their goals and maintain their health and quality of life. It is, thus, an ethical issue that the transfer from pediatric to adult services is occurring in the absence of appropriate services. Recommendations at the individual, community and policy levels highlight how nurses can address this ethical issue to promote more equitable access to services. Background: Mobbing and burnout can cause serious consequences, especially for health workers and managers. Level of burnout and exposure to mobbing may trigger each other. There is a need to conduct additional and specific studies on the topic to develop some strategies. Research objectives: The purpose of this study is to determine the relationship between level of burnout and exposure to mobbing of the managers (head physician, assistant head physician, head nurse, assistant head nurse, administrator, assistant administrator) at the Ministry of Health hospitals. Research design: The Leymann Inventory of Psychological Terrorization scale was used to measure the level of exposure to mobbing and the Maslach Burnout Inventory scale was used to measure the level of burnout of hospital managers. The relationship between level of burnout and exposure to mobbing was analyzed by Pearson's Correlation Analysis. Participants and research context: The population of this study included managers (454 managers) at the Ministry of Health hospitals in the metropolitan area of Ankara between September 2010 and May 2011. An attempt was made to reach all the managers, but some did not want to reply to the questionnaire and some were not found at their workplaces. 
Consequently, using convenience sampling, 54% of the managers replied to the questionnaire (244 managers). Ethical consideration: The approval of the study was granted by the Ministry of Health in Turkey. Furthermore, the study was evaluated and accepted by the Education, Planning and Coordination Council of one of the education and research hospitals in the study. Findings: Positive relationships were found between each subdimension of mobbing and both emotional exhaustion and depersonalization. A negative relationship was found between each subdimension of mobbing and personal accomplishment. Discussion: In hospitals, detecting mobbing actions may help prevent burnout. Conclusion: Exposure to mobbing and burnout could be a serious problem for head nurses, who are responsible for the performance of both the nurses and the organization. Additionally, head nurses who are faced with mobbing and burnout are more likely to provide suboptimal services, which could potentially result in negative outcomes. Therefore, this study draws attention to the importance of preventing these attitudes in the organization. Background: Conscience is an important concept in ethics, having various meanings in different cultures. Because a growing number of healthcare professionals are of immigrant background, particularly within the care of older people, demanding multiple ethical positions, it is important to explore the meaning of conscience among care providers within different cultural contexts. Research objective: The study aimed to illuminate the meaning of conscience by enrolled nurses with an Iranian background working in residential care for Persian-speaking people with dementia. Research design: A phenomenological hermeneutical method guided the study. Participants and research context: A total of 10 enrolled nurses with Iranian background, aged 33-46 years, participated in the study. 
All worked full time in residential care settings for Persian-speaking people with dementia in a large city in Sweden. Ethical considerations: The study was approved by the Regional Ethical Review Board for ethical vetting of research involving humans. Participants were given verbal and written study information and assured that their participation was voluntary and confidential. Findings: Three themes were constructed: perception of conscience, clear conscience grounded in relations, and striving to keep a clear conscience. Conscience was perceived as an inner guide grounded in feelings, which is dynamic and subject to change throughout life. Having a clear conscience meant being able to form a bond with others, to respect them and to get their confirmation that one does well. Keeping a clear conscience demanded listening to the voice of conscience. The enrolled nurses strove to keep their conscience clear by being generous in helping others, accomplishing daily tasks well and behaving nicely in the hope of being treated the same way one day. Conclusion: Cultural frameworks and the context of practice need to be considered in interpreting the meaning of conscience and a clear conscience. Background: Nurses, social workers, and medical residents are ethically mandated to engage in policy advocacy to promote the health and well-being of patients and increase access to care. Yet, no instrument exists to measure their level of engagement in policy advocacy. Research objective: To describe the development and validation of the Policy Advocacy Engagement Scale, designed to measure frontline healthcare professionals' engagement in policy advocacy with respect to a broad range of issues, including patients' ethical rights, quality of care, culturally competent care, preventive care, affordability/accessibility of care, mental healthcare, and community-based care.
Research design: Cross-sectional data were gathered to estimate the content and construct validity, internal consistency, and test-retest reliability of the Policy Advocacy Engagement Scale. Participants and context: In all, 97 nurses, 94 social workers, and 104 medical residents (N = 295) were recruited from eight acute-care hospitals in Los Angeles County. Ethical considerations: Informed consent was obtained via Qualtrics and covered purposes, risks and benefits; voluntary participation; confidentiality; and compensation. Institutional Review Board approval was obtained from the University of Southern California and all hospitals. Findings: Results supported the validity of the concept and the instrument. In confirmatory factor analysis, seven items loaded onto one component with indices indicating adequate model fit. A Pearson correlation coefficient of .36 supported the scale's test-retest stability. A Cronbach's alpha of .93 indicated strong internal consistency. Discussion: The Policy Advocacy Engagement Scale demonstrated satisfactory psychometric properties in this initial test. Findings should be considered within the context of the study's limitations, which include a low response rate and limited geographic scope. Conclusion: The Policy Advocacy Engagement Scale appears to be the first validated scale to measure frontline healthcare professionals' engagement in policy advocacy. With it, researchers can analyze variations in professionals' levels of policy advocacy engagement, understand what factors are associated with it, and remedy barriers to it. Background: When conducting qualitative research, participants usually share a great deal of personal and private information with the researcher. As researchers, we must preserve participants' identity and the confidentiality of the data. Objective: To critically analyze an ethical conflict regarding confidentiality encountered when doing qualitative research. Research design: Case study.
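The internal-consistency statistic reported for the Policy Advocacy Engagement Scale above, Cronbach's alpha, can be computed directly from a respondents-by-items score matrix. A minimal sketch with hypothetical, simulated item scores (the real instrument's data are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()        # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)          # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-item responses: a shared trait plus item-level noise,
# so the items are positively correlated.
rng = np.random.default_rng(0)
trait = rng.normal(size=(50, 1))
scores = trait + rng.normal(size=(50, 7))
print(round(cronbach_alpha(scores), 2))
```

Perfectly redundant items give alpha of exactly 1, which is a quick sanity check on the implementation.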
Findings and discussion: One of the participants in a study aiming to explain the meaning of living with HIV verbalized his imminent intention to commit suicide because of stigma and other social problems arising from living with HIV. Given the life-threatening situation, the commitment not to disclose the participant's identity and/or the content of the interview had to be broken. To avoid or prevent suicide, the therapist in charge of the case was properly informed about the participant's intentions. One important question arises from this case: was it ethically appropriate to break the confidentiality commitment? Conclusion: Confidentiality may be broken if a life-threatening event is identified during data collection, and participants must know this; it has to be clearly stated in the informed consent form. The aim of this paper is, first, to recall fuzzy relational compositions (products) and, second, to introduce an idea of how excluding features could be incorporated into the theoretical background. Apart from definitions, we provide readers with a theoretical investigation addressing two natural questions. First, under which conditions (i.e., in which underlying algebraic structures) do the three natural approaches to the incorporation of excluding symptoms coincide? Second, under which conditions does the proposed incorporation of excluding features preserve the natural and desirable properties preserved by fuzzy relational compositions? The positive impact of the incorporation on reducing the suspicions provided by the basic "circlet" composition, without losing the possibly correct suspicion, is demonstrated on a real taxonomic identification (classification) of Odonata. Here, we demonstrate how the proposed concept may eliminate the weaknesses of the classical fuzzy relational compositions and, at the same time, compete with powerful machine learning methods.
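For readers unfamiliar with the compositions recalled above: the basic sup-min composition of two fuzzy relations (max-min over finite universes) takes only a few lines. This shows just the textbook operation, not the paper's incorporation of excluding features:

```python
import numpy as np

def circ(R, S):
    """Sup-min (max-min) composition: (R o S)[x, z] = max_y min(R[x, y], S[y, z])."""
    return np.minimum(R[:, :, None], S[None, :, :]).max(axis=1)

# R: objects x features, S: features x classes (membership degrees in [0, 1]).
R = np.array([[0.9, 0.2],
              [0.1, 0.8]])
S = np.array([[1.0, 0.3],
              [0.2, 0.9]])
print(circ(R, S))  # [[0.9, 0.3], [0.2, 0.8]]
```

Each output entry is the strongest "chain" of memberships linking an object to a class through some feature, which is why such compositions suit suspicion-style classification.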
The aim of the demonstration is not to show that the proposed concept outperforms classical approaches, but that its potential is strong enough to complement them or to be combined with them, exploiting its different nature. (C) 2017 Elsevier Ltd. All rights reserved. In this paper we deal with the problem of designing a classifier able to learn the classification of existing units in inventory and then use it to classify new units according to their attributes in a multi-criteria ABC inventory classification environment. To solve this problem we design a multi-start constructive algorithm to train a discrete artificial neural network, using a randomized greedy strategy to add neurons to the network hidden layer. The search for the weights of the neurons to be added is based on solving linear programming formulations. The computational experiments show that the proposed algorithm is much more efficient when the dual formulations are used to find the weights of the network neurons and that the obtained classifier has good levels of generalization accuracy. In addition, the proposed algorithm can be applied directly to other multi-class classification problems with more than three classes. (C) 2017 Elsevier Ltd. All rights reserved. F-score is a simple feature selection technique; however, it works only for two classes. This paper proposes a novel feature ranking method based on Fisher discriminant analysis (FDA) and F-score, denoted FDAF-score, which considers the relative distribution of classes in a multi-dimensional feature space. The main idea is to obtain a proper subset by maximizing the ratio of the average between-class distance to the relative within-class scatter. Because the method removes all insignificant features at once, it can effectively reduce computational cost.
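The classic two-class F-score that FDAF-score builds on has a standard closed form: per feature, the squared deviations of the class means from the overall mean, divided by the sum of within-class variances. A sketch on toy data (the FDAF-score extension itself is not reproduced here); feature 0 separates the classes while feature 1 is noise:

```python
import numpy as np

def f_score(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Classic two-class F-score for each feature (larger = more discriminative)."""
    pos, neg = X[y == 1], X[y == 0]
    m, mp, mn = X.mean(0), pos.mean(0), neg.mean(0)
    num = (mp - m) ** 2 + (mn - m) ** 2                  # between-class deviation
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)        # within-class scatter
    return num / den

X = np.array([[0.0, 5.0], [0.1, 1.0], [0.2, 3.0],
              [5.0, 4.0], [5.1, 2.0], [5.2, 0.0]])
y = np.array([0, 0, 0, 1, 1, 1])
print(f_score(X, y))  # feature 0 scores far above feature 1
```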
Experiments on six benchmark UCI datasets and two artificial datasets demonstrate that the proposed FDAF-score algorithm not only obtains good results with fewer features than the original datasets and with fast computation, but also handles noisy classification problems well. (C) 2017 Elsevier Ltd. All rights reserved. Data Envelopment Analysis gauges the performance of operating entities in the best scenario for input and output multipliers. Robust efficiency analysis is a conservative approach concerned with an assured level of performance for an entity across all possible multiplier scenarios. In this study, we extend the robust efficiency analysis procedure to the situation where precise information on some input and output data is unavailable. Perfect efficiency analysis and potential efficiency analysis methods are developed to determine, respectively, the lower and upper bounds of an entity's robust efficiency rating. The concepts of robust efficiency are expanded to classify the entities under consideration into three groups: perfectly robust efficient, potentially robust efficient and robust inefficient. Two approaches are presented to convert robust efficiency analysis models into linear programs. It is claimed that Data Envelopment Analysis and robust efficiency analysis together provide a comprehensive picture of an entity's relative efficiency. A computational experiment is conducted to compare the traditional efficiency analysis method with robust efficiency analysis in the presence of imprecise data. The results illustrate that perfect efficiency analysis exhibits greater discriminating power than potential efficiency analysis and that an entity recommended by perfect efficiency analysis has a satisfactory average performance. (C) 2017 Elsevier Ltd. All rights reserved.
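In the general multi-input/multi-output case, the classical DEA (CCR) rating of each unit is the optimum of a small linear program over input and output multipliers. In the single-input/single-output case that LP collapses to a normalised productivity ratio, which is enough for a minimal sketch; this illustrates plain DEA only, not the robust extension above, and the data are hypothetical:

```python
import numpy as np

def ccr_single(x, y):
    """CCR efficiency in the single-input/single-output case:
    productivity y/x normalised by the best unit's productivity."""
    ratio = y / x
    return ratio / ratio.max()

x = np.array([2.0, 4.0, 3.0])   # inputs consumed by three units
y = np.array([1.0, 1.0, 1.5])   # outputs produced
print(ccr_single(x, y))  # units 0 and 2 are efficient: [1.0, 0.5, 1.0]
```

With multiple inputs/outputs the multipliers must be optimised per unit, which is where the LP formulation (and the paper's robust variants) come in.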
This paper addresses the problem of estimating continuous boundaries between acceptable and unacceptable engineering design parameters in complex engineering applications. In particular, a procedure is proposed to reduce the computational cost of finding and representing the boundary. The proposed methodology combines a low-discrepancy sequence (Sobol) and a support vector machine (SVM) in an active learning procedure able to efficiently and accurately estimate the boundary surface. The paper describes the approach and the methodological choices resulting in the desired level of boundary surface refinement, and the new algorithm is applied both to two highly nonlinear test functions and to a real-world train stability design problem. It is expected that the new method will provide designers with a tool for the evaluation of the acceptability of designs, particularly for engineering systems whose behaviour can only be determined through complex simulations. (C) 2017 Elsevier Ltd. All rights reserved. This paper proposes a unified approach to creating investment strategies with various desirable properties for investors. In particular, we provide a new interpretation of, and the resulting formulations for, state space models to attain our investment objectives, which may be specified as generating additional returns over benchmark stock indexes or achieving target risk-adjusted returns. Our state space models with a particle filtering algorithm are employed to develop expert systems for investment strategies in highly complex financial markets. More concretely, in our state space framework, we apply a system model to represent portfolio weight processes with various constraints, as well as the standard underlying state variables such as volatility processes. Further, we formulate an observation model to represent target value processes as non-linear functions of observed and latent variables.
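The particle filtering step mentioned above is generic. A bootstrap filter for a toy random-walk state space model, a stand-in for the paper's richer system and observation models, with hypothetical noise levels, can be sketched as:

```python
import numpy as np

def particle_filter(ys, n_particles=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q^2), y_t = x_t + N(0, r^2)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)          # initial prior
    estimates = []
    for y in ys:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)            # likelihood weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)          # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

true_x = np.linspace(0.0, 2.0, 30)                  # slowly drifting latent state
obs_rng = np.random.default_rng(1)
ys = true_x + obs_rng.normal(0.0, 0.5, 30)          # noisy observations
est = particle_filter(ys)
```

In the paper's setting the latent state would include volatility and constrained portfolio weights rather than this single scalar.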
Numerical experiments demonstrate the effectiveness of our methodology by creating excess returns over the S&P 500 and generating investment portfolios with fine risk-return profiles. (C) 2017 Elsevier Ltd. All rights reserved. The paper proposes new advanced methods of image description and an ensemble of classifiers for the recognition of mammograms in breast cancer. Non-negative matrix factorization and many other advanced methods of image representation, not previously exploited in the field of mammogram recognition, are developed and evaluated in the role of diagnostic features. Final image recognition is done using an ensemble of classifiers. A new approach to the integration of the ensemble is proposed: it applies weighted majority voting with the weights determined from an optimization task defined on the basis of the area under the ROC curve. The results of numerical experiments performed on the large database "Digital Database for Screening Mammography", containing more than 10,000 mammograms, have confirmed superior accuracy in discriminating abnormal from normal cases. The presented results of class recognition exceed the best results reported for this database in current publications. (C) 2017 Elsevier Ltd. All rights reserved. Obstructive sleep apnea (OSA) is a very common but difficult-to-diagnose sleep disorder. Recurrent obstructions form in the airway during sleep, such that OSA can threaten patients' breathing capacity. Clinically, continuous positive airway pressure (CPAP) is the most specific and effective treatment. In addition, patients must be stratified according to OSA severity so that CPAP treatment can be applied accordingly. In this study, 30 OSA patients from two different databases were automatically classified using electrocardiogram (ECG) data as mild, moderate, or severe.
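The weighted majority voting used to integrate the mammogram ensemble above is a standard scheme. In the paper the weights come from an AUC-based optimization; the sketch below simply fixes hypothetical weights:

```python
import numpy as np

def weighted_vote(predictions, weights, n_classes=2):
    """Weighted majority vote. predictions: (n_classifiers, n_samples) labels."""
    n_samples = predictions.shape[1]
    scores = np.zeros((n_classes, n_samples))
    for p, w in zip(predictions, weights):
        scores[p, np.arange(n_samples)] += w     # each classifier adds its weight
    return scores.argmax(axis=0)                 # class with the largest total wins

# Three hypothetical classifiers voting on three samples.
preds = np.array([[0, 1, 1],
                  [1, 1, 0],
                  [0, 0, 1]])
print(weighted_vote(preds, [0.1, 0.6, 0.3]))  # [1, 1, 0]
```

With equal weights this reduces to plain majority voting; unequal weights let a strong classifier overrule two weak ones, as in the first sample above.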
One of the databases consisted of original recordings of 9 OSA patients with 8303 epochs, and the other was the Physionet benchmark database of 21 patients with 20,824 epochs. Fifteen morphological features could be identified when apnea occurred, both before and after it presented. Five data groups in total were prepared from the first and second datasets with these features, and 10-fold cross-validation was used to effectively determine the test data. Then, a sequential backward feature selection (SBFS) algorithm was applied to identify the most effective features. The prepared data groups were evaluated with artificial neural networks (ANN) to obtain optimum classification performance. All processes were repeated ten times, and the error deviation of the accuracy was calculated. Furthermore, different classifiers that are frequently used in the literature were tested with the selected features. The degree of OSA was estimated from three epochs in pre-apnea data, yielding success rates of 97.20 +/- 2.15% and 90.18 +/- 8.11% with the SBFS algorithm for the first and second datasets, respectively. The SVM classifier followed the ANN system, with success rates of 96.23 +/- 3.48% and 88.75 +/- 8.52% on the same datasets. (C) 2017 Elsevier Ltd. All rights reserved. Dimensionality Reduction (DR) is very useful and popular in many application areas of expert and intelligent systems, such as machine learning, finance, data and text mining, multimedia mining, image processing, anomaly detection, defense applications, bioinformatics and natural language processing. DR is widely applied for better data visualization and improved learning in all of the above fields. In this manuscript, we propose a novel DR approach, Noisy-free Length Discriminant Analysis (NLDA), built on Noisy-free Relevant Pattern Selection (NRPS). Traditional pattern selection methods discriminate boundary and non-boundary patterns with the help of class information and nearest neighbors.
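The sequential backward feature selection used in the sleep-apnea study above is easy to sketch generically: repeatedly drop the feature whose removal hurts least, keeping the best subset seen. Here the score function is a toy stand-in (hypothetical per-feature values) for what would really be cross-validated classifier accuracy on ECG features:

```python
def sbfs(features, score, min_size=1):
    """Greedy sequential backward selection over a list of feature names."""
    current = list(features)
    best_subset, best_score = list(current), score(current)
    while len(current) > min_size:
        candidates = [[f for f in current if f != g] for g in current]
        scores = [score(c) for c in candidates]
        i = max(range(len(scores)), key=scores.__getitem__)   # best removal
        current = candidates[i]
        if scores[i] >= best_score:
            best_score, best_subset = scores[i], list(current)
    return best_subset, best_score

# Toy score: features 'a' and 'c' help, the rest slightly hurt.
useful = {"a": 0.4, "c": 0.3}
score = lambda subset: sum(useful.get(f, -0.05) for f in subset)
print(sbfs(["a", "b", "c", "d"], score))  # (['a', 'c'], 0.7)
```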
These methods completely ignore noisy patterns, which may degrade the performance of subsequent subspace learning. To overcome this, we develop Noisy-free Relevant Pattern Selection (NRPS), in which data instances are partitioned into boundary, non-boundary and noisy patterns. With the help of noise-free boundary and non-boundary patterns, Noisy-free Length Discriminant Analysis (NLDA) is proposed by developing new within- and between-class scatters. These scatters model discriminations between the lengths (L-2-norms) of different class instances by considering only boundary and non-boundary patterns, while ignoring noisy patterns. A cosine hyperbolic framework has been developed to formulate the objective of NLDA. Moreover, NLDA can also model the discrimination of multimodal data, as different class data may consist of different lengths. Experimental studies were conducted on synthesized data and the UCI and Leeds Butterfly databases. Moreover, an experimental study on human-computer interaction, i.e., face recognition (one of the application areas of expert and intelligent systems), has been performed. These studies show that the proposed method produces a better-discriminated subspace than state-of-the-art methods. (C) 2017 Elsevier Ltd. All rights reserved. With the development of intelligent surveillance systems, human behavior recognition has been extensively researched. Most previous methods recognized human behavior based on spatial and temporal features from (current) input image sequences, without behavior prediction from previously recognized behaviors. Considering an example of behavior prediction, "punching" is more probable in the current frame when the previous behavior is "standing" than when the previous behavior is "lying down." Nevertheless, there has been little study regarding the combination of currently recognized behavior information with behavior prediction.
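The combination of behavior prediction with frame-level recognition, as in the "punching after standing" example above, can be sketched as a weighted blend of a transition prior and classifier scores. This is only an illustrative probabilistic stand-in for the paper's fuzzy system; the transition matrix and scores are hypothetical:

```python
import numpy as np

behaviors = ["standing", "lying_down", "punching"]
# Hypothetical transition priors P(current | previous); rows sum to 1.
transition = np.array([
    [0.6, 0.1, 0.3],   # from standing
    [0.3, 0.6, 0.1],   # from lying_down
    [0.5, 0.1, 0.4],   # from punching
])

def combine(prev_idx, classifier_scores, alpha=0.5):
    """Blend frame-level classifier scores with the behavior-prediction prior."""
    prior = transition[prev_idx]
    fused = alpha * np.asarray(classifier_scores) + (1 - alpha) * prior
    return behaviors[int(fused.argmax())]

# Ambiguous frame: "punching" wins after "standing" but not after "lying_down".
scores = [0.2, 0.2, 0.6]
print(combine(0, scores), combine(1, scores))
```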
Therefore, we propose a fuzzy-system-based behavior recognition technique that combines behavior prediction and recognition. To perform behavior recognition during daytime and nighttime, a dual camera system of visible light and thermal (far infrared light) cameras is used to capture 12 datasets including 11 different human behaviors in various surveillance environments. Experimental results on the collected datasets and an open database showed that the proposed method achieved higher behavior recognition accuracy than conventional methods. (C) 2017 Elsevier Ltd. All rights reserved. A current consensus in multi-label classification is to exploit label correlations for performance improvement. Many approaches build one classifier for each label based on the one-versus-all strategy and integrate the classifiers by enforcing a regularization term on the global weights to exploit label correlations. However, this strategy might be suboptimal, since it may be only part of the global weights that supports the assumption. This paper proposes clustered intrinsic label correlations for multi-label classification (CILC), which extends the traditional support vector machine to the multi-label setting. The predictive function of each classifier consists of two components: one component is the common information among all labels, and the other is a label-specific one which depends strongly on the corresponding label. The label-specific component, representing the intrinsic label correlations, is regularized by a clustered structure assumption. The appealing features of the proposed method are that it separates the common information and the label-specific information of the labels and utilizes clustered structures among labels represented by the label-specific parts. Practical multi-label classification problems, such as text categorization, image annotation and sentiment analysis, can be directly solved by the proposed CILC method.
Experiments across five data sets validate the effectiveness of CILC, compared with six well-established multi-label classification algorithms. (C) 2017 Elsevier Ltd. All rights reserved. Classification rules and rules describing interesting subgroups are important components of descriptive machine learning. Rule learning algorithms typically proceed in two phases: rule refinement selects conditions for specializing the rule, and rule selection selects the final rule among several rule candidates. While most conventional algorithms use the same heuristic for guiding both phases, recent research indicates that the use of two separate heuristics is conceptually better justified, improves the coverage of positive examples, and may result in better classification accuracy. The paper presents and evaluates two new beam search rule learning algorithms: DoubleBeam-SD for subgroup discovery and DoubleBeam-RL for classification rule learning. The algorithms use two separate beams and can combine various heuristics for rule refinement and rule selection, which widens the search space and allows for finding rules with improved quality. In the classification rule learning setting, the experimental results confirm previously shown benefits of using two separate heuristics for rule refinement and rule selection. In subgroup discovery, DoubleBeam-SD algorithm variants outperform several state-of-the-art related algorithms. (C) 2017 Elsevier Ltd. All rights reserved. Object detection and recognition are challenging computer vision tasks receiving great attention due to the large number of applications. This work focuses on the detection/recognition of products in supermarket shelves; this framework has a number of practical applications such as providing additional product/price information to the user or guiding visually impaired customers during shopping. 
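The key idea in the rule learning work above, using one heuristic to refine rules and a different one to select among candidates, can be sketched with a tiny beam search over toy attribute-value data. This is a schematic stand-in, not the DoubleBeam-SD/RL algorithms themselves:

```python
from itertools import product

# Toy data: positives are exactly the round, big objects (5 copies of each combo).
data = [
    {"shape": s, "size": z, "pos": (s == "round" and z == "big")}
    for s, z in product(["round", "square"], ["big", "small"])
    for _ in range(5)
]
conditions = [("shape", "round"), ("shape", "square"), ("size", "big"), ("size", "small")]

def covers(rule, ex):
    return all(ex[a] == v for a, v in rule)

def stats(rule):
    cov = [ex for ex in data if covers(rule, ex)]
    return sum(ex["pos"] for ex in cov), len(cov)

def precision(rule):                    # selection heuristic
    pos, n = stats(rule)
    return pos / n if n else 0.0

def weighted_accuracy(rule):            # refinement heuristic: favours coverage
    pos, n = stats(rule)
    return pos - 0.2 * (n - pos)

def beam_rule_search(width=2, depth=2):
    beam, best = [()], ()
    for _ in range(depth):
        refinements = [r + (c,) for r in beam for c in conditions if c not in r]
        beam = sorted(refinements, key=weighted_accuracy, reverse=True)[:width]
        best = max([best] + beam, key=precision)   # select with a different heuristic
    return best

print(beam_rule_search())
```

Using coverage-friendly weighted accuracy to refine but precision to select mirrors the two-heuristic idea: the beam stays broad while the returned rule is the purest candidate found.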
The automatic creation of planograms (i.e., the actual layout of products on shelves) is also useful for commercial analysis and the management of large stores. Although in many object detection/recognition contexts it can be assumed that training images are representative of the real operational conditions, in our scenario such an assumption is not realistic, because the only training images available are acquired under well-controlled conditions. This gap between the training and test data makes the object detection and recognition tasks far more complex and requires very robust techniques. In this paper we show that good results can be obtained by exploiting color and texture information in a multi-stage process: pre-selection, fine-selection and post-processing. For fine-selection we compared a classical Bag of Words technique with a more recent Deep Neural Networks approach and found interesting outcomes. Extensive experiments on datasets of varying complexity are discussed to highlight the main issues characterizing this problem, and to guide toward the practical development of a real application. (C) 2017 Elsevier Ltd. All rights reserved. In this paper we propose and validate a trading rule based on flag pattern recognition, incorporating important innovations with respect to previous research. Firstly, we propose a dynamic window scheme that allows the stop loss and take profit to be updated on a quarterly basis. In addition, since the flag pattern is a trend-following pattern, we have added the EMA indicator to filter trades. This technical analysis indicator is calculated for both 15-min and 1-day timeframes, which enables short and medium terms to be considered simultaneously. We also filter the flags according to the price range on which they are developed and have limited the maximum loss of each trade to 100 points.
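The EMA filter added to the trading rule above is standard: with smoothing factor 2/(n+1), each value is a weighted blend of the current price and the previous EMA. The prices below are hypothetical:

```python
def ema(prices, n):
    """Exponential moving average with smoothing factor k = 2 / (n + 1)."""
    k = 2 / (n + 1)
    out = [prices[0]]                      # seed with the first price
    for p in prices[1:]:
        out.append(k * p + (1 - k) * out[-1])
    return out

closes = [100, 101, 103, 102, 105, 107]   # hypothetical 15-min closes
print([round(v, 2) for v in ema(closes, 3)])
# [100, 100.5, 101.75, 101.88, 103.44, 105.22]
```

In the trading rule this would be computed on both the 15-min and 1-day series, and a flag trade is taken only when it agrees with the trend.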
The proposed methodology was applied to 91,309 intraday observations of the DJIA index, considerably improving the results obtained by previous proposals and by the buy & hold strategy, in terms of both profitability and risk, and also after taking transaction costs into account. In line with other similar studies, these results seem to challenge market efficiency, although the analysis is specific to the DJIA index and limited to the setup considered. (C) 2017 Elsevier Ltd. All rights reserved. Most of the existing methods for solving fully fuzzy mathematical programs are based on the standard fuzzy arithmetic operations and/or Zadeh's extension principle. These methods may produce questionable results for many real-life applications. For this reason, this paper presents a novel method based on the constrained fuzzy arithmetic concept to solve fully fuzzy balanced/unbalanced transportation problems in which all of the parameters (source capacities, demands of destinations, transportation costs, etc.) as well as the decision variables (transportation quantities) are considered fuzzy numbers. In the proposed method, the requisite crisp and/or fuzzy constraints between the base variables of the fuzzy components are provided by the decision maker according to his/her exact or vague judgments. Thereafter, fuzzy arithmetic operations are performed under these requisite constraints, taking the additional information into account while transforming the fuzzy transportation model into its crisp equivalent form. Therefore, various fuzzy efficient solutions can be generated with the proposed method according to the decision maker's risk attitude. In order to demonstrate the efficiency and applicability of the proposed method, different types of fully fuzzy transportation problems are generated and solved as illustrative examples. A detailed comparative study is also performed with other methods available in the literature.
The computational analysis has shown that relatively more precise solutions are obtained from the proposed method for "risk-averse" and "partially risk-averse" decision makers. The proposed method also successfully provided acceptable fuzzy solutions with a high degree of uncertainty for "risk seekers", similar to the other existing methods in the literature. (C) 2017 Elsevier Ltd. All rights reserved. Non-Hodgkin lymphoma is the most common cancer of the lymphatic system and should be considered a group of several closely related cancers, which can show differences in their growth patterns, their impact on the body and how they are treated. The diagnosis of the different types of neoplasia is made by a specialist through the analysis of histological images. However, these analyses are complex, and the same case can lead to different interpretations among pathologists, due to the exhaustive analysis of decisions required, the time involved and the presence of complex histological features. In this context, computational algorithms can be applied as tools to aid specialists through the application of segmentation methods that identify regions of interest essential for lymphoma diagnosis. In this paper, an unsupervised method for segmentation of the nuclear components of neoplastic cells is proposed to analyze histological images of lymphoma stained with hematoxylin-eosin. The proposed method is based on the association of histogram equalization, a Gaussian filter, fuzzy 3-partition entropy, a genetic algorithm, morphological techniques and the valley-emphasis method in order to analyze neoplastic nuclear components, improve contrast and illumination conditions, remove noise, split overlapping cells and refine contours. The results were evaluated through comparisons with those provided by a specialist and with techniques available in the literature, considering the metrics of accuracy, sensitivity, specificity and variation of information.
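The evaluation metrics named above (accuracy, sensitivity, specificity) reduce to confusion-matrix counts over the binary segmentation masks. A minimal sketch on tiny hypothetical masks:

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Pixel-wise accuracy, sensitivity and specificity for binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = (pred & truth).sum()      # nucleus pixels correctly found
    tn = (~pred & ~truth).sum()    # background correctly rejected
    fp = (pred & ~truth).sum()
    fn = (~pred & truth).sum()
    return {
        "accuracy": (tp + tn) / pred.size,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

truth = [[0, 1, 1], [0, 1, 0], [0, 0, 0]]   # specialist ground truth
pred  = [[0, 1, 0], [0, 1, 1], [0, 0, 0]]   # method output
print(segmentation_scores(pred, truth))
```

The reported pattern (accuracy around 81% with sensitivity of 41-51%) is what this decomposition makes visible: strong background rejection can coexist with many missed nucleus pixels.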
The mean accuracy of the proposed method was 81.48%. Although the method obtained sensitivity rates between 41% and 51%, the accuracy values were relevant when compared to those reported in other studies. Therefore, the novelties presented here may encourage new studies with a more comprehensive overview of lymphoma segmentation. (C) 2017 Elsevier Ltd. All rights reserved. A wind power system has diverse operating characteristics, as its operation depends on many factors such as wind power, machinery ageing and breakdowns. Knowledge of the operating behavior of the wind power system is helpful for monitoring its status and for isolating harmful elements when malfunctions occur. To investigate the operating status and behavior of the system, the fuzzy clustering method is introduced to classify the system's operating points. Relative distance indices of the cluster centers are defined to describe the operating behavior. With these, the location and operating behavior of an operating point are identified in relation to the cluster centers. (C) 2017 Elsevier Ltd. All rights reserved. Fingerprint indexing plays a key role in automatic fingerprint identification systems (AFISs), allowing the search in large databases to be sped up without losing accuracy. In this paper, we propose a fingerprint indexing algorithm based on novel features of minutiae triplets to improve the performance of fingerprint indexing. The minutiae-triplet-based feature vectors, which are generated from ellipse properties and their relation with the triangles formed by the proposed expanded Delaunay triangulation, are used to generate indices, and a recovery method based on the k-means clustering algorithm is employed for fast and accurate retrieval. The proposed expanded Delaunay triangulation algorithm is based on the quality of fingerprint images and combines two robust Delaunay triangulation algorithms.
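The k-means retrieval step above uses the standard Lloyd iteration; here is a plain version (not the paper's improved variant) on synthetic two-cluster data standing in for index vectors:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means with a deterministic spread-out initialisation."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(axis=1)                                  # assign step
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)           # update step
    return centers, labels

# Two well-separated synthetic blobs standing in for minutiae-triplet index vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centers, labels = kmeans(X, 2)
```

At query time only the cluster whose center is nearest to the query vector needs to be searched, which is the source of the indexing speed-up.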
This paper also employs an improved k-means clustering algorithm which can be applied over large databases without reducing accuracy. Finally, a candidate list reduction criterion is employed to reduce the candidate list and to generate the final candidate list for the matching stage. Experimental results on several fingerprint verification competition (FVC) and National Institute of Standards and Technology (NIST) databases show the superiority of the proposed approach in comparison with state-of-the-art indexing algorithms. Our indexing proposal is very promising for improving real-time AFIS efficiency and accuracy in the near future. (C) 2017 Elsevier Ltd. All rights reserved. Feature selection is important for speeding up the process of Automatic Text Document Classification (ATDC). At present, the most common method for discriminating feature selection is based on a Global Filter-based Feature Selection Scheme (GFSS). The GFSS assigns a score to each feature based on its discriminating power and selects the top-N features from the feature set, where N is an empirically determined number. As a result, the features of a few classes may be discarded either partially or completely. The Improved Global Feature Selection Scheme (IGFSS) solves this issue by selecting an equal number of representative features from all the classes. However, it suffers in dealing with unbalanced datasets having a large number of classes, where the distribution of features across classes is highly variable. In this case, if an equal number of features is chosen from each class, some important features from classes containing a higher number of features may be excluded. To overcome this problem, we propose a novel Variable Global Feature Selection Scheme (VGFSS) that selects a variable number of features from each class based on the distribution of terms in the classes. It ensures that a minimum number of terms is selected from each class.
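The core of the variable allocation above, giving each class a feature quota proportional to its share of candidate terms while guaranteeing a per-class minimum, can be sketched as follows. This is a schematic of the idea, not the published VGFSS formula:

```python
import numpy as np

def variable_feature_budget(class_term_counts, total, minimum=1):
    """Split a feature budget across classes in proportion to how many candidate
    terms each class has, guaranteeing a per-class minimum (note: when the
    minimum binds for tiny classes, the grand total may slightly exceed `total`)."""
    counts = np.asarray(class_term_counts, dtype=float)
    share = counts / counts.sum()
    budget = np.maximum(minimum, np.floor(share * total)).astype(int)
    order = np.argsort(-counts)            # hand any remainder to the largest classes
    i = 0
    while budget.sum() < total:
        budget[order[i % len(order)]] += 1
        i += 1
    return budget

# Hypothetical: 4 classes with very unbalanced vocabularies, budget of 100 terms.
print(variable_feature_budget([800, 150, 40, 10], total=100))  # [80 15  4  1]
```

An equal split (25 per class) would starve the 800-term class and pad the 10-term class, which is exactly the IGFSS weakness the variable scheme addresses.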
The numerical results on benchmark datasets show the effectiveness of the proposed VGFSS algorithm over classical information science methods and IGFSS. (C) 2017 Elsevier Ltd. All rights reserved. Social media data can be valuable in many ways. However, the vast amount of content shared and the linguistic variants of the languages used on social media make it very challenging to identify high-value topics. In this paper, we present an unsupervised multilingual approach for identifying highly relevant terms and topics from the mass of social media data. This approach combines term ranking, localised language analysis, unsupervised topic clustering and multilingual sentiment analysis to extract prominent topics through analysis of Twitter's tweets over a period of time. It is observed that each of the ranking methods tested has its strengths and weaknesses, and that our proposed 'joint' ranking method is able to take advantage of the strengths of the individual ranking methods. This 'joint' ranking method, coupled with an unsupervised topic clustering model, is shown to have the potential to discover topics of interest or concern to a local community. Practically, being able to do so may help decision makers to gauge the true opinions or concerns on the ground. Theoretically, the research is significant as it shows how an unsupervised online topic identification approach can be designed without much manual annotation effort, which may have great implications for the future development of expert and intelligent systems. (C) 2017 Elsevier Ltd. All rights reserved. Smart services, one of the most intriguing areas of current Internet of Things (IoT) research, require improvement in terms of recognizing user activities. Sound is a useful medium for making decisions based on activity recognition in the smart home environment, which includes mobile devices such as sensors and actuators.
Instead of visual sensors to recognize human activity, acoustic sensor data is acquired in an unobtrusive manner for greater privacy. However, multiuser activity poses a formidable challenge for acoustic data-based activity recognition systems because of the difficulty of identifying multiple sources of activity from among a variety of sounds. In our study, we propose a statistical method to detect the interval of interference, also known as the unexpected mesa, distinguishing activities based on the pre- and post-mesa intervals. The results suggest that the proposed method outperforms previously presented classification algorithms in terms of the accuracy of multiuser activity recognition. Future studies may utilize this method for the improvement of existing smart home systems. (C) 2017 Published by Elsevier Ltd. Ant colony optimization (ACO) for continuous functions has been widely applied in recent years in different areas of expert and intelligent systems, such as steganography in medical systems, modelling signal strength distribution in communication systems, and water resources management systems. For the problems addressed previously, the optimal solutions were known a priori and contained in the pre-specified initial domains. However, for practical problems in expert and intelligent systems, the optimal solutions are often not known beforehand. In this paper, we propose a robust ant colony optimization for continuous functions (RACO), which is robust to the domains of variables. RACO applies self-adaptive approaches in terms of domain adjustment, pheromone increment, domain division, and ant size without any major conceptual change to ACO's framework. These new characteristics allow the ants' search to extend beyond the given initial domain, even to a completely different domain. In the case of initial domains without the optimal solution, RACO can still obtain the correct result no matter how the initial domains vary.
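A minimal sketch of the domain-adjustment idea (not the actual RACO algorithm, which steers a pheromone-guided ant colony): widen the search interval whenever the incumbent solution sits near a boundary, so an ill-chosen initial domain cannot trap the search. The 5% edge threshold and uniform random sampling are assumptions for illustration:

```python
import random

def adaptive_domain_search(f, lo, hi, iters=2000, seed=0):
    """Minimise f over an interval that is enlarged whenever the best
    point found so far sits near a boundary, so the search is not
    limited to a poorly chosen initial domain."""
    rng = random.Random(seed)
    best_x, best_v = None, float("inf")
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
        # domain adjustment: widen the interval if the incumbent
        # lies within 5% of either edge
        width = hi - lo
        if best_x < lo + 0.05 * width:
            lo -= width
        elif best_x > hi - 0.05 * width:
            hi += width
    return best_x, best_v
```

Starting from the domain [0, 1] with the minimiser at x = 10, the interval repeatedly doubles toward the optimum until it contains it.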
In the case of initial domains with the optimal solution, we also show that RACO is a competitive algorithm. With the assistance of RACO, there is no need to estimate proper initial domains for practical continuous optimization problems in expert and intelligent systems. (C) 2017 Elsevier Ltd. All rights reserved. Recommender systems are among the most visible applications of intelligent systems technology in practice and are used to help users find items of interest, for example on e-commerce sites, in a personalized way. While past research has focused mainly on accurately predicting the relevance of items that are unknown to the user, other quality criteria for recommendations have been investigated in recent years, including diversity, novelty, or serendipity. Considering these additional factors, however, often leads to the following two challenges. First, in many application domains, trade-offs like "diversity vs. accuracy" have to be balanced. Second, it is not always clear how much diversity or novelty is desirable in practice. In this work, we propose a novel parameterizable optimization scheme that re-ranks accuracy-optimized recommendation lists in order to cope with these challenges. Our method is both capable of considering multiple optimization goals at the same time and designed to consider individual user tendencies regarding the different quality factors, like diversity. In contrast to previous work, the method is not restricted to a specific underlying item ranking algorithm, and its generic design allows the algorithm to be parameterized according to the requirements of the application domain. Experimental evaluations with different datasets show that balancing the quality factors with our method can be done with marginal or no loss in ranking accuracy.
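A generic way to re-rank an accuracy-ordered list against a second quality factor is a greedy, MMR-style trade-off. The sketch below is such a baseline with a user-specific weight `lam`, not the paper's own parameterizable optimization scheme:

```python
def rerank(candidates, relevance, distance, lam, k):
    """Greedily re-rank an accuracy-ordered candidate list, trading
    predicted relevance against diversity (average distance to the
    items already picked).  lam = 1.0 reproduces the accuracy ranking;
    smaller values favour diversity."""
    picked = []
    pool = list(candidates)
    while pool and len(picked) < k:
        def score(item):
            if not picked:
                return relevance[item]
            div = sum(distance(item, p) for p in picked) / len(picked)
            return lam * relevance[item] + (1 - lam) * div
        best = max(pool, key=score)
        picked.append(best)
        pool.remove(best)
    return picked
```

Because `lam` is a plain parameter, it can be set per user to reflect individual tendencies toward diverse or accuracy-focused lists.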
Given that our method can be applied in various domains and within the narrow time constraints of online recommendation, our work opens new opportunities to design novel finer-grained personalization approaches in practical applications. (C) 2017 Elsevier Ltd. All rights reserved. For high-value assets such as certain types of plant equipment, the total amount of resources devoted to Operation and Maintenance may substantially exceed the resources expended in acquisition and installation of the asset, because high-value assets have long useful lifetimes. Any asset failure during this useful lifetime risks large losses in income and goodwill, and decreased safety. With the continual development of information, communication, and sensor technologies, Condition-Based Maintenance (CBM) policies have gained popularity in industry. A successfully implemented CBM program reduces the losses due to equipment failure by intelligently maintaining the equipment before catastrophic failures occur. However, effective CBM requires an effective fault analysis method based on gathered sensor data. In this vein, this paper proposes a Bayesian network-based fault analysis method, from which novel fault identification, inference, and sensitivity analysis methods are developed. As a case study, the fault analysis method was applied to a centrifugal compressor in a plant. (C) 2017 Elsevier Ltd. All rights reserved. This article examines the potential benefits of solving a stochastic DEA model over solving a deterministic DEA model. It demonstrates that wrong decisions could be made whenever a potentially stochastic DEA problem is solved with stochastic information that is either unobserved or limited to a measure of central tendency.
We propose two linear models: a semi-stochastic model where the inputs of the DMU of interest are treated as random while the inputs of the other DMUs are frozen at their expected values, and a stochastic model where the inputs of all of the DMUs are treated as random. These two models can be used with any empirical distribution in a Monte Carlo sampling approach. We also define the value of the stochastic efficiency (or semi-stochastic efficiency) and the expected value of the efficiency. (C) 2017 Elsevier Ltd. All rights reserved. We consider two NP-hard open dimension nesting problems in which a set of items has to be packed without overlapping into a two-dimensional bin in order to minimize one or both dimensions of this bin. These problems arise in real-life applications such as the textile, footwear and automotive industries. Therefore, there is a need for specialized systems to help in the decision making process. Bearing this in mind, we derive new concepts such as the no-fit raster, which can be used to check overlap between any two generic-shaped two-dimensional items. We also use a biased random key genetic algorithm to determine the sequence in which items are packed. Once the sequence of items is determined, we propose two heuristics based on bottom-left moves and the no-fit raster concept, which are in turn used to arrange these items in the given bin observing the objective criteria. As far as we know, the problem with two open dimensions is solved here for the first time in the context of nesting problems, and we present the first complete quadratic model for this problem. Computational experiments conducted on benchmark instances from the literature (some from the textile industry and others including circles, convex, and non-convex polygons) show the competitiveness of the approaches developed, as they were able to obtain the best results for 74.14% of the instances.
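The no-fit raster idea of checking overlap on a discretized grid can be illustrated as follows; representing items as sets of occupied cells is an assumption made for brevity:

```python
def overlaps(raster_a, raster_b, dx, dy):
    """Check whether item B, shifted by (dx, dy), overlaps item A.

    Each item is a set of occupied (x, y) raster cells; overlap means
    the shifted copies share at least one cell.
    """
    return any((x + dx, y + dy) in raster_a for (x, y) in raster_b)

def no_fit_raster(raster_a, raster_b, width, height):
    """Enumerate every placement (dx, dy) of B inside a width x height
    grid that would collide with A -- a raster analogue of the
    no-fit polygon."""
    return {(dx, dy)
            for dx in range(width) for dy in range(height)
            if overlaps(raster_a, raster_b, dx, dy)}
```

A bottom-left placement heuristic can then slide an item to the lowest, leftmost position not contained in this forbidden set.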
It can be observed that these results point to new directions for solving nesting problems, whereby the approaches can be coupled with existing intelligent systems to support decision makers in this field. (C) 2017 Elsevier Ltd. All rights reserved. The use of robots in our daily lives is increasing. As we rely more on robots, it becomes more important that they carry out their missions successfully. Unfortunately, these sophisticated, and sometimes very expensive, machines are susceptible to different kinds of faults. It is therefore important to apply a Fault Detection (FD) mechanism suitable for the domain of robots. Two important requirements of such a mechanism are high accuracy and a low computational load during operation (online). Supervised learning can potentially produce very accurate FD models, and if the learning takes place offline then the online computational load can be reduced. Yet, the domain of robots is characterized by the absence of the labeled data (e.g., "faulty", "normal") required by supervised approaches, and consequently, unsupervised approaches are being used. In this paper we propose a hybrid approach: an unsupervised approach labels a data set with a low degree of inaccuracy, and the labeled data set is then used offline by a supervised approach to produce an online FD model. Now we are faced with a choice: should we use the unsupervised or the hybrid fault detector? Seemingly, there is no way to validate the choice due to the absence of (a priori) labeled data. In this paper we give an insight into why, and a tool to predict when, the hybrid approach is more accurate. In particular, the main impacts of our work are: (1) We theoretically analyze the conditions under which the hybrid approach is expected to be more accurate. (2) Our theoretical findings are backed with empirical analysis.
We use data sets from three different robotic domains: a high-fidelity flight simulator, a laboratory robot, and a commercial Unmanned Aerial Vehicle (UAV). (3) We analyze how different unsupervised FD approaches are improved by the hybrid technique and (4) how well this improvement fits our prediction tool. The significance of the hybrid approach and the prediction tool is the potential benefit to expert and intelligent systems in which labeled data is absent or expensive to create. (C) 2017 Elsevier Ltd. All rights reserved. Synthetic Aperture Radar (SAR) is the main instrument used to support oil detection systems. In the microwave spectrum, oil slicks are identified as dark spots, regions with low backscatter at the sea surface. Automatic and semi-automatic systems have been developed to minimize processing time, the occurrence of false alarms, and the subjectivity of human interpretation. This study presents an intelligent hybrid system, which integrates automatic and semi-automatic procedures to detect dark spots, in six steps: (I) SAR pre-processing; (II) Image segmentation; (III) Feature extraction and selection; (IV) Automatic clustering analysis; (V) Decision rules; and, if needed, (VI) Semi-automatic processing. The results showed that feature selection is essential to improve the detection capability, keeping only five pattern features to automate the clustering procedure. The semi-automatic method returned more accurate geometries. The automatic approach erred more by including regions, increasing the dark-spot area, while the semi-automatic method erred more by excluding regions. For well-defined and contrasted dark spots, the performance of the automatic and semi-automatic methods is equivalent. However, the fully automatic method did not provide acceptable geometries in all cases.
For these cases, the intelligent hybrid system was validated, integrating the semi-automatic approach and using compact and simple decision rules to request human intervention when needed. This approach combines the benefits of each method, ensuring the quality of the classification when fully automatic procedures are not satisfactory. (C) 2017 Elsevier Ltd. All rights reserved. Many information systems record executed process instances in the event log, a very rich source of information for several process management tasks, like process mining and trace comparison. In this paper, we present a framework able to convert activities in the event log into higher-level concepts, at different levels of abstraction, on the basis of domain knowledge. Our abstraction mechanism manages non-trivial situations, such as interleaved activities or delays between two activities that abstract to the same concept. Abstracted traces can then be provided as input to an intelligent system, meant to implement a variety of process management tasks, significantly enhancing the quality and the usefulness of its output. In particular, in the paper we demonstrate how trace abstraction can impact the quality of process discovery, showing that it is possible to obtain more readable and understandable process models. We also show, through our experimental results, the impact of our approach on the capability of trace comparison and clustering (realized by means of a metric able to take into account abstraction phase penalties) to highlight (in)correct behaviors, abstracting from details. (C) 2017 Elsevier Ltd. All rights reserved. The sustainability of a business often relies on the efficiency of its internal and external operations as well as on overall customer satisfaction. Evaluating overall operational performance using both qualitative and quantitative information is an essential first step towards a sustainable and reliable business environment.
With this motivation, this study proposes a hybrid approach combining Fuzzy AHP, DEA and TOPSIS methodologies for retail performance evaluation. A food industry case study is considered to illustrate its implementation. (C) 2017 Elsevier Ltd. All rights reserved. While music improvisation is an NP-hard problem, it has always been handled successfully by musicians. Harmony search is a class of meta-heuristics inspired by the improvisation process of musicians. Inexperienced musicians usually make several harmonies until they find the desired one. Experienced musicians, however, rely more on their knowledge and experience than on brute-force searching for a desired harmony. When making a harmony, they are able to distinguish the undesired notes of the current harmony and modify just those, instead of throwing away the entire harmony and making a new one. This approach of experienced musicians is adopted in this paper to allow the harmony search algorithm to exploit the knowledge and experience accumulated in the harmony memory to refine current harmonies. The resulting algorithm is called Selective Refining Harmony Search, in which a new harmony memory update is utilized. The main differences between the proposed method and the original harmony search are the integration of selection into the improvisation step and the introduction of a refinement concept. The improvements provided by the proposed method originate from its superior ability to imitate the behavior of musicians, in the sense that decision variables, rather than total harmonies, are considered during the memory update process. This modification yields outstanding performance in minimizing the objective functions of the underlying problems, while its computational load is also lower than that of other methods in most cases. In the refinement procedure, two new parameters are employed to make a trade-off between the effectiveness and efficiency of the algorithm.
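The core refinement idea, re-sampling individual decision variables of a harmony and keeping only improving changes rather than discarding the whole harmony, can be sketched as follows (a simplification: the actual method selects which variables to refine via the harmony memory rather than sweeping over all of them):

```python
import random

def refine_harmony(f, harmony, domains, trials, rng):
    """Refine a harmony one decision variable at a time: re-sample each
    variable while keeping the others fixed, and keep any change that
    improves the objective, instead of discarding the whole harmony."""
    current = list(harmony)
    best_v = f(current)
    for i, (lo, hi) in enumerate(domains):
        for _ in range(trials):
            candidate = list(current)
            candidate[i] = rng.uniform(lo, hi)
            v = f(candidate)
            if v < best_v:
                current, best_v = candidate, v
    return current, best_v
```

On a simple sphere function this per-variable refinement drives the objective far below the starting harmony's value without ever rebuilding the harmony from scratch.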
During the experiments, the proposed method exhibited robust performance with respect to its two new parameters. Several algorithms, including the original harmony search and its state-of-the-art variants, were implemented to conduct comprehensive comparisons. All of the algorithms were evaluated on the IEEE CEC 2010 suite, one of the well-known and challenging benchmark test sets. Several studies have shown that harmony search is being utilized in diverse application fields. Thus, the improvements to the harmony search algorithm provided by the proposed method make it more efficient for those fields and allow it to be employed for more challenging problems. (C) 2017 Elsevier Ltd. All rights reserved. Conceptual design plays an important role in the development of new products and the redesign of existing products. The morphological matrix is a popular tool for conceptual design. Although morphological-matrix-based conceptual design approaches are effective for generating conceptual schemes, quantitative evaluation of each function solution principle is seldom considered, making it difficult to identify the optimal conceptual design by combining these function solution principles. In addition, the uncertainties due to the subjective evaluations from engineers and customers in the early design stage are not considered in these morphological-matrix-based conceptual design approaches. To solve these problems, a systematic decision making approach is developed in this research for product conceptual design based on a fuzzy morphological matrix, to quantitatively evaluate function solution principles using the knowledge and preferences of engineers and customers with subjective uncertainties. In this research, the morphological matrix is quantified by associating the properties of function solution principles with information on customer preferences and product failures.
Customer preferences for different function solution principles are obtained from multiple customers using fuzzy pairwise comparison (FPC). The fuzzy customer preference degree of each solution principle is then calculated by the fuzzy logarithmic least squares method (FLLSM). In addition, the product failure data are used to improve product reliability through fuzzy failure mode and effects analysis (FMEA). Unlike traditional FMEA, the causality relationships among failure modes of solution principles are analyzed to use failure information more effectively by constructing a directed failure causality relationship diagram (DFCRD). A fuzzy multi-objective optimization model is also developed to solve the conceptual design problem. The effectiveness of this new approach is demonstrated in a real-world application to the conceptual design of a horizontal directional drilling machine (HDDM). (C) 2017 Elsevier Ltd. All rights reserved. Sentiment analysis is a text mining task that determines the polarity of a given text, i.e., its positiveness or negativeness. Recently, it has received a lot of attention given the interest in opinion mining in micro-blogging platforms. These new forms of textual expression present new challenges for text analysis because of the use of slang, orthographic and grammatical errors, among others. Along with these challenges, a practical sentiment classifier should be able to handle large workloads efficiently. The aim of this research is to identify, in a large set of combinations, which text transformations (lemmatization, stemming, entity removal, among others), tokenizers (e.g., word n-grams), and token-weighting schemes have the most impact on the accuracy of a classifier (Support Vector Machine) trained on two Spanish datasets. The methodology used is to exhaustively analyze all combinations of text transformations and their respective parameters to find out what common characteristics the best-performing classifiers have.
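Combining word-based n-grams with character-based q-grams, one of the tokenizer choices explored here, can be sketched in plain Python; the lowercasing and origin-tagging details are illustrative assumptions:

```python
def word_ngrams(text, n):
    """Contiguous word n-grams over a whitespace tokenisation."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def char_qgrams(text, q):
    """Contiguous character q-grams over the raw string."""
    s = text.lower()
    return [s[i:i + q] for i in range(len(s) - q + 1)]

def combined_features(text, n=2, q=3):
    """Join word-based n-grams and character-based q-grams into one
    feature bag, tagging each feature with its origin so the two
    vocabularies cannot collide."""
    feats = [("w", g) for g in word_ngrams(text, n)]
    feats += [("c", g) for g in char_qgrams(text, q)]
    return feats
```

Character q-grams are robust to the slang and spelling errors typical of tweets, which is one intuition behind mixing the two feature families before training a classifier.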
Furthermore, we introduce a novel approach based on the combination of word-based n-grams and character-based q-grams. The results show that this novel combination of words and characters produces a classifier that outperforms the traditional word-based combination by 11.17% and 5.62% on the INEGI and TASS'15 datasets, respectively. (C) 2017 Elsevier Ltd. All rights reserved. This technical note presents a new iterative procedure for solving systems of m linear equations in n variables under a sufficient condition that is practical. We show how this procedure may utilize elementary row operations to meet its sufficient condition. In this iterative procedure, the approximate solution obtained in each iteration is a convex combination of norm projections of the previous approximate solution. Under a regularity condition, this procedure converges quadratically. Application examples are given that show how this procedure can generate desired non-basic solutions and how it can aid the Fourier-Motzkin elimination method in solving linear programming problems. (C) 2017 Elsevier B.V. All rights reserved. Hyers-Ulam stability has played an important role not only in the theory of functional equations but also in a variety of branches of mathematics, such as differential equations, integral equations and linear operators. In the present paper we discuss the Hyers-Ulam stability of the iterative equation with a general boundary restriction. By constructing a uniformly convergent sequence of functions, we prove that for every approximate solution of such an equation, there exists an exact solution near it. (C) 2017 Elsevier B.V. All rights reserved. In this note we consider the question of equivalence of pseudospectra and structured pseudospectra of block matrices. The structures we study are all so-called double structures; that is, the blocks of the given matrix are of the same structure as the block matrix.
The approach is based on that for non-block matrices, which are also briefly studied, and on the use of the distance to singularity. We also list some open problems and conjectures. (C) 2017 The Author(s). Published by Elsevier B.V. Here we develop an option pricing method based on a Legendre series expansion of the density function. The key insight, relying on the close relation of the characteristic function to the series coefficients, allows one to recover the density function rapidly and accurately. Based on this representation of the density function, approximation formulas for pricing European-type options are derived. To obtain highly accurate results for European call options, the implementation involves integrating high-degree Legendre polynomials against the exponential function. Some numerical instabilities arise because of serious subtractive cancellations in formulation (96) in Proposition A.1. To overcome this difficulty, we rewrite this quantity as the solution of a second-order linear difference equation and solve it using a robust and stable algorithm from Olver. The derivation of the pricing method is accompanied by an error analysis. Error bounds have been derived, and the analysis relies on smoothness properties that are provided not by the payoff functions but by the density function of the underlying stochastic models. This is particularly relevant for option pricing, where the payoffs of the contract are generally not smooth functions. The numerical experiments on a class of models widely used in quantitative finance show exponential convergence. (C) 2017 Elsevier B.V. All rights reserved. We consider the solution of large linear systems of equations that arise from the discretization of ill-posed problems. The matrix has a Kronecker product structure and the right-hand side is contaminated by measurement error.
Problems of this kind arise, for instance, from the discretization of Fredholm integral equations of the first kind in two space dimensions with a separable kernel, and in image restoration problems. Regularization methods, such as Tikhonov regularization, have to be employed to reduce the propagation of the error in the right-hand side into the computed solution. We investigate the use of the global Golub-Kahan bidiagonalization method to reduce the given large problem to a small one. The small problem is solved by employing Tikhonov regularization. A regularization parameter determines the amount of regularization. The connection between global Golub-Kahan bidiagonalization and Gauss-type quadrature rules is exploited to inexpensively compute bounds that are useful for determining the regularization parameter by the discrepancy principle. (C) 2017 Elsevier B.V. All rights reserved. We provide explicit quadrature rules for spaces of C^1 quintic splines with uniform knot sequences over finite domains. The quadrature nodes and weights are derived via an explicit recursion that avoids numerical solvers. Each rule is optimal, that is, requires the minimal number of nodes for a given function space. For each of the n subintervals, generically, only two nodes are required, which reduces the evaluation cost by 2/3 compared to classical Gaussian quadrature for polynomials over each knot span. Numerical experiments show fast convergence, as n grows, to the "two-third" quadrature rule of Hughes et al. (2010) for infinite domains. (C) 2017 Elsevier B.V. All rights reserved. A sign regular matrix is almost strictly sign regular if all its nontrivial minors of the same order have the same strict sign. These matrices form a subclass of sign regular matrices (matrices whose minors of the same order have the same sign).
In this paper, the backward stability of Neville elimination with a two-determinant pivoting strategy applied to almost strictly sign regular matrices is studied. In addition, several numerical experiments are also presented. (C) 2017 Elsevier B.V. All rights reserved. This article presents a method for approximating an arc by a polygon. The method imposes the condition that the approximating polygon encloses the same surface area as the circular sector. The method described in this article thus ensures that the polygons obtained by approximation have the same area as the original geometric figure. (C) 2017 Elsevier B.V. All rights reserved. Previous quartic Bezier approximations of conic sections interpolate the conic section at specified parameter values. In this paper, by solving a minimax problem, we present the optimal parameter values for approximating conic sections by quartic Bezier curves. The upper bound on the Hausdorff distance between the conic section and the approximating curve is minimized. The minimax problem is reduced, by delicate reasoning, to solving a quartic algebraic equation. (C) 2017 Elsevier B.V. All rights reserved. In the current paper, an efficient numerical method based on a two-dimensional hybrid of block-pulse functions and Legendre polynomials is developed to approximate the solutions of two-dimensional nonlinear Fredholm, Volterra and Volterra-Fredholm integral equations of the second kind. The main idea of the presented method is to exploit important benefits of the hybrid functions, such as high accuracy, wide applicability and the adjustability of the orders of the block-pulse functions and Legendre polynomials, to achieve highly accurate numerical solutions. By using numerical integration and the collocation method, the two-dimensional nonlinear integral equations are reduced to a system of nonlinear algebraic equations.
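The reduction step can be illustrated in one dimension: discretizing a linear Fredholm equation of the second kind, u(x) = f(x) + integral over [0,1] of K(x,t)u(t) dt, with trapezoidal weights turns it into an algebraic system, solvable here by successive substitution. The kernel K(x,t) = xt and right-hand side f(x) = x are chosen so the exact solution u(x) = 3x/2 is known; the paper's two-dimensional hybrid-function collocation is considerably more elaborate:

```python
def solve_fredholm(f, K, m=101, sweeps=200):
    """Solve u(x) = f(x) + integral_0^1 K(x,t) u(t) dt on [0,1] by
    trapezoidal discretisation and fixed-point (successive
    substitution) iteration -- a 1-D illustration of reducing an
    integral equation to an algebraic system."""
    h = 1.0 / (m - 1)
    xs = [i * h for i in range(m)]
    w = [h] * m
    w[0] = w[-1] = h / 2            # trapezoid weights
    u = [f(x) for x in xs]          # initial guess u = f
    for _ in range(sweeps):
        u = [f(x) + sum(w[j] * K(x, xs[j]) * u[j] for j in range(m))
             for x in xs]
    return xs, u
```

Successive substitution converges here because the kernel's integral operator is a contraction (sup over x of the integral of |K(x,t)| dt equals 1/2).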
The focus of this paper is to obtain an error estimate and to show the convergence analysis for the numerical approach in the L2-norm. Numerical results are presented and compared with the results from other existing methods to illustrate the efficiency and accuracy of the proposed method. (C) 2017 Elsevier B.V. All rights reserved. Sparse recovery from indirectly under-sampled or possibly noisy data is a burgeoning topic drawing the attention of many researchers. Since sparse recovery problems can be cast as a class of constrained convex optimization models that minimize a nonsmooth convex objective function over a closed convex set, fast and efficient methods for solving these constrained optimization models are highly needed. By introducing indicator functions for the constraint sets in sparse recovery models, we reformulate these models as two general unconstrained optimization problems. To develop fast first-order methods, two smoothing approaches are proposed based on the Moreau envelope: smoothing the related indicator functions or smoothing the objective functions. Using the first smoothing approach, we obtain more suitable unconstrained models for sparse recovery from noisy data. The fast iterative shrinkage-thresholding algorithm (FISTA) is applied to solve the smoothed models. When smoothing the objective functions, we propose an efficient first-order method based on FISTA and establish a rate of convergence of order Θ(log k/k) for the iterative sequence of values of the original objective functions. Numerical experiments have demonstrated that the two proposed smoothing methods are comparable to state-of-the-art first-order methods with respect to accuracy and speed when applied to sparse recovery problems such as compressed sensing, matrix completion, and robust and stable principal component analysis. (C) 2017 Elsevier B.V. All rights reserved.
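The Moreau-envelope construction underlying both smoothing approaches is standard: for a proper closed convex function g and lambda > 0,

```latex
M_{\lambda} g(x) = \min_{y}\left\{ g(y) + \tfrac{1}{2\lambda}\,\lVert x - y\rVert_2^2 \right\},
\qquad
\nabla M_{\lambda} g(x) = \frac{x - \operatorname{prox}_{\lambda g}(x)}{\lambda}.
```

Since this gradient is (1/lambda)-Lipschitz, the smoothed problem satisfies exactly the smoothness requirement under which FISTA attains its accelerated convergence rate.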
In this paper we propose an iterative method to solve the multiple constrained least squares matrix problem. We first transform the multiple constrained least squares matrix problem into a multiple constrained matrix optimal approximation problem, and then we use the idea of Dykstra's algorithm to derive the basic iterative pattern. We observe that we only need to solve multiple single constrained least squares matrix problems at each iteration step of the proposed algorithm. We give a numerical example to illustrate the effectiveness of the proposed method in solving the original problems. We also give an example illustrating that the method proposed by Escalante and Li to solve the single constrained least squares matrix problem is not correct. (C) 2017 Published by Elsevier B.V. We consider the nonlinear matrix equation X^p = A + M(B + X^(-1))^(-1)M*, where p > 1 is a positive integer, M is an arbitrary n x n matrix, and A and B are Hermitian positive semidefinite matrices. An elegant estimate of the Hermitian positive definite (HPD) solution is derived. A fixed-point iteration and an inversion-free variant iteration for obtaining the HPD solution are proposed. Some numerical examples are presented to show the efficiency of the two proposed iterative methods. (C) 2017 Elsevier B.V. All rights reserved. We present a mathematical analysis of our self-consistent Implicit/Explicit (IMEX) method that we introduced in Kadioglu and Knoll (2010, 2013), Kadioglu et al. (2009, 2010) and Kadioglu (2017). The self-consistent IMEX algorithm is designed to produce second-order time convergent solutions to multi-physics and multiple time scale fluid dynamics problems. The algorithm is a combination of an explicit block that solves the non-stiff part and an implicit block that solves the stiff part of the problem.
The explicit block is always solved inside the implicit block as part of the nonlinear function evaluation, making use of the Jacobian-Free Newton Krylov (JFNK) method (Knoll and Keyes, 2004; Nourgaliev et al., 2010; Saad, 2003). In this way, there is a continuous interaction between the implicit and explicit blocks, meaning that the improved solutions (in terms of time accuracy) at each nonlinear iteration are immediately felt by the explicit block, and the improved explicit solutions are readily available to form the next set of nonlinear residuals. This continuous interaction between the two algorithm blocks results in an implicitly balanced algorithm, in that all the nonlinearities due to the coupling of different time terms are converged. In other words, we obtain a self-consistent IMEX method that eliminates the possible order reduction in time accuracy that a classical IMEX method can suffer from for certain types of problems. We note that classic IMEX methods split the operators in such a way that the implicit and explicit blocks are executed independently of each other, which may lead to non-converged nonlinearities and therefore to time inaccuracies for certain models. We also note that the well-known Strang operator-splitting technique (Strang, 1968) can suffer from the above-mentioned reduction of order in time for certain applications, even though the method itself is formally a second-order numerical procedure. In this study, we provide a mathematical analysis (a modified equation analysis) that examines and compares the time convergence behavior of our self-consistent IMEX method versus the classic IMEX method. We provide computational results to verify our analysis and analytical findings. We also computationally compare our IMEX procedure to the Strang-splitting method. (C) 2017 Elsevier B.V. All rights reserved. In this paper we study the Castelnuovo-Mumford regularity of the path ideals of finite simple graphs.
We find new upper bounds for various path ideals of gap-free graphs. In particular we prove that the t-path ideals of gap-free, claw-free and whiskered-K_4-free graphs have linear minimal free resolutions for all t >= 3. (C) 2016 Elsevier B.V. All rights reserved. We recall that the Brill-Noether Theorem gives necessary and sufficient conditions for the existence of a g^r_d. Here we consider a general n-fold, etale, cyclic cover p : C~ -> C of a curve C of genus g and investigate for which numbers r, d a g^r_d exists on C~. For r = 1 this means computing the gonality of C~. Using degeneration to a special singular example (containing a Castelnuovo canonical curve) and the theory of limit linear series for tree-like curves, we show that the Plücker formula yields a necessary condition for the existence of a g^r_d which is only slightly weaker than the sufficient condition given by the results of Laksov and Kleiman [24], for all n, r, d. (C) 2016 Elsevier B.V. All rights reserved. We generalize the construction of Raynaud [14] of smooth projective surfaces of general type in positive characteristic that violate the Kodaira vanishing theorem. This corrects an earlier paper [19] with the same purpose. These examples are smooth surfaces fibered over a smooth curve whose direct images of the relative dualizing sheaves are not nef, and they violate Kollár's vanishing theorem. Further pathologies in these examples include the existence of non-trivial vector fields and that of non-closed global differential 1-forms. (C) 2016 Elsevier B.V. All rights reserved. We characterise regular Goursat categories through a specific stability property of regular epimorphisms with respect to pullbacks. Under the assumption of the existence of some pushouts, this property can also be expressed as a restricted Beck-Chevalley condition, with respect to the fibration of points, for a special class of commutative squares.
In the case of varieties of universal algebras these results give, in particular, a structural explanation of the existence of the ternary operations characterising 3-permutable varieties of universal algebras. (C) 2016 Elsevier B.V. All rights reserved. The graph groupoids of directed graphs E and F are topologically isomorphic if and only if there is a diagonal-preserving ring *-isomorphism between the Leavitt path algebras of E and F. (C) 2017 Elsevier B.V. All rights reserved. The Divisibility Graph of a finite group G has as vertex set the set of conjugacy class sizes of non-central elements in G, and two vertices are connected by an edge if one divides the other. We determine the connected components of the Divisibility Graph of the finite groups of Lie type in odd characteristic. (C) 2016 Elsevier B.V. All rights reserved. Let G be a finite linear group containing no transvections. This paper proves that the ring of invariants of G is polynomial if and only if the pointwise stabilizer in G of any subspace is generated by pseudoreflections. Kemper and Malle used the classification of finite irreducible groups generated by pseudoreflections to prove the irreducible case in arbitrary characteristic. We extend their result to the reducible case. (C) 2017 Elsevier B.V. All rights reserved. A subgroup H of G is called complemented in G if there exists a subgroup K of G such that G = HK and H ∩ K = 1. The aim of this paper is to prove the following: A finite group G is solvable if and only if its Sylow 3-, 5- and 7-subgroups are complemented in G. (C) 2017 Elsevier B.V. All rights reserved. Bell and Zhang have shown that if A and B are two connected graded algebras finitely generated in degree one that are isomorphic as ungraded algebras, then they are isomorphic as graded algebras. We exploit this result to solve the isomorphism problem in the cases of quantum affine spaces, quantum matrix algebras, and homogenized multiparameter quantized Weyl algebras.
Our result involves determining the degree one normal elements, factoring them out, and then repeating. This creates an iterative process that allows one to determine relationships between the respective parameters. (C) 2016 Elsevier B.V. All rights reserved. An integro-differential algebra of arbitrary characteristic is given the structure of a uniform topological space, such that the ring operations as well as the derivation (= differentiation operator) and Rota-Baxter operator (= integral operator) are uniformly continuous. Using topological techniques and the central notion of divided powers, this allows one to introduce a composition for (topologically) complete integro-differential algebras; this generalizes the series case, viz. formal power series in characteristic zero and Hurwitz series in positive characteristic. The canonical Hausdorff completion for pseudometric spaces is shown to produce complete integro-differential algebras. The setting of complete integro-differential algebras allows us to describe exponential and logarithmic elements in a way that reflects the "integro-differential properties" known from analysis. Finally, we also prove that any complete integro-differential algebra is saturated, in the sense that every (monic) linear differential equation possesses a regular fundamental system of solutions. While the paper focuses on the commutative case, many results are given for the general case of (possibly noncommutative) rings, whenever this does not require substantial modifications. (C) 2017 Elsevier B.V. All rights reserved. A ring R is said to be left uniquely generated if Ra = Rb in R implies that a = ub for some unit u in R. These rings have been of interest since Kaplansky introduced them in 1949 in his classic study of elementary divisors.
Writing l(b) = {r ∈ R | rb = 0}, a theorem of Canfell asserts that R is left uniquely generated if and only if, whenever Ra + l(b) = R where a, b ∈ R, then a - u ∈ l(b) for some unit u in R. By analogy with the stable range 1 condition, we call a ring with this property left annihilator-stable. In this paper we exploit this perspective on the left UG rings to construct new examples and derive new results. For example, writing J for the Jacobson radical, we show that a semiregular ring R is left annihilator-stable if and only if R/J is unit-regular, an analogue of Bass' theorem that semilocal rings have stable range 1. Crown Copyright (C) 2017 Published by Elsevier B.V. All rights reserved. Let L be a restricted Lie algebra over a field of positive characteristic. We prove that the restricted enveloping algebra of L is a principal ideal ring if and only if L is an extension of a finite-dimensional torus by a cyclic restricted Lie algebra. (C) 2017 Elsevier B.V. All rights reserved. In this paper we develop the basic homotopy theory of G-symmetric spectra (that is, symmetric spectra with a G-action) for a finite group G, as a model for equivariant stable homotopy with respect to a G-set universe. This model lies in between Mandell's equivariant symmetric spectra and the G-orthogonal spectra of Mandell and May, and is Quillen equivalent to both. We further discuss equivariant semistability, construct model structures on module, algebra and commutative algebra categories, and describe the homotopical properties of the multiplicative norm in this context. (C) 2017 Elsevier B.V. All rights reserved. In this short article, we compute the classical limits of the quantum toroidal and affine Yangian algebras of sl(n), by generalizing our arguments for gl(1) from [7] (an alternative proof for n > 2 is given in [10]). We also discuss some consequences of these results.
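As a toy illustration of the left uniquely generated condition recalled above (Ra = Rb forces a = ub for a unit u), one can check it by brute force in the small commutative rings Z/nZ. This is an illustrative aside under stated assumptions, not part of the paper, whose setting is general, possibly noncommutative, rings:

```python
from math import gcd

def uniquely_generated_Zn(n):
    """Brute-force check that Z/nZ is uniquely generated:
    whenever the ideals (a) and (b) coincide, a = u*b for a unit u."""
    units = [u for u in range(n) if gcd(u, n) == 1]
    ideals = {a: frozenset((r * a) % n for r in range(n)) for a in range(n)}
    for a in range(n):
        for b in range(n):
            if ideals[a] == ideals[b] and all((u * b) % n != a for u in units):
                return False
    return True

# Every Z/nZ passes: elements generating the same ideal are associates
all_ok = all(uniquely_generated_Zn(n) for n in range(1, 16))
```

In Z/nZ the ideal (a) is determined by gcd(a, n), and elements with the same gcd are unit multiples of one another, so the check succeeds for every n; the interesting examples in the paper are of course noncommutative.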
Weighted singular value decomposition (WSVD) and a representation of the weighted Moore-Penrose inverse of a quaternion matrix by WSVD have been derived. Using this representation, limit and determinantal representations of the weighted Moore-Penrose inverse of a quaternion matrix have been obtained within the framework of the theory of noncommutative column-row determinants. By using the obtained analogs of the adjoint matrix, we get the Cramer rules for the weighted Moore-Penrose solutions of left and right systems of quaternion linear equations. (C) 2017 Elsevier Inc. All rights reserved. As an improvement of the meshless local Petrov-Galerkin (MLPG) method, the complex variable meshless local Petrov-Galerkin (CVMLPG) method is extended here to dynamic analysis of functionally graded materials (FGMs). In this method, the complex variable moving least squares (CVMLS) approximation is used instead of the traditional moving least squares (MLS) approximation to construct the shape functions. The main advantage of the CVMLS approximation over the MLS approximation is that the number of unknown coefficients in the trial function of the CVMLS approximation is smaller than that of the MLS approximation, so higher efficiency and accuracy can be achieved under the same node distributions. In the implementation of the present method, the variations of the FGM properties are computed by using the material parameters at Gauss points, so it entirely avoids the assumption of homogeneity within each element made in the finite element method (FEM) for FGMs. Several numerical examples of dynamic analysis of FGMs are carried out to demonstrate the accuracy and efficiency of the CVMLPG method. (C) 2017 Elsevier Inc. All rights reserved. In Yu et al. (2017), an analytical expression for the determinant of the Hermitian (quasi-)Laplacian matrix of mixed graphs was proven.
In this paper, we extend those results and derive an analytical expression for the principal minors of the Hermitian (quasi-)Laplacian matrix, which is the principal minor version of the Matrix-Tree theorem. (C) 2017 Elsevier Inc. All rights reserved. In this paper, a compact alternating direction implicit (ADI) method, which combines the fourth-order compact difference approximation of the second spatial derivatives with approximation factorizations of difference operators, is first presented for solving two-dimensional (2D) second order dual-phase-lagging models of microscale heat transfer. By the discrete energy method, it is shown to be second-order accurate in time and fourth-order accurate in space with respect to L^2-norms. Additionally, the compact ADI method is successfully generalized to solve the corresponding three-dimensional (3D) problem. Also, the convergence result of the solver for the 3D case is given rigorously. Finally, numerical examples are carried out to verify the computational efficiency of the algorithms and confirm the theoretical results. (C) 2017 Elsevier Inc. All rights reserved. Preservation of nonnegativity and boundedness in the finite element solution of Nagumo-type equations with general anisotropic diffusion is studied. Linear finite elements and the backward Euler scheme are used for the spatial and temporal discretization, respectively. An explicit, an implicit, and two hybrid explicit-implicit treatments of the nonlinear reaction term are considered. Conditions on the mesh and the time step size are developed for the numerical solution to preserve nonnegativity and boundedness. The effects of lumping the mass matrix and the reaction term are also discussed. The analysis shows that the nonlinear reaction term has significant effects on the conditions for both the mesh and the time step size. Numerical examples are given to demonstrate the theoretical findings. (C) 2017 Elsevier Inc.
All rights reserved. At present, insurance companies are seeking more adequate liquidity funds to cover the insured property losses related to natural and man-made disasters. Past experience shows that the losses caused by catastrophic events, such as earthquakes, tsunamis, floods, or hurricanes, are extremely high. An alternative method for covering these extreme losses is to transfer part of the risk to the financial markets by issuing catastrophe-linked bonds. In this paper, we propose a contingent claim model for pricing catastrophe risk bonds (CAT bonds). First, using a two-dimensional semi-Markov process, we derive analytical bond pricing formulae in a stochastic interest rate environment with aggregate claims that follow compound forms, where the claim inter-arrival times are dependent on the claim sizes. Furthermore, we obtain explicit CAT bond price formulae in terms of four different payoff functions. Next, we estimate and calibrate the parameters of the pricing models using catastrophe loss data provided by Property Claim Services from 1985 to 2013. Finally, we use Monte Carlo simulations to analyse the numerical results obtained with the CAT bond pricing formulae. (C) 2017 Elsevier Inc. All rights reserved. The Szeged index of a graph G is defined as Sz(G) = Σ_{e=uv ∈ E} n_u(e) n_v(e), where n_u(e) and n_v(e) are, respectively, the number of vertices of G lying closer to vertex u than to vertex v and the number of vertices of G lying closer to vertex v than to vertex u. A cactus is a graph in which any two cycles have at most one common vertex. Let C(n, k) denote the class of all cacti with order n and k cycles, and C_n^t denote the class of all cacti with order n and t pendant vertices. In this paper, a lower bound on the Szeged index for cacti of order n with k cycles is determined, and all the graphs that achieve the lower bound are identified. As well, the unique graph in C_n^t with minimum Szeged index is characterized.
(C) 2017 Elsevier Inc. All rights reserved. In this paper, we study a nonintegrable discrete nonlinear Schrodinger (dNLS) equation with nonlinear hopping. Using the planar nonlinear dynamical map approach, we address the spatial properties of the nonintegrable dNLS equation. Through the construction of exact period-1 and period-2 orbits of a planar nonlinear map, which is a stationary version of the nonintegrable dNLS equation, we obtain the spatially periodic solutions of the nonintegrable dNLS equation. We also give numerical simulations of the orbits of the planar nonlinear map and show how the nonlinear hopping terms affect those orbits. Using the discrete Fourier transformation method, we obtain numerical approximations of stationary and travelling solitary wave solutions of the nonintegrable dNLS equation. (C) 2017 Elsevier Inc. All rights reserved. We prove that a second order backward difference semi-discretization approximation for the integro-differential equations with multi-term kernels preserves the weighted asymptotic integrabilities of continuous solutions. The method of proof extends, and simulates numerically, the techniques introduced by Hannsgen and Wheeler, relying on deep frequency domain techniques. (C) 2017 Elsevier Inc. All rights reserved. Power law metachronal wave motion, responsible for cilia transport, is investigated in this paper using numerical tools. The dynamical analysis is made in a channel and in a tube to demonstrate the quantitative effect of the geometry. Similarity transformations are employed to convert the governing partial differential equations into a set of coupled ordinary differential equations. A swift and accurate collocation algorithm is applied to the boundary value problem (BVP) of coupled ordinary differential equations. A nondimensional graphical analysis of the waving amplitude is reported by varying the flow consistency and flow behavior indices. (C) 2017 Elsevier Inc. All rights reserved.
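The planar-map reduction mentioned above for stationary dNLS equations can be sketched for the generic real cubic dNLS, C(x[n+1] - 2x[n] + x[n-1]) = omega*x[n] - x[n]^3. The parameters and the model are hypothetical illustrations, without the paper's nonlinear hopping terms:

```python
import math

def dnls_map(x, y, C=1.0, omega=0.5):
    """One step of the planar map (x[n], x[n-1]) -> (x[n+1], x[n])
    obtained from the stationary cubic dNLS recurrence
    C*(x[n+1] - 2*x[n] + x[n-1]) = omega*x[n] - x[n]**3."""
    x_next = 2 * x - y + (omega * x - x ** 3) / C
    return x_next, x

# A period-2 orbit x[n] = (-1)**n * a exists with a = sqrt(omega + 4C):
# substituting the alternating ansatz into the recurrence gives a**2 = omega + 4C.
a = math.sqrt(0.5 + 4.0)
p, q = dnls_map(a, -a)   # one step maps (a, -a) to (-a, a)
```

Iterating the map twice returns the alternating orbit to its starting point, which is the discrete analogue of the spatially period-2 solutions constructed in the paper.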
We explore the connection between an infinite system of particles in R^2, described by a bi-dimensional version of the Toda equations, and the theory of orthogonal polynomials in two variables. We define a 2D Toda lattice in the sense that we consider only one time variable and two space variables describing a mesh of interacting particles over the plane. We show that this 2D Toda lattice is related to the matrix coefficients of the three-term relations of bivariate orthogonal polynomials associated with an exponential modification of a positive measure. Moreover, block Lax pairs for 2D Toda lattices are deduced. (C) 2017 Elsevier Inc. All rights reserved. For a (molecular) graph, the first Zagreb index M_1 is the sum of squares of the degrees of the vertices, and the second Zagreb index M_2 is the sum of the products of the degrees of pairs of adjacent vertices. In this work, we study the first and second Zagreb indices of graphs based on four new operations related to the lexicographic product, subdivision and total graph. (C) 2017 Elsevier Inc. All rights reserved. In this paper, the Hermite-type radial point interpolation method (RPIM) is applied to analyze the properties of piezoelectric ceramics in order to overcome the defects of the finite element method. In this method, the interior and boundary of the problem domain are discretized by a distribution of nodes, and the interpolation functions at the nodes are then constructed to solve for the displacement of the evaluation nodes. Compared with the finite element method, it is easier and faster for the Hermite-type RPIM to accurately obtain the solution in local regions. In contrast with existing meshless methods, this method does not cause singularity in the process of evaluating the shape function. Furthermore, the shape function of the Hermite-type RPIM has better stability and can adapt to any distribution of nodes. In addition, the accuracy and stability of the method are demonstrated by numerical simulation.
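The two Zagreb indices defined above can be computed directly from an edge list; a minimal sketch:

```python
def zagreb_indices(edges):
    """M1 = sum of squared vertex degrees;
    M2 = sum over edges of the product of the endpoint degrees."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    m1 = sum(d * d for d in deg.values())
    m2 = sum(deg[u] * deg[v] for u, v in edges)
    return m1, m2

# Path P4 has degrees 1, 2, 2, 1: M1 = 1+4+4+1 = 10, M2 = 2+4+2 = 8
m1, m2 = zagreb_indices([(1, 2), (2, 3), (3, 4)])
```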
(C) 2017 Elsevier Inc. All rights reserved. Let X ∈ C^{m×m} and Y ∈ C^{n×n} be nonsingular matrices, and let N ∈ C^{m×n}. Explicit expressions for the Moore-Penrose inverses of M = XNY and of a two-by-two block matrix, under appropriate conditions, have been established by Castro-Gonzalez et al. [Linear Algebra Appl. 471 (2015) 353-368]. Based on these results, we derive a novel expression for the Moore-Penrose inverse of A + UV^* under suitable conditions, where A ∈ C^{m×n}, U ∈ C^{m×r}, and V ∈ C^{n×r}. In particular, if both A and I + V^*A^{-1}U are nonsingular matrices, our expression reduces to the celebrated Sherman-Morrison-Woodbury formula. Moreover, we extend our results to the case of bounded linear operators. (C) 2017 Elsevier Inc. All rights reserved. H7N9, a subtype of the influenza viruses, has persisted for over three years in China. Although it has not yet caused a large-scale outbreak in humans, it has reappeared and peaked every winter. According to the available information and literature, possible reasons for the recurrence of human cases every winter can be speculated to be the migration of birds, temperature cycling, or the reopening of live poultry trading markets. However, which one is the major driving factor is unclear. As a result, a dynamical model of migrant birds, resident birds and domestic poultry, considering a temperature-dependent decay rate of the virus and periodic closure of trading markets, is established to determine the internal critical driving factors, and to propose the most effective prevention measure by sensitivity analysis. By numerical analysis, it is concluded that temperature cycling may be the main driving mechanism of the periodicity of infected human cases. Closing trading markets is not the main factor, but it is the most effective measure to control the epidemic and keep it at a low level.
Consequently, periodically closing trading markets is proposed to prevent and control the spread of the epidemic. (C) 2017 Elsevier Inc. All rights reserved. This paper studies the stability problem of the Yang-Chen system. By introducing different radially unbounded Lyapunov functions in different regions, a global exponential attractive set of the Yang-Chen chaotic system is constructed with geometrical and algebraic methods. Then, simple algebraic sufficient and necessary conditions for global exponential stability, global asymptotic stability, and exponential instability of the equilibria are proposed. The relevant expressions of the corresponding parameters for local exponential stability, local asymptotic stability, and exponential instability of the equilibria are also obtained. Furthermore, the bifurcation problem of the system is discussed, and some bifurcation expressions are given for the parameters of the system. (C) 2017 Elsevier Inc. All rights reserved. In this paper, a new family of high-order finite difference schemes is proposed to solve the two-dimensional Poisson equation by implicit finite difference formulas on (2M + 1) operator points. The implicit formulation is obtained from Taylor series expansion and plane wave analysis, and it is constructed from a few modifications to the standard finite difference schemes. The approximations achieve (2M+4)-order accuracy for the inner grid points and up to eighth-order accuracy for the boundary grid points. Using a Successive Over-Relaxation method, the high-order implicit schemes converge faster as M is increased, compensating for the additional computation of more operator points. Thus, the proposed solver results in an attractive method, easy to implement, with higher order accuracy but nearly the same computational cost as explicit or compact formulations. In addition, the particular case M = 1 yields a new compact finite difference scheme of sixth-order accuracy.
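For orientation, the baseline that the high-order formulas above generalize is the standard second-order 5-point stencil solved with Successive Over-Relaxation. The sketch below is that minimal baseline on a manufactured solution, not the authors' (2M+4)-order method:

```python
def poisson_sor(f, n, omega=1.5, sweeps=2000):
    """Solve -(u_xx + u_yy) = f on the unit square with u = 0 on the
    boundary, using the 5-point stencil and SOR.
    f: callable f(x, y); n: interior grid points per direction."""
    h = 1.0 / (n + 1)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]
    for _ in range(sweeps):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                gs = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1]
                             + u[i][j+1] + h * h * f(i * h, j * h))
                u[i][j] += omega * (gs - u[i][j])   # SOR update
    return u, h

# Manufactured solution u = x(1-x)y(1-y), so f = 2(x(1-x) + y(1-y));
# at the centre (0.5, 0.5) the exact value is 0.0625.
u, h = poisson_sor(lambda x, y: 2 * (x * (1 - x) + y * (1 - y)), n=15)
```

For this particular polynomial solution the 5-point stencil is exact, so the remaining error is purely the SOR iteration error, which the sweep count above drives to negligible size.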
Numerical experiments are presented to verify the feasibility of the proposed method and the high accuracy of these difference schemes. (C) 2017 Elsevier Inc. All rights reserved. The compensated quotient-difference (Compqd) algorithm is proposed along with some applications. The main motivation is the fact that the standard quotient-difference (qd) algorithm can be numerically unstable. The Compqd algorithm is obtained by applying error-free transformations to improve the traditional qd algorithm. We study in detail the error analysis of the qd and Compqd algorithms and we introduce new condition numbers so that relative forward rounding error bounds can be derived directly. Our numerical experiments illustrate that the Compqd algorithm is much more accurate than the qd algorithm, relegating the influence of the condition numbers to second order in the rounding unit of the computer. Three applications of the new algorithm, to the evaluation of continued fractions and to pole and zero detection, are shown. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we discuss two-level implicit methods for the solution of a special type of fourth order parabolic partial differential equation of the form u_xxxx - 2u_xxt + u_tt = f(x, t, u), 0 < x < 1, t > 0, subject to appropriate initial and Dirichlet boundary conditions, by converting the original problem to a coupled system of two second order parabolic equations. We use only three spatial grid points, and it is not required to discretize the boundary conditions. The proposed Crank-Nicolson type scheme is second order accurate in both the temporal and spatial dimensions, while the compact Crandall's type scheme is second order accurate in the temporal and fourth order accurate in the spatial dimension. The methods do not require any fictitious nodes outside the solution domain for handling the boundary conditions.
For a fixed mesh ratio parameter (Δt/Δx^2), the proposed Crandall's type method behaves like a fourth order method in space. Using matrix stability analysis, the proposed methods are shown to be unconditionally stable. The resulting implicit difference formulas give a block tri-diagonal matrix structure, which is solved efficiently using the block Gauss-Seidel method or the block Newton method, depending on the linear or nonlinear behaviour of the equations. The methods compute the numerical value of u and the time-dependent Laplacian u_xx - u_t simultaneously. Numerical results are provided to demonstrate the accuracy and efficiency of the proposed methods. (C) 2017 Elsevier Inc. All rights reserved. Fullerene graphs are cubic, 3-connected planar graphs with only pentagonal and hexagonal faces. A fullerene is called a leapfrog fullerene, Le(F), if it can be constructed by a leapfrog transformation from another fullerene graph F. Here we determine the relation between the Wiener index of Le(F) and the Wiener index of the original graph F. We obtain lower and upper bounds on the Wiener index of Le^i(F) in terms of the Wiener index of the original graph. As a consequence, starting with any fullerene F and iterating the leapfrog transformation, we obtain fullerenes Le^i(F) with Wiener index of order O(n^2.64) and Ω(n^2.36), where n is the number of vertices of Le^i(F). These results disprove the conjecture of Hua et al. (2014) that the Wiener index of fullerene graphs on n vertices is of order Θ(n^3). (C) 2017 Elsevier Inc. All rights reserved. In this paper, we present a generalized Refine-Smooth algorithm to design a family of a-point b-ary approximating subdivision schemes with bell-shaped mask, where a >= 3 and b >= 2. We use the combination of a corner cutting b-ary subdivision scheme and a weighted average of (b+1) points to construct the proposed family.
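The Wiener index discussed above, the sum of shortest-path distances over all vertex pairs, can be computed by a breadth-first search from every vertex; a minimal sketch (the leapfrog construction itself is not reproduced):

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs.
    adj: dict mapping each vertex to a list of its neighbours."""
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                      # BFS from source s
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
    return total // 2                 # each pair was counted twice

# Cycle C6: distances from any vertex are 1, 2, 3, 2, 1, so W = 6*9/2 = 27
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
w = wiener_index(c6)
```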
We demonstrate that the proposed family has smaller complexity and support width and higher continuity than the existing Refine-Smooth subdivision schemes. We also study the shape preserving properties of the proposed family. In addition, it is observed that the proposed family is suitable for fitting the locally noisy, oscillatory, and irregular data. (C) 2017 Elsevier Inc. All rights reserved. We study the existence and multiplicity of positive solutions for a system of nonlinear Riemann-Liouville fractional differential equations, subject to multi-point boundary conditions which contain fractional derivatives. The nonsingular and singular cases are investigated. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we study the Szeged index of partial cubes and hence generalize the result proved by Chepoi and Klavzar, who calculated this index for benzenoid systems. It is proved that the problem of calculating the Szeged index of a partial cube can be reduced to the problem of calculating the Szeged indices of weighted quotient graphs with respect to a partition coarser than Theta-partition. Similar result for the Wiener index was recently proved by Klavzar and Nadjafi-Arani. Furthermore, we show that such quotient graphs of partial cubes are again partial cubes. Since the results can be used to efficiently calculate the Wiener index and the Szeged index for specific families of chemical graphs, we consider C4C8 systems and show that the two indices of these graphs can be computed in linear time. (C) 2017 Elsevier Inc. All rights reserved. In this paper, a finite-difference lattice Boltzmann (LB) model for nonlinear isotropic and anisotropic convection-diffusion equations is proposed. In this model, the equilibrium distribution function is delicately designed in order to recover the convection-diffusion equation exactly. 
Different from the standard LB model, the temporal and spatial steps in this model are decoupled, so that it is convenient to study convection-diffusion problems on non-uniform grids. In addition, it also preserves the advantage of the standard LB model that complex-valued convection-diffusion equations can be solved directly. The von Neumann stability analysis is conducted to discuss the stability region, which can be used to determine the free parameters appearing in the model. To test the performance of the model, a series of numerical simulations of some classical problems, including the diffusion equation, the nonlinear heat conduction equation, the sine-Gordon equation, the Gaussian hill problem, the Burgers-Fisher equation, and the nonlinear Schrodinger equation, have also been carried out. The results show that the present model has a second order convergence rate in space, and generally it is also more accurate than the standard LB model. (C) 2017 Elsevier Inc. All rights reserved. An ultrametric approach to the genetic code and the genome is considered and developed. The p-adic degeneracy of the genetic code is pointed out. An ultrametric tree of the codon space is presented. It is shown that codons and amino acids can be treated as p-adic ultrametric networks. An ultrametric modification of the Hamming distance is defined, and it is noted how it can be useful. The ultrametric approach with p-adic distance is an attractive and promising trend in the investigation of bioinformation. (C) 2017 Elsevier Inc. All rights reserved. Given a set S of n line segments in the plane, we say that a region R ⊆ R^2 is a stabber for S if R contains exactly one endpoint of each segment of S. In this paper we provide optimal or near-optimal algorithms for reporting all combinatorially different stabbers for several shapes of stabbers.
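One concrete instance of such a stabber, a left halfplane {x <= t}, reduces to a one-dimensional computation: the halfplane contains exactly one endpoint of every segment precisely when t lies between the largest left endpoint and the smallest right endpoint. A minimal sketch, assuming non-vertical segments and counting boundary points as inside (the paper's O(n) reporting algorithm is not reproduced here):

```python
def halfplane_stabber_interval(segments):
    """Segments given as ((x1, y1), (x2, y2)). A left halfplane {x <= t}
    contains exactly one endpoint of each segment iff
    max(min-x per segment) <= t < min(max-x per segment);
    return that interval of thresholds t, or None if empty."""
    lo = max(min(p[0], q[0]) for p, q in segments)
    hi = min(max(p[0], q[0]) for p, q in segments)
    return (lo, hi) if lo < hi else None

segs = [((0, 0), (3, 1)), ((1, 2), (4, 0)), ((2, 1), (5, 3))]
interval = halfplane_stabber_interval(segs)   # thresholds t in [2, 3)
```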
Specifically, we consider the case in which the stabber can be described as the intersection of axis-parallel halfplanes (thus the stabbers are halfplanes, strips, quadrants, 3-sided rectangles, or rectangles). The running times are O(n) for the halfplane case, O(n log n) for strips, quadrants, and 3-sided rectangles, and O(n^2 log n) for rectangles. (C) 2017 Elsevier Inc. All rights reserved. This paper develops an instrument for measuring service quality in Ghana, aimed at capturing a crossvergence perspective, and compares the efficacy of the new instrument (GhanQual) with SERVPERF and PAKSERV within the Ghanaian cultural context. Using a structured questionnaire based on SERVPERF and PAKSERV scale items, data were collected from a sample of banking and hospital customers in Ghana and analysed using exploratory factor analysis, confirmatory factor analysis and regression analysis. The new instrument demonstrated superiority over SERVPERF and PAKSERV and is recommended as the most appropriate instrument for measuring service quality within the Ghanaian cultural context. The study identified the important factors service managers in Ghana can utilise in measuring and managing service quality, and is believed to be the first of its kind to propose and develop a crossvergence perspective service quality instrument as well as to compare the efficacy of SERVPERF and PAKSERV in a single study. Extension of this study to other countries is recommended, as this study was contextualised within the Ghanaian cultural context. Nevertheless, this study provides a starting point for developing culture-specific and crossvergence perspective instruments for measuring service quality. Using an extended SERVQUAL model, this study identifies and compares the importance of service quality to Muslim consumers with an Islamic or non-Islamic bank account in a non-Muslim country, Britain. Eight group discussions and a survey of 300 Muslims were conducted.
Five dimensions of service quality were identified, i.e. responsiveness, credibility, Islamic tangibles, accessibility and reputation. These differ in structure and content from the original SERVQUAL developed in a western context and the subsequent CARTER model constructed in a Muslim country. In addition, significant differences were found in the importance of items between Islamic bank account and non-Islamic bank account holders. This study is one of the first to identify and compare the importance of service quality between Islamic and non-Islamic bank account holders in a western non-Muslim country. The results advance our understanding of the impact of culture on SERVQUAL. The study provides insight into Muslims' bank choice and helps bank managers of both Islamic and non-Islamic banks to focus their attention on the service quality dimensions that matter most to Muslim customers. So far, very little attention has been paid to examining consumer perceptions of trust from an interdisciplinary perspective. The purpose of this study is to examine how consumer trusting belief and disposition to trust within the financial services sector vary on the basis of individual demographic differences in trust. The research provides new insights into how consumers with higher dispositional trust have higher institutional trust and higher trusting belief and how consumers' trusting belief significantly differs according to their demographic background in terms of age, marital status, ethnicity and gross annual income. The findings offer useful insights for the managers in financial institutions to carefully consider the impact of the influence of these individual differences on consumer behaviour in order to serve the needs of consumers in their target market and be able to design financial products and develop trust building strategies to attract and retain them. 
They also call on regulators and financial institutions to play their part in building strong institutional systems that engender higher levels of consumer trust. Researching, and therefore marketing, 'unmentionable' products has always been challenging. Pawnbroking, the act of offering a loan secured by the pledge of an item of value, fits within this domain; its use is stigmatised, despite dating back centuries and enjoying high levels of user satisfaction. This study explores perceptions of pawnbroking and recommends marketing tactics to reframe it as a 'mentionable' credit option, allowing the sector to benefit from an increased flow of information. This research uses qualitative methods to explore perceptions of pawnbroking, identifying beliefs, attitudes and barriers to use through in-depth interviews with non-users. The results reveal minimal understanding of the pawnbroking process, with latent stigma and stereotyping reinforced by media sources. Social pressures, emanating from the negative perception of users and the perceptions of important social groups, are influential in participants' decisions to disassociate themselves from even the possibility of pawnbroker usage. The managerial implications are that pawnbrokers should emphasise the advantages of financial credit and minimise those of social discredit. Social marketing campaigns should target current perceptions of pawnbroking and encourage informed trial amongst a broader section of society. Owing to recent instability in the UK financial climate, consumers have increasingly turned to alternative credit sources such as payday loans, logbook loans (car title loans) and pawning. Recognising this increasingly important trend in UK society, this study explores how UK consumers manage and select alternative credit sources, through a Consumer Culture Theory lens.
Primary data were sourced through a multi-stage interview process with ten consumers of alternative credit providers. Findings were subjected to a rigorous six-stage thematic analysis, which enabled generation of a three-ring orbit model showing how the consumers migrated between categories of credit sources. Furthermore, it was found that other concepts could be traced on to the orbit model, such as access to other credit sources, time pressures, perceived risk and emotional state. It is expected that the findings from this study will benefit lenders, policy makers and regulatory bodies by providing greater insight into the emotional state of their customers, and particularly the pressures they may be experiencing when taking out last-resort credit. The present paper deals with the restricted four-body problem (R4BP), when the third primary, placed at the triangular libration point of the restricted three-body problem, is an ellipsoid. The third primary m₃ does not influence the motion of the dominating primaries m₁ and m₂. We have studied the motion of m₄, moving under the influence of the three primaries mᵢ, i = 1, 2, 3, but the motion of the primaries is not influenced by the infinitesimal mass m₄. Further, we have developed the equations of motion of the infinitesimal mass m₄, which involve elliptic integrals, and shown the existence and locations of equilibrium points. We have also discussed the zero velocity curves (ZVCs) for various values of the Jacobian constant. The KAM theorem and von Zeipel's method are applied to a perturbed harmonic oscillator, and it is noted that the KAM methodology does not allow for necessary frequency or angle corrections, while von Zeipel's does. The KAM methodology can be carried out with purely numerical methods, since its generating function does not contain momentum dependence.
The KAM iteration is extended to allow for frequency and angle changes, and in the process can apparently be applied successfully to degenerate systems normally ruled out by the classical KAM theorem. Convergence is observed to be geometric, not exponential, but it does proceed smoothly to machine precision. The algorithm produces a converged perturbation solution by numerical methods, while still retaining literal variable dependence, at least in the vicinity of a given trajectory. A novel application of Modified Chebyshev Picard Iteration (MCPI) to differential correction is presented. By leveraging the Chebyshev basis functions of MCPI, interpolation in one dimension may be used to target plane-crossing events, instead of integrating the 42-dimensional variational equation required for standard step integrators. This results in dramatically improved performance over traditional differential correctors. MCPI was tested against the Runge-Kutta 7/8 integrator on over 45,000 halo orbits in three different three-body problems, and was found to be up to an order of magnitude faster, while simultaneously increasing robustness. Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii.
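The Monte Carlo sampling scheme just described can be illustrated with a deliberately simplified sketch: a single close approach, Gaussian uncertainty on the initial relative position only, and linear relative motion over the window. All function names and parameters here are illustrative assumptions, not from the paper, and the paper's GPU parallelization is omitted:

```python
import numpy as np

def mc_collision_probability(rel_pos_mean, rel_pos_cov, rel_vel,
                             combined_radius, t_window,
                             n_samples=100_000, rng=None):
    """Crude serial MC collision probability for one close approach.

    Assumes linear relative motion r(t) = r0 + v*t over [0, t_window]
    and Gaussian uncertainty on the initial relative position only.
    """
    rng = np.random.default_rng(rng)
    r0 = rng.multivariate_normal(rel_pos_mean, rel_pos_cov, size=n_samples)
    v = np.asarray(rel_vel, dtype=float)
    # Time of closest approach for each sample, clamped to the window.
    t_star = np.clip(-r0 @ v / (v @ v), 0.0, t_window)
    # Separation at closest approach for each sample.
    sep = np.linalg.norm(r0 + t_star[:, None] * v, axis=1)
    # Probability = fraction of samples closer than the combined radius.
    return np.mean(sep < combined_radius)
```

As the abstract notes, the separations can be stored once and compared against any combined collision radius, so sweeping the radius over the same `sep` array yields the probability as a function of RSO collision radii without re-propagating.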
Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Speedups of two orders of magnitude over a serial CPU implementation are shown, and speedups improve moderately with higher-fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods. A new practical approach to estimating the body angular velocity of maneuvering spacecraft using only vector measurements is presented. Several algorithms have been introduced in previous studies to estimate the angular velocity directly from vector measurements at two time instants. However, these direct methods are based on the assumption of constant angular velocity, and estimation results may be invalid for attitude maneuvers. In this paper, an estimation scheme that accounts for attitude disturbances and control torques is proposed. The effects of angular velocity variation on estimation results are quantitatively evaluated, and an algorithm to minimize estimation errors is designed by selecting the optimal time interval between vector measurements. Without losing the simplicity of direct methods, the design parameters of the algorithm are restricted to the expected covariance of disturbances and the maximum angular acceleration. By applying the proposed estimation scheme, gyroscopes can be directly replaced by attitude sensors such as star trackers. Background: Despite advanced nursing roles having a research competency, participation in research is low. There are many barriers to participation in research, and few interventions have been developed to address these. This paper aims to describe the implementation of an intervention to increase research participation in advanced clinical nursing roles and to evaluate its effectiveness. Methods: The implementation of the intervention was carried out within one hospital site.
The evaluation utilised a mixed methods design and an implementation science framework. All staff in advanced nursing roles were invited to take part; all those who were interested and had a project in mind could volunteer to participate in the intervention. The intervention consisted of the development of small research groups working on projects developed by the nurse participant(s) and supported by an academic and a research fellow. The main evaluation was through focus groups, whose output was analysed using thematic analysis. In addition, a survey questionnaire was circulated to all participants to ascertain their self-reported research skills before and after the intervention. The results of the survey were analysed using descriptive statistics. Finally, an inventory of research outputs was collated. Results: In the first year, twelve new clinical nurse-led research projects were conducted and reported in six peer-reviewed papers, two non-peer-reviewed papers and 20 conference presentations. The main strengths of the intervention were the speed with which research was completed and published and the opportunity to showcase clinical innovations. The main barriers identified were time and appropriate support from academics and from peers. The majority of participants reported increased experience in scientific writing and data analysis. Conclusion: This study shows that an intervention with minor financial resources, a top-down approach, the support of a hands-on research fellow, peer collaboration with academics, strong clinical ownership by the clinical nurse researcher, experiential learning opportunities, and focused, needs-based educational sessions can increase both the research outputs and the research capacity of clinically based nurses. Interventions to further enhance nursing research, and their evaluation, are crucial if we are to address the deficit of nurse-led patient-centred research in the literature.
Background: The majority of residents in care homes in the United Kingdom are living with dementia or significant memory problems. Caring in this setting can be difficult and stressful for care staff, who work long hours, have little opportunity for training, are poorly paid and yet are subject to high expectations. This may affect their mental and physical wellbeing, cause high rates of staff turnover and absenteeism, and affect the quality of care they provide. The main objective of this survey was to explore the nature, characteristics and associations of stress in care home staff. Methods: Staff working in a stratified random sample of care homes within Wales completed measures covering: general health and wellbeing (SF-12); stress (Work Stress Inventory); job content (Karasek Job Content); approach to, and experience of, working with people living with dementia (Approaches to Dementia Questionnaire; and Experience of Working with Dementia Patients); and productivity and health status (SPS-6). Multiple linear regressions explored the effects of home and staff characteristics on carers. Results: 212 staff from 72 care homes completed questionnaires. Staff from nursing homes experienced more work stress than those from residential homes (difference 0.30; 95% confidence interval (CI) 0.10 to 0.51; P < 0.01), and were more likely to report that their health reduced their ability to work (difference -4.77; CI -7.80 to -1.73; P < 0.01). Psychological demands on nurses were higher than on other staff (difference 1.57; CI 0.03 to 3.10; P < 0.05). A positive approach to dementia was more evident in those trained in dementia care (difference 8.54; CI 2.31 to 14.76; P < 0.01), and in staff working in local authority homes than in the private sector (difference 7.75; CI 2.56 to 12.94; P < 0.01). Conclusions: Our study highlights the importance of dementia training in care homes, with a particular need in the private sector.
An effective intervention to reduce stress in health and social care staff is required, especially in nursing and larger care homes, and for nursing staff. Background: There are several studies that have targeted student nurses, but few have clarified the details pertaining to the specific ethical problems in clinical practice from the viewpoint of the nursing faculty. This study investigated the ethical problems in clinical practice reported by student nurses to Japanese nursing faculty members, for the purpose of improving ethics education in clinical practice. Method: The subjects comprised 705 nursing faculty members (we sent three questionnaires to each university) who managed clinical practice education at 235 Japanese nursing universities. We performed a simple tabulation of the four items shown in the study design: 1) the details of student nurse consultations regarding ethics in clinical practice (involving the students themselves, nurses, care workers, clinical instructors, and nursing faculty members); 2) the methods of ethics education in clinical practice; 3) the difficulties experienced by the nursing faculty members who received the consultations; and 4) the relationship between clinical practice and lectures on ethics. Furthermore, the analysis was based on the ethical principles of respect for persons, beneficence, and justice. Results: The response rate was 28% (198 questionnaires). The nursing faculty members were consulted on various problems by student nurses. The details of these consultations were characterized by the principle of respect for patients by nurses, the principle of beneficence by faculty and clinical instructors, and the principle of justice pertaining to evaluations. The results indicate that there is an awareness among the nursing faculty regarding the necessity of some form of ethics education in clinical settings.
Moreover, based on the nature of the contents of the consultations regarding the hospital and staff, it was evident that the nursing faculty struggled in providing responses. More than half of the subjects exhibited an awareness of the relationship between the classroom lectures on ethics and clinical practice. Conclusion: The results suggest the need for analyzing the ethical viewpoints of student nurses, prior learning, and collaboration with related courses as part of ethics education in clinical practice. Background: Responding to older people's distress by acknowledging or encouraging further discussion of emotions is central to supportive, person-centred communication, and may enhance home care outcomes and thereby promote healthy aging. This observational study describes nursing staff's responses to older people's emotional distress and identifies factors that encourage further emotional disclosure. Methods: Audio-recorded home care visits in Norway (n = 196), including 48 older people and 33 nursing staff, were analysed with the Verona Coding Definitions of Emotional Sequences, identifying expressions of emotional distress and subsequent provider responses. The inter-rater reliability (two coders), Cohen's kappa, was > 0.6. Sum categories of emotional distress were constructed: a) verbal and non-verbal expressions referring to emotion, b) references to unpleasant states/circumstances, and c) contextual hints of emotion. A binary variable was constructed based on the VR response codes, differentiating between emotion-focused responses and responses that distanced emotion. Fisher's exact test was used to analyse group differences and to determine the variables included in a multivariate logistic regression analysis identifying factors promoting emotion-focused responses. Results: Older people's expressions of emotional distress (n = 635) comprised 63 explicit concerns and 572 cues. Forty-eight per cent of nursing staff responses (n = 638) were emotion-focused.
Emotion-focused responses were observed more frequently when nursing staff elicited the expression of emotional distress from the patients (54%) than when patients expressed their emotional distress on their own initiative (39%). Expressions with reference to emotion most often received emotion-focused responses (60%), whereas references to unpleasant states or circumstances and contextual hints of emotion most often received non-emotion-focused responses (59%). In a multivariate logistic model, nursing staff's elicitation of the emotional expression (vs patients initiating it) and patients' expression with a reference to an emotion (vs reference to unpleasant states or contextual hints) were both explanatory variables for emotion-focused responses. Conclusions: Emotion-focused responses were promoted when nursing staff elicited the emotional expression, and when the patient expression referred to an emotion. Staff responded most often by acknowledging the distress and using moderately person-centred supportive communication. More research is needed to establish the generalizability of the findings and whether older people deem such responses supportive. Experiments were carried out on nozzles with different exit geometries to study their impact on supersonic core length. Circular, hexagonal, and square exit geometries were considered for the study. Numerical simulations and a schlieren image study were performed. The supersonic core decay was found to be of different length for different exit geometries, even though the throat-to-exit area ratio was kept constant. The effect of the nozzle exit geometry is to enhance the mixing of the primary flow with ambient air, without requiring a tab, wire or other secondary method to increase the mixing characteristics. Mixing is faster for the non-circular geometries than for the circular geometry, which leads to a reduction in supersonic core length. The results show that the shorter the hydraulic diameter, the faster the jet mixing.
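The hydraulic-diameter comparison can be made concrete with a small worked example. For three exit shapes of equal area (consistent with the constant throat-to-exit area ratio), the standard definition Dh = 4A/P gives the circle the largest hydraulic diameter and the square the smallest, matching the observation that the non-circular jets mix faster. The unit area below is an illustrative choice, not a value from the experiments:

```python
import math

def hydraulic_diameter(area, perimeter):
    """Standard hydraulic diameter, Dh = 4A / P."""
    return 4.0 * area / perimeter

A = 1.0  # illustrative unit exit area, identical for all three shapes

# Circle: A = pi r^2, P = 2 pi r
r = math.sqrt(A / math.pi)
dh_circle = hydraulic_diameter(A, 2.0 * math.pi * r)

# Square: A = s^2, P = 4 s
s = math.sqrt(A)
dh_square = hydraulic_diameter(A, 4.0 * s)

# Regular hexagon: A = (3*sqrt(3)/2) s^2, P = 6 s
s = math.sqrt(2.0 * A / (3.0 * math.sqrt(3.0)))
dh_hexagon = hydraulic_diameter(A, 6.0 * s)

# At equal area: dh_circle > dh_hexagon > dh_square
```

For equal areas the circle always has the smallest perimeter and hence the largest Dh, so by the trend reported above the square exit, with the smallest Dh of the three, mixes fastest.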
To avoid losses in the divergent section, the cross-section of the throat was maintained at the same geometry as the exit. The investigation shows that the supersonic core region is dependent on the hydraulic diameter and the diagonal. In addition, it has been observed that the number of shock cells remains the same irrespective of the exit geometry shape for a given nozzle pressure ratio. The film cooling performance of the laidback hole and the laidback fan-shaped hole has been investigated using the transient liquid crystal measurement technique. For film cooling of the laidback hole, an increase in inclination angle reduces film cooling effectiveness at small blowing ratios, while producing better film coverage and higher film cooling effectiveness at larger blowing ratios. The heat transfer coefficient is higher in the upstream region of laidback hole film cooling with a large inclination angle, while a laidback hole with a small inclination angle has a relatively higher heat transfer coefficient in the downstream region. An increase in inclination angle reduces the film cooling effectiveness and heat transfer coefficient of laidback fan-shaped hole film cooling, especially in the upstream region. An increase in inclination angle is beneficial to the thermal protection of laidback hole film cooling at large blowing ratios. A film hole with a large inclination angle has a larger discharge coefficient due to smaller aerodynamic loss in the hole. Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control.
In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A key feature of the proposed approach is that it does not require the inversion operation that usually hampers such nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study. A fatigue life prediction method for the low-pressure turbine shaft of a turbojet engine is presented. According to the working and assembly conditions of the turbojet engine, the load types, load values and constraints of the turbine shaft are analyzed. ANSYS software is employed to simulate actual working conditions and obtain the stress-strain distributions of the low-pressure turbine shaft. Finally, based on stress-strain curves and surface quality, the fatigue life of the low-pressure turbine shaft is calculated with the modified local stress-strain method and the linear cumulative fatigue damage model. It has been observed that the geometry of a brush seal has a significant effect on sealing performance. However, the relationship between the rotordynamic coefficients and the geometry factors of the brush seal itself is rarely considered. In this article, the rotordynamic coefficients of a typical single-stage brush seal for different geometries and operating conditions were numerically analyzed using CFD RANS solutions coupled with a non-Darcian porous medium model. The reaction force, which plays an essential role in the rotordynamic coefficients, was obtained by integrating the dynamic pressure distribution. The influence of the bristle pack thickness, fence height, clearance size and other working condition parameters on the aerodynamic force, stiffness coefficients, and damping coefficients of the brush seal was presented and compared.
In addition, the effects of various geometric configurations on pressure and flow features were also discussed. The transient performance of pumps during transient operating periods, such as startup and stopping, has drawn more and more attention recently due to growing engineering needs. During the startup period of a pump, performance parameters such as the flow rate and head vary significantly over a broad range. It is therefore very difficult to accurately specify the unsteady boundary conditions for a pump alone to solve the transient flow in the absence of experimental results. A closed-loop pipe system including a centrifugal pump is built to accomplish a self-coupled calculation. The three-dimensional unsteady incompressible viscous flow inside the passage of the pump during the startup period is numerically simulated using the dynamic mesh method. Simulation results show that there are tiny fluctuations in the flow rate even under stable operating conditions, and this can be attributed to the influence of rotor-stator interaction. At the very beginning of the startup, the rising speed of the flow rate is lower than that of the rotational speed. It is also found that the quasi-steady flow calculation method is not suitable for predicting the transient performance of pumps, especially in the early period of the startup. Environmental parameters such as temperature and air pressure, which vary with altitude, affect the thrust and fuel consumption of aircraft engines. On long-route flights, the thrust management function in the airplane information system provides altitude and performance management. This study examined thrust changes throughout an entire flight by taking into consideration energy and exergy performance; the fuel consumption of an aircraft engine used on a long-route flight was taken as the reference.
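The altitude dependence of temperature and pressure invoked above can be sketched with the standard International Standard Atmosphere (ISA) troposphere relations. This generic textbook model is a stand-in for whatever atmospheric data the study actually used, and the constants below are the standard ISA values, not figures from the paper:

```python
def isa_troposphere(altitude_m):
    """Temperature (K) and pressure (Pa) in the ISA troposphere (0-11 km)."""
    T0, p0 = 288.15, 101325.0   # sea-level standard temperature and pressure
    lapse = 0.0065              # temperature lapse rate, K per metre
    g, R = 9.80665, 287.053     # gravity (m/s^2), gas constant for air (J/kg-K)
    if not 0.0 <= altitude_m <= 11000.0:
        raise ValueError("troposphere model valid only for 0-11 km")
    T = T0 - lapse * altitude_m                 # linear temperature decrease
    p = p0 * (T / T0) ** (g / (R * lapse))      # hydrostatic pressure relation
    return T, p

# Falling temperature and pressure with altitude reduce air density,
# and with it the thrust an engine can produce at a given setting.
T_cruise, p_cruise = isa_troposphere(11000.0)
```

At the 11 km tropopause this gives roughly 216.65 K and about a fifth of sea-level pressure, which is the kind of variation the thrust management function must compensate for along a long route.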
The energetic and exergetic performance evaluations were made under various altitude conditions. The thrust changes for different altitude conditions were found to be 86.53% in the descending direction and 142.58% in the ascending direction, while the energy and exergy efficiency changes for the referenced engine were found to be 80.77% and 84.45%, respectively. The results revealed here can be helpful in managing thrust and reducing fuel consumption, though engine performance must remain in accordance with operational requirements. On account of the complexity of turboprop engine control systems, real-time simulation is a technology that, while maintaining real-time operation, effectively reduces development cost, shortens the development cycle and averts testing risks. This paper takes RT-LAB as a platform and studies the real-time digital simulation of a turboprop engine control system. The architecture, working principles and external interfaces of the RT-LAB real-time simulation platform are introduced first. Then, based on a turboprop engine model, the control laws of the propeller control loop and the fuel control loop are studied. On this basis, an integrated controller is designed in Matlab/Simulink which can realize control of the entire engine process from start-up to maximum power to stop. Finally, on the RT-LAB platform, the real-time digital simulation of the designed control system is studied; different regulating plans are tried, and satisfactory control effects are obtained. In the present study, the effect of ignition on detonation initiation was investigated in an air-breathing pulse detonation engine (PDE). Two kinds of fuel injection and ignition methods were applied. For one method, fuel and air were pre-mixed outside the PDE and then injected into the detonation tube. The droplet sizes of the mixtures were measured. An annular cavity was used as the ignition section.
For the other method, fuel-air mixtures were mixed inside the PDE, and a pre-combustor was utilized as the ignition source. At a firing frequency of 20 Hz, transition to detonation was obtained. Experimental results indicated that the ignition position and initial flame acceleration had important effects on the deflagration-to-detonation transition. My concern in this paper is a debate between Pascal Engel and Richard Rorty documented in the book What's the Use of Truth? Both Engel and Rorty problematize the natural suggestion that attaining truth is a goal of our inquiries. Where Rorty thinks this means that truth is not something we should aim for at all over and beyond justification, Engel maintains that truth still plays a distinct (conceptual) role in our intellectual and daily lives. Thus, the debate between Engel and Rorty ends in a standoff. In the present paper, I question the claim that truth is not a goal of inquiry. I do so from the point of view of a systematic and general theory of rational goal-setting which has its roots in management science. I argue, in this connection, that Rorty's central claim rests on a principle of goal-setting rationality that is generally invalid. The bottom line is that the goal of truth, like other visionary goals, is likely to have the positive effect of increasing motivation and effort, and this may offset the drawbacks which Rorty, rightly, calls attention to. In largely following Rorty in this regard, Engel is making one concession too many to his opponent. In this paper I discuss Pascal Engel's recent work on doxastic correctness. I raise worries about two elements of his view: the role played in it by the distinction between -correctness and -correctness, and the construal of doxastic correctness as an ideal of reason. I propose an alternative approach.
Engel (Grazer Philos Stud 77: 45-59, 2008) has insisted that a number of notable strategies for rejecting the knowledge norm of assertion are put forward on the basis of the wrong kinds of reasons. A central aim of this paper will be to establish the contrast point: I argue that one very familiar strategy for defending the knowledge norm of assertion-viz., that it is claimed to do better in various respects than its competitors (e.g. the justification and the truth norms)-relies on a presupposition that is shown to be ultimately under-motivated. That presupposition is the uniqueness thesis-that there is a unique epistemic rule for assertion, and that such a rule will govern assertions uniformly. In particular, the strategy I shall take here will be to challenge the sufficiency leg of the knowledge norm in a way that at the same time counts against Williamson's (Knowledge and its limits, 2000) own rationale for the uniqueness thesis. However, rather than to challenge the sufficiency leg of the knowledge norm via the familiar style of 'expert opinion' and, more generally, 'second-hand knowledge' cases (e.g. Lackey in Learning from words: testimony as a source of knowledge, 2008), a strategy that has recently been called into question by Benton (Philos Phenomenol Res, 2014), I'll instead advance a very different line of argument against the sufficiency thesis, one which turns on a phenomenon I call epistemic hypocrisy. I discuss Engel's (2009) critique of pragmatic encroachment in epistemology and his related discussion of epistemic value. While I am sympathetic to Engel's remarks on the former, I think he makes a crucial misstep when he relates this discussion to the latter topic. The goal of this paper is to offer a better articulation of the relationship between these two epistemological issues, with the ultimate goal of lending further support to Engel's scepticism about pragmatic encroachment in epistemology. 
As we will see, key to this articulation will be the drawing of a distinction between two importantly different ways of thinking about epistemic value. This paper offers a novel account of the value of knowledge. The account is novel insofar as it advocates a shift in focus from the value of individual items of knowledge to the value of the commodity of knowledge. It is argued that the commodity of knowledge is valuable in at least two ways: (i) in a wide range of areas, knowledge is our way of being in cognitive contact with the world and (ii) for us the good life is a life rich enough in knowledge. This is an essay on G. E. Moore's argument in defense of common sense against David Hume's theory. However, the burden of the essay is to show that, though Moore derived his argument from Thomas Reid, it was the latter who noted that the defense of common sense required more than showing that Hume's theory conflicted with common sense. It required supplying a better theory than Hume's of the operations of the human mind, and especially a better theory of the evidence and justification of common sense beliefs. The essay is a formulation and defense of Reid's theory of conception, conviction and evidence. In "Facts: Particulars or Information Units?" (Linguistics and Philosophy, 2002), Kratzer proposed a causal analysis of knowledge in which knowledge is defined as a form of de re belief of facts. In support of Kratzer's view, I show that a certain articulation of the de re/de dicto distinction can be used to integrally account for the original pair of Gettier cases. In contrast to Kratzer, however, I think such an account does not fundamentally require a distinction between facts and true propositions. I then discuss whether this account might be generalized and whether it can give us a reductive analysis of knowledge as de re true belief.
Like Kratzer, I think it will not; in particular, the distinction appears inadequate to account for Ginet-Goldman cases of causally connected but unreliable belief. Nevertheless, I argue that the de re belief analysis allows us to account for a distinction Starmans and Friedman recently introduced between apparent evidence and authentic evidence in their empirical study of Gettier cases, in a way that questions their claim that a causal disconnect is not operative in the contrasts they found. I present a novel argument against the epistemic conception of perception (ECP), according to which perception either is a form of knowledge or puts the subject in a position to gain knowledge about what is perceived. ECP closes the gap between a perceptual experience that veridically presents a given state of affairs and an experience capable of yielding the knowledge that the state of affairs obtains. Against ECP, I describe a particular case of perceptual experience in which the following triad of claims is true: (i) the experience presents a given state of affairs (it has propositional content); (ii) the experience is veridical; (iii) the experience cannot yield the knowledge that the state of affairs obtains (even in the absence of relevant defeaters). This case involves an empirically well-studied phenomenon, namely perceptual hysteresis, which involves the maintenance of a perceptual experience with a relatively stable content over progressively degrading sensory stimulations. Conditionals whose antecedent and consequent are not somehow internally connected tend to strike us as odd. The received doctrine is that this felt oddness is to be explained pragmatically. Exactly how the pragmatic explanation is supposed to go has remained elusive, however. This paper discusses recent philosophical and psychological work that attempts to account semantically for the apparent oddness of conditionals lacking an internal connection between their parts.
These replies to commentators on my work focus on the nature of epistemic norms, on the nature of truth and on the nature and value of knowledge. A normative account of belief and knowledge is committed to substantial and objective epistemic norms. But not everyone agrees on their form. I try here to reply to some doubts raised by my critics. This note is a sequel to Huber (Synthese 191:2167-2193, 2014). It is shown that obeying a normative principle relating counterfactual conditionals and conditional beliefs, viz. the royal rule, is a necessary and sufficient means to attaining a cognitive end that relates true beliefs in purely factual, non-modal propositions and true beliefs in purely modal propositions. Along the way I will sketch my idealism about alethic or metaphysical modality. According to alethic functionalism, truth is a generic alethic property related to lower level alethic properties through the manifestation relation. The manifestation relation is reflexive; thus, a proposition's truth-manifesting property may be a lower level property or truth itself, depending on the subject matter properties of the proposition. A true proposition whose truth-manifesting property is truth itself, rather than a lower level alethic property, is plainly true. Alethic functionalism relies on plain truth to account for the truth of propositions with challenging subject matter properties, such as logically complex propositions and truth attributions. In this paper, it is argued that plain truth leads to a number of serious problems for alethic functionalism. First: Shapiro (in Analysis 71:38-44, 2011) argues that plain truth threatens alethic functionalism with collapse to strong alethic monism; it is argued here that collapse is not merely threatened, but that, on pain of contradiction, collapse is immediate. 
Second, alethic functionalism's commitment to alethic pluralism requires lower level alethic properties to be ways of being truth, where one property's being a way of being the other property is irreflexive; thus, alethic functionalism is incoherent due to the conflicting commitments to a manifestation relation which is both reflexive and irreflexive. Third, it is argued that a reflexive manifestation relation leads to the contradiction that a lower level alethic property which manifests truth is both identical to and distinct from truth itself. Fourth, careful examination of the notion of a core truism shows that Objectivity and the correspondence intuition are the only core truisms. Finally, it is argued that the first and fourth problems jointly entail a collapse of alethic functionalism to strong correspondence monism. Open-mindedness is generally regarded as an intellectual virtue because its exercise reliably leads to truth. However, some theorists have argued that open-mindedness's truth-conduciveness is highly contingent, pointing out that it is either not truth-conducive at all under certain scenarios or no better than dogmatism or credulity in others. Given such shaky ties to truth, it would appear that the status of open-mindedness as an intellectual virtue is in jeopardy. In this paper, I propose to defend open-mindedness against these challenges. In particular, I show that the challenges are ill-founded because they misconstrue the nature of open-mindedness and fail to consider the requisite conditions of its application. With a proper understanding of open-mindedness and of its requirements, it is clear that recourse to it is indeed truth-conducive. Extended simples are physical objects that, while spatially extended, possess no actual proper parts. The theory that physical reality bottoms out at extended simples is one of the principal competing views concerning the fundamental composition of matter, the others being atomism and the theory of gunk. 
Among advocates of extended simples, Markosian's 'MaxCon' version of the theory (Aust J Philos 76:213-226, 1998; Monist 87:405-428, 2004) has justly achieved particular prominence. On the assumption of causal realism (i.e., on the assumption that a Humean account of causation is false), I argue here that the reality of MaxCon simples would entail the reality of irreducible, intrinsic dispositional properties. The existence of dispositional properties in turn has important implications for another central debate in metaphysics, namely that between two major competing views concerning the ontology of laws: dispositionalism versus nomological necessitarianism. This paper formulates a general epistemological argument against what I call non-causal realism, generalizing domain-specific arguments by Benacerraf, Field, and others. First I lay out the background to the argument, making a number of distinctions that are sometimes missed in discussions of epistemological arguments against realism. Then I define the target of the argument, non-causal realism, and argue that any non-causal realist theory, no matter the subject matter, cannot be given a reasonable epistemology and so should be rejected. Finally I discuss and respond to several possible responses to the argument. In addition to clearing up and avoiding numerous misunderstandings of arguments of this kind that are quite common in the literature, this paper aims to present and endorse a rigorous and fully general epistemological argument against realism. An objection that has been raised to the conciliatory stance on the epistemic significance of peer disagreement known as the Equal Weight View is that it is self-defeating, self-undermining, or self-refuting. The proponent of that view claims that equal weight should be given to all the parties to a peer dispute. 
Hence, if one of his epistemic peers defends the opposite view, he is required to give equal weight to the two rival views, thereby undermining his confidence in the correctness of the Equal Weight View. It seems that the same objection could be leveled against those who claim to suspend judgment in the face of pervasive unresolvable disagreements, as do the Pyrrhonian skeptics. In this paper, I explore the kind of response to the objection that could be offered from a neo-Pyrrhonian perspective, with the aim of better understanding the intriguing character of Pyrrhonian skepticism. In this paper I critically discuss and, in the end, reject Morgan's Canon, a popular principle in comparative psychology. According to this principle we should always prefer explanations of animal behavior in terms of lower psychological processes over explanations in terms of higher psychological processes, when alternative explanations are possible. The validity of the principle depends on two things: (1) a clear understanding of what it means for psychological processes to be higher or lower relative to each other, and (2) a justification of a general preference for explanations that refer to lower psychological abilities. However, I argue that we cannot spell out the idea of a psychological scale in a way that satisfies claim (2). I start with a discussion of different interpretations of the notion of a psychological scale (Sect. 2). In Sect. 3, I discuss different possible strategies to justify any of those interpretations and argue that all of them fail. Finally, in Sect. 4, I generalize the argument to all possible interpretations of Morgan's Canon and propose an alternative strategy: we should base our interpretations of animal behavior on more general principles, such as evidential support and explanatory power, as followed in other scientific domains. The paper presents and discusses four candidate explanations of the structure and construction of the bees' honeycomb. 
So far, philosophers have used one of these four explanations, based on the mathematical Honeycomb Conjecture, while the other three candidate explanations have been ignored. I use the four cases to resolve a dispute between Pincock (Mathematics and Scientific Representation, Oxford University Press, Oxford, 2012) and Baker (Synthese, 2015) about the Honeycomb Conjecture explanation. Finally, I find that the two explanations focusing on the construction mechanism are more promising than those focusing exclusively on the resulting, optimal structure. The main reason for this is that optimal structures do not uniquely determine the relevant optimization leading to the optimal structure. Intrapersonal variation due to color contrast effects has been used to argue against the following intuitive propositions about the colors: no object can be more than one determinable or determinate color of the same grade all over at the same time (Incompatibility); external objects are actually colored (Realism); and the colors of objects are mind-independent (Objectivism). In this article, I provide a defense of Incompatibility, Realism, and Objectivism from intrapersonal variation arguments that rely on color contrast effects. I provide a novel, ecumenical response to such arguments according to which typical variants are right, and which respects Incompatibility, Realism, and Objectivism, using the thesis that the colors of objects depend on the colors of objects in their surrounds. Michael Weisberg's account of scientific models concentrates on the ways in which models are similar to their targets. He intends not merely to explain what similarity consists in, but also to capture similarity judgments made by scientists. In order to scrutinize whether his account fulfills this goal, I outline one common way in which scientists judge whether a model is similar enough to its target, namely the maximum likelihood estimation (MLE) method. 
Then I consider whether Weisberg's account could capture the judgments involved in this practice. I argue that his account fails for three reasons. First, his account is simply too abstract to capture what is going on in MLE. Second, it implies an atomistic conception of similarity, while MLE operates in a holistic manner. Third, Weisberg's atomistic conception of similarity can be traced back to a problematic set-theoretic approach to the structure of models. Finally, I tentatively suggest how these problems might be solved by a holistic approach in which models and targets are compared in a non-set-theoretic fashion. Pautz (Perceiving the World, 2010) has argued that the most prominent naive realist account of hallucination, negative epistemic disjunctivism, cannot explain how hallucinations enable us to form beliefs about perceptually presented properties. He takes this as grounds to reject both negative epistemic disjunctivism and naive realism. Our aims are twofold: first, to show that this objection is dialectically ineffective against naive realism, and second, to draw morals from the failure of this objection for the dispute over the nature of perceptual experience at large. The role intellectual virtues play in scientific inquiry has generated significant discussion in the recent literature. A number of authors have recently explored the link between virtue epistemology and philosophy of science with the aim of showing whether epistemic virtues can contribute to the resolution of the problem of theory choice. This paper analyses how intellectual virtues can be beneficial for the successful resolution of theory choice. We explore the role of virtues as well as vices in scientific inquiry and their beneficial effects in the context of theory choice. We argue that vices can play a role in widening the set of potential candidate theories and support our claim with historical examples and normative arguments from formal social epistemology. 
We argue that even though virtues appear to be neither necessary nor sufficient for scientific success, they have a positive effect because they accelerate successful convergence amongst scientists in theory choice situations. Presentism states that everything is present. Crucial to our understanding of this thesis is how we interpret the 'is'. Recently, several philosophers have claimed that on any interpretation presentism comes out as either trivially true or manifestly false. Yet, presentism is meant to be a substantive and interesting thesis. I outline in detail the nature of the problem and the standard interpretative options. After unfavourably assessing several popular responses in the literature, I offer an alternative interpretation that provides the desired result. This interpretation is then used to distinguish 'real change' from mere variation and temporal relativisation. Reflecting on my solution, I try to diagnose the source of confusion over these issues. Then, building upon Fine's (Modality and Tense, 2005) distinction between ontic and factive presentism, I elucidate what the presentist thesis specifically concerns and how best to formalise it. In the process I distinguish a weak and a strong (extended) version of the presentist thesis. Finally, I end by drawing out some limitations of the paper. The common chief complaint of abdominal pain, nausea, and vomiting should prompt a broad differential diagnosis list. When a 17-year-old previously healthy male presented to a primary care clinic with these symptoms, it prompted a detailed workup that revealed a startling diagnosis of type 1 diabetes mellitus (T1DM). This article provides an overview of recognizing T1DM in children and adolescents through a thorough diagnostic evaluation. NPs must be aware of special prescribing considerations for medication safety when managing the care of older adults with herpes zoster. 
Age-related physiologic changes of the body impact the pharmacokinetics and pharmacodynamics of antiviral and pain medications and can lead to potential adverse events. There is a need for treatment options in patients with type 2 diabetes mellitus and kidney disease to achieve glucose targets without risk of hypoglycemia. This article describes management options for these patients using glucose-lowering therapies, in particular dipeptidyl peptidase-4 inhibitors. Acute delirium complicates care and can be easily overlooked in older adults with preexisting mental illness. Evidence-based measures have demonstrated that early diagnosis, identification, and correction of modifiable factors can lead to improved care and less morbidity in these patients. Opioid therapy for patients with chronic pain is increasing in frequency, along with rates of opioid abuse. Many screening tools are available to assess for the risk of opioid abuse. NPs should use screening tools that are cross-validated for use in chronic pain patients in the Canadian primary healthcare setting. Bronchopulmonary sequestration (BPS) is a lung mass that does not communicate with the tracheobronchial tree or the pulmonary arterial vasculature, and thus does not play a role in oxygenation. This article discusses the etiology of BPS, as well as its pathophysiology, signs and symptoms, imaging studies used in diagnosis, and treatment options in both pediatric and adult patients. We study odd-lot trading and determine whether an odd-lot trade results from odd-lot orders or whether odd lots are a result of orders broken into multiple trades. We confirm that odd-lot transactions contribute to price discovery. Our finding that odd-lot transactions contain substantial information is not driven by orders that are originally larger than 100 shares and subsequently divided into odd-lot transactions. 
We further find that odd-lot transactions resulting from odd-lot orders add more to price discovery than odd-lot transactions resulting from orders for 100 or more shares. Additionally, we find that more price contribution occurs when non-high-frequency traders participate in an odd-lot transaction. (C) 2017 Elsevier B.V. All rights reserved. Many policyholders surrender their life insurance policies early, leading to substantial monetary losses for private households. Surrender can be explained rationally if it constitutes the last resort providing liquidity in the event of an urgent need of cash. Yet we find clear evidence in German panel data that for more than half of all surrendered contracts investors had cheaper options available to provide the required liquidity. This finding demonstrates that there must be other factors influencing this important life decision. We provide a behavioral explanation, focusing on the role of individual decision heuristics, financial literacy, and financial advice. In particular, we show that financial literacy and financial advice can mitigate the behavioral temptation to lapse, while the tendency to rely on heuristics increases lapse probability. (C) 2017 Elsevier B.V. All rights reserved. We investigate and improve momentum spillover from stocks to corporate bonds, i.e. the phenomenon that past winners in the equity market are future winners in the corporate bond market. We find that a momentum spillover strategy exhibits strong structural and time-varying default risk exposures that cause a drag on the profitability of the strategy and lead to large drawdowns if the market cycle turns from a bear to a bull market. By ranking companies on their firm-specific equity return, instead of their total equity return, the default risk exposures halve, the Sharpe ratio doubles and the drawdowns are substantially reduced. (C) 2017 Elsevier B.V. All rights reserved. 
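The ranking step in the momentum spillover abstract above — scoring issuers on firm-specific rather than total equity return — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the one-factor (beta times market) residual, the beta values, and the firm names are all hypothetical assumptions.

```python
# Illustrative sketch (not the paper's code): rank issuers by firm-specific
# equity return, i.e. the residual after removing the market-driven component.
# Betas, returns, and firm names below are hypothetical.

def firm_specific_return(stock_return, market_return, beta):
    """Residual return after stripping out the market component (beta * market)."""
    return stock_return - beta * market_return

def rank_by_firm_specific_return(firms, market_return):
    """Sort issuers given as (name, total_return, beta), best residual first.

    A momentum spillover strategy would then favor bonds of top-ranked
    issuers; ranking on the residual rather than the total return removes
    the common market component that loads on default risk.
    """
    scored = [
        (name, firm_specific_return(r, market_return, b))
        for name, r, b in firms
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical example: in a rising market (+2%), a low-beta firm with a
# modest total return outranks a high-beta firm with a larger one.
firms = [("HighBetaCo", 0.04, 2.0), ("LowBetaCo", 0.03, 0.5)]
ranking = rank_by_firm_specific_return(firms, 0.02)
```

The point of the example is only the ordering: the high-beta firm's 4% total return is fully explained by the market move, so its residual is zero, while the low-beta firm keeps a positive residual and ranks first.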
Literature on Losses Given Default (LGD) usually focuses on mean predictions, even though losses are extremely skewed and bimodal. This paper proposes a Quantile Regression (QR) approach to get a comprehensive view of the entire probability distribution of losses. The method allows for new insights into covariate effects across the whole LGD spectrum. In particular, middle quantiles are explainable by observable covariates, while tail events, e.g., extremely high LGDs, seem to be rather driven by unobservable random events. A comparison of the QR approach with several alternatives from the recent literature reveals advantages when evaluating downturn and unexpected credit losses. In addition, we identify limitations of classical mean prediction comparisons and propose alternative goodness-of-fit measures for the validation of forecasts for the entire LGD distribution. (C) 2017 Elsevier B.V. All rights reserved. This paper investigates the impact of financial penalties on the profitability and stock performance of banks. Using a unique dataset of 671 financial penalties imposed on 68 international listed banks over the period 2007 to 2014, we find a negative relation between financial penalties and pre-tax profitability but no relation with after-tax profitability. This result is explained by tax savings, as banks are allowed to deduct specific financial penalties from their taxable income. Moreover, our empirical analysis of the stock performance shows a positive relation between financial penalties and buy-and-hold returns, indicating that investors are pleased that cases are closed, that the banks successfully manage the consequences of misconduct, and that the financial penalties imposed are smaller than the accrued economic gains from the banks' misconduct. This argument is supported by the positive abnormal returns accompanying the announcement of a financial penalty. (C) 2017 Elsevier B.V. All rights reserved. 
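The core idea behind the quantile-regression approach to LGD described above is the asymmetric "pinball" loss: minimizing it at level tau recovers the tau-quantile rather than the mean, which is why QR can describe a skewed, bimodal loss distribution quantile by quantile. A minimal sketch, with a hypothetical LGD sample and only a constant predictor (no covariates), just to make the loss concrete:

```python
# Illustrative sketch (not the paper's model): the pinball (quantile) loss
# minimized by quantile regression. Minimizing it over a constant recovers
# the empirical tau-quantile. The LGD sample below is hypothetical.

def pinball_loss(y, pred, tau):
    """Asymmetric loss: under-predictions weighted by tau, over-predictions by 1 - tau."""
    return sum(
        tau * (yi - pred) if yi >= pred else (1 - tau) * (pred - yi)
        for yi in y
    )

def constant_quantile_fit(y, tau):
    """Best constant predictor under the pinball loss, searched over observed values."""
    return min(y, key=lambda c: pinball_loss(y, c, tau))

# Hypothetical bimodal LGD sample: mostly small losses, a few near-total ones.
lgd = [0.05, 0.05, 0.10, 0.10, 0.15, 0.90, 0.95]
median_fit = constant_quantile_fit(lgd, 0.5)  # middle of the distribution
tail_fit = constant_quantile_fit(lgd, 0.9)    # high-LGD tail
```

With covariates, a full QR would replace the constant by a linear predictor fitted separately at each tau (e.g. with statsmodels' `QuantReg`), giving the quantile-by-quantile covariate effects the abstract refers to.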
We analyse the dynamics of the pass-through of banks' marginal cost to bank lending rates over the 2008 crisis and the euro area sovereign debt crisis in France, Germany, Greece, Italy, Portugal and Spain. We measure banks' marginal cost by their rate on new deposits, contrary to the literature that focuses on money market rates. This allows us to account for banks' risks. We focus on the interest rate on new short-term loans granted to non-financial corporations in these countries. Our analysis is based on an error-correction approach that we extend to handle the time-varying long-run relationship between banks' lending rates and banks' marginal cost, as well as stochastic volatility. Our application is based on a harmonised monthly database from January 2003 to October 2014. We estimate the model within a Bayesian framework, using Markov Chain Monte Carlo (MCMC) methods. We reject the view that the transmission mechanism is permanent over time. The long-run relationship moved with the sovereign debt crises to a new one, with a slower pass-through and higher bank lending rates. Its developments are heterogeneous from one country to the other. Impediments to the transmission of monetary rates depend on the heterogeneity in banks' marginal costs and, therefore, their risks. We also find that rates to small firms increase relative to those for large firms in a few countries. Using a VAR model, we show that overall, the effect of a shock on the rate of new deposits on the unexpected variances of new loans has been less important since 2010. These results confirm the slowdown in the transmission mechanism. (C) 2015 Elsevier B.V. All rights reserved. This paper investigates portfolio selection in the presence of transaction costs and ambiguity about return predictability. 
By distinguishing between ambiguity aversion to returns and to return predictors, we derive the optimal dynamic trading rule in closed form within the framework of Garleanu and Pedersen (2013), using the robust optimization method. We characterize its properties and the unique mechanism through which ambiguity aversion impacts the optimal robust strategy. In addition to the two trading principles documented in Garleanu and Pedersen (2013), our model further implies that the robust strategy aims to reduce the expected loss arising from estimation errors. Ambiguity-averse investors trade toward an aim portfolio that gives less weight to highly volatile return-predicting factors, and loads less on the securities that have large and costly positions in the existing portfolio. Using data on various commodity futures, we show that the robust strategy outperforms the corresponding non-robust strategy in out-of-sample tests. (C) 2017 Elsevier B.V. All rights reserved. I assess the effectiveness of macroprudential policy tools in containing credit cycles per se or the impact of portfolio inflows on the cycles in major emerging market economies. The results show that borrower-based tools, measures with a domestic focus, and domestic reserve requirements are particularly effective. The findings are, in most cases, stronger for the recent period during which most of the macroprudential actions are undertaken, and generally hold for alternative definitions of credit cycle, the monetary policy stance, and portfolio inflows. Weaker results emerge for financial-institutions-based or foreign-currency related macroprudential tools. (C) 2017 Elsevier B.V. All rights reserved. This paper explores stock return predictability by exploiting the cross-section of oil futures prices. 
Motivated by principal component analysis, we find that the curvature factor of the oil futures curve predicts monthly stock returns: a 1% per month increase in the curvature factor predicts a 0.4% per month decrease in the stock market index return. This predictive pattern prevails in non-oil industry portfolios but is absent for oil-related portfolios. The in- and out-of-sample predictive power of the curvature factor for non-oil stocks is robust and outperforms many other predictors, including oil spot prices. The predictive power of the curvature factor comes from its ability to forecast supply-side oil shocks, which only affect non-oil stocks and are hedged by oil-related stocks. (C) 2017 Elsevier B.V. All rights reserved. This paper investigates the relationship between local banking structures and SMEs' access to debt and performance. Using a unique dataset on bank branch locations in Poland and firm-, county-, and bank-level data, we conclude that a strong position for local cooperative banks facilitates access to bank financing, lowers financial costs, boosts investments, and favours growth for SMEs. Moreover, counties in which cooperative banks hold a strong position are characterized by a more rapid pace of new firm creation. The opposite effects appear in the majority of cases for local banking markets dominated by foreign-owned banks. Consequently, our findings are important from a policy perspective because they show that foreign bank entry and industry consolidation may raise valid concerns for SME prospects in emerging economies. (C) 2017 Elsevier B.V. All rights reserved. We use a proprietary trade- and account-level dataset of short sales to investigate the profitability of individual investors' short-selling in the Korean stock market from August 1, 2007, to May 31, 2010. 
Using actual data on short-covering transactions, we find that the average profit is 26,810 Korean won (roughly USD 24.4) per trade per hour, and about 44% of shorted trades are covered within a day. We also find that the profitability of short-selling decreases as the hours-to-cover increases. Account-level analyses show that investors who sell short more firms make higher profits than those who sell short fewer firms, and that the profitability of short-selling is persistent. We attribute the profitability to short-sellers' ability to exploit short-run price reversals and their information processing skills. (C) 2017 Published by Elsevier B.V. Many studies have indicated that histone deacetylase (HDAC) inhibitors are promising agents for the treatment of cancer. With the aim of searching for novel potent HDAC inhibitors, we designed and synthesized two series of hydroxamate and 2-aminobenzamide compounds as HDAC inhibitors and antitumor agents. These compounds were investigated for their HDAC enzymatic inhibitory activities and in vitro anti-proliferative activities against diverse cancer cell lines (A549, HepG2, MGC80-3 and HCT116). Most of the synthesized compounds displayed potent HDAC inhibitory and antiproliferative activity. In particular, compound 12a, N-(2-aminophenyl)-4-[(4-fluorophenoxy)methyl]benzamide, showed the most potent HDAC inhibitory activity (70.6% inhibition at 5 μM) and anti-tumor activity, with an IC50 value as low as 3.84 μM against the HepG2 human hepatocellular carcinoma cell line, more than 4.8-fold lower than CS055 and 5.9-fold lower than CI994. An HDAC isoform selectivity assay indicated that 12a is a potent HDAC2 inhibitor. A docking study of 12a suggested that it binds tightly to the binding pocket of HDAC2. Further investigation showed that 12a could inhibit the migration and colony formation of A549 cancer cells. Furthermore, 12a remarkably induced apoptosis and G2/M phase cell cycle arrest in A549 cancer cells. 
These results indicated that compound 12a could be a promising candidate for the treatment of cancer. (C) 2017 Published by Elsevier Masson SAS. A series of 4,5-indolyl-N-hydroxyphenylacrylamides, as HDAC inhibitors, has been synthesized and evaluated in vitro and in vivo. 4-Indolyl compounds 13 and 17 function as potent inhibitors of HDAC1 (IC50 1.28 nM and 134 nM) and HDAC2 (IC50 0.90 and 0.53 nM). N-Hydroxy-3-{4-[2-(1H-indol-4-yl)ethylsulfamoyl]phenyl}-acrylamide (13) inhibited the growth of the human cancer cell lines PC3, A549, MDA-MB-231 and AsPC-1 with GI50 values of 0.14, 0.25, 0.32, and 0.24 μM, respectively. In an in vivo evaluation in a nude mice model bearing prostate PC3 xenografts, compound 13 suppressed tumor growth with a tumor growth inhibition (TGI) of 62.2%. Immunohistochemistry in the PC-3 xenograft model indicated elevated acetyl-histone 3 and prominently inhibited HDAC2 protein expression. Therefore, compound 13 could be a suitable lead for further investigation and the development of selective HDAC2 inhibitors as potent anti-cancer compounds. (C) 2017 Elsevier Masson SAS. All rights reserved. Protein tyrosine phosphatase 1B (PTP1B) is a key negative regulator of the insulin signaling pathway. Inhibition of PTP1B is expected to improve insulin action. Appropriate selectivity and permeability are the gold standard for excellent PTP1B inhibitors. In this work, molecular hybridization-based screening identified a selective competitive PTP1B inhibitor. Compound 10a has an IC50 value of 199 nM against PTP1B and shows 32-fold selectivity for PTP1B over the closely related phosphatase TCPTP. Molecular docking and molecular dynamics studies reveal the reason for the selectivity for PTP1B over TCPTP. Moreover, the cell permeability and cellular activity of compound 10a are demonstrated. (C) 2017 Elsevier Masson SAS. All rights reserved. 
Two types of 2-pyridinecarboxaldehyde thiosemicarbazone Ga(III) complexes, with 2:1 and 1:1 ligand/Ga(III) ratios, were synthesized and their structures determined by X-ray single-crystal diffraction. The antiproliferative activity of these Ga(III) complexes has been examined to illuminate the structure-activity relationships essential for forming Ga(III) complexes with remarkable anticancer activity. In addition, the Ga(III) complex with a 1:1 metal/ligand ratio (C4) had observably higher antiproliferative activity than the 1:2 complex (C3). The Ga(III) complexes caused a marked increase in caspase-3 and -9 activity in NCI-H460 cells compared to the metal-free ligand. Caspase activation was partly mediated by the release of Cyt C from mitochondria after incubation with the selected agents. Both types of Ga(III) complexes were more effective in inhibiting the G1/S transition than the ligand alone. (C) 2017 Elsevier Masson SAS. All rights reserved. Various neoglycosphingolipids were efficiently synthesized in a one-step reaction by the coupling of free sugars with an N-alkylaminooxy-functionalized ceramide analogue. The bioactivity studies demonstrated that most of these compounds could upregulate the expression of matrix metalloproteinase-9 (MMP-9, an extracellular matrix protein associated with tumor migration) in murine melanoma B16 cells in a similar manner to the natural ganglioside monosialodihexosylganglioside (GM3), which highlights the potential use of these neoglycosphingolipids as inhibitors of tumor migration. (C) 2017 Elsevier Masson SAS. All rights reserved. Cardiovascular diseases (CVDs) are the main cause of death worldwide. To date, hypertension is the most significant contributing factor to CVDs. Recent clinical studies recommend calcium channel blockers (CCBs) as effective treatment alone or in combination with other medications. Being the most clinically useful CCBs, 1,4-dihydropyridines (DHPs) have attracted great interest in improving potency and selectivity. 
However, the short plasma half-life, which may be attributed to metabolic oxidation to the pyridine counterparts, is considered a major limitation of this class. Among the most efficient modifications of the DHP scaffold is the introduction of biologically active N3-substituted dihydropyrimidine mimics (DHPMs). Again, some potent DHPMs showed only in vitro activity due to a first-pass effect involving hydrolysis and removal of the N3-substituents. Herein, the synthesis of new N3-substituted DHPMs with various functionalities linked to the DHPM core via a two-carbon spacer, to guard against possible metabolic inactivation, is described. The compounds were designed to maintain close structural similarity to clinically efficient DHPs and the reported lead DHPM analogues, while attempting to improve pharmacokinetic properties through better metabolic stability. Applying the whole-cell patch clamp technique, five compounds showed promising L- and T-type calcium channel blocking activity and were identified as lead compounds. Structural requirements for selectivity against Ca(v)1.2 as well as against Ca(v)3.2 are described. (C) 2017 Elsevier Masson SAS. All rights reserved. Two series of novel EPAC antagonists are designed, synthesized and evaluated in an effort to develop diversified analogues based on the scaffold of the previously identified high-throughput screening (HTS) hit 1 (ESI09). Further SAR studies reveal that the isoxazole ring A of 1 can tolerate chemical modifications, either the introduction of flexible electron-donating substituents or rigid fusion with a phenyl ring, leading to the identification of several more potent and diversified EPAC antagonists (e.g., 10 (NY0617), 14 (NY0460), 26 (NY0725), 32 (NY0561), and 33 (NY0562)) with low micromolar inhibitory activities. Molecular docking studies on compounds 10 and 33 indicate that these two series of compounds bind at a similar site with substantially different interactions with the EPAC proteins. 
The findings may serve as good starting points for the development of more potent EPAC antagonists as valuable pharmacological probes or potential drug candidates. (C) 2017 Elsevier Masson SAS. All rights reserved. Activation of nuclear factor erythroid-2-related factor 2 (Nrf2) has been proven to be an effective means of preventing the development of cancer, and natural curcumin stands out as a potent Nrf2 activator and cancer chemopreventive agent. In this study, we synthesized a series of curcumin analogs by introducing geminal dimethyl substituents on the active methylene group in order to find more potent Nrf2 activators and cytoprotectors against oxidative death. The geminally dimethylated, catechol-type curcumin analog (compound 3) was identified as a promising lead molecule in terms of its increased stability and cytoprotective activity against the tert-butyl hydroperoxide (t-BHP)-induced death of HepG2 cells. Mechanistic studies indicate that its cytoprotective effects are mediated by activation of the Nrf2 signaling pathway in a Michael acceptor- and catechol-dependent manner. Additionally, using copper and iron ion chelators, we verified that the two metal ion-mediated oxidations of compound 3 to its corresponding electrophilic o-quinone contribute significantly to its Nrf2-dependent cytoprotection. This work provides an example of successfully designing natural curcumin-directed Nrf2 activators through a stability-increasing and proelectrophilic strategy. (C) 2017 Elsevier Masson SAS. All rights reserved. Synthetic analogs of 1alpha,25-dihydroxyvitamin D-3 (1,25(OH)(2)D-3) have been developed with the goal of improving the biological profile of the natural hormone for therapeutic applications. Derivatives of 1,25(OH)(2)D-3 with the oxolane moiety branched in the side chain at carbon C20 act as vitamin D nuclear receptor (VDR) superagonists, being several orders of magnitude more active than the natural ligand.
Here, we describe the synthesis and biological evaluation of three diastereoisomers of (1S,3R)-dihydroxy-(20S)-[(2''-hydroxy-2''-propyl)-tetrahydrofuryl]-22,23,24,25,26,27-hexanor-1alpha-hydroxyvitamin D3, differing in stereochemistry at positions C2 and C5 of the oxolane ring branched at carbon C22 (1, C2R,C5S; 2, C2S,C5R; 3, C2S,C5S). These compounds act as weak VDR agonists in transcriptional assays, with compound 3 being the most active. X-ray crystallographic analysis of the VDR ligand-binding domain accommodating the three compounds indicates that the oxolane group branched at carbon C22 is not constrained, as it is in compounds with the oxolane group branched at C20, leading to the loss of interactions of the triene group and increased flexibility of the C/D rings and of the side chain. (C) 2017 Elsevier Masson SAS. All rights reserved. The serine/arginine-rich protein kinases (SRPKs) have frequently been found with altered activity in a number of cancers, suggesting they could serve as potential therapeutic targets in oncology. Here we describe the synthesis of a series of twenty-two trifluoromethyl arylamides based on the known SRPK inhibitor N-(2-(piperidin-1-yl)-5-(trifluoromethyl)phenyl)isonicotinamide (SRPIN340) and the evaluation of their antileukemia effects. Some derivatives presented superior cytotoxic effects against myeloid and lymphoid leukemia cell lines compared to SRPIN340. In particular, compounds 24, 30, and 36 presented IC50 values ranging between 6.0 and 35.7 μM. In addition, these three compounds were able to trigger apoptosis and autophagy, and exhibited synergistic effects with the chemotherapeutic agent vincristine. Furthermore, compound 30 was more efficient than SRPIN340 in impairing the intracellular phosphorylation status of SR proteins as well as the expression of MAP2K1, MAP2K2, VEGF, and RON oncogenic isoforms.
Therefore, novel compounds with increased intracellular effects against SRPK activity were obtained, contributing to medicinal chemistry efforts towards the development of new anticancer agents. (C) 2017 Elsevier Masson SAS. All rights reserved. Based on our previous screening hit, compound 1, a series of novel indole-pyrimidine hybrids bearing a morpholine or thiomorpholine moiety was synthesized via an efficient one-pot multistep synthetic method. The antiproliferative activities of the synthesized compounds were evaluated in vitro against four cancer cell lines: HeLa, MDA-MB-231, MCF-7, and HCT116. The results revealed that most compounds possessed moderate to excellent potency. The IC50 values of the most promising compound, 15, are 0.29, 4.04, and 9.48 μM against the MCF-7, HeLa, and HCT116 cell lines, respectively, corresponding to 48.0-, 4.9-, and 1.8-fold greater activity than the lead compound 1. Moreover, fluorescence-activated cell sorting analysis revealed that compound 14, which showed the highest activity against HeLa cells (IC50 = 2.51 μM), induced significant G2/M cell-cycle arrest in a concentration-dependent manner in the HeLa cell line. In addition, nine representative active hybrids were evaluated for tubulin polymerization inhibitory activity, and compound 15 exhibited the most potent anti-tubulin activity, showing 42% inhibition at 10 μM. These preliminary results encourage further investigation of indole-pyrimidine hybrids for the development of potent anticancer agents that inhibit tubulin polymerization. (C) 2017 Elsevier Masson SAS. All rights reserved. Novel analogues of oxadiazole-substituted naphtho[2,3-b]thiophene-4,9-diones were synthesized in which the tricyclic quinone skeleton was systematically replaced with simpler moieties, such as structures with fewer rings and open-chain forms, while the oxadiazole ring was maintained. In addition, variants of the original 1,2,4-oxadiazole ring were explored.
Overall, the complete three-ring quinone was essential for potent suppression of human keratinocyte hyperproliferation, whereas analogous anthraquinones were inactive. Also, the oxadiazole ring per se was not sufficient to elicit activity. However, rearrangement of the heteroatom positions in the oxadiazole ring produced highly potent inhibitors, with compound 24b being the most potent analogue of this series, showing an IC50 in the nanomolar range. Furthermore, experiments in isolated enzymatic assays as well as in the keratinocyte-based hyperproliferation assay did not support a major role for redox cycling in the mode of action of the compounds. (C) 2017 Elsevier Masson SAS. All rights reserved. Multivalent ligands that exhibit high binding affinity to the influenza hemagglutinin (HA) trimer can block the interaction of HA with its sialic acid receptor. In this study, a series of multivalent pentacyclic triterpene-functionalized per-O-methylated cyclodextrin (CD) derivatives was designed and synthesized using the 1,3-dipolar cycloaddition click reaction. A cell-based assay showed that three compounds (25, 28 and 31) exhibited strong inhibitory activity against influenza A/WSN/33 (H1N1) virus. Compound 28 showed the most potent anti-influenza activity, with an IC50 of 4.7 μM. A time-of-addition assay indicated that compound 28 inhibits the entry of influenza virus into the host cell. Further hemagglutination inhibition (HI) and surface plasmon resonance (SPR) assays indicated that compound 28 binds tightly to the influenza HA protein, with a dissociation constant (K-D) of 4.0 μM. Our results demonstrate a strategy of using per-O-methylated beta-CD as a scaffold for designing multivalent compounds that disrupt the influenza HA protein-host receptor interaction and thus block influenza virus entry into host cells.
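As a rough aid to interpreting an SPR-derived dissociation constant such as the one above, the fraction of binding sites occupied at equilibrium under a simple 1:1 binding model can be sketched as follows; the K-D of 4.0 μM is taken from the abstract, while the ligand concentrations are purely illustrative:

```python
def fractional_occupancy(ligand_uM: float, kd_uM: float) -> float:
    """Equilibrium fractional occupancy for a 1:1 binding model:
    theta = [L] / (K_D + [L])."""
    return ligand_uM / (kd_uM + ligand_uM)

# K_D = 4.0 uM for compound 28 binding HA (reported above).
print(fractional_occupancy(4.0, 4.0))             # 0.5 (half-saturation at [L] = K_D)
print(round(fractional_occupancy(36.0, 4.0), 2))  # 0.9
```

Note that this single-site model ignores the avidity effects that make multivalent ligands like these CD conjugates effective in practice; it only illustrates what the reported K-D means for one binding event.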
A PG-tb1 hapten from the West Beijing strains of the Mycobacterium tuberculosis cell wall has been efficiently synthesized and conjugated to CRM197 in a simple way, as a linker-equipped carbohydrate, by applying squaric acid chemistry to give an original neoglycoprotein, creating a potent T-dependent conjugate vaccine. The intermediate monoester can be easily purified, and the degree of incorporation can be monitored by MALDI-TOF mass spectrometry. After systemic administration in mice without any adjuvant, the conjugate induced high antigen-specific IgG levels in serum. Furthermore, following the third immunization, significant antibody titers, frequently exceeding 0.8 million, were observed in the sera of mice vaccinated with the PG-CRM197 conjugate, demonstrating its potential for the preparation of a TB vaccine. (C) 2017 Elsevier Masson SAS. All rights reserved. The c-Met/HGF signaling pathway plays an important role in cancer progression and is associated with poor prognosis and drug resistance. Based on metabolite profiling of (S)-7-fluoro-6-(1-(6-(1-methyl-1H-pyrazol-4-yl)-1H-imidazo[4,5-b]pyrazin-1-yl)ethyl)quinoline (1), a series of 2-substituted and 3-substituted 6-(1-(1H-[1,2,3]triazolo[4,5-b]pyrazin-1-yl)ethyl)quinoline derivatives was rationally designed and evaluated. Most of the 3-substituted derivatives not only exhibited potent activities in both enzymatic and cellular assays, but were also stable in liver microsomes from different species (human, rat and monkey). SAR investigation revealed that introduction of an N-methyl-1H-pyrazol-4-yl group at the 3-position of the quinoline moiety is beneficial for inhibitory potency, especially in the cellular assays. The influence of a fluorine atom at the 7-position or 5,7-positions of the quinoline moiety, and of substituents at the 6-position of the triazolo[4,5-b]pyrazine core, on overall activity is not very significant.
Racemate 14, an extremely potent and exquisitely selective c-Met inhibitor, demonstrated favorable pharmacokinetic properties in rats, no significant AO metabolism, and effective tumor growth inhibition in c-Met-overexpressing NSCLC (H1993 cell line) and gastric cancer (SNU-5 cell line) xenograft models. Docking analysis indicated that, besides the typical interactions of most selective c-Met inhibitors, an intramolecular halogen bond and additional hydrogen bond interactions with the kinase are beneficial to the binding. These results may provide deep insight into potential structural modifications for developing potent c-Met inhibitors. (C) 2017 Elsevier Masson SAS. All rights reserved. Previously, we reported the discovery of a series of N-hydroxycinnamamide-based HDAC inhibitors, among which compound 11y exhibited high HDAC1/3 selectivity. In the current study, structural derivatization of 11y led to a new series of benzamide-based HDAC inhibitors. Most of the compounds exhibited high HDAC inhibitory potency. Compound 11a (with 4-methoxybenzoyl as the N-substituent in the cap and 4-(aminomethyl)benzoyl as the linker group) exhibited some selectivity toward HDAC1 and showed potent antiproliferative activity against several tumor cell lines. In vivo studies revealed that compound 11a displayed potent oral antitumor activity in both a hematological tumor (U937) xenograft model and a solid tumor (HCT116) xenograft model, with no obvious toxicity. Further modification of benzamides 3, 11a and 19 afforded new thienyl and phenyl compounds (50a, 50b, 63a, 63b and 63c) with dramatic HDAC1 and HDAC2 dual selectivity, and the fluorine-containing compound 56, with moderate HDAC3 selectivity. (C) 2017 Elsevier Masson SAS. All rights reserved.
We synthesized two mixed-ligand Cu(II) complexes containing different aroylhydrazone ligands and a pyridine co-ligand, namely [Cu(L1)(Py)] (C1) and [Cu(L2)(Py)(Br)] (C2) (L1 = (E)-2-hydroxy-N'-((2-hydroxynaphthalen-1-yl)methylene)benzohydrazide, Py = pyridine, L2 = (E)-2-hydroxy-N'-(phenyl(pyridin-2-yl)methylene)benzohydrazide), and assessed their chemical and biological properties to understand their marked activity. C2 showed better anticancer activity than C1 in various human cancer cell lines, including the cisplatin-resistant lung cancer cell line A549cisR. Both Cu(II) complexes, especially C2, displayed promising anti-metastatic activity against HepG2 cells. Spectroscopic titration and agarose gel electrophoresis experiments indicated that C2 exhibits binding affinity toward calf-thymus DNA and efficient pBR322 DNA-cleaving ability. Further mechanistic studies showed that C2 effectively induced DNA damage, leading to cell cycle arrest at the G2/M phase, and also stimulated mitochondrial dysfunction mediated by reactive oxygen species and caspase-dependent apoptosis. (C) 2017 Elsevier Masson SAS. All rights reserved. Adenosine induces bronchial hyperresponsiveness and inflammation in asthmatics through activation of the A(2B) adenosine receptor (A(2B)AdoR). Selective antagonists have been shown to attenuate airway reactivity and improve inflammatory conditions in pre-clinical studies. Hence, the identification of novel, potent and selective A(2B)AdoR antagonists may be beneficial for the potential treatment of asthma and Chronic Obstructive Pulmonary Disease (COPD). Toward this end, we explored several prop-2-ynylated C8-aryl and heteroaryl substitutions on the xanthine chemotype and found that the 1-prop-2-ynyl-1H-pyrazol-4-yl moiety was best tolerated at the C8 position. Compound 59 exhibited a binding affinity (K-i) of 62 nM but was non-selective for A(2B)AdoR over the other AdoRs.
Incorporation of a substituted phenyl on the terminal acetylene increased the binding affinity (K-i) significantly, to <10 nM. Various substitutions on the terminal phenyl group and different alkyl substitutions at N-1 and N-3 were explored to improve the potency, the selectivity for A(2B)AdoR and the solubility. In general, compounds with a meta-substituted phenyl provided better selectivity for A(2B)AdoR than para-substituted analogs. Substitutions such as basic amines (pyrrolidine, piperidine, piperazine) or cycloalkyls bearing a polar group were tried on the terminal acetylene, keeping in mind the generally poor solubility of xanthine analogs. However, these substitutions led to a decrease in affinity compared to compound 59. Subsequent SAR optimization resulted in the identification of compound 46, with high human A(2B)AdoR affinity (K-i = 13 nM), selectivity against the other AdoR subtypes, and good pharmacokinetic properties. It was found to be a potent functional A(2B)AdoR antagonist, with a K-i of 8 nM in a cAMP assay in hA(2B)-HEK293 cells and an IC50 of 107 nM in an IL-6 assay in NIH-3T3 cells. A docking study was performed to rationalize the observed affinity data. Structure-activity relationship (SAR) studies also led to the identification of compound 36 as a potent A(2B)AdoR antagonist, with a K-i of 1.8 nM in the cAMP assay and good aqueous solubility of 529 μM at neutral pH. Compound 46 was further tested in vivo and found to be efficacious in an ovalbumin-induced allergic asthma model in mice. (C) 2017 Elsevier Masson SAS. All rights reserved. Human sirtuin 2 (SIRT2) plays pivotal roles in multiple biological processes such as cell cycle regulation, autophagy, and immune and inflammatory responses. Dysregulation of SIRT2 is considered a major factor contributing to several human diseases, including cancer. Development of new potent and selective SIRT2 inhibitors is therefore desirable and may provide a new strategy for the treatment of related diseases.
Herein, a structure-based optimization approach led to new 2-((4,6-dimethylpyrimidin-2-yl)thio)-N-phenylacetamide derivatives as SIRT2 inhibitors. SAR analyses of the newly synthesized derivatives revealed a number of new potent SIRT2 inhibitors, among which 28e is the most potent, with an IC50 value of 42 nM. Selectivity analyses found that 28e has very good selectivity for SIRT2 over SIRT1 and SIRT3. In cellular assays, 28e potently inhibited the human breast cancer cell line MCF-7 and increased the acetylation of alpha-tubulin in a dose-dependent manner. This study will aid further efforts to develop highly potent and selective SIRT2 inhibitors for the treatment of cancer and other related diseases. (C) 2017 Elsevier Masson SAS. All rights reserved. Epoxyazadiradione (1), a major compound derived from neem oil, showed modest anti-plasmodial activity against CQ-resistant and CQ-sensitive strains of the most virulent human malaria parasite, P. falciparum. A series of analogues was synthesized by modification of the key structural moieties of this high-yield natural product. Of all the compounds tested, compounds 3c and 3g showed modest anti-plasmodial activity against the CQ-sensitive strain (IC50 = 2.8 ± 0.29 μM and 1.5 ± 0.01 μM) and the CQ-resistant strain (IC50 = 13 ± 1.08 μM and 1.2 ± 0.14 μM), while compounds 3k, 3l and 3m showed modest activity against the CQ-sensitive strain of P. falciparum with IC50 values of 2.3 ± 0.4 μM, 2.9 ± 0.1 μM and 1.7 ± 0.06 μM, respectively. Additionally, the cytotoxic properties of these derivatives against the SIHA, PANC 1, MDA-MB-231, and IMR-3 cancer cell lines were also studied; the results indicated low cytotoxic potential for all the derivatives, indicating a high selectivity index for the compounds. (C) 2017 Elsevier Masson SAS. All rights reserved.
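The selectivity index mentioned above is conventionally computed as the cytotoxic concentration in a mammalian cell line divided by the anti-plasmodial IC50. A minimal sketch, in which the CC50 value is a hypothetical placeholder and only the 1.5 μM IC50 echoes a value reported above:

```python
def selectivity_index(cc50_mammalian_uM: float, ic50_parasite_uM: float) -> float:
    """Selectivity index: cytotoxicity (CC50, mammalian cells) divided by
    anti-plasmodial potency (IC50); higher values indicate greater
    selectivity for the parasite over host cells."""
    return cc50_mammalian_uM / ic50_parasite_uM

# CC50 of 150 uM is a hypothetical placeholder (not from the study);
# 1.5 uM is the CQ-sensitive IC50 reported for compound 3g above.
print(selectivity_index(150.0, 1.5))  # 100.0
```

A large index means the compound kills the parasite at concentrations far below those that harm host cells, which is the property the abstract's "high selectivity index" refers to.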
The 2-oxindole nucleus is a central core for developing new anticancer agents, and substitution at its 3-position can affect antitumor activity. Utilizing a pharmacophore hybridization approach, a novel series of antiproliferative agents was obtained by modifying the 3-substituted-2-oxindole pharmacophore through attachment of an alpha-bromoacryloyl moiety, acting as a Michael acceptor, at the 5-position of the 2-oxindole framework. The impact of the substituent at the 3-position of the 2-oxindole core on potency and selectivity against a panel of seven different cancer cell lines was examined. We found that these hybrid molecules displayed potent antiproliferative activity against a panel of four cancer cell lines, with single- to double-digit nanomolar 50% inhibitory concentrations (IC50). Distinctive selective antiproliferative activity was obtained towards the CCRF-CEM and RS4;11 leukemic cell lines. In studies of the possible mechanism of action, we observed that the two most active compounds, namely 3(E) and 6(Z), strongly induce apoptosis that follows the mitochondrial pathway. Interestingly, a decrease in intracellular reduced glutathione (GSH) content and in reactive oxygen species (ROS) production was detected in treated cells compared with controls, suggesting that these effects may be involved in the mechanism of action. (C) 2017 Elsevier Masson SAS. All rights reserved. A new class of optical isomers of 2-arylbenzofuran derivatives was synthesized and evaluated as potential beta-amyloid plaque imaging agents. Both lipophilicity and signal-to-noise ratio were significantly improved by adding a chiral hydroxyl group to give the 1-fluoro-3-(oxidanyl)propan-2-ol side chain. These derivatives displayed moderate to high binding affinity towards A beta(1-42) aggregates. Four tracers possessing potent binding affinity (K-i < 30 nM) were chosen for further investigation.
In in vitro autoradiography studies, the four selected probes, after labeling with F-18, showed effective binding to A beta plaques in Tg mouse and AD human brain tissue. The purified enantiomers displayed clear differences in biodistribution experiments in normal mice, with the (S)-enantiomers showing faster clearance than the (R)-enantiomers. All in all, (S)-[F-18]17 (K-i = 14.6 nM), with excellent pharmacokinetics (brain(2 min) = 8.60% ID/g, brain(2 min)/brain(60 min) = 14.1), deserves further evaluation. (C) 2017 Elsevier Masson SAS. All rights reserved. DNA methyltransferases (DNMTs) and histone deacetylases (HDACs) are important epigenetic targets in anticancer drug development. Recent studies indicate that DNMT inhibitors and HDAC inhibitors display synergistic effects in certain cancers; therefore, the development of molecules targeting both DNMT and HDAC is of therapeutic advantage against these cancers. Based on the structure of the DNMT inhibitor NSC-319745 and the pharmacophore characteristics of HDAC inhibitors, a series of hydroxamic acid derivatives of NSC-319745 was designed and synthesized as DNMT and HDAC multifunctional inhibitors. Most compounds displayed appreciable DNMT inhibitory potency and potent HDAC inhibitory activity; in particular, compound 15a showed much better DNMT1 inhibitory potency than NSC-319745 and inhibited HDAC1 and HDAC6 with IC50 values of 57 and 17 nM, respectively. Furthermore, the synthesized compounds exhibited significant cytotoxicity against the human cancer cell lines K562 and U937. Mechanistic studies demonstrated that 15a treatment of U937 cells increased histone H3K9 and H4K8 acetylation, promoted P16 CpG island demethylation and upregulated P16 expression, regulated apoptosis-related protein expression at the cellular level, and induced remarkable apoptosis in U937 cells. Moreover, the genotoxicity of representative compounds was evaluated.
In summary, our study provides a practical drug design strategy targeting multiple enzymes, and 15a represents a novel and promising lead compound for the development of epigenetic inhibitors as antitumor agents. (C) 2017 Elsevier Masson SAS. All rights reserved. The naturally occurring styryl lactone crassalactone D (1), the unnatural 4-epi-crassalactone D (2), and the corresponding 7-epimers (3 and 4) have been synthesized starting from D-glucose. The key step of the synthesis is a new one-pot sequence that commences with a Z-selective Wittig olefination of suitably functionalized sugar lactols with a stabilized ylide, (methoxycarbonylmethylene)triphenylphosphorane, in dry methanol, to afford 1 or 3 in mixtures with the corresponding 4-epimers (2 or 4, respectively). A number of 6-O-cinnamoyl derivatives of styryl lactones 1-4 have been prepared, bearing electron-donating or electron-withdrawing functionalities at the C-4 position of the cinnamic acid residue. The synthesized products were evaluated for their in vitro antiproliferative activity against selected human tumour cell lines, whereupon very potent cytotoxicity was recorded in many cases. SAR analysis indicated some important structural features responsible for biological activity, such as the stereochemistry at the C-4 and C-7 positions, as well as the nature of the substituent at the C-4 position of the aromatic ring of the cinnamate moiety. Flow cytometry and Western blot analysis data gave insight into the mechanism underlying the antiproliferative effects of the synthesized compounds. (C) 2017 Elsevier Masson SAS. All rights reserved. Hybrid molecules are used as anticancer agents to improve effectiveness and diminish drug resistance. The current study therefore aimed to introduce twenty novel phenothiazine-sulfonamide hybrids (5-22, 24 and 25) with promising anticancer activity.
Compounds 11 and 13 showed more potent anticancer properties (IC50 = 8.1 and 8.8 μM) than the reference drug doxorubicin (IC50 = 9.8 μM) against the human breast cancer cell line T47D. To determine the mechanism of their anticancer activity, compounds 5, 6, 7, 11, 13, 14, 16, 17, 19 and 22, which showed promising activity on T47D, were evaluated for their aromatase inhibitory effect. The results disclosed that the most potent aromatase inhibitors, 11 and 13, showed the lowest IC50 values on the target enzyme (5.67 and 6.7 μM, respectively). Accordingly, the apoptotic effect of the most potent compound, 11, was extensively investigated: it produced a marked increase in Bax level of up to 55,000-fold and down-regulation of Bcl2 to 5.24 x 10(-4)-fold, in comparison to the control. Furthermore, the effect of compound 11 on caspases 3, 8 and 9 was evaluated; it increased their levels by 20-, 34-, and 8.9-fold, respectively, indicating activation of both the intrinsic and extrinsic pathways. The effect of compound 11 on the cell cycle and its cytotoxic effect were also examined. Moreover, molecular docking and computer-aided ADMET studies were adopted to support the proposed mechanism of action. (C) 2017 Elsevier Masson SAS. All rights reserved. The cell division cycle phosphatases CDC25A, B and C are involved in modulating cell cycle processes and are found overexpressed in a large panel of cancer types. Here, we describe the development of two novel quinone-polycycle series of CDC25A and C inhibitors: the coumarin-based 1a-k and the quinolinone-based 2a-g, which inhibit either enzyme down to the sub-micromolar level and at single-digit micromolar concentrations, respectively. When tested in six different cancer cell lines, compound 2c displayed the highest efficacy in arresting cell viability, showing sub-micromolar IC50 values in almost all cell lines, a profile even better than that of the reference compound NSC95397.
To investigate the putative binding mode of the inhibitors and to develop quantitative structure-activity relationships, molecular docking and 3-D QSAR studies were also carried out. Four selected inhibitors, 1a, 1d, 2a and 2c, were also tested in A431 cancer cells; among them, compound 2c was the most potent, leading to cell proliferation arrest and decreased levels of the CDC25C protein and its splicing variant. Compound 2c produced increased phosphorylation of histone H3 and induction of PARP and caspase-3 cleavage, highlighting its contribution to cell death through pro-apoptotic effects. (C) 2017 Elsevier Masson SAS. All rights reserved. Toll-like receptor 9 (TLR9) is a major therapeutic target for numerous inflammatory disorders. The development of small-molecule inhibitors of TLR9 remains largely empirical, owing to the lack of structural understanding of potential TLR9 antagonism by small molecules and to the unusual topology of the receptor's ligand-binding surface. To enable the rational design of small-molecule TLR9 antagonists, an enhanced homology model of human TLR9 (hTLR9) was constructed. Binding mode analysis of a series of molecules of characteristic molecular geometry, flexibility and basicity was conducted based on the crystal structures of inhibitory DNA (iDNA) bound to horse and bovine TLR9. Interactions with specific amino acid residues in four leucine-rich repeat (LRR) regions of TLR9 were identified as critical for antagonism by small molecules. Biological validation of TLR9 antagonism and its correlation with probe-receptor interactions led to a reliable model that can be used for the development of novel small molecules with potent TLR9 antagonism (IC50 = 30-100 nM) and excellent selectivity against TLR7. (C) 2017 Elsevier Masson SAS. All rights reserved. Targeting Pim-1 kinase has recently proven profitable for combating cancer proliferation.
In the current study, we report the design, synthesis and biological evaluation of two novel series, 2-aminocyanopyridines (5a-g) and 2-oxocyanopyridines (6a-g), targeting Pim-1 kinase. All of the newly synthesized compounds were evaluated for in vitro anticancer activity against a panel of three cell lines: liver cancer (HepG2), colon cancer (HCT-116) and breast cancer (MCF-7). Most of the compounds showed good to moderate anti-proliferative activity against the HepG2 and HCT-116 cell lines, while only a few compounds showed significant cytotoxic activity against the MCF-7 cell line. The Pim-1 kinase inhibitory activity of the two series was then evaluated; most of the tested compounds showed marked Pim-1 kinase inhibition (26%-89%). Determination of the IC50 values revealed very potent molecules in the sub-micromolar range, with compound 6c possessing an IC50 value of 0.94 μM. Apoptosis studies were conducted on the most potent compound, 6c, to evaluate the proapoptotic potential of our compounds. Interestingly, it increased the level of active caspase-3 and boosted the Bax/Bcl2 ratio 22,704-fold in comparison to the control. Finally, a molecular docking study was conducted to reveal the probable interactions with the Pim-1 kinase active site. (C) 2017 Elsevier Masson SAS. All rights reserved. The extracellular signal-regulated kinase (ERK) is one of the most important molecular targets in cancer, controlling diverse cellular processes such as proliferation, survival, differentiation and motility. Similarly, the retinoblastoma protein (Rb) is a tumor suppressor whose function is to prevent excessive cell growth by inhibiting cell cycle progression. When the cell is ready to divide, pRb is phosphorylated, becomes inactive and allows cell cycle progression.
Herein, we discovered a new series of tetrahydrocarbazoles as dual inhibitors of ERK and Rb phosphorylation. An in-house small-molecule library was screened for inhibition of pERK and pRb phosphorylation, which led to the discovery of the tetrahydrocarbazole series as potential leads. N-(3-methylcyclopentyl)-6-nitro-2,3,4,4a,9,9a-hexahydro-1H-carbazol-2-amine (1) is the dual inhibitor lead identified through screening, displaying inhibition of pERK and pRb phosphorylation with IC50 values of 5.5 and 4.8 μM, respectively. A short structure-activity relationship (SAR) study identified another dual inhibitor, 9-methyl-N-(4-methylbenzyl)-2,3,4,4a,9,9a-hexahydro-1H-carbazol-2-amine (16), with IC50 values of 4.4 and 3.5 μM for inhibition of pERK and pRb phosphorylation, respectively. This compound has potential for further lead optimization to discover promising molecularly targeted anticancer agents. (C) 2017 Elsevier Masson SAS. All rights reserved. A series of flexible urea derivatives has been synthesized and demonstrated to act as selective cardiac myosin ATPase activators. Among them, 1-phenethyl-3-(3-phenylpropyl)urea (1; cardiac myosin ATPase activation at 10 μM = 51.1%; FS = 18.90; EF = 12.15) and 1-benzyl-3-(3-phenylpropyl)urea (9; activation = 53.3%; FS = 30.04; EF = 18.27) showed significant activity in vitro and in vivo. Replacement of the phenyl ring with a tetrahydropyran-4-yl moiety, viz. 1-(3-phenylpropyl)-3-((tetrahydro-2H-pyran-4-yl)methyl)urea (14; activation = 81.4%; FS = 20.50; EF = 13.10), or a morpholine moiety, viz. 1-(2-morpholinoethyl)-3-(3-phenylpropyl)urea (21; activation = 44.0%; FS = 24.79; EF = 15.65), also proved efficient in activating cardiac myosin. The potent compounds 1, 9, 14 and 21 were found to be selective for cardiac myosin over skeletal and smooth myosins.
Thus, these urea derivatives constitute a potent scaffold for the development of newer cardiac myosin activators for the treatment of systolic heart failure. (C) 2017 Elsevier Masson SAS. All rights reserved. Activated checkpoint kinase 2 (Chk2), one of the main enzymes affecting the cell cycle, is a tumor suppressor. 2-Biarylbenzimidazoles are a potent, selective class of Chk2 inhibitors; structure-based design was applied to synthesize a new series of this class in which the lateral aryl group is replaced by substituted pyrazoles. Ten pyrazole-benzimidazole conjugates, selected from the best fifty candidates according to docking programs, were chemically synthesized in this study. The activities of conjugates 5-14 as checkpoint kinase inhibitors and as antitumor agents, alone and in combination with genotoxic drugs, were evaluated. The effect of compounds 7 and 12 on cell-cycle phases was analyzed by flow cytometry. The antitumor activity of compounds 7 and 12, as single agents and in combination with doxorubicin, was assessed in animals bearing MNU-induced breast cancer. The results indicated that compounds 5-14 inhibited Chk2 activity with high potency (IC50 = 5.5-52.8 nM). The cytotoxicity of both cisplatin and doxorubicin was significantly potentiated by most of the conjugates against MCF-7 cells. Compounds 7 and 12, and their combinations with doxorubicin, induced cell cycle arrest in MCF-7 cells. Moreover, compound 7 exhibited markedly higher antitumor activity as a single agent in animals than its combination with doxorubicin or doxorubicin alone, whereas the combination of compound 12 with doxorubicin was more effective in animals than either single-agent treatment. In conclusion, pyrazole-benzimidazole conjugates are highly active Chk2 inhibitors that possess anticancer activity, potentiate the activity of genotoxic anticancer therapies, and deserve further evaluation. (C) 2017 Elsevier Masson SAS. All rights reserved.
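To put enzyme-inhibition IC50 values such as those above in perspective, the fraction of activity inhibited at a given inhibitor concentration can be sketched with a standard Hill (logistic) model; the 5.5 nM IC50 echoes the most potent conjugate above, while the Hill slope of 1 is an assumption, not a value from the study:

```python
def fraction_inhibited(conc_nM: float, ic50_nM: float, hill: float = 1.0) -> float:
    """Fractional inhibition under a standard Hill model:
    f = [I]^h / (IC50^h + [I]^h); f = 0.5 when [I] = IC50."""
    return conc_nM ** hill / (ic50_nM ** hill + conc_nM ** hill)

# IC50 = 5.5 nM (most potent conjugate reported above); Hill slope assumed = 1.
print(fraction_inhibited(5.5, 5.5))             # 0.5 at the IC50, by definition
print(round(fraction_inhibited(49.5, 5.5), 2))  # 0.9 at 9x the IC50
```

With a unit Hill slope, going from 50% to 90% inhibition requires a 9-fold increase in concentration, which is why low-nanomolar IC50 values matter for achieving near-complete target engagement at tolerable doses.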
Avoiding the use of solvents in organic synthesis and introducing environmentally friendly procedures can mitigate environmental problems. A facile and efficient solvent-free mechanochemical method (grinding) was developed to synthesize novel bis-biphenyl substituted thiazolidinones using nontoxic and cheap N-acetyl glycine (NAG). Organocatalytic condensation of a series of Schiff bases bearing different substituents with thioglycolic acid produces a variety of thiazolidinone derivatives in good to excellent yields. In vitro inhibition studies of these thiazolidinone analogues against mushroom tyrosinase revealed that many of them possess good to excellent tyrosinase inhibition at low micromolar concentrations. In particular, six compounds exhibited potent inhibitory potential, with IC50 values ranging from 0.61 +/- 0.31 to 21.61 +/- 0.11 mu M, as compared with that of the standard kojic acid (IC50 6.04 +/- 0.11 mu M). Further molecular docking studies revealed that the thiazolidinone moiety plays a key role in the inhibition mechanism by fitting well into the enzyme binding pocket. (C) 2017 Elsevier Masson SAS. All rights reserved. Vector control of disease-transmitting mosquitoes by insecticides has a central role in reducing the number of parasitic and viral infection cases. The currently used insecticides are efficient, but safety concerns and the development of insecticide-resistant mosquito strains warrant the search for alternative compound classes for vector control. Here, we have designed and synthesized thiourea-based compounds as non-covalent inhibitors of acetylcholinesterase 1 (AChE1) from the mosquitoes Anopheles gambiae (An. gambiae) and Aedes aegypti (Ae. aegypti), as well as a naturally occurring resistance-conferring mutant. The N-aryl-N'-ethyleneaminothioureas proved to be inhibitors of AChE1; the most efficient one showed submicromolar potency. 
Importantly, the inhibitors exhibited selectivity over the human AChE (hAChE), which is desirable for new insecticides. Structure-activity relationship (SAR) analysis of the thioureas revealed that small changes in the chemical structure had a large effect on inhibition capacity. The thioureas showed different SARs for inhibition of AChE1 and hAChE, respectively, enabling an investigation of structure-selectivity relationships. Furthermore, insecticidal activity was demonstrated using adult and larval An. gambiae and Ae. aegypti mosquitoes. (C) 2017 Elsevier Masson SAS. All rights reserved. In this study, SiO2-EDTA was prepared by a silanization reaction between N-(trimethoxysilylpropyl)ethylenediamine triacetic acid, trisodium salt (EDTA-silane) and surface hydroxyl groups for enhanced removal of Pb(II), Pb(II)-Cit (the clathrate generated by Pb(II) and trisodium citrate dihydrate (Cit)) and Pb(II)-EDTA (20%) from aqueous solutions. SiO2-EDTA composites were characterized using SEM, TEM, EDX-mapping, FTIR, XPS and TGA analyses. The influence of solution pH, initial concentration, contact time and co-existing interferents was also studied. Results demonstrated that the composite adsorbed 147.52, 107.65 and 124.18 mg g(-1) of Pb(II), Pb(II)-Cit, and Pb(II)-EDTA (20%), respectively, at an initial Pb(II) concentration of 100 mg L-1. Kinetic studies revealed that adsorption was initially very fast and reached equilibrium within 1.0 h. Moreover, the Pb(II) adsorption capacities were considerably affected by co-existing cations but not inhibited by natural organic matter (NOM). Characterization analyses confirmed that EDTA was successfully assembled on the SiO2, which served as a supporting matrix owing to its large specific surface area. 
Findings from this study suggest that the present composite can be considered a promising adsorbent for large-scale treatment of wastewater containing elevated Pb(II) concentrations. (C) 2017 Elsevier Inc. All rights reserved. Transition metal oxides are highly promising anode materials for lithium-ion batteries, with much higher theoretical electrochemical capacities than commercialized carbon materials, but the serious capacity fading and poor cycle stability caused by large volume changes and sluggish kinetics must be addressed for their practical application. Herein, we demonstrated a novel strategy to synthesize 2D layered mesoporous-MoO2/graphene (meso-MoO2/rGO) electrode materials using KIT-6/rGO as a template and ammonium molybdate as a precursor via a nanocasting method. By combining graphene with MoO2 and endowing it with a mesoporous structure, 2D layered meso-MoO2/rGO electrode materials are expected to show superior electrical conductivity, structural flexibility, and chemical stability, which may provide uninhibited conducting pathways for fast charge transfer and transport between oxide nanoparticles and graphene. In addition, mesoporous MoO2 is also anticipated to optimize Li+ transport in the pore walls and allow fast electrolyte transport within the highly ordered mesopores. As a result, meso-MoO2/rGO electrode materials possess an ordered mesoporous structure with superior electrochemical performance. The electrochemical performance was examined using galvanostatic charge-discharge, cyclic voltammetry, and electrochemical impedance spectroscopy (EIS) techniques. Benefiting from the combined effects of mesoporous MoO2 and 2D layered graphene, meso-MoO2/rGO electrode materials alleviate the volume effect and deliver enhanced discharge and charge capacities and robust cycle stability. 
The meso-MoO2/rGO composite delivers a first discharge capacity of 1160.6 mA h g(-1) and a reversible capacity of 801 mA h g(-1) after 50 cycles, making it promising for use as a high-performance anode material in lithium-ion batteries. (C) 2017 Elsevier Inc. All rights reserved. A polyethersulfone (PES) ultrafiltration membrane with simultaneously enhanced permeability and fouling resistance was prepared using a newly synthesized aromatic polyamide (PA-6) as an additive. A series of asymmetric membranes was prepared by adding different amounts of PA-6 to the casting solution using the phase inversion induced by immersion precipitation method. Attenuated total reflection-Fourier transform infrared spectra (ATR-FTIR) and water contact angle measurements confirmed PA-6 enrichment at the membrane surface, which increased membrane hydrophilicity and wettability. SEM images elucidated the effect of PA-6 addition on the PES membrane morphology through an increase in pore density. Filtration performance, evaluated by dead-end filtration of bovine serum albumin (BSA) solution, showed that permeability and fouling resistance were improved by optimizing the PA-6 content. When the PA-6 content was 2 wt%, the permeability reached approximately 10 times that of the pure PES membrane. In comparison to the blend membrane of PES and 2 wt% polyvinyl pyrrolidone (PVP), the blend membrane with 2 wt% PA-6 showed significant flux recovery ability. The rejection of all blended membranes remained high, above 95%. In addition, the results were compared with those obtained using PVP as a conventional additive. Although the PVP-blended membranes exhibited higher permeability, they showed poorer antifouling properties. Finally, a membrane with 1 wt% PVP and 1 wt% PA-6 was prepared and showed the best overall performance in terms of improved permeability and antifouling properties. (C) 2017 Elsevier Inc. All rights reserved. 
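The membrane abstract above evaluates permeability, flux recovery and BSA rejection. These figures of merit reduce to simple ratios; the following is a minimal sketch of the generic definitions, with illustrative numbers that are not taken from the study.

```python
def permeate_flux(volume_L, area_m2, time_h):
    """Permeate flux J (L m^-2 h^-1) from collected volume, membrane area and time."""
    return volume_L / (area_m2 * time_h)

def flux_recovery_ratio(j_water_initial, j_water_after_cleaning):
    """FRR (%): fraction of the initial pure-water flux recovered after fouling and washing."""
    return 100.0 * j_water_after_cleaning / j_water_initial

def rejection(c_feed, c_permeate):
    """Observed solute rejection (%) from feed and permeate concentrations (same units)."""
    return 100.0 * (1.0 - c_permeate / c_feed)

# Illustrative numbers only: 1 g/L BSA feed, 0.04 g/L in the permeate -> 96% rejection
r = rejection(1.0, 0.04)
# Hypothetical pure-water fluxes before fouling and after cleaning -> 85% recovery
frr = flux_recovery_ratio(200.0, 170.0)
```

A higher FRR at comparable flux is what distinguishes the PA-6 blend from the PVP blend in the abstract: permeability alone does not capture antifouling behavior.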
Maxwell-Stefan diffusion with a single-site Langmuir isotherm was used to model the flow of gas inside the micropores of an HT-Silica membrane. Coupled with the Van't Hoff and Arrhenius equations, the diffusivity and the energies contributed by surface affinity and gas kinetics were quantified and evaluated. Results indicate that all four gases studied were affected by surface affinity to a significant extent. Surface affinity contributed 62% of the energy in the adsorption of CO2, 48% in the adsorption of CH4, 48% in the adsorption of N-2 and 46% in the adsorption of H-2. This explains the higher CO2 permeability despite CO2 molecules being heavier than the other gas molecules compared in the analysis. (C) 2017 Elsevier Inc. All rights reserved. The catalytic activity and stability of amino-based MIL-53(Al) materials were tested in the Knoevenagel condensation of benzaldehyde with malononitrile. The amine content was modulated by using different ratios of benzene-1,4-dicarboxylic acid (BDC) and 2-amino-benzene-1,4-dicarboxylic acid (NH2-BDC) as organic ligands for the synthesis. The amino-based MIL-53(Al) material synthesized with equimolar amounts of BDC and NH2-BDC (NH2(50%)-MIL-53(Al)) showed the best catalytic performance in terms of benzaldehyde conversion and benzylidenemalononitrile product yield. This equimolar ratio provided the best balance between the amount of basic amine sites and the pore size of the framework. The catalytic activity was also tested for several aldehydes with different molecular sizes and chemical substituents, and with ethyl cyanoacetate as another methylene compound. The NH2(50%)-MIL-53(Al) material also displayed remarkable catalytic activity and stability compared to other amino-containing MOF materials (UiO-66-NH2, MIL-101(Al)-NH2 and IRMOF-3) and Na-exchanged beta zeolite. 
The NH2(50%)-MIL-53(Al) material was considerably active at 40 degrees C, and its catalytic performance was further enhanced when methanol was used as the solvent in the Knoevenagel condensation reaction. The reusability and stability of NH2(50%)-MIL-53(Al) were demonstrated over a set of five consecutive reaction cycles without appreciable loss of activity and with high catalyst recovery. (C) 2017 Elsevier Inc. All rights reserved. To achieve proper efficiency in the methanol-to-olefin (MTO) process, a series of core-shell ZSM-5@MnO nanocatalysts with different shell thicknesses (6.3-13.7 nm) was successfully synthesized. A bottom-up hydrothermal-precipitation route provides well-controlled conditions for core-shell nanoparticle formation. Aging time is an effective synthesis factor that controls the precipitation rate, so the influence of aging time (5, 10 and 15 h) on the nanometric shell thickness was investigated. The prepared samples were characterized by XRD, FESEM, TEM, EDX, BET-BJH, FTIR, UV-Vis and NH3-TPD analyses, which consistently confirmed the formation of core-shell nanoparticles. Both TEM and UV-Vis analyses indicate that increasing the aging time increased the shell thickness. Among the synthesized core-shell nanocatalysts, ZSM-5@MnO(5), with the minimum shell thickness and the maximum core-to-shell ratio, showed the highest olefin productivity enhancement, 15% over bare ZSM-5. In fact, the hybrid ZSM-5@MnO catalyst benefits from the selectivity of MnO toward ethylene as well as the high acid strength of ZSM-5. Precise engineering of the core-shell composition changes the shape selectivity of the nanocatalyst toward more valuable light olefins (ethylene), and optimization of the well-controlled shell thickness significantly improves the stability and catalytic performance of the hybrid catalysts. (C) 2017 Elsevier Inc. All rights reserved. 
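The catalysis abstracts above quantify performance as reactant conversion, product yield and olefin selectivity. The three are linked by yield = conversion x selectivity; a small sketch of the standard mole-based definitions follows (the numbers are illustrative, not from either study).

```python
def conversion_pct(n_in, n_out):
    """Fraction of reactant consumed (%)."""
    return 100.0 * (n_in - n_out) / n_in

def yield_pct(n_product, n_in):
    """Moles of desired product per mole of reactant fed (%)."""
    return 100.0 * n_product / n_in

def selectivity_pct(n_product, n_in, n_out):
    """Moles of desired product per mole of reactant converted (%)."""
    return 100.0 * n_product / (n_in - n_out)

# Illustrative: 10 mmol benzaldehyde fed, 2 mmol remaining, 6 mmol condensation product
conv = conversion_pct(10.0, 2.0)       # 80%
sel = selectivity_pct(6.0, 10.0, 2.0)  # 75%
y = yield_pct(6.0, 10.0)               # 60% = 80% x 75% / 100
```

Reporting both conversion and yield, as these abstracts do, implicitly fixes the selectivity via this identity.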
Carbon materials with a complex core-shell structure and adjustable porosity are fabricated by subsequent surface polymerization of twin monomers on carbon black and silica particles. The twin monomers 2,2'-spirobi[4H-1,3,2-benzodioxasiline] (Spiro) and tetrafurfuryloxysilane (TFOS) are polymerized in one step to an inorganic/organic hybrid material, which contains nanostructured silica and, as the organic polymer, phenolic resin or poly(furfuryl alcohol), respectively. After carbonization and removal of the silica, a porous carbon shell with defined porosity is obtained. In this process, Spiro-based materials produce microporous carbon and TFOS-based materials produce mesoporous carbon. The quantities of monomer, catalyst and substrate can be varied. This allows the creation of a library of porous carbon materials with different properties, such as controlled porosity, morphology and hierarchical structuring. Thus, mesoporous carbon with a microporous shell can be achieved by using carbon black particles as the substrate and Spiro as the twin monomer. Furthermore, hollow carbon spheres with a hierarchically structured double shell can be synthesized by sequential polymerization of TFOS and Spiro on silica particles. The porous carbon materials were characterized by quantitative elemental analysis, thermogravimetric measurements, SEM/EDX, TEM, nitrogen sorption isotherms and mercury porosimetry. (C) 2017 Elsevier Inc. All rights reserved. The transformation of biomass wastes into sustainable, low-cost carbon materials is now a topic of great interest. Here, we describe porous carbon from biomass-derived waste shrimp shells and its application in two different energy storage systems. The unique porous structure, together with the presence of heteroatoms (O, N), makes it a promising material for both lithium ion batteries and supercapacitors. 
When applied as anode materials for lithium ion batteries, the as-prepared carbon showed specific capacities as high as 1507 mA h g(-1) and 1014 mA h g(-1) at current densities of 0.1 A g(-1) and 0.5 A g(-1), respectively, along with good rate performance and superior cycling stability. The porous carbon-based supercapacitor also delivered a specific capacitance of 239 F g(-1) at a current density of 0.5 A g(-1) in 6 M KOH electrolyte. The specific capacitance retention is 99.4% even after 5000 charge-discharge cycles, indicating excellent cycling stability. The superior electrochemical performance for both lithium ion batteries and supercapacitors can be ascribed to the high specific surface area, porous structure and nitrogen doping effect. (C) 2017 Elsevier Inc. All rights reserved. We report a detailed study on the concentration profiles and ion-irradiation-induced desorption of hydrogen from porous silicon (pSi) by on-line elastic recoil detection analysis (ERDA). 100 MeV Ag ions were employed to analyze the pSi samples prepared at different etching current densities. The observed blue shift in the photoluminescence of pSi with increasing etching current density is consistent with previous reports. Here we find that the concentration of hydrogen in near-surface regions decreases with increasing etching current density. It is also observed that the concentration of hydrogen is greater in the near-surface region and decreases rapidly as a function of depth in porous silicon. Further, the ion-irradiation-induced desorption of hydrogen from pSi is well characterized by second-order decay, indicating that the hydrogen desorbs in molecular form. The rate of desorption is found to be higher from the deeper layers than from the near-surface regions, possibly due to higher diffusivity of elemental hydrogen within the deeper layers. 
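The second-order desorption decay invoked above has the closed form N(t) = N0 / (1 + k N0 t) for the rate law dN/dt = -k N^2, and linearizes as 1/N(t) - 1/N0 = k t; this quadratic dependence on coverage is why second-order kinetics signal recombinative (molecular) desorption. A generic numerical sketch with hypothetical parameters, not the authors' fit:

```python
def second_order_decay(n0, k, t):
    """Closed-form solution of dN/dt = -k * N**2 (recombinative desorption)."""
    return n0 / (1.0 + k * n0 * t)

def rate_constant_from_pair(n0, n_t, t):
    """Recover k from the linearized form 1/N(t) - 1/N0 = k * t."""
    return (1.0 / n_t - 1.0 / n0) / t

# Hypothetical values: initial coverage 100 (arbitrary units), k = 0.01, time t = 5
n_t = second_order_decay(100.0, 0.01, 5.0)          # 100 / 6
k_back = rate_constant_from_pair(100.0, n_t, 5.0)   # recovers 0.01
```

Plotting 1/N against t and checking for a straight line is the usual diagnostic that distinguishes this second-order (molecular) channel from first-order (atomic) desorption.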
Strong electronic excitations produced by the probing beam are expected to be responsible for the observed non-thermal dissociation of Si-H bonds and the consequent desorption of hydrogen from the surface of pSi in molecular form. These results provide useful information to elucidate 1) the role of hydrogen in determining the optical properties of porous/nano-crystalline silicon and 2) the mechanisms that govern the non-thermal dissociation of Si-H bonds. (C) 2017 Elsevier Inc. All rights reserved. Heterometal atom incorporation in CHA aluminosilicate zeolites ([Al, M]-CHA, M = Fe, Ga, Sn) was successfully achieved by hydrothermal conversion of heterometal-incorporated FAU aluminosilicates ([Al, M]-FAU). X-ray powder diffraction (XRD), scanning electron microscopy, diffuse reflectance UV-vis spectroscopy, magic angle spinning NMR, and nitrogen adsorption measurements confirmed the formation of [Al, M]-CHA zeolites with heterometal atoms occupying homogeneously distributed tetrahedral coordination sites. The choice of hydrothermal conversion using [Al, M]-FAU was prompted by the inability to produce [Al, M]-CHA zeolites from amorphous hydrogels. The hydrothermal conversion of [Al, Fe]-FAU to [Al, Fe]-CHA was characterized by XRD, electrospray ionization mass spectrometry, and UV-vis spectroscopy. Analyses of the liquid and solid phases during synthesis indicated that metal species present in the solid phase play an important role in the transformation process. We also investigated the effectiveness of Cu-loaded CHA zeolites for the selective catalytic reduction (SCR) of NOx by ammonia (NH3-SCR). Catalytic performance depended strongly on the kind and/or amount of heterometal atom in the [Al, M]-CHA zeolite. Although all fresh catalysts exhibited similar NO conversion efficiencies, there was a difference in NH3 conversion effectiveness at low reaction temperatures. Cu-loaded [Al, Ga]-CHA provided almost 100% NH3 conversion at 150 degrees C. 
The [Al, Sn]-CHA catalyst exhibited high stability even after hydrothermal treatment at 900 degrees C for 4 h. These results confirm hydrothermal conversion as an effective method for synthesizing heterometal-incorporated zeolite catalysts with high NH3-SCR performance. (C) 2017 Elsevier Inc. All rights reserved. We succeeded in the self-assembly of 8-hydroxyquinoline (8Hq)-Li(I), Al(III) and Cu(II) complexes onto the interlayer surfaces of the layered silicate magadiite (Na2Si14O29.nH2O) and investigated the luminescence of organic metal-chelates in the confined spaces. Measurements including X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), thermogravimetric and differential thermal analysis (TG/DTA), scanning electron microscopy (SEM), energy-dispersive X-ray spectrometry (EDS), ultraviolet-visible spectroscopy (UV-Vis) and photoluminescence spectra (PL) confirmed that the metal-organic chelate species were immobilized onto the silicate sheets via ion-dipole interaction and free diffusion. The encapsulation was achieved by a flexible solid-solid reaction, and the present reaction and products have potential for industrial application. A speculative mechanism combining solid-solid and solid-gas intercalation is proposed for the reaction. Furthermore, it was found that the complexes in the interlayer space show fluorescence properties distinct from those in the crystalline state, and that the confined space of the phyllosilicate influences the properties of the complexes. This points to the possibility of synthesizing metal-organic complexes encapsulated in a phyllosilicate. (C) 2017 Elsevier Inc. All rights reserved. Sorption of Cd vapor at 480-600 degrees C and of Zn vapor at 750-880 degrees C by SBA-15 was studied. The amounts of deposited metal ranged from 1 to 11% of the mass of silica, and they were especially high with long deposition times (up to 3 days) at relatively low temperatures. 
The specific surface areas after metal deposition were lower than those of the original SBA-15 silica, and they decreased with increasing metal content. The products of deposition of metals on silica can be considered new materials: metal-modified silicas. (C) 2017 Elsevier Inc. All rights reserved. Triplet-triplet annihilation based upconversion emission (TTA-UC), through energy transfer processes among organic dyes, has attracted great attention for potential applications in different fields; an important step toward the application of TTA-UC systems in real devices is the incorporation of the dye couple into solid supports. In this work, mesoporous silica (SBA) with regular pores and silica nanoparticles (SNs) with a core-shell structure were prepared and loaded with 2,3,7,8,12,13,17,18-octaethyl-21H,23H-porphine platinum(II) (PtOEP) and 1,3,6,8-tetraphenylpyrene (TPPy), which act as the antenna/sensitizer of the TTA-UC process and as the light-emitting species, respectively. The samples were fully characterized by TEM imaging, XRDP, and steady-state and time-resolved fluorescence and phosphorescence measurements. No upconverted emission could be detected for the SBA samples because the mesoporous matrix offered a rigid location to the dyes, resulting in aggregates and excimer-like species with modified electronic properties. On the other hand, TTA-UC was recorded for the SNs samples; in the latter case, steady-state and time-resolved phosphorescence and fluorescence measurements indicated that PtOEP was entrapped in monomeric form and TPPy was mainly present as a monomer. The careful and detailed photophysical characterization of the obtained nanostructured materials enables optimization of the conditions to achieve light upconversion in solid matrices. (C) 2017 Elsevier Inc. All rights reserved. Transition-metal oxides have been widely explored as anode materials for lithium-ion batteries (LIBs) because of their low cost and high energy/power density. 
However, electrode pulverization and capacity fading during cycling lead to poor cycling performance. Herein, ultrathin ZnCo2O4 nanosheets with the desired mesoporosity and high surface area are prepared by a facile hydrothermal approach. Such ZnCo2O4 nanostructures show excellent lithium storage performance as anode materials for LIBs. At a current density of 1 A g(-1), the ultrathin ZnCo2O4 nanosheets present an initial specific capacity of 1251 mAh g(-1), and the specific capacity remains at 810 mAh g(-1) even after 200 discharge-charge cycles. (C) 2017 Elsevier Inc. All rights reserved. Three-dimensional (3-D) mesoporous silica with large interconnecting pores is found suitable for aminosilane grafting to achieve high CO2 adsorption with high amine loading. In the present study, cubic KIT-6 is functionalized by (3-aminopropyl)triethoxysilane, N-[3-(trimethoxysilyl)propyl]ethylenediamine and N-1-(3-trimethoxysilylpropyl)diethylenetriamine with various concentrations of water in the aqueous solvent by a post-grafting method. The effect of water on the grafting of aminosilane onto KIT-6 is analyzed by N-2 adsorption/desorption, TEM micrographs, TG analysis and CO2 adsorption. TG analysis suggests that the surface density of aminosilane increases with increasing water concentration in the grafting solvent, and TEM micrographs show denser aminosilane coverage at the mesoscopic level. The maximum adsorption capacities are 1.60, 2.09 and 2.59 mmol CO2/g-adsorbent for WK.20AP, WK.20DA and WK.10TA, respectively, at 30 degrees C and 1.0 bar. The CO2/N-2 selectivity is much higher for the aqueous-solution-grafted adsorbents. Moreover, aqueous-solution-grafted adsorbents are regenerable, showing stable sorption performance over 20 cycles. Thus, aqueous solution grafting is a more efficient way to graft aminosilane onto mesoporous silica and to design CO2/N-2-selective adsorbents. (C) 2017 Elsevier Inc. All rights reserved. 
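The amine-grafting abstracts quote capacities in mmol CO2 per gram of adsorbent; a complementary figure of merit is the amine efficiency, mol CO2 captured per mol grafted N, which is the quantity that drops when pore plugging limits access to the amines. A minimal sketch follows; only the 2.59 mmol/g capacity appears in the abstract, while the nitrogen loading and the N2 uptake used below are hypothetical.

```python
def amine_efficiency(q_co2_mmol_g, n_loading_mmol_g):
    """Mol CO2 captured per mol grafted N; ~0.5 is the theoretical
    carbamate limit (2 N per CO2) under dry conditions."""
    return q_co2_mmol_g / n_loading_mmol_g

def co2_n2_selectivity(q_co2, q_n2):
    """Simple uptake-ratio selectivity at matched temperature and pressure."""
    return q_co2 / q_n2

# 2.59 mmol/g capacity (WK.10TA, from the abstract) combined with a
# hypothetical N loading of 6.0 mmol/g gives ~0.43 mol CO2 per mol N
eff = amine_efficiency(2.59, 6.0)
```

Tracking efficiency alongside capacity is what reveals the pore-size effect discussed above: capacity can rise with amine density even as each amine is used less effectively.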
A general procedure to synthesize the Al-containing layered CDO precursor (PreCDO) is presented, allowing its preparation over a broad range of Si/Al molar ratios by using novel pyrrole-derived organic molecules as organic structure directing agents (OSDAs). Direct calcination of the PreCDO materials results in crystalline Al-containing small-pore CDO zeolites with the Al species controlled in tetrahedral coordination. In contrast, mild acid treatments of the PreCDO materials yield medium-pore interlayer-expanded CDO zeolites (IEZ-CDO). These expanded zeolites show high crystallinity, high porosity and controlled Si/Al molar ratios. Finally, preliminary catalytic results indicate that the Al-containing CDO and IEZ-CDO samples show good activity and selectivity for the selective catalytic reduction (SCR) of NOx and the methanol-to-olefins (MTO) process, respectively. (C) 2017 Elsevier Inc. All rights reserved. Mesoporous silicas with different pore sizes were modified by aminosilanes of different molecular lengths. CO2 adsorption capacity was improved by all aminosilane modifications and further improved with increasing aminosilane density. However, the CO2 adsorption capacity and amine efficiency of (3-trimethoxysilylpropyl)diethylenetriamine (TA)-modified mesoporous silicas with small pores, such as MCM-41 (2.9 or 3.1 nm) and SBA-15 (6.2 or 7.1 nm), decreased significantly at high aminosilane density. In contrast, TA-modified SBA-15 with large pores (10.6 nm) exhibited further improvement in CO2 adsorption capacity and amine efficiency. N-2 adsorption-desorption measurements suggested that pore plugging occurred at high aminosilane density in the TA-modified mesoporous silicas with small pores. In situ FT-IR spectra and the half-value period of the CO2 equilibrium adsorption capacity clearly show that the CO2 adsorption kinetics in these small-pore adsorbents were significantly slower than in the large-pore adsorbent. 
To develop CO2 adsorbents with high adsorption performance, the correlation between aminosilane molecular length, pore size of the mesoporous silica and aminosilane density should be considered. (C) 2017 Elsevier Inc. All rights reserved. An NCC-bentonite nanocomposite was prepared in this study using waste paper as the source of NCC. The adsorption performance of the composite was tested for the removal of Pb(II) and Hg(II) from aqueous solution in single and binary systems. Langmuir and Freundlich adsorption isotherms were employed to correlate the pure-component adsorption data. The Langmuir equation represented the experimental data better than the Freundlich equation, and q(m) for lead is higher than for mercury in both systems (single system: q(m) = 0.44 mmol/g (Pb) and 0.23 mmol/g (Hg) for the composite). All systems exhibit an endothermic process, except for bentonite, which shows an exothermic process. A modification of the extended Langmuir model for a binary system, including fractional loading and heat of adsorption, was proposed in this study. The modified extended Langmuir model represented the experimental data better than the original extended Langmuir equation. (C) 2017 Elsevier Inc. All rights reserved. We present a thermodynamic and kinetic study of the adsorption of CO2 in ZIF-8. Adsorption isotherms were measured at ten temperatures between 133 K and 237 K. Consistent with what has been reported for this system at higher temperatures, there is only one step in the sorption isotherms up to the saturated vapor pressure. We obtained the isosteric heat of adsorption as a function of loading from the isotherm data. The isosteric heat displays a non-monotonic dependence on loading: there is a maximum in the isosteric heat at high sorbent loadings. For all loadings below the maximum, the isosteric heat is an increasing function of loading. 
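Competitive uptake in binary systems such as the Pb(II)/Hg(II) case above is conventionally described by the extended Langmuir model, in which both solutes share the isotherm denominator; the modification proposed in that study adds fractional-loading and heat-of-adsorption terms, which are not reproduced here. A sketch of the classic single-component and extended forms (the affinity constants and concentrations below are illustrative; only the q(m) values echo the abstract):

```python
def langmuir_single(qm, b, c):
    """Single-component Langmuir loading: q = qm * b * c / (1 + b * c)."""
    return qm * b * c / (1.0 + b * c)

def langmuir_binary(qm1, b1, c1, qm2, b2, c2):
    """Classic extended Langmuir: both solutes compete through a shared denominator."""
    denom = 1.0 + b1 * c1 + b2 * c2
    return qm1 * b1 * c1 / denom, qm2 * b2 * c2 / denom

# qm values echo the abstract (0.44 Pb, 0.23 Hg, mmol/g); b and c are hypothetical
q_pb_single = langmuir_single(0.44, 10.0, 1.0)                 # loading of Pb alone
q_pb, q_hg = langmuir_binary(0.44, 10.0, 1.0, 0.23, 5.0, 1.0)  # loadings in the mixture
# Competition lowers each loading relative to its single-component value
```

The shared denominator is what makes each solute's loading fall below its single-component value at the same concentration, matching the competitive behavior the study models.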
We also measured the time required to reach equilibrium after a dose of gas was added to the sorbent, to establish the sorption kinetics for this system. While the general trend is that equilibration times are shorter at higher loadings, we found a small local maximum in the neighborhood of where the isosteric heat has its peak value. The thermodynamic and kinetic results obtained for CO2 in ZIF-8 are compared with those obtained in prior studies of O2 and Xe in ZIF-8. (C) 2017 Elsevier Inc. All rights reserved. Al-ITQ-13 zeolites with different Si/Al atom ratios above 25 were successfully synthesized with short crystallization times by using seeds in the gel. A variety of synthesis conditions was investigated in depth, and the resultant zeolite samples were characterized by XRD, SEM, N-2 physisorption and FTIR spectra. Al-ITQ-13 zeolite samples synthesized with different Si/Al ratios exhibit similar textural properties. The amount and strength of acid sites decrease with increasing Si/Al ratio of the Al-ITQ-13 zeolites. It was found that H-Al-ITQ-13 samples with low Si/Al ratios show a very high B/L ratio (such as 2.40) at a desorption temperature of 573 K. The 1-butene cracking reaction over proton-form Al-ITQ-13 zeolites (Si/Al atom ratio from 47 to 179) was carried out in a continuous-flow stainless steel reactor. It was found that the higher the aluminium content in Al-ITQ-13 (i.e., the lower the Si/Al ratio), the better the selectivity toward ethylene and propylene, while hydrogen transfer and alkylation reactions were suppressed with increasing Si/Al ratio. (C) 2017 Elsevier Inc. All rights reserved. In this work, the influence of mesoporosity generation in a commercial ZSM-5 zeolite on its catalytic performance in two environmental processes, NO reduction with ammonia (NH3-SCR, Selective Catalytic Reduction of NO with NH3) and NH3 oxidation (NH3-SCO, Selective Catalytic Oxidation of NH3), was examined. 
Micro-mesoporous catalysts with the properties of ZSM-5 zeolite were obtained by desilication with NaOH and NaOH/TPAOH (tetrapropylammonium hydroxide) mixtures with different ratios (TPA+/OH- = 0.2, 0.4, 0.6, 0.8 and infinity) and for different durations (1, 2, 4 and 6 h). The results of the catalytic studies (over the Cu-exchanged samples) showed higher activity for this novel group of mesostructured zeolitic materials. The enhanced catalytic performance was related to the generated mesoporosity (improved Hierarchy Factor (HF) of the samples), which was observed especially with the use of the pore-directing agent (PDA) additive TPAOH. The applied desilication conditions did not significantly influence the crystallinity of the samples (X-ray diffraction analysis, XRD), except for the 6 h treatment in NaOH solution, which was found to be too severe to preserve the zeolitic properties of the samples. The modified porous structure and accessibility of acid sites (increased surface acidity determined by temperature-programmed desorption of ammonia, NH3-TPD) influenced the redox properties of the copper species introduced by the ion-exchange method (temperature-programmed reduction with hydrogen, H-2-TPR). The increased acidity of the micro-mesoporous samples, as well as the content of easily reducible copper species, resulted in a significant improvement of the Cu-ZSM-5 catalytic efficiency in the NH3-SCR and NH3-SCO processes. (C) 2017 Elsevier Inc. All rights reserved. In the current paper, mesocrystals are used as effective precursors to design nanoreactors with different kinds of enclosed porosity. The thermal treatment of hematite mesocrystalline nanoparticles is investigated as a post-processing tool for engineering the internal organization of hierarchical structures. The porosity of the starting materials and of particles thermally treated at different temperatures is investigated by transmission electron microscopy, nitrogen sorption and 360 degrees electron tomography. 
Virtual Capillary Condensation and Maximum Sphere Inscription are used as independent approaches for the quantitative assessment of internal porosity. The combination of experimental evidence and simulations provides a deep understanding of the internal topology of the nanoreactors upon thermal treatment of the mesocrystalline particles. This new design strategy may pave the way for exploring the use of post-treated mesocrystals as carriers to encapsulate materials for optoelectronic applications. (C) 2017 Elsevier Inc. All rights reserved. Micropores can efficiently tailor the diffusion rate of reactant molecules in heterogeneous catalysis and enhance their collision frequency with catalytic active sites, hence increasing catalytic performance. In this study, boosted catalytic activity for olefin epoxidation was obtained using a micropore-enriched CuO-based silica catalyst. This special catalyst, incorporating highly dispersed copper oxides, was fabricated directly with an anionic surfactant chelating Cu2+ as the template. Dispersed CuO species were produced in situ and encapsulated in the channels of the mesoporous silica, avoiding the repeated calcination of the conventional modification process. Meanwhile, massive numbers of micropores (ca. 1.6 nm) were directly formed. In addition, to achieve optimal catalytic performance, controlled amounts of Cu2+ were employed to modify the anionic micelles. The obtained catalyst exhibited markedly stronger reducibility and better CuO dispersion than the post-impregnated sample; more interestingly, the introduction of Cu2+ in the assembly process improved the structural properties of the silica. In addition, a proper CuO loading (Cu/Si = 4.34%) was found to be preferable in the catalytic process. Finally, the catalytic results revealed that the micropore-enriched catalyst exhibited markedly higher catalytic activity than the mesoporous catalyst, and its further application to the epoxidation of various olefins proved favorable. 
(C) 2017 Elsevier Inc. All rights reserved. We investigate systematic changes in corporate effective tax rates over the past 25 years and find that effective tax rates have decreased significantly. Contrary to conventional wisdom, the decline in effective tax rates is not concentrated in multinational firms; effective tax rates have declined at approximately the same rate for both multinational and domestic firms. Moreover, within multinational firms, both foreign and domestic effective rates have decreased. Finally, changes in firm characteristics and declining foreign statutory tax rates explain little of the overall decrease in effective rates. (C) 2017 Elsevier B.V. All rights reserved. This study evaluates the relation between hostile takeovers and 17 takeover laws from 1965 to 2014. Using a data set of largely exogenous legal changes, we find that certain takeover laws, such as poison pill and business combination laws, have no discernible impact on hostile activity, while others, such as fair price laws, have reduced hostile takeovers. We construct a Takeover Index from the laws and find that higher takeover protection is associated with lower firm value, consistent with entrenchment and agency costs. However, conditional on a bid, firms with more protection achieve higher premiums, consistent with increased bargaining power. (C) 2017 Elsevier B.V. All rights reserved. This paper examines the impact of stock liquidity on firm bankruptcy risk. Using the Securities and Exchange Commission decimalization regulation as a shock to stock liquidity, we establish that enhanced liquidity decreases default risk. Stocks with the highest default risk experience the largest improvements. We find two mechanisms through which stock liquidity reduces firm default risk: improving stock price informational efficiency and facilitating corporate governance by blockholders.
Of the two mechanisms, the informational efficiency channel has higher explanatory power than the corporate governance channel. (C) 2017 Elsevier B.V. All rights reserved. We characterize the dynamic fragmentation of U.S. equity markets using a unique data set that disaggregates dark transactions by venue types. The "pecking order" hypothesis of trading venues states that investors "sort" various venue types, putting low-cost-low-immediacy venues on top and high-cost-high-immediacy venues at the bottom: midpoint dark pools at the top, non-midpoint dark pools in the middle, and lit markets at the bottom. As predicted, following VIX shocks, macroeconomic news, and firms' earnings surprises, changes in venue market shares become progressively more positive (or less negative) down the pecking order. We further document heterogeneity across dark venue types and stock size groups. Published by Elsevier B.V. Private equity (PE) performance is persistent, with PE firms consistently producing high (or low) net-of-fees returns. We use a new variance decomposition model to isolate three components of persistence. We find high long-term persistence: the spread in expected net-of-fee future returns between top- and bottom-quartile PE firms is 7-8 percentage points annually. This spread is estimated controlling for spurious persistence, which arises mechanically from the overlap of contemporaneous funds. Performance is noisy, however, making it difficult for investors to identify the PE funds with top-quartile expected future performance and leaving little investable persistence. (C) 2017 Elsevier B.V. All rights reserved. I examine the link between political uncertainty and firm investment using U.S. gubernatorial elections as a source of plausibly exogenous variation in uncertainty. Investment declines 5% before all elections and up to 15% for subsamples of firms particularly susceptible to political uncertainty.
I use term limits as an instrumental variable (IV) for election closeness. Because close elections are related to economic downturns, I find that ordinary least squares (OLS) understates the effect of close elections on investment by more than half. Post-election rebounds in investment depend on whether an incumbent is re-elected. Finally, I provide evidence that firms delay equity and debt issuances tied to investments before elections. (C) 2017 Published by Elsevier B.V. We study the consequences of a US deregulation allowing small firms to accelerate their public equity issuance. Post-deregulation, affected firms double their reliance on public equity and transition away from private investments in public equity compared to similar untreated firms. The net effect is a 5.7 percentage point or 49% increase in the annual probability of raising equity. This is accompanied by a reduction in equity issuance costs, an increase in investment, and a decrease in leverage. Our findings provide evidence that reducing equity issuance barriers benefits issuers even in highly developed markets. (C) 2017 Elsevier B.V. All rights reserved. We model the debt maturity choice of firms in the presence of fixed issuance costs in the primary market and search frictions in the secondary market for debt. In the secondary market, short maturities improve the bargaining position of sellers, which reduces the required issuance yield. Long maturities reduce reissuance costs. The optimally chosen maturity trades off both considerations. Equilibrium exhibits inefficiently short maturity choices. An individual firm does not internalize that a longer maturity increases expected gains from trade in the secondary market, which attracts more buyers and, hence, facilitates the sale of debt issued by other firms. (C) 2017 Elsevier B.V. All rights reserved. We examine the impact of the Global Settlement on affiliation bias in analyst recommendations.
Using a broad measure of investment bank-firm relationships, we find a substantial reduction in analyst affiliation bias following the settlement for sanctioned banks. In contrast, we find strong evidence of bias both before and after the settlement for affiliated analysts at non-sanctioned banks. Our results suggest that the settlement led to an increase in the expected costs of issuing biased coverage at sanctioned banks, while concurrent self-regulatory organization rule changes were largely ineffective at reducing the influence of investment banking on analyst research at large non-sanctioned banks. (C) 2017 Elsevier B.V. All rights reserved. This paper addresses regulatory concerns that large shareholders of credit rating agencies can influence the rating process. Unlike Standard & Poor's, which is a privately held division of McGraw-Hill, Moody's is a public company listed on the NYSE. From 2001 to 2010, Moody's had two shareholders, Berkshire Hathaway and Davis Selected Advisors, which collectively owned about 23.5% of Moody's. Moody's ratings on bonds issued by important investee firms of these two stable large shareholders are more favorable than the corresponding S&P and Fitch ratings. We exploit Moody's initial public offering in 2000 to address endogeneity and to mitigate concerns that the results are driven by issuer characteristics or by the greater informativeness of Moody's ratings. S&P's parent, McGraw-Hill, had a large shareholder for a much shorter period, and there is some weak evidence that S&P ratings are relatively more favorable toward the owners of McGraw-Hill. These findings are consistent with regulatory concerns about the ownership and governance of rating agencies, especially those that are publicly listed. Published by Elsevier B.V.
This paper proposes a novel active contour model, called the weighted kernel mapping (WKM) model, along with an extended watershed transformation (EWT) method for level set image segmentation; it is a hybrid model based on global and local intensity information. The proposed EWT method simulates a general spring on a hill with a fountain process and a rainfall process, and can be considered an image pre-processing step that improves image intensity homogeneity and provides weighted information to the level set function. The WKM model involves two new energy functionals, which are used to segment the image in the low-dimensional observation space and the higher-dimensional feature space, respectively. The energy functional in the low-dimensional space is used to demonstrate that the proposed WKM model is theoretically sound. The energy functional in the higher-dimensional space obtains the region parameters through the weighted kernel function by utilising the mean shift technique. Since the improved image homogeneity lets the region parameters better represent the values of the evolving regions, the proposed method can more accurately segment various types of images. Meanwhile, by adding the weighted information, the level set elements can be updated faster and image segmentation can be achieved with fewer iterations. Experimental results on synthetic, medical and natural images show that the proposed method increases the accuracy of image segmentation and reduces the number of level set evolution iterations. (C) 2017 Elsevier B.V. All rights reserved. The semantic segmentation task is highly related to detection and can clearly provide complementary information for detection. In this paper, we propose integrating deep semantic segmentation feature maps into the original pedestrian detection framework, which combines feature channels with AdaBoost classifiers.
Firstly, we develop shallow-deep channels by concatenating shallow hand-crafted and deep segmentation channels to capture appearance clues as well as semantic attributes. Then a set of manually designed filters is applied to the new channels to generate more response feature maps. Finally, a cascade AdaBoost classifier is learned for hard negative selection and pedestrian detection. With abundant feature information, our proposed detector achieves superior results on the Caltech USA 10x and ETH datasets. (C) 2017 Elsevier B.V. All rights reserved. We present a new class of neural network using a variable structure model of neuron (VSMN). From this structure, we generate four models of neurons. For each model, we study different behaviors such as stable (equilibrium), degraded, hole, alternated, oscillator, harmonic, fractal, and chaotic behaviors. We then design different topologies and architectures of neural networks. These architectures differ from classical ones: each layer of the network contains different models of neurons, and a neuron can take any of the four models through configuration of the VSMN. We also present a numerical study describing the behavior of some neuron models. We illustrate some results to show the efficiency of this new class of neural networks, and show that these neurons put tracks on their stimulators, such as a signal track and a half-bounded region track with high and low directions. Two applications in chaos and robotics are also given. (C) 2017 Elsevier B.V. All rights reserved. Matrix factorization (MF) is an increasingly important approach in the field of missing value prediction because recommender systems are rapidly becoming ubiquitous. MF-based collaborative filtering (CF) seeks to improve recommender performance by combining the user-item matrix with MF. However, most MF-based approaches available at present cannot achieve high prediction accuracy because the user-item matrices in CF models are sparse.
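The basic MF-based CF setup described above, approximating a sparse user-item rating matrix by the product of two low-rank factors, can be sketched with plain SGD. This is only an illustrative baseline, not the DSMMF/DSTNMF algorithms discussed here; all names and parameter values are hypothetical choices for the toy example.

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=8, lr=0.05, reg=0.02, epochs=300, seed=0):
    """Factorize a sparse rating list [(user, item, rating), ...] as R ~ U @ V.T."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                 # error on one observed entry
            U[u] += lr * (err * V[i] - reg * U[u])  # gradient step with L2 penalty
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# five observed ratings in a 3x3 user-item matrix; the rest are missing
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 5.0)]
U, V = mf_sgd(ratings, n_users=3, n_items=3)
pred = U @ V.T  # dense predictions, including the unobserved cells
```

The loop touches only observed entries, which is exactly why MF tolerates the sparsity that defeats dense methods; the unobserved cells of `pred` are the recommendations.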
The present paper proposes a framework that involves two efficient MF algorithms: dynamic single-element-based CF integrating manifold regularization (DSMMF) and dynamic single-element-based Tikhonov graph regularization non-negative MF (DSTNMF). The aim of this framework is to better use the intrinsic structure of the user-item rating matrix and user/item content information, overcome the dimensionality curse and the ill-posed problem of weighted graph NMF, and avoid the frequent, impractical manipulation of indicator matrices. We validate the effectiveness of our proposed algorithms with respect to recommender performance using four indices on three datasets, and demonstrate that our approaches lead to considerable improvement over several other state-of-the-art approaches. (C) 2017 Elsevier B.V. All rights reserved. In this paper, we study a class of coupled time-delayed neural networks with discontinuous activations. Based on new analysis techniques within the framework of nonsmooth analysis and differential inclusion theory, both a discontinuous controller and a novel controller with a continuous part are considered to realize finite-time synchronization for discontinuous coupled neural networks. Criteria are investigated in detail by constructing suitable Lyapunov functions covering both the 1-norm and quadratic forms, which were previously considered separately. Finally, numerical examples are provided to show the correctness of our analysis. Our results are essentially new and extend previously known research. (C) 2017 Elsevier B.V. All rights reserved. Predictions of solar greenhouse temperature and humidity are important because they play a critical role in greenhouse cultivation. It is therefore important to establish a model that precisely predicts temperature and humidity, reducing potential financial losses.
This paper presents a novel temperature and humidity prediction model based on the convex bidirectional extreme learning machine (CB-ELM). Simulation results show that the convergence rate of the bidirectional extreme learning machine (B-ELM) can be further improved, while retaining the same simplicity, by simply recalculating the output weights of the existing nodes with a convex optimization method whenever a new hidden node is randomly added. The performance of the CB-ELM model is compared with other modeling approaches by applying it to predict solar greenhouse temperature and humidity. The experimental results show that the CB-ELM model predictions are more accurate than those of the B-ELM, Back Propagation Neural Network (BPNN), Support Vector Machine (SVM), and Radial Basis Function (RBF) network models. Therefore, it can be considered a suitable and effective method for predicting solar greenhouse temperature and humidity. (C) 2017 Elsevier B.V. All rights reserved. In this paper, the exponential weighted entropy (EWE) and exponential weighted mutual information (EWMI) are proposed as more generalized forms of Shannon entropy and mutual information (MI), respectively. They are position-related and causal measures that redefine the foundations of information-theoretic metrics. As special forms of the weighted entropy and the weighted mutual information, EWE and EWMI are proved to preserve the nonnegativity and concavity properties of the Shannon framework. They can be adopted as information measures in spatial interaction modeling. Paralleling the normalized mutual information (NMI), the normalized exponential weighted mutual information (NEWMI) is also investigated. Image registration experiments demonstrate that the EWMI and NEWMI algorithms achieve higher alignment accuracy than the MI and NMI algorithms. (C) 2017 Elsevier B.V. All rights reserved.
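For reference, the unweighted Shannon baselines that EWE and EWMI generalize, namely entropy, mutual information, and the normalized MI used in registration, can be computed from a joint histogram. This is a minimal sketch with illustrative function names, not the exponential-weighted measures proposed above.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero bins ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=8):
    """MI(X;Y) = H(X) + H(Y) - H(X,Y) from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

def normalized_mi(x, y, bins=8):
    """Studholme NMI = (H(X) + H(Y)) / H(X,Y); ranges from 1 to 2."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

In registration, these are evaluated on overlapping pixel intensities of the two images and maximized over transform parameters; the weighted variants above additionally make the measure position-dependent.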
Convolutional neural networks (CNNs) with deep learning have recently achieved remarkable success, with superior performance in computer vision applications. Most CNN-based methods extract image features at the last layer of a single CNN architecture using orderless quantization approaches, which limits the use of intermediate convolutional layers for identifying local image patterns. As one of the first works in the context of content-based image retrieval (CBIR), this paper proposes a new bilinear CNN-based architecture using two parallel CNNs as feature extractors. The activations of convolutional layers are directly used to extract image features at various locations and scales. The network architecture is initialized with deep CNNs sufficiently pre-trained on a large generic image dataset and then fine-tuned for the CBIR task. Additionally, an efficient bilinear root pooling is proposed and applied to the low-dimensional pooling layer to reduce the image features to compact but highly discriminative image descriptors. Finally, end-to-end training with backpropagation is performed to fine-tune the final architecture and learn its parameters for the image retrieval task. The experimental results on three standard benchmark image datasets demonstrate the outstanding performance of the proposed architecture at extracting and learning complex features for the CBIR task without prior knowledge about the semantic meta-data of images. For instance, using a very compact 16-dimensional image vector, we achieve a retrieval accuracy (mAP) of 95.7% on Oxford 5K and 88.6% on Oxford 105K, outperforming the best results reported by state-of-the-art approaches. Additionally, a noticeable reduction is attained in the feature extraction time and the memory required for storage. (C) 2017 Elsevier B.V. All rights reserved.
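The core bilinear pooling step that such two-stream architectures rely on, summing outer products of the two streams' activations over spatial locations and then applying signed square-root ("root") pooling and L2 normalization, can be sketched as below. This is a generic illustration of the standard bilinear-CNN pooling recipe, not the paper's exact low-dimensional pooling layer; all shapes are toy values.

```python
import numpy as np

def bilinear_pool(fa, fb):
    """fa: (H*W, Ca) and fb: (H*W, Cb) activations from two parallel CNN streams.
    Returns an L2-normalized bilinear descriptor of length Ca*Cb."""
    B = fa.T @ fb                        # sum of outer products over all locations
    v = B.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))  # signed square-root (root) pooling
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# toy feature maps over a 4x4 spatial grid with 4 channels per stream
rng = np.random.default_rng(0)
fa = rng.standard_normal((16, 4))
fb = rng.standard_normal((16, 4))
desc = bilinear_pool(fa, fb)  # 16-dimensional descriptor
```

Because the spatial sum is taken before flattening, the descriptor is orderless yet captures pairwise channel interactions; retrieval then reduces to comparing these normalized vectors.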
We propose a new subspace clustering method that integrates feature and manifold learning while learning a low-rank representation of the data in a single model. This new model seeks a low-rank representation of the data using only the most relevant features in both linear and nonlinear spaces, which helps reveal more accurate data relationships because they are less afflicted by irrelevant features. Moreover, the graph Laplacian is updated during the learning process, which essentially differs from existing nonlinear subspace clustering methods that construct a graph Laplacian as an independent preprocessing step. Thus the feature and manifold learning processes mutually enhance each other and lead to powerful data representations. Extensive experimental results confirm the effectiveness of the proposed method. (C) 2017 Elsevier B.V. All rights reserved. Constructing a visual appearance model is essential for visual tracking. However, relying only on the visual model during appearance changes is insufficient and may even interfere with achieving good results. Although several visual tracking algorithms emphasize motional tracking, which estimates the motion state of the object center between consecutive frames, they suffer from accumulated error at run-time. As neither visual nor motional trackers perform well separately, several groups have recently proposed simultaneous visual and motional tracking algorithms. However, because tracking problems are often NP-hard, these algorithms cannot provide good solutions: they are driven top-down with low flexibility and often encounter drift problems. This paper proposes a spiral visual and motional tracking (SVMT) algorithm which, unlike existing algorithms, builds a strong tracker by cyclically combining weak trackers from both the visual and motional layers.
In the spiral-like framework, an iteration model searches for the optimum until convergence, offering the potential to reach an optimized solution. Three learned procedures, namely visual classification, motional estimation, and risk analysis, are integrated into the generalized framework, and corresponding modifications are implemented according to their performance. The experimental results demonstrate that SVMT performs well in terms of accuracy and robustness. (C) 2017 Elsevier B.V. All rights reserved. The iron ore sintering process is the second-most energy-consuming procedure in the iron-making industry. Its main energy source is the combustion of coke, which consists primarily of carbon. In order to improve carbon efficiency, it is necessary to predict it. A comprehensive carbon ratio (CCR) was used as the metric for estimating carbon efficiency. The iron ore sintering process is characterized by autocorrelation of the CCR time series, multiple variables, mixed linearity and nonlinearity, and time delays. In this study, a hybrid time series prediction model was built to predict the CCR based on these characteristics. It consists of two parts: time series prediction based on an Elman recurrent neural network (RNN) and Elman-residual prediction based on a double joint linear-nonlinear extreme learning network (JLNELN). The Elman RNN, with a context layer, can model the dynamical and nonlinear components in the time series, and the double JLNELN, with input neurons connected not only to the hidden neurons but also to the output neurons, can model both the nonlinear and linear components in the prediction residuals. Actual run data were collected to verify the validity of the devised hybrid model. Experimental results show that the hybrid model achieves much higher regression precision than a single Elman RNN, which demonstrates the necessity and validity of the double JLNELN model for predicting the Elman residuals.
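The two-stage hybrid idea above, a primary time-series model plus a second model fitted to the first model's residuals, can be sketched generically with simple stand-in learners: a linear autoregression in place of the Elman RNN and a 1-nearest-neighbour residual corrector in place of the JLNELN. Both stand-ins and the toy signal are purely illustrative.

```python
import numpy as np

def lag_matrix(series, p):
    """Rows are (y[t-p], ..., y[t-1]); targets are y[t]."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    return X, series[p:]

def fit_hybrid(series, p=4):
    X, y = lag_matrix(series, p)
    A = np.c_[X, np.ones(len(X))]
    # stage 1: linear AR(p) via least squares (stand-in for the Elman RNN)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ w  # stage-1 residuals, to be modeled by stage 2
    # stage 2: 1-NN on lag vectors predicts the residual (stand-in for the JLNELN)
    def predict(x_lags):
        x = np.asarray(x_lags, dtype=float)
        stage1 = np.r_[x, 1.0] @ w
        j = np.argmin(np.linalg.norm(X - x, axis=1))
        return stage1 + resid[j]
    return predict

t = np.arange(200, dtype=float)
series = np.sin(0.3 * t) + 0.1 * np.sin(2.1 * t)  # toy CCR-like signal
predict = fit_hybrid(series)
yhat = predict(series[-4:])  # one-step-ahead forecast for t = 200
```

The point of the decomposition is division of labour: stage 1 captures the dominant (here linear) dynamics, and stage 2 only has to model the leftover structure in the residuals.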
The experimental results of the double JLNELN method also show higher regression precision than both a double extreme learning machine method and a single JLNELN method, which verifies the validity of the JLNELN method and of the double structure of the prediction model. (C) 2017 Elsevier B.V. All rights reserved. With the aim of overcoming low efficiency and improving the performance of fuzzy clustering, two novel fuzzy clustering algorithms based on an improved self-adaptive cellular genetic algorithm (IDCGA) are proposed in this paper. New dynamic crossover and entropy-based two-combination mutation operations are constructed to prevent convergence to a local optimum by adaptively modifying the probabilities of crossover and mutation, as well as the mutation step size, according to dynamic adjusting strategies and judging criteria. The Arnold cat map is employed to initialize the population in order to overcome the sensitivity of the algorithms to the initial cluster centers. A modified evolution rule is introduced to build a dynamic environment so as to explore the search space more effectively. Then a new IDCGA combining these three processes is used to optimize fuzzy c-means (FCM) clustering (IDCGA-FCM). Furthermore, an optimal-selection-based strategy is presented using the golden section method, and a hybrid fuzzy clustering method (IDCGA2-FCM) is developed by automatically integrating IDCGA with optimal-selection-based FCM according to the variation of population entropy. Experiments were performed on six synthetic datasets and seven real-world datasets to compare the performance of our IDCGA-based clustering algorithms with FCM and with other GA-based and PSO-based clustering methods. The results show that the presented algorithms achieve high efficiency and accuracy. (C) 2017 Elsevier B.V. All rights reserved. Sparse representation based nonlocal self-similarity methods have proved effective for single image super-resolution.
However, as the noise level increases, these methods aggravate the blurring of small-scale image structures; that is, they fail to preserve edge structures. In this paper, we propose a new single image super-resolution method that combines an edge difference constraint with nonlocal self-similarity constraints. In the proposed method, we first extract the image texture feature in the main direction for dictionary learning with Principal Component Analysis (PCA), ensuring that the learned subdictionaries contain the image texture structures. Second, we compute the one-dimensional edge difference between the LR image and a degraded version (e.g., blurred, noisy, and down-sampled) of the image reconstructed by the sparse representation based nonlocal self-similarity method with the learned PCA subdictionaries, and use it as the edge difference constraint. Third, we incorporate the edge difference constraint into the sparse representation model based on nonlocal self-similarity to preserve edge structures and nonlocal self-similarity structures simultaneously. Moreover, we propose a nonlocal structure tensor optimization model to further improve image quality, which effectively mitigates the loss of high-frequency texture and edge information. Experiments on natural images validate that our method outperforms other state-of-the-art methods, especially for noisy images. (C) 2017 Elsevier B.V. All rights reserved. This paper investigates optimal coordination tracking control for nonlinear multi-agent systems (NMASs) with unknown internal states using an adaptive dynamic programming (ADP) method. Optimal coordination control for MASs depends on the solutions of the coupled Hamilton-Jacobi-Bellman (HJB) equations, which are almost impossible to solve analytically; worse still, accurate system models are often infeasible or difficult to obtain in practical applications.
To surmount these deficiencies, a neural network (NN) based observer is designed for each agent to reconstruct its internal states from measurable input-output data rather than an accurate system model. Based on the observed states and the Bellman optimality principle, we derive optimal coordination control policies from the coupled HJB equations. To implement the proposed ADP method, a critic network framework is proposed for each agent to approximate its value function and help calculate the optimal coordination control policy. We then prove that the local coordination tracking errors and weight estimation errors are uniformly ultimately bounded (UUB) while the approximated control policies converge to their target values. Finally, two simulation examples show the effectiveness of the proposed ADP method. (C) 2017 Elsevier B.V. All rights reserved. This paper investigates a kind of switched discrete-time neural network. Such a network is composed of multiple sub-networks and switches among them according to the states of the network. There is no common equilibrium for all sub-networks; that is, multiple equilibria coexist. First, a boundedness condition is presented for the switched discrete-time neural network. Then sufficient conditions are derived, by mathematical analysis and nonsingular M-matrix theory, to ensure regional stability of the equilibrium points of such a network. Four examples are presented to verify the validity of our results. (C) 2017 Elsevier B.V. All rights reserved. In this paper, a synthetic adaptive fuzzy tracking control method is studied for a class of multi-input multi-output (MIMO) uncertain nonlinear systems with time-varying disturbances. The unknown nonlinear functions are approximated by a generalized fuzzy hyperbolic model. A synthetic adaptive fuzzy controller is designed using dynamic surface control, a serial-parallel estimation model, and a disturbance observer.
Then, using Lyapunov stability theory, it is guaranteed that all variables of the closed-loop systems are semi-globally uniformly ultimately bounded (SGUUB). Finally, satisfactory tracking performance with faster response and higher accuracy can be obtained by adjusting the parameters appropriately. A practical simulation example demonstrates the effectiveness and applicability of the proposed control approach. (C) 2017 Elsevier B.V. All rights reserved. Many real-world machine learning tasks have very limited labeled data but a large amount of unlabeled data. To take advantage of the unlabeled data for enhancing learning performance, several semi-supervised learning techniques have been developed. In this paper, we propose a novel semi-supervised ensemble learning algorithm, termed Multi-Train, which generates a number of heterogeneous classifiers that use different classification models and/or different features. During the training process, each classifier is refined using unlabeled data, which are labeled by the majority prediction of the remaining classifiers. We hypothesize that the use of different models and different input features promotes the diversity of the ensemble, thereby improving performance compared to existing methods such as the co-training and tri-training algorithms. Experimental results on UCI datasets clearly demonstrate the effectiveness of using heterogeneous ensembles in semi-supervised learning. (C) 2017 Elsevier B.V. All rights reserved. Real-time learning requires algorithms operating at a speed comparable to that of humans or animals; however, this is a huge challenge when processing visual inputs. Research shows that a biological brain can process complicated real-life recognition scenarios at the millisecond scale.
Inspired by biological systems, in this paper we propose a novel real-time learning method combining a spike timing-based feed-forward spiking neural network (SNN) with a fast unsupervised spike-timing-dependent plasticity learning method with dynamic post-synaptic thresholds. Fast cross-validated experiments on the MNIST database show the high efficiency of the proposed method at an acceptable accuracy. (C) 2017 Elsevier B.V. All rights reserved. To properly characterize the preservation of structural features in synthetic aperture radar (SAR) image despeckling, a novel metric called NRDSP (no-reference despeckling structure-preserving) is put forward and investigated. To begin with, the DSSIM (distance for structural similarity) metric is presented to characterize the distance between the ratio image and the despeckled result. As an improvement of the ENLR (equivalent number of looks of ratio images), the NENLR (nominal number of looks-oriented ENLR) metric is proposed, with the advantage of a rationally bounded function. DSSIM and NENLR are indispensable and complementary for characterizing the degree of structure preservation. Furthermore, based on the new DSSIM and NENLR factors, the novel NRDSP metric is proposed to appropriately measure structure-preserving performance. Lastly, we carry out tests on simulated and real SAR images; observer evaluations as well as quantitative data verify that the proposed NRDSP metric is more consistent with perceptual judgments than other metrics and better characterizes structure-preserving performance. (C) 2017 Published by Elsevier B.V. Learning with the Fredholm kernel has attracted increasing attention recently since it can effectively utilize the data information to improve prediction performance.
Despite rapid progress on theoretical and experimental evaluations, its generalization analysis has not been explored in the learning theory literature. In this paper, we establish the generalization bound of least square regularized regression with the Fredholm kernel, which implies that the fast learning rate O(l^(-1)) can be reached under mild conditions, where l is the number of labeled samples. Simulated examples show that this Fredholm regression algorithm achieves satisfactory prediction performance. (C) 2017 Elsevier B.V. All rights reserved. This paper establishes input-to-state stability (ISS) and robust ISS of neural networks with Markovian switching (NNwMS). An M-matrix algebraic condition for stochastic NNwMS is given; the result is then extended to stochastic NNwMS with time-varying delays. From the ISS condition for stochastic delayed NNwMS, we obtain robust ISS of NNwMS in two cases: delay perturbation in the diffusion term and delay perturbation in the drift term, respectively. These ISS criteria are readily checked from the parameters of the NNwMS alone and also ensure exponential stability in the absence of the input term. The results presented here include neural networks without Markovian switching as special cases. Two numerical examples show the effectiveness of the theoretical criteria. (C) 2017 Elsevier B.V. All rights reserved. Abrupt motions commonly cause conventional tracking methods to fail because they violate the motion smoothness constraint. To overcome this problem, we propose a novel SIFT flow tracker (SFT) and integrate it into a sparse representation-based tracking framework. In this method, we first introduce the SIFT flow method to address the tracking problem; it can avoid local-trap modes and cope with abrupt motion without any prior knowledge. Then, to obtain effective samples, we design a new hybrid sampling mechanism that samples the local and global predicted locations according to a confidence map.
Finally, to adapt to target appearance variations, especially partial occlusion, we embed the SFT into an L1 tracker and construct a unified framework to track both smooth and abrupt motion in time. Experimental results demonstrate that, compared with several state-of-the-art tracking algorithms, our method achieves favorable performance in handling abrupt motion, even under target appearance variations including illumination changes, partial occlusion, and pose changes. (C) 2017 Elsevier B.V. All rights reserved. In the chemical industry, fault diagnosis is a challenging task due to the complexity of the plant and the high demands on product efficiency and consistency. In this paper, an improved case-based reasoning (CBR) method is proposed to predict the status of the Tennessee Eastman process (TE process). Firstly, a case reduction method based on selective rules is proposed to maintain the scale of the case base and decrease the time complexity of the diagnosis process. Afterwards, a case reuse strategy based on a trustworthy radius is introduced together with group decision making; this reuse strategy evaluates the suggested diagnosis result as trustworthy or untrustworthy by calculating the trustworthy radius of each category. Finally, cases evaluated as untrustworthy are adjusted with the group decision-making strategy to obtain the final diagnosis result, completing the whole CBR diagnosis process. Compared with other methods, the diagnosis results obtained from the proposed approach showed higher accuracy and reliability, demonstrating improved diagnosis performance on the TE process as well as the effectiveness and superiority of the proposed method for fault diagnosis. (C) 2017 Elsevier B.V. All rights reserved. SemiBoost (Mallapragada et al., 2009) is a boosting framework for semi-supervised learning, in which unlabeled data as well as labeled data contribute to learning.
Various strategies have been proposed in the literature to perform the task of selecting useful unlabeled data in SemiBoost. Recently, a multi-view based strategy was proposed in Le and Kim (2016), in which the feature set of the data is decomposed into subsets (i.e., multiple views) using a feature-decomposition method. In the decomposition process, the strategy inevitably incurs some loss of information. To avoid this drawback, this paper considers feature-transformation methods, rather than the decomposition method, to obtain the multiple views. More specifically, in the feature-transformation method, a number of views were obtained from the entire feature set using the same number of different mapping functions. After deriving the views of the data, each view was used to measure a corresponding confidence, first evaluating the examples to be selected. Then, all the confidence levels measured from the multiple views were combined as a weighted average to derive a target confidence. The experimental results, obtained using support vector machines on well-known benchmark data, demonstrate that the proposed mechanism can compensate for the shortcomings of the traditional strategies. In addition, the results demonstrate that when the data is transformed appropriately into multiple views, the strategy can further improve classification accuracy. (C) 2017 Elsevier B.V. All rights reserved. Epileptic seizure prediction is limited by the unstable performance of suboptimal models. Studies of new methods for reliable preictal prediction have a significant impact on the control, early care, and online treatment of epileptic seizure. The traditional chaos measure does not effectively identify multiple states of the epileptic electroencephalogram (EEG). A novel method was adopted to capture subtle chaotic dynamics of epileptic signals in the fractional Fourier transform domain.
The largest Lyapunov exponent algorithm was modified to suit the transformed series, using an energy measure to determine the appropriate fractional order. The performance of our proposed method was evaluated with an automatic preictal prediction model using artificial neural networks as the classifier. The results showed that the new model yielded higher accuracy in identifying the preictal state compared to the original largest Lyapunov exponent. Experimental results with noisy scalp epileptic EEGs also demonstrated the potential and robustness of our approach to discriminate preictal from interictal and ictal states, and it provides a novel methodology for reliable preictal prediction of epileptic seizure. (C) 2017 Elsevier B.V. All rights reserved. Extracellular recording from living neurons employing microelectrode arrays has attracted considerable attention in recent years as a way to investigate the functionality and disorders of the brain. To decipher useful information from the recorded signals, accurate and efficient detection and sorting of neural spike activity is an essential prerequisite. Traditional approaches rely on thresholding to detect individual spikes and clustering to identify subset groups; however, these methods fail to identify temporally synchronous spikes arising from neuronal synchrony. To address this challenge, we introduce a novel spike sorting algorithm incorporating both quantitative and probabilistic techniques to better approximate the ground-truth information of the spike activity. A novel pre-clustering method for identifying key features that can form natural clusters and a dimension reduction technique for identifying the spiking activity are introduced.
To address temporal neuronal synchrony, which leads to the detection of overlapping spikes from multiple neurons, a procedure for template spike-shape estimation and iterative recognition is developed, employing a cross-correlation methodology tailored to each individual neuron's spike rate. A performance comparison between the proposed method and existing techniques in terms of the number of spikes identified and the efficiency of sorting the spikes is presented. The outcome shows the effectiveness of the proposed method in identifying temporally synchronous spikes. (C) 2017 Elsevier B.V. All rights reserved. In this paper, a self-organizing map (SOM) neural network is used to visualize corrective actions of failure modes and effects analysis (FMEA). SOM is a popular unsupervised neural network model that aims to produce a low-dimensional map (typically two-dimensional) for visualizing high-dimensional data. FMEA, in turn, is a popular methodology for identifying potential failure modes for a product or a process, assessing the risk associated with those failure modes, and identifying and carrying out corrective actions to address the most serious concerns. Despite the popularity of FMEA in a wide range of industries, two well-known shortcomings are the complexity of the FMEA worksheet and its intricacy of use. To the best of our knowledge, the use of computational techniques for addressing these shortcomings is limited, and the use of SOM in FMEA is new. In this paper, corrective actions in FMEA are described by their severity, occurrence, and detection scores. SOM is then used as a visualization aid for FMEA users to see the relationships among corrective actions via a map. Color information from the SOM map is then added to the FMEA worksheet for better visualization. In addition, a Risk Priority Number Interval is used to allow corrective actions to be evaluated and ordered in groups.
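The severity, occurrence, and detection scores mentioned in the FMEA abstract above combine into the classical Risk Priority Number, RPN = S x O x D. A minimal sketch of scoring and interval-based grouping (the band boundaries below are illustrative, not the paper's Risk Priority Number Interval definition):

```python
def rpn(severity, occurrence, detection):
    """Classical FMEA Risk Priority Number: S x O x D, each scored 1-10."""
    return severity * occurrence * detection

def rpn_band(value, bands=((1, 100, "low"), (101, 300, "medium"), (301, 1000, "high"))):
    """Map an RPN value onto an interval label; band boundaries are illustrative."""
    for lo, hi, label in bands:
        if lo <= value <= hi:
            return label
    raise ValueError("RPN out of range 1-1000")

print(rpn(7, 5, 4))            # 140
print(rpn_band(rpn(7, 5, 4)))  # medium
```

Grouping corrective actions by such intervals, rather than by raw RPN values, is what allows them to be evaluated and ordered in groups.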
This approach provides a quick and easily understandable framework to elucidate important information from a complex FMEA worksheet, thereby facilitating the decision-making tasks of FMEA users. The significance of this study is two-fold, viz., the use of SOM as an effective neural network learning paradigm to facilitate FMEA implementations, and the use of a computational visualization approach to tackle the two well-known shortcomings of FMEA. (C) 2017 Elsevier B.V. All rights reserved. Naive Bayes, k-nearest neighbors, AdaBoost, support vector machines, and neural networks are five commonly used text classifiers, among others. Evaluation of these classifiers involves a variety of factors, including the benchmark used, feature selection, algorithm parameter settings, and the measurement criteria employed. Researchers have demonstrated that some algorithms outperform others on some corpora; however, inconsistency of human labeling and the high dimensionality of feature spaces are two issues to be addressed in text categorization. This paper focuses on evaluating the five commonly used text classifiers using an automatically generated text document collection that is labeled by a group of experts, to alleviate the subjectivity of human category assignments and, at the same time, to examine the influence of the number of features on the performance of the algorithms. (C) 2017 Elsevier B.V. All rights reserved. Transcranial Doppler (TCD) is a reliable technique, with the advantage of being non-invasive, for the diagnosis of cerebrovascular diseases using blood flow velocity measurements in the cerebral arterial segments. In this study, the recurrent neural network (RNN) is used to classify TCD signals captured from the brain. A total of 35 real, anonymous patient records were collected, and a series of experiments for stenosis diagnosis was conducted.
The features extracted from the TCD signals are used for classification with a number of RNN models with recurrent feedbacks. In addition to individual RNN results, an ensemble RNN model is formed in which the majority-voting method is used to combine the individual RNN predictions into an integrated prediction. The results, which include the accuracy, sensitivity, and specificity rates as well as the area under the Receiver Operating Characteristic curve, are compared with those from the Random Forest ensemble model. The outcome positively indicates the usefulness of the RNN ensemble as an effective method for detecting and classifying blood flow velocity changes due to brain diseases. (C) 2017 Elsevier B.V. All rights reserved. This paper presents a competitive co-evolutionary (ComCoE) model that employs a double-elimination tournament (DET) to evolve artificial neural networks (ANNs) for data classification problems. The proposed model performs a global search by a ComCoE approach to find near-optimal solutions. During the global search process, two populations of different ANNs compete, and the fitness of each ANN is evaluated in a subjective manner based on its participation throughout a DET, which promotes competitive interactions among individual ANNs. The adaptation and fitness evaluation processes drive the global search for a more competent ANN classifier. A winning ANN is identified from the global search. Then, the Scaled Conjugate Backpropagation algorithm, a local search, is performed to further train the winning ANN to obtain a precise solution. The performance of the proposed classification model is evaluated rigorously; its performance is compared with the baseline ANNs of the proposed model as well as other classifiers. The results indicate that the proposed model can construct an ANN that produces high classification accuracy rates with a compact network structure. (C) 2017 Elsevier B.V. All rights reserved.
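The majority-voting combination used by the RNN ensemble described above can be sketched in a few lines; the label names are hypothetical, and the per-model predictions would come from the trained RNNs:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class labels into a single ensemble prediction.

    Ties are broken by first-seen order, since Counter preserves
    insertion order in Python 3.7+.
    """
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical labels: three base models predict "stenosis", one "normal".
print(majority_vote(["stenosis", "normal", "stenosis", "stenosis"]))  # stenosis
```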
Rapid and high-throughput protein purification methods are required to explore the structure and function of many uncharacterized proteins. Isolation of recombinant protein expressed in Escherichia coli strain BL21 (DE3) depends largely on efficient and speedy bacterial cell lysis, which is considered the bottleneck of protein purification. Cells are usually lysed by either sonication or high-pressure homogenization, both of which are slow, require special equipment, generate heat, and may result in loss of the protein's biological activity. We report here a novel method to lyse E. coli that is rapid and results in a high yield of isolated protein. Here, we have carried out intracellular expression of the lysozyme domain (LD) of mycobacteriophage D29 endolysin. LD remains non-toxic until chloroform is added to the culture medium, which permeabilizes the bacterial cell membrane and allows the diffusion of LD to the peptidoglycan layer, causing the latter's degradation and ensuing cell lysis. Our method efficiently lyses E. coli in a short time. As a proof of concept, we demonstrate large-scale isolation and purification of a subunit of E. coli RNA polymerase and GFP when they are co-expressed with LD. We believe that our method will be adopted easily in high-throughput as well as large-scale protein isolation experiments. (C) 2017 Elsevier Inc. All rights reserved. MicroRNAs (miRNAs) have key roles in gene expression and can be employed as biomarkers for early diagnosis of various diseases, especially cancers. Detection of miRNAs remains challenging and often requires dedicated detection platforms. Here, a horseradish peroxidase (HRP)-assisted hybridization chain reaction (HCR) for colorimetric detection of miR-155 is described. In the presence of target miRNA, the capture probe immobilized on the microplate sandwiched the target miR-155 with the 3' end of the reporter probe (RP).
The exposed part of the RP at the 5' end triggered HCR, producing double-stranded DNA polymers with multiple fluorescein isothiocyanates (FITC) for signal amplification. Finally, multiple HRP molecules were immobilized onto the long-range DNA nanostructures through FITC/anti-FITC monoclonal antibody interactions on the microplate for visualization with the tetramethylbenzidine/H2O2 system, turning the colorless substrate into a blue product. To obtain accurate data, the absorbance at 450 nm was measured with a microplate reader. The detection limit was 31.8 fM (3.18 amol). Furthermore, this biosensor showed high specificity and was able to discriminate sharply between the target miRNA and mismatched sequences. This approach could be easily applied to the detection of miR-155 in serum samples, suggesting broad applicability. (C) 2017 Elsevier Inc. All rights reserved. This study examines the effects of cyclodextrin (CD) use on enzymatic activity following enzyme immobilization into nanofibers. There is almost no research available on the change in enzyme activity following interaction with cyclodextrin and electrospun nanofiber mats together. Laccase was immobilized into nanofibrous structures by various techniques, with and without gamma-CD addition, and the enzymatic activity of the laccase was analyzed. SEM, XRD, and FTIR analyses were used for the characterization of the resulting structures. Our results showed that cyclodextrin use has a positive effect on the enzyme's activity and increases its stability. The cyclodextrin-treated enzymes remained active after complex-formation trials, and no activity loss or enzyme denaturation was detected. Our conclusions were supported by the enzyme activity test results, which also showed that immobilization by encapsulation methods gave better activity results than layering methods.
Another important finding concerned the laccase's stability, which helped maintain its enzymatic activity after the freeze-drying process. Among all test groups, the best activity result was recorded by PCL nanofibers encapsulating the laccase-gamma-CD complex, at 96.48 U/mg. (C) 2017 Elsevier Inc. All rights reserved. Increased consumption of raw and par-boiled rice results in the formation of methylglyoxal (MG) at higher concentrations and leads to complications in diabetic patients. A highly sensitive electrochemical biosensor was developed using glutathione (GSH) as a co-factor with vanadium pentoxide (V2O5) as a nano-interface for MG detection in rice samples. The Pt/V2O5/GSH/Chitosan bioelectrode displayed two well-defined redox peaks in its cyclic voltammograms for MG reduction. This occurred as a two-electron transfer process in which MG gained two electrons from oxidized glutathione disulfide and formed hemithioacetal. The current density response of the fabricated bioelectrode was linear towards MG in the concentration range of 0.1-100 mu M, with a correlation coefficient of 0.99, a sensitivity of 1130.86 mu A cm(-2) mu M(-1), a limit of detection of 2 nM, and a response time of less than 18 s. The developed bioelectrode was used for the detection of MG in raw and par-boiled rice samples. (C) 2017 Elsevier Inc. All rights reserved. Systemic sclerosis (SSc) is a chronic autoimmune disease of the connective tissue. The variety and clinical relevance of autoantibodies in SSc patients have been extensively studied, eventually identifying agonistic autoantibodies targeting the platelet-derived growth factor receptor alpha (PDGFR alpha), which represent potential biomarkers for SSc.
We used a resonant mirror biosensor to characterize the binding between surface-blocked PDGFR alpha and PDGFR alpha-specific recombinant human monoclonal autoantibodies (mAbs) produced by SSc B cells, and to detect and quantify serum autoimmune IgG with binding characteristics similar to the mAbs. Kinetic data showed a conformation-specific, high-affinity interaction between PDGFR alpha and the mAbs, with equilibrium dissociation constants in the low-to-high nanomolar range. When applied to total serum IgG, the assay discriminated between SSc patients and healthy controls and allowed the rapid quantification of autoimmune IgG in the sera of SSc patients, with anti-PDGFR alpha IgG falling in the range 3.20-4.67 neq/L of SSc autoantibodies. The test was validated by comparison with direct and competitive anti-PDGFR alpha antibody ELISA. This biosensor assay showed higher sensitivity than ELISA, along with other major advantages such as the specificity, rapidity, and reusability of the capturing surface, thus representing a feasible approach for the detection and quantification of high-affinity, likely agonistic, SSc-specific anti-PDGFR alpha autoantibodies. (C) 2017 Elsevier Inc. All rights reserved. Botulinum neurotoxins (BoNTs) are the most toxic proteins in nature. Endopeptidase mass spectrometry (Endopep-MS) is used as a specific and rapid in-vitro assay to detect BoNTs. In this assay, the immunocaptured toxin cleaves a serotype-specific peptide substrate, and the cleavage products are then detected by MS. Here we describe the design of a new peptide substrate for improved detection of BoNT type A (BoNT/A). Our strategy was based on reported BoNT/A-SNAP-25 interactions, integrated with considerations of analysis method efficiency. Integration of the newly designed substrate led to a 10-fold increase in assay sensitivity, both in buffer and in clinically relevant samples. (C) 2017 Elsevier Inc. All rights reserved.
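For context on the kinetic characterization in the biosensor abstract above: an equilibrium dissociation constant follows from the measured rate constants as K_D = k_off / k_on. A minimal sketch with illustrative low-nanomolar rate values (not the paper's data):

```python
def kd_from_rates(k_off, k_on):
    """Equilibrium dissociation constant K_D = k_off / k_on (in M)."""
    return k_off / k_on

def fraction_bound(ligand_conc, kd):
    """Equilibrium receptor occupancy for simple 1:1 binding."""
    return ligand_conc / (ligand_conc + kd)

# Illustrative values: k_off = 1e-3 s^-1, k_on = 1e5 M^-1 s^-1
kd = kd_from_rates(1e-3, 1e5)
print(kd)                                # 1e-08 (i.e. 10 nM)
print(round(fraction_bound(kd, kd), 2))  # 0.5: half-occupancy at [L] = K_D
```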
A microfluidic assay for monitoring the inhibition of thrombin peptidase activity was developed. The system, which utilised soluble reagents in continuous-flow injection mode, was configured to allow inhibitor titrations via gradient formation. This microfluidic continuous-flow injection titration assay (CFITA) enabled the potency of a set of small-molecule serine peptidase inhibitors (SPIs) to be evaluated. The results, compared to standard microtiter plate (MTP) data, indicated that a microfluidic CFITA provides an efficient and effective method for evaluating compound potency. Crucially, whereas for fast-acting compounds the rank order of potency between the CFITA and MTP methods was preserved, for slow-acting compounds the observed CFITA potencies were significantly lower. These results, in conjunction with data from computer simulations, clearly demonstrated that continuous-flow assays, and perhaps microfluidic assays in general, must take binding kinetics into account when used to assess compound potency. (C) 2017 Elsevier Inc. All rights reserved. We present a universal amplified colorimetric assay for detecting nucleic acid targets or aptamer-specific ligand targets based on gold nanoparticle-DNA (GNP-DNA) hybridization chain reaction (HCR). The universal arrays consisted of a capture probe and hairpin DNA-GNPs. First, the capture probe specifically recognized the target and released the initiator sequence. Then, dispersed hairpin DNA-modified GNPs were cross-linked to form aggregates through HCR events triggered by the initiator sequence. As the aggregates accumulate, a significant red-to-purple color change can be easily visualized by the naked eye. We used a miRNA target sequence (miRNA-203) and an aptamer-specific ligand (ATP) as target molecules for this proof-of-concept experiment. The initiator sequence (DNA2) was released from the capture probe (MNP/DNA1/2 conjugates) through the strong competitive binding of miRNA-203.
Hairpin DNAs (H1 and H2) hybridize with the help of initiator DNA2 to form GNP-H1/GNP-H2 aggregates. The absorption ratio (A(620)/A(520)) of the solutions was a sensitive function of miRNA-203 concentration over the range 1.0 x 10(-11) M to 9.0 x 10(-10) M, and concentrations as low as 1.0 x 10(-11) M could be detected. At the same time, the color of the solution changed from light wine red to purple and then to light blue. For ATP, the initiator sequence (the 5' end of DNA3) was released from the capture probe (DNA3) upon strong aptamer-ATP binding. The present colorimetric assay for specific detection of ATP exhibited good sensitivity, detecting ATP down to 1.0 x 10(-8) M. The proposed strategy also showed good performance in the qualitative and quantitative analysis of intracellular nucleic acids and aptamer-specific ligands. (C) 2017 Elsevier Inc. All rights reserved. Rapid diagnostic tests can be developed using ELISA for the detection of diseases in emergency conditions. Conventional ELISA takes 1-2 days, making it unsuitable for rapid diagnostics. Here, we report the effect of reagent mixing via shaking or vortexing on the assay time of ELISA. A 48-min ELISA protocol involving 12-min incubations with reagent mixing at 750 rpm at every step was optimized. In contrast, a time-optimized control ELISA performed without mixing produced similar results in 8 h, giving a time gain of 7 h with the developed protocol. Collectively, the findings support the development of ELISA-based rapid diagnostics. (C) 2017 Elsevier Inc. All rights reserved. High-resolution oximetry study (HROS) of skeletal muscle usually requires a 90-120 min preparative phase (dissection, permeabilization, and washing). This work reports on the suitability of a rapid muscle preparation that bypasses this long phase. For only a few seconds, muscle biopsies from pigs are subjected to gentle homogenization at 8000 rotations per minute using an ultra-dispersor apparatus.
Subsequent HROS is performed using FCCP instead of ADP, compounds that do and do not cross the plasma membrane, respectively. This simplified procedure compares favorably with classical (permeabilized-fiber) HROS in terms of respiratory chain complex activities. Mitochondria from cells subjected to ultra-dispersion were functionally preserved, as attested by the relative inefficacy of added cytochrome c (which does not cross an intact mitochondrial outer membrane) in stimulating mitochondrial respiration. Responsiveness of respiration to ADP (in the absence of FCCP) suggested that these intact mitochondria were outside cells disrupted by ultra-dispersion or within cells permeated by this procedure. (C) 2017 Elsevier Inc. All rights reserved. Skorobogatov constructed a bielliptic surface which is a counterexample to the Hasse principle not explained by the Brauer-Manin obstruction. We show that this surface has a 0-cycle of degree 1, as predicted by a conjecture of Colliot-Thelene. (C) 2017 Elsevier Inc. All rights reserved. In this note we give simple proofs for the irrationality of the numbers pi^4 and pi^6. (C) 2017 Elsevier Inc. All rights reserved. Let s(a, b) denote the classical Dedekind sum and S(a, b) = 12 s(a, b). For a given denominator q ∈ N, we study the numerators k ∈ Z of the values k/q, (k, q) = 1, of Dedekind sums S(a, b). Our main result says that if k is such a numerator, then the whole residue class of k modulo (q^2 - 1)q consists of numerators of this kind. This fact reduces the task of finding all possible numerators k to that of finding representatives for finitely many residue classes modulo (q^2 - 1)q. By means of the proof of this result we have determined all possible numerators k for 2 <= q <= 60, the case q = 1 being trivial. The result of this search suggests a conjecture about all possible values k/q, (k, q) = 1, of Dedekind sums S(a, b) for an arbitrary q ∈ N. (C) 2017 Elsevier Inc. All rights reserved.
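The classical Dedekind sum studied in the abstract above is directly computable from its defining sum s(a, b) = sum_{k=1}^{b-1} ((k/b)) ((ka/b)), with S(a, b) = 12 s(a, b). A short exact-arithmetic sketch:

```python
from fractions import Fraction
from math import floor, gcd

def sawtooth(x: Fraction) -> Fraction:
    """((x)): equals x - floor(x) - 1/2 for non-integer x, and 0 for integer x."""
    if x.denominator == 1:
        return Fraction(0)
    return x - floor(x) - Fraction(1, 2)

def dedekind_s(a: int, b: int) -> Fraction:
    """Classical Dedekind sum s(a, b) = sum_{k=1}^{b-1} ((k/b)) ((ka/b))."""
    assert b >= 1 and gcd(a, b) == 1
    return sum((sawtooth(Fraction(k, b)) * sawtooth(Fraction(k * a, b))
                for k in range(1, b)), Fraction(0))

# S(a, b) = 12 s(a, b); e.g. s(1, 3) = 1/18, so S(1, 3) = 2/3.
print(dedekind_s(1, 3))       # 1/18
print(12 * dedekind_s(1, 3))  # 2/3
```

A standard sanity check is Dedekind reciprocity, s(a, b) + s(b, a) = -1/4 + (a/b + b/a + 1/(ab))/12, which this implementation satisfies.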
We prove a general transformation theorem that expresses the moments of quadratic products of binomial coefficients as linear sums of their four initial values. Sixteen summation formulae are presented explicitly as examples. They contain, as special cases, the previous results due to Chen-Chu (2009) and Miana et al. (2007, 2008, 2010). (C) 2017 Elsevier Inc. All rights reserved. Let f(n) be a multiplicative function with |f(n)| <= 1, q be a prime number, a be an integer with (a, q) = 1, and chi be a non-principal Dirichlet character modulo q. Let epsilon be a sufficiently small positive constant, A be a large constant, and q^{1/2+epsilon} << N << q^A. In this paper, we shall prove that Sigma_{n <= N} f(n) chi(n + a) << N log log q / log q and that Sigma_{n <= N} f(n) chi(n + a_1) . . . chi(n + a_t) << N log log q / log q, where t >= 2 and a_1, . . . , a_t are distinct integers modulo q. (C) 2017 Elsevier Inc. All rights reserved. Let d = pq ≡ 3 (mod 4) with primes p, q and q < p. We prove a precise bound in Hendy's theorem on imaginary quadratic fields Q(sqrt(-d)) with class number h(-d) = 2, which is a full analogue of the Frobenius-Rabinowitsch theorem (the case of class number one). Namely, it is shown that h(-d) = 2 if and only if the Levy-Hendy quadratic L_d(x) = qx^2 - qx + l_0 with l_0 = (p + q)/4 takes only prime values for integers x in the interval 0 <= x <= l_0 - 2. We also discuss two related conjectures in connection with generalizing a result of Chowla et al. ([1]) on the class number and the least prime quadratic residue. (C) 2017 Elsevier Inc. All rights reserved. The main aim of the article is to prove that the symmetric function Phi_n(x, r) = Prod_{i_1+i_2+...+i_n=r} (x_1^{i_1} + x_2^{i_2} + . . . + x_n^{i_n}) is Schur geometrically convex for x ∈ R_{++}^n and fixed r ∈ N_+ = {1, 2, . . .}, where i_1, i_2, . . . , i_n are non-negative integers.
Further, we show that Phi_n(x, r) is also Schur m-power convex for m <= 0. As applications, a Klamkin-Newman type inequality is derived. Finally, we give a counterexample to illustrate that Phi_n(x, r) is neither Schur convex nor Schur concave. (C) 2017 Elsevier Inc. All rights reserved. We study a weighted divisor function Sigma'_{mn <= x} cos(2 pi m theta_1) sin(2 pi n theta_2), where theta_1 and theta_2 (0 < theta_1, theta_2 < 1) are rational numbers. By connecting it with the divisor problem with congruence conditions, we establish an upper bound, a mean value, a mean square, and some power moments. (C) 2017 Elsevier Inc. All rights reserved. Inspired by representations of the class number of imaginary quadratic fields, in this paper we give explicit evaluations of trigonometric series having generalized harmonic numbers as coefficients, in terms of odd values of the Riemann zeta function and special values of L-functions subject to the parity obstruction. The coefficients that arise in these evaluations are shown to belong to certain cyclotomic extensions. Furthermore, using best polynomial approximation of smooth functions under uniform convergence, due to Jackson, and their log-sine integrals, we provide approximations of real numbers by combinations of special values of L-functions corresponding to the Legendre symbol. Our method for obtaining these results rests on a careful study of generating functions on the unit circle involving generalized harmonic numbers and the Legendre symbol, thereby relating them to values of polylogarithms, and finally extracting Fourier series of special functions that can be expressed in terms of Clausen functions. (C) 2017 Elsevier Inc. All rights reserved.
Suppose rho_1 and rho_2 are two pure l-adic degree-n representations of the absolute Galois group of a number field K, of weights k_1 and k_2 respectively, having equal normalized Frobenius traces Tr(rho_1(sigma_v))/(Nv)^{k_1/2} and Tr(rho_2(sigma_v))/(Nv)^{k_2/2} at a set of primes v of K with positive upper density. Assume further that the algebraic monodromy group of rho_1 is connected and rho_1 is absolutely irreducible. We prove that rho_1 and rho_2 are twists of each other by a power of the l-adic cyclotomic character times a character of finite order. As a corollary, we deduce a theorem of Murty and Pujahari proving a refinement of the strong multiplicity one theorem for normalized eigenvalues of newforms. (C) 2017 Elsevier Inc. All rights reserved. In this note, we study Artin's conjecture via group theory and derive Langlands reciprocity for certain solvable Galois extensions of number fields, extending the previous work of Arthur and Clozel. In particular, we show that all nearly nilpotent groups and all groups of order less than 60 are of automorphic type. (C) 2017 Elsevier Inc. All rights reserved. In 1919, Ramanujan gave the identities Sigma_{n >= 0} p(5n + 4) q^n = 5 Prod_{n >= 1} (1 - q^{5n})^5 / (1 - q^n)^6 and Sigma_{n >= 0} p(7n + 5) q^n = 7 Prod_{n >= 1} (1 - q^{7n})^3 / (1 - q^n)^4 + 49 q Prod_{n >= 1} (1 - q^{7n})^7 / (1 - q^n)^8, and in 1939 H.S. Zuckerman gave similar identities for Sigma_{n >= 0} p(25n + 24) q^n, Sigma_{n >= 0} p(49n + 47) q^n, and Sigma_{n >= 0} p(13n + 6) q^n. From Zuckerman's paper, it would seem that this last identity is an isolated curiosity, but that is not the case. Just as the first four identities mentioned are well known to be the earliest instances of infinite families of such identities for powers of 5 and 7, the fifth identity is likewise the first of an infinite family of such identities for powers of 13. We establish this fact and give the second identity in the infinite family.
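The divisibilities underlying the first two Ramanujan identities above, p(5n + 4) ≡ 0 (mod 5) and p(7n + 5) ≡ 0 (mod 7), are easy to verify numerically via Euler's pentagonal-number recurrence for the partition function (a standard computation, not taken from the paper):

```python
def partitions(n_max):
    """p(0..n_max) via Euler's pentagonal-number recurrence:
    p(n) = sum_{k>=1} (-1)^{k+1} [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]."""
    p = [0] * (n_max + 1)
    p[0] = 1
    for n in range(1, n_max + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partitions(100)
print(p[4], p[9], p[24])  # 5 30 1575
print(all(p[5 * n + 4] % 5 == 0 for n in range(20)))  # True
print(all(p[7 * n + 5] % 7 == 0 for n in range(14)))  # True
```

Note that the mod-13 identity for p(13n + 6) does not yield such a divisibility (p(6) = 11 is not a multiple of 13), which is part of what makes that family of identities interesting.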
(C) 2017 Elsevier Inc. All rights reserved. In this paper, we generalize recent work of Mizuhara, Sellers, and Swisher that gives a method for establishing restricted plane partition congruences based on a bounded number of calculations. Using periodicity for partition functions, our extended technique could be a useful tool for proving congruences for certain types of combinatorial functions based on a bounded number of calculations. As applications of our result, we establish new and existing restricted plane partition congruences, restricted plane overpartition congruences, and several examples of restricted partition congruences. (C) 2017 Elsevier Inc. All rights reserved. We consider the quadratic exponential sums f_D(alpha) = Sigma_{d <= D} | Sigma_{P/d < x <= 2P/d} e(d^2 x^2 alpha) |. It is established that, in some average sense, one has f_D(alpha) << P^{1/2+epsilon} D^{3/4}. As applications, we improve two results concerning ternary problems in additive prime number theory. (C) 2017 Elsevier Inc. All rights reserved. We give a product expansion for the Drinfeld discriminant function in arbitrary rank r, which generalises the formula obtained by Gekeler for the rank 2 Drinfeld discriminant function. This enables one to compute the u-expansion of this function much more efficiently. The formula in this article uses an (r - 1)-dimensional parameter and as such provides a nice counterpoint to the formula previously obtained by Hamahata, which is written in terms of several 1-dimensional parameters. (C) 2017 Elsevier Inc. All rights reserved. The Littlewood Conjecture in Diophantine approximation can be thought of as a problem about covering R^2 by a union of hyperbolas centered at rational points. In this paper we consider the problem of translating the center of each hyperbola by a random amount which depends on the denominator of the corresponding rational.
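The quantity at the heart of the Littlewood Conjecture just described, inf_n n * ||n*alpha|| * ||n*beta|| (where ||.|| is the distance to the nearest integer), can be probed numerically. A minimal sketch; the pair alpha = sqrt(2), beta = sqrt(3) is a standard illustrative choice, not taken from the paper:

```python
import math

def dist_to_int(x):
    """||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

def littlewood_min(alpha, beta, n_max):
    """min over 1 <= n <= n_max of n * ||n*alpha|| * ||n*beta||.

    The Littlewood Conjecture asserts that this infimum is 0 for every
    pair (alpha, beta) as n_max grows without bound.
    """
    return min(n * dist_to_int(n * alpha) * dist_to_int(n * beta)
               for n in range(1, n_max + 1))

m10 = littlewood_min(math.sqrt(2), math.sqrt(3), 10)
m10k = littlewood_min(math.sqrt(2), math.sqrt(3), 10000)
print(m10k <= m10)  # True: the minimum never increases as n_max grows
```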
Using a randomized covering argument we prove that not only is this randomized version of the Littlewood Conjecture true for almost all choices of centers, but an even stronger statement with an extra factor of a logarithm also holds. (C) 2017 Elsevier Inc. All rights reserved. In 2016, the Hospice and Palliative Nurses Association created the Nessa Coyle Leadership Fund to recognize the career achievements of Nessa Coyle, PhD, RN, as a pioneer in the field of palliative nursing and an exemplar of leadership. The first lectureship recognizing this award was presented at the annual assembly of the American Academy of Hospice and Palliative Medicine and Hospice and Palliative Nurses Association in February 2017. This article is based on that lectureship and summarizes key leadership themes identified in Dr Coyle's career. Negative symptoms at the end of life are distressing for both the patient and family. Effective management of both physical and psychological symptoms improves quality of life and well-being, but intervention strategies are not always effective or feasible and often are exclusively pharmacologic. Developing treatment plans to meet symptom management needs is critical. A 2-site research study was conducted in southwest Ohio assessing the effectiveness of Starlight Therapy in treating the negative symptoms associated with end of life. The study of 40 patients found Starlight Therapy effective in treating the symptoms of anxiety, agitation, dyspnea, insomnia, and pain in 90% of the patients within a 30-minute period. The therapy was ineffective in only 4 patients. Physiological symptoms were measured upon initiating Starlight Therapy, 30 minutes after therapy, and 2 hours after therapy. Heart rate and respiratory rate were significantly different from baseline to 30 minutes and from baseline to 2 hours (P < .05) but not from 30 minutes to 2 hours (P > .05).
Further research is required to explore additional types of care, subjects, and sites that could benefit from Starlight Therapy. Anticipatory grieving, grief associated with an impending loss, is common for patients facing end of life or for their families. There is little research on the outcomes of interventions for anticipatory grieving among hospitalized patients. We completed a descriptive, comparative analysis of an existing valid and reliable data set obtained through routine nursing clinical practice using standardized nursing terminologies. We applied data mining techniques to a targeted data set consisting of hospital episodes for end-of-life patients who were given a diagnosis of anticipatory grieving. Less than 50% of the patients given a diagnosis of anticipatory grieving met the expected ratings of monitored nursing outcomes at the time of death or discharge. Specifically, for the spiritual health outcome, only slightly more than 50% of the patients met the expected outcome rating. For the comfortable death outcome, only 45.9% of the patients met the outcome rating. For the comfortable death outcome, patients were significantly more likely not to meet the expected outcome rating if they were also given a diagnosis belonging to the physical comfort class (χ²(1) = 8.99, P < .003). These results demonstrate that expected outcomes are not being met and suggest the need for better education of clinicians about the diagnosis and treatment of anticipatory grieving. Despite the increased number of palliative care teams in the United States, access to palliative care in the hospital continues to be inadequate. The availability of a simple method to identify appropriate patients for palliative care may increase access.
A pilot study was conducted using an observational prospective approach to analyze the effects of palliative interventions for those with a Rothman Index score of less than 40 and a length of stay of greater than 5 days for patients in the medical intensive care and step-down units in an urban teaching hospital, which provides tertiary palliative care. The Rothman Index is a validated formula providing a real-time measure of patient condition based on existing data in the electronic medical record. Patients receiving the palliative intervention had a decrease in the mean length of stay from 26.3 days for all other groups to 13.9 days. The odds ratio of a 30-day readmission for those patients without a palliative visit was 4.4. Costs were lowered by 54% for the palliative intervention group. The Rothman/length of stay trigger for palliative care intervention may have the potential to bend the cost curve for the health care system. This article reports results from a systematic search and thematic analysis of qualitative literature to identify key issues related to family-centered care, behaviors, and communication skills that support the parental role and improve patient and family outcomes in the pediatric intensive care unit. Five themes were identified: (1) sharing information, (2) hearing parental voices, (3) making decisions for or with parents, (4) negotiating roles, and (5) individualizing communication. These themes highlight several gaps between how parents want to be involved and how they perceive clinicians' engagement with them in the care of their child. Parental preferences for involvement differ in the domains of information sharing, decision making, and power sharing across a spectrum of parental roles from parents as care provider to care recipient. The pediatric intensive care unit setting may place clinicians in a double bind trying to both engage families and protect them from distress. 
Asking families of critically ill children about their preferences for participation across these domains may improve clinician-family relationships. This qualitative study used semistructured interviews to describe adolescents' responses at 7 and 13 months to a sibling's neonatal intensive care unit/pediatric intensive care unit/emergency department death. At 7 months, adolescents were asked about events around the sibling's death; at 7 and 13 months, they were asked about concerns/fears, feelings, and life changes. Seventeen adolescents participated (13-18 years; mean, 15 years); 65% were black, 24% Hispanic, and 11% white. Themes included death circumstances, burial events, thinking about the deceased sibling, fears, and life changes. Adolescents reported shock and disbelief that the sibling died; 80% knew the reason for the death; many had difficulty getting through burials; all thought about the sibling. From 7 to 13 months, fears increased, including losing someone and thoughts of dying. Adolescents reported more changes in family life and greater life changes in themselves (more considerate, mature) by 13 months; some felt that friends abandoned them after the sibling's death. Girls had more fears and changes in family life and themselves. Adolescents' responses to sibling death may not be visually apparent. One recommendation from this study is to ask adolescents how they are doing separately from parents because adolescents may hide feelings to protect their parents, especially their mothers. Older adolescents (14-18 years) and girls may have more difficulty after sibling death. Universal precautions for opioid safety, analogous to universal precautions for infection control, are one approach to managing the epidemic of prescription pain medication misuse that has been used in pain clinics, primary care practices, and some hospices.
In this project, a set of hospice-appropriate universal precautions was designed, drawing on hospice nursing strengths, and implemented in a midsize hospice agency. As care providers across the health continuum for patients with serious illness, advanced practice registered nurses (APRNs) are essential to quality palliative care. Many APRNs, however, lack palliative care education and training. To promote palliative-specific education for practicing APRNs, the End-of-Life Nursing Education Consortium (ELNEC) created an APRN curriculum and developed a 2-day course on clinical issues and program development in palliative care pertinent to APRN practice. In the past 3 years, more than 940 APRNs representing the spectrum of disease specialties, populations, and health care settings have attended this national course. Completion of precourse and postcourse surveys demonstrated that the course has improved their knowledge and their practice. ELNEC-APRN is an essential educational forum for APRNs to provide high-quality palliative care for patients with serious illness. Advanced practice registered nurses can incorporate content into their practice, integrate content into other educational forums, and use principles to build palliative care services. Many nurses report feeling ill prepared through their formal education to competently care for dying patients and their families. These deficits signal a need for curricular reform; however, current practices in the provision of palliative and end-of-life (EOL) care education must first be systematically evaluated to guide these reforms. This article will share the findings of a pilot study in which the context, input, process, and product evaluation model was used to guide a detailed evaluation of palliative and EOL care education within a baccalaureate nursing program. 
Critical aspects of palliative and EOL care education were identified from a decomposition of the End-of-Life Nursing Education Consortium (ELNEC) core curriculum. From the decomposition, a new instrument was developed and completed by nursing faculty members teaching in 1 baccalaureate program. Faculty members identified the ELNEC topics that were taught, the courses within which the content was provided, and the associated teaching methods used. Overall, 95.3% of ELNEC core curriculum content was included in the program; however, great interinstructor variability was noted. Clinical conference discussion/debriefing and lecture were most frequently used to teach ELNEC content. The content was addressed throughout the curriculum, particularly in ethics and aging didactic courses. Implications of the findings for future educational research are discussed. We used a standardized terminology to describe patient problems and the nursing care provided in a pilot study of a transitional palliative care intervention with patients and caregivers. Narrative phrases of a nurse's documentation were mapped to the Omaha System (problem, intervention, and target). Over the course of the intervention, 109 notes (1473 phrases) were documented for 9 adults discharged home (mean age, 68 years; mean number of morbid conditions, 7.1; mean number of medications, 15.4). Thirty-one of the 42 Omaha System problems were identified; the average number of problems per patient was 13. Phrases were mapped to all 4 problem domains (environmental, 2.6%; health-related behaviors, 52.3%; physiological, 30.8%; and psychosocial, 14.3%). Surveillance phrases were the most frequent (72.4%); case management phrases were at 20.9%, and teaching, guidance, and counseling phrases were at 6.7%. The number of problems documented per patient correlated with the time between the first and last notes (ρ = 0.76; P = .02) but not with the number of notes per patient (ρ = 0.51; P = .16).
These results are the first to describe nursing interventions in transitioning palliative care from hospital to home with a standardized terminology. Linking interventions to patient problems is critical for describing effective strategies in transitioning palliative care from hospital to home. Palliative care is a growing specialty striving to improve quality of life in patients and families facing advanced illness, with demonstrated benefits. Outcomes regarding patients receiving community-based palliative care have not been extensively studied, with most focusing on patients with advanced cancer diagnoses. This article describes a community-based palliative care program, developed to provide care to patients with advanced illness. The mission, model of care, and program evolution are outlined; patient demographics, care settings, and comorbid diseases are reported. The average and median lengths of stay for patients who eventually transitioned to hospice care from 2012 to 2015 are compared with the affiliated hospice's total population and with national averages. Patients receiving community-based palliative care for a diagnosis of advanced illnesses who later transitioned to hospice had an increased median and total length of hospice stay as compared with other hospice referral sources and with national averages. For patients with advanced illnesses of many types, palliative care provided in the community setting may lead to earlier identification and referral to hospice as opposed to patients not receiving palliative care, with greater support at end of life. This essay - a collection of contributions from 10 scholars working in the field of biosemiotics and the humanities - considers nature in culture. It frames this by asking the question 'Why does biosemiotics need the humanities?'. Each author writes from the background of their own disciplinary perspective in order to throw light upon their interdisciplinary engagement with biosemiotics. 
We start with Donald Favareau, whose originary disciplinary home is ethnomethodology and linguistics, and then move on to Paul Cobley's contribution on general semiotics and Kalevi Kull's on biosemiotics. This is followed by Cobley (again) with Frederik Stjernfelt, who contribute on biosemiotics and learning, then Gerald Ostdiek from philosophy, and Morten Tønnessen focusing upon ethics in particular. Myrdene Anderson writes from anthropology, while Timo Maran and Louise Westling provide a view from literary study. The essay closes with Wendy Wheeler reflecting on the movement of biosemiotics as a challenge, often via the ecological humanities, to the kind of so-called 'postmodern' thinking that has dominated humanities critical thought in the universities for the past 40 years. Virtually all the matters gestured to in outline above are discussed in much more satisfying detail in the topics which follow. This paper analyses Bohr's complementarity framework and applies it to biosemiotic studies by illustrating its application to three existing models of living systems: mechanistic (molecular) biology, Barbieri's version of biosemiotics in terms of his code biology, and Markoš's phenomenological version of hermeneutic biosemiotics. The contribution summarizes both Bohr's philosophy of science, crowned by his idea of complementarity, and his conception of the phenomenon of the living. Bohr's approach to the biological questions evolved - among other things - from the consequences of an epistemological lesson of quantum theory and in light of the complementarity of the observer as a priori living creature and ex post scientific explanation of the living. In a manifestation of the phenomenon of the living, each model of living system and its description makes accessible - from its own presuppositions, contexts and concepts - some features which are not accessible from the others.
Nevertheless, for a general understanding of that phenomenon, incompatible sophisticated approaches are equally necessary. Bohr's epistemology of complementarity turns out to be a heuristic and methodical framework for testing the extent to which biosemiotics can become one of the special sciences or its potential as a cross-disciplinary branch of study. A new approach to landscape ecology involves the application of the eco-field hypothesis and the General Theory of Resources. In this study, we describe the putative eco-field of bark beetles as a spatial configuration with a specific meaning-carrier for every organism-resource interaction. Bark beetles are insects with key roles in matter and energy cycles in coniferous forests, which cause significant changes to forestry landscapes when outbreaks occur. Bark beetles are guided towards host trees by the recognition of semiotic signals using a specific eco-field. These signals mainly comprise a group of scents, which are called the odourtope. Their interactions with other organisms (fungi, bacteria, nematodes, predators, etc.) occur by sharing relevant information from the eco-field networks (representamen networks) in the forest ecosystem. The eco-field networks modulate the expansion of the realized semiotic niche of the bark beetle towards the potential semiotic niche. Moreover, the niche construction process can be initiated by interchanging signals among species living in the same place, where these signals allow the exploitation of the required resources. If different organisms are interdependent on signals in eco-field networks, then this process may result in the establishment of mutualistic relationships. This is an example of how evolutionary processes are initiated by the recognition of signals in a network of eco-fields. 
This paper describes some likely semiotic consequences of genetic engineering on what Gregory Bateson has called "the mental ecology" (1979) of future humans, consequences that are less often raised in discussions surrounding the safety of GMOs (genetically modified organisms). The effects are as follows: 1) increased habituation to the presence of GMOs in the environment; 2) normalization of empirically false assumptions grounding genetic reductionism; 3) acceptance that humans are able and entitled to decide what constitutes an evolutionary improvement for a species; 4) the perception that the main source of creativity and problem solving in the biosphere is anthropogenic. Though there are some tensions between them, these effects tend to produce self-validating webs of ideas, actions, and environments, which may reinforce destructive habits of thought. Humans are unlikely to safely develop genetic technologies without confronting these escalating processes directly. Intervening in this mental ecology presents distinct challenges for educators, as will be discussed. In this paper I am advocating a structuralist theory of mental representation. For a structuralist theory of mental representation to be defended satisfactorily, the naturalistic and causal constraints have to be satisfied first. The more intractable of the two, i.e., the naturalistic constraint, indicates that to account for mental representation, we should not invoke "a full-blown interpreting mind". So, the aim of the paper is to show how the naturalistic and causal constraints could be satisfied. It aims to offer a strategy for grounding the structure of mental representations in nature. The strategy that I offer is inspired by Marcello Barbieri's code model of biosemiotics. This study measured the feasibility of conducting a randomized controlled trial of an 8-week seated yoga program for older adults with osteoarthritis.
Part of the feasibility of this program was to determine whether participants would continue the yoga practice at home using a guide book after the 8-week program. Findings demonstrated that once participants were not in a group setting for the yoga, they did not continue with yoga practice. This outcome demonstrates the need for group programs for older adults to promote adherence to movement-based programs. (Trial registration: ClinicalTrials.Gov: NCT02113410). To ensure patient communication in nursing, certain conditions must be met that enable successful exchange of beliefs, thoughts, and other mental states. The conditions that have received most attention in the nursing literature are derived from general communication theories, psychology, and ethical frameworks of interpretation. This article focuses on a condition more directly related to an influential coherence model of concept possession from recent philosophy of mind and language. The basic ideas in this model are (i) that the primary source of understanding of illness experiences is communicative acts that express concepts of illness, and (ii) that the key to understanding patients' concepts of illness is to understand how they depend on patients' lifeworlds. The article argues that (i) and (ii) are especially relevant in caring practice since it has been extensively documented that patients' perspectives on disease and illness are shaped by their subjective horizons. According to coherentism, nurses need to focus holistically on patients' horizons in order to understand the meaning of patients' expressions of meaning. Furthermore, the coherence model implies that fundamental aims of understanding can be achieved only if nurses recognize the interdependence of patients' beliefs and experiences of ill health. The article uses case studies to elucidate how the holistic implications of coherentism can be used as conceptual tools in nursing. 
Controlling labor pain is one of the basic goals for caregivers during the birthing process. There are many pharmacological and nonpharmacological methods that are used for controlling pain and helping the mother to cope with pain and have a favorable labor. The study was planned as a randomized, controlled experimental study to detect the effect of acupressure applied to Point LI4 on perceived labor pains. The study sample comprised 88 pregnant women (44 acupressure group, 44 control group) who complied with the study guidelines, agreed with the conditions of the study, and signed the informed consent. Acupressure was applied to the study group when cervical dilatation reached 4 to 5 cm and again when cervical dilation was 7 to 8 cm. Acupressure was applied to Point LI4 on both hands at the same time from the beginning to the end of the contraction (16 times). Evaluation with the visual analog scale was made 6 times: when the pregnant woman was first admitted to the hospital, before and after each acupressure application, and within 2 hours after delivery. The control group received routine care. There were statistically significant differences between the groups in subjective labor pain scores (P < .0001). There was a significant difference between the groups in terms of total duration of labor. As our study shows, applying acupressure to Point LI4 was found to be effective in decreasing the perception of labor pains and shortening labor (P < .05). Mothers were pleased with this treatment, but they found it insufficient to control their pain. On March 11, 2011, the Great East Japan Earthquake (magnitude 9) hit the northern part of Japan (Tohoku), killing more than 15 000 people and leaving long-lasting scars, including psychological damage among evacuees, some of whom were health professionals. Little is known about meditation efficacy on disaster-affected health professionals.
The present study investigated the effects of breathing-based meditation on seminar participants who were health professionals who had survived the earthquake. This study employed a mixed methods approach, using both survey data and handwritten qualitative data. Quantitative results of pre- and postmeditation practice indicated that all mood scales (anger, confusion, depression, fatigue, strain, and vigor) were significantly improved (N = 17). Qualitative results revealed several common themes (emancipation from chronic and bodily senses; holistic sense: transcending mind-body; re-turning an axis in life through reflection, self-control, and/or gratitude; meditation into mundane, everyday life; and coming out of pain in the aftermath of the earthquake) that had emerged as expressions of participant meditation experiences. Following the 45-minute meditation session, the present study participants reported improvements in all psychological states (anger, confusion, depression, fatigue, strain, and vigor) in the quantitative portion, which indicated efficacy of the meditation. Our analysis of the qualitative portion revealed what and how participants felt during meditating. Over the past several years, holistic nursing education has become more readily available to nurses working in high-income nations, and holistic practice has become better defined and promoted through countless organizational and governmental initiatives. However, global nursing community members, particularly those serving in low- and middle-income countries (LMICs) within resource-constrained health care systems, may not find holistic nursing easily accessible or applicable to practice. 
The purpose of this article is to assess the readiness of nursing sectors within these resource-constrained settings to access, understand, and apply holistic nursing principles and practices within the context of cultural norms, diverse definitions of the nursing role, and the current status of health care in these countries. The history, current status, and projected national goals of professional nursing in Rwanda are used as an exemplar to forward the discussion regarding the readiness of nurses to adopt holistic education into practice in LMICs. A background of holistic nursing practice in the United States is provided to illustrate the multifaceted aspects of support necessary in order that such a specialty continues to evolve and thrive within health care arenas and the communities it cares for. As the use of herbal medications continues to increase in America, the potential interaction between herbal and prescription medications necessitates the discovery of their mechanisms of action. The purpose of this study was to investigate the anxiolytic and antidepressant effects of curcumin, a compound from turmeric (Curcuma longa), and its effects on the benzodiazepine site of the γ-aminobutyric acid type A (GABA(A)) receptor. Utilizing a prospective, between-subjects group design, 55 male Sprague-Dawley rats were randomly assigned to 1 of 5 intraperitoneally injected treatment groups: vehicle, curcumin, curcumin + flumazenil, midazolam, and midazolam + curcumin. Behavioral testing was performed using the elevated plus maze, open field test, and forced swim test. A 2-tailed multivariate analysis of variance and least significant difference post hoc tests were used for data analysis. In our models, curcumin did not demonstrate anxiolytic effects or changes in behavioral despair. An interaction of curcumin at the benzodiazepine site of the GABA(A) receptor was also not observed.
Additional studies are recommended that examine the anxiolytic and antidepressant effects of curcumin through alternate dosing regimens, modulation of other subunits on the GABA(A) receptor, and interactions with other central nervous system neurotransmitter systems. Henry VIII (1491-1547) became King of England in 1509. He started out as a good monarch, sensible, reasonable and pleasant, but later his behaviour changed drastically. He became irascible, intolerant, violent and tyrannical. In January 1536, Henry had a serious jousting accident and was unconscious for 2 hours. It is generally believed that this accident played a major role in his personality change. Letters of that time, however, indicate that the change began insidiously in 1534 and became most drastic in 1535, a year before the accident. Henry had suffered from leg ulcers before and after the accident and had been constantly treated for them for many years. Sloane MS1047, now in the British Library in London, contains the prescriptions for the medications used to treat these ulcers. Many of the medications contain a high proportion of lead in various forms. Lead can be absorbed through skin, especially damaged skin. Absorbed lead can affect the brain, causing psychiatric problems, especially those associated with violence. The author presents a hypothesis that absorbed lead from his medications might have been a major factor in King Henry's personality change. Charles Bell, Francis Seymour Haden, Jean-Martin Charcot, Paul Richer, Henry Tonks and Henry Lamb were gifted draughtsmen. Some used their skills to illustrate their work; a few abandoned medicine altogether to become artists in their own right. With the exception of Haden, few were able to combine an artistic and a medical career. Their medical training and their wartime experiences influenced their artistic portrayal of the wounded.
Their significant contribution, however, resides in the way in which they influenced other greater artists through their teaching. Isaak Levitan (1860-1900) was one of Russia's most influential landscape artists. He lived a very short life, only 40 years, but left more than 1000 paintings. He suffered from mood fluctuations and died as a result of serious heart disease. After an introduction related to the issue of creativity and mental disorders, a short biography of Levitan's life is outlined, followed by some examples of his mood and behavior. A section on the mood's reflection in Levitan's professional work is followed by a description of his romantic loves and disappointments and his relationship with his contemporary, the Russian writer Anton Chekhov. The anti-globulin test was described in 1945, and ever since has been synonymous with the lead author, Robin Coombs, a young veterinary surgeon at that time embarking on a career in immunological research. This was marked by a number of important contributions in the field, including the description and categorisation of hypersensitivity reactions, co-authored with Philip Gell. Together they wrote the classical text, Clinical Aspects of Immunology, which has been updated and republished over the ensuing 50 years. Although Robin Coombs is best remembered for his contributions to medical immunology, he made a number of significant early advances in the field of veterinary immunology. Vivien Theodore Thomas (1910-1985) was an African-American laboratory technician and instructor of surgery at Johns Hopkins University, Baltimore. He was born the grandson of a slave in Louisiana, working as a carpenter and subsequently as a laboratory technician after the Great Depression and the loss of his savings derailed his plans to become a doctor.
In his role as a laboratory technician, he overcame challenging personal circumstances to become an innovator in paediatric cardiac surgery, despite having no formal college education. He played an important role in assisting Alfred Blalock and Helen Taussig in the development of the 'Blalock-Taussig' shunt, a procedure used to improve the survival of children with cyanotic congenital heart defects. He also contributed to major breakthroughs in research covering a spectrum of disorders such as traumatic shock, coarctation of the aorta and transposition of the great arteries. He acted as a teacher and mentor to a generation of surgical residents and technicians who went on to become leaders in their field across the USA. A television film based on his life, titled 'Something the Lord Made', was premiered by HBO in 2004. John Goodsir, pioneer of the concept that all tissues are formed of cells, was born in 1814 into a family of medical practitioners in Anstruther, Fife, Scotland, where he was captivated by the marine life he saw daily in his childhood. His ambition was to follow his father and grandfather in medicine. Aged 13, he studied at St Andrews University before being apprenticed to an Edinburgh dentist and completing an original analysis of the embryology of human dentition. He became the student of Robert Knox at the Royal College of Surgeons of Edinburgh and then Conservator of the University Anatomy museum. He exchanged this position for one at the College of Surgeons before accepting the full University post. Beginning in 1830 with the compound microscope, he studied natural history and anatomy, describing his discoveries to many societies. Appointed to the Edinburgh Chair of Anatomy in 1846, his investigations of the cell as the unit of all tissues were recognised internationally. A critic of Darwin, he believed that Man could not evolve.
However, malnutrition, the death of a brother and of a friend and collaborator, Edward Forbes, contributed to progressive illness and Goodsir died at Wardie, Edinburgh in 1867. Herbert Mayo was a significant physiologist and an important figure in the London medical world of the 1820s and 1830s. And yet, a combination of poor decision-making and dabbling in heterodox medicine damaged his reputation. The life of Herbert Mayo illustrates that during the critical period before the 1858 Medical Act the boundary between orthodox and alternative medicine was porous. It also gives important insights into the politics of medicine at this time, particularly the significance of character to becoming a successful medical practitioner. Miss Davison was a medical artist at the Manchester Royal Infirmary and the University of Manchester from around 1918 until her retirement in 1957. She illustrated books and scientific papers on anthropology, anatomy and surgery, and became well known for her striking pictures produced by the 'Ross board technique', a difficult process that she helped pioneer from the 1930s and which forms the bulk of the work she undertook for neurosurgeon Geoffrey Jefferson during the 1930s-1950s. His Neurosurgical Department became the main base for her work until his retirement in 1953. She was an active member of the Medical Artist Association (MAA), which she helped found in 1949. Felicjan Sławoj Składkowski was a Polish physician, general and politician who served as Polish Minister of Internal Affairs and was the last Prime Minister of Poland before the Second World War. The lack of basic sanitation in many of Poland's villages caused him to issue a decree that every household in Poland must have a latrine in working order. Wooden sheds were built in the backyards, subsequently named 'sławojki'. 
The aim of this paper is to consider a class of degenerate damped hyperbolic equations with the critical nonlinearity involving an operator L that is X-elliptic with respect to a family of vector fields X. We prove the global existence of solutions and characterize their long time behavior. In particular, we show the semigroup generated by the equation has a compact, connected and invariant attractor. (C) 2017 Elsevier Inc. All rights reserved. Let C-3(n) denote the number of cubic partitions of n with 3-cores. In this paper, we establish the arithmetic properties and formulas for C-3(n) by employing Bailey's (6)psi(6) formula and theta function identities. (C) 2017 Elsevier Inc. All rights reserved. We establish a local Lipschitz regularity near the boundary of weak solutions to a general class of homogeneous quasilinear elliptic equations with Neumann boundary condition in bounded convex domains. (C) 2017 Elsevier Inc. All rights reserved. If phi is a bounded separately radial function on the unit ball, the Toeplitz operator T-phi is diagonalizable with respect to the standard orthogonal basis of monomials on the Bergman space. Given such a phi, we characterize bounded functions psi for which T-psi commutes with T-phi. Several examples are given to illustrate our results. (C) 2017 Elsevier Inc. All rights reserved. We study a special class of operators T satisfying the transmutation relation (d(2)/dx(2) - q)Tu = Td(2)/dx(2)u in the sense of distributions, where q is a locally integrable function, and u belongs to a suitable space of distributions depending on the smoothness properties of q. A method which allows one to construct a fundamental set of transmutation operators of this class in terms of a single particular transmutation operator is presented. Moreover, following [27], we show that a particular transmutation operator can be represented as a Volterra integral operator of the second kind. 
We study the boundedness and invertibility properties of the transmutation operators, and use these to obtain a representation for the general distributional solution of the equation d(2)u/dx(2) - qu = lambda u, lambda is an element of C, in terms of the general solution of the same equation with lambda = 0. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we study the non-reflecting boundary condition for the time-harmonic Maxwell's equations in homogeneous waveguides with an inhomogeneous inclusion. We analyze a series representation of solutions to Maxwell's equations satisfying the radiating condition at infinity, from which we develop the so-called electric-to-magnetic operator for the non-reflecting boundary condition. Infinite waveguides are truncated to a finite domain with a fictitious boundary on which the non-reflecting boundary condition based on the electric-to-magnetic operator is imposed. As the main goal, the well-posedness of the reduced problem will be proved. This study is important for developing numerical techniques of accurate absorbing boundary conditions for electromagnetic wave propagation in waveguides. (C) 2017 Elsevier Inc. All rights reserved. We start this work by recalling a class of globally hypoelliptic sublaplacians defined on the N-dimensional torus introduced by Himonas and Petronilho, and we consider a new class of sublaplacians that generalizes this one and prove that it is globally {omega}-hypoelliptic if and only if the coefficients satisfy a Diophantine condition involving a new concept of simultaneous approximability with exponent {omega}. Furthermore, we prove that this new class is globally {omega}-hypoelliptic if and only if certain perturbations of its vector fields, by adding more derivatives with respect to other variables, are globally {omega}-hypoelliptic. 
We also recall Petronilho's conjecture on smooth hypoellipticity and present a new class of sublaplacians for which the conjecture holds true in the ultradifferentiable setting. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we confirm five conjectures on the relations between N(a, b, c, d; n) and t(a, b, c, d; n) given by Sun by utilizing the (p, k)-parametrization of theta functions, where N(a, b, c, d; n) and t(a, b, c, d; n) denote the number of representations of n as ax(2) + by(2) + cz(2) + dw(2) and the number of representations of n as ax(x+1)/2 + by(y+1)/2 + cz(z+1)/2 + dw(w+1)/2, respectively, with a, b, c, d is an element of N+, n is an element of N, and x, y, z, w is an element of Z. (C) 2017 Elsevier Inc. All rights reserved. We prove that the maximal operators supported by submanifolds are bounded and continuous on the Triebel-Lizorkin spaces F-s(p,q)(R-d) and Besov spaces B-s(p,q)(R-d) for 0 < s < 1 and 1 < p, q < infinity. As a corollary, we also show that these operators are bounded and continuous on the fractional Sobolev spaces W-s,W-p(R-d) for 0 <= s < 1 and 1 < p < infinity. Our main results represent significant improvements as well as natural extensions of what was known previously. (C) 2017 Elsevier Inc. All rights reserved. We consider the problem -Delta u + lambda u = u(p) in A, u > 0 in A, u = 0 on partial derivative A, where A is an annulus in R-N, N >= 2, p is an element of (1, +infinity) and lambda is an element of [0, +infinity). Recent results ensure that there exists a sequence {p(k)} of exponents (p(k) -> +infinity) at which a nonradial bifurcation from the radial solution occurs. Exploiting the properties of O(N - 1)-invariant spherical harmonics, we introduce two suitable cones K-1 and K-2 of O(N - 1)-invariant functions that allow us to separate the branches of bifurcating solutions and establish their unboundedness. (C) 2017 Elsevier Inc. All rights reserved. 
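For clarity, the annulus problem stated in the preceding abstract can be typeset in LaTeX as follows (this is only a restatement of the system already given above, not new content):

```latex
\begin{cases}
-\Delta u + \lambda u = u^{p} & \text{in } A,\\
u > 0 & \text{in } A,\\
u = 0 & \text{on } \partial A,
\end{cases}
\qquad A \subset \mathbb{R}^{N} \text{ an annulus},\quad
N \ge 2,\quad p \in (1,+\infty),\quad \lambda \in [0,+\infty).
```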
In this study, we consider the large time behavior of the solution to the one-dimensional isentropic compressible quantum Navier-Stokes-Poisson equations. The system describes a compressible particle fluid under quantum effects with the potential function of the self-consistent electric field. We show that if the initial data are close to a constant state with asymptotic values at far fields selected such that the Riemann problem on the corresponding Euler system admits a rarefaction wave with a strength that is not necessarily small, then the solution exists for all time and it tends to the rarefaction wave as t -> +infinity. The proof is based on the energy method by considering the effect of the self-consistent electric field and quantum potential in the viscous compressible fluid. In addition, we compare the quantum compressible Navier-Stokes-Poisson equations and the corresponding compressible Navier-Stokes-Poisson equations based on the large-time behavior of these two classes of models. (C) 2017 Elsevier Inc. All rights reserved. We extend Ando-Hiai's log-majorization for the weighted geometric mean of positive definite matrices into that for the Cartan barycenter in the general setting of probability measures on the Riemannian manifold of positive definite matrices equipped with the trace metric. The main key is the settlement of the monotonicity problem of the Cartan barycentric map on the space of probability measures with finite first moment for the stochastic order induced by the cone. We also derive a version of the Lie-Trotter formula and related unitarily invariant norm inequalities for the Cartan barycenter as the main application of log-majorization. (C) 2017 Elsevier Inc. All rights reserved. This note concerns the sufficient condition for regularity of solutions to the evolution Navier-Stokes equations known in the literature as Prodi-Serrin's condition. H.-O. Bae and H.J. 
Choe proved in a 1997 paper that, in the whole space R-3, it is merely sufficient that two components of the velocity satisfy the above condition. Below, we extend the result to the half-space case R-+(n) under slip boundary conditions. We show that it is sufficient that the velocity component parallel to the boundary enjoys the above condition. Flat boundary geometry is not essential, as shown in a forthcoming paper on cylindrical domains, prepared in collaboration with J. Bemelmans and J. Brand. (C) 2017 Elsevier Inc. All rights reserved. We introduce a quite large class of functions (including the exponential function and the power functions with exponent greater than one), and show that for any element f of this function class, a self-adjoint element a of a C*-algebra is central if and only if a <= b implies f (a) <= f (b). That is, we characterize centrality by local monotonicity of certain functions on C*-algebras. Numerous former results (including works of Ogasawara, Pedersen, Wu, and Molnár) are apparent consequences of our result. (C) 2017 Elsevier Inc. All rights reserved. We prove a realization formula and a model formula for analytic functions with modulus bounded by 1 on the symmetrized bidisc G =(def) {(z + w, zw) : vertical bar z vertical bar < 1, vertical bar w vertical bar < 1}. As an application we prove a Pick-type theorem giving a criterion for the existence of such a function satisfying a finite set of interpolation conditions. (C) 2017 The Authors. Published by Elsevier Inc. It is well known (see [8,14]) that the Libera operator L is bounded on the Besov space H-v(p,q,alpha) if and only if 0 < K-p,K-alpha,K-v := v - alpha - 1/p + 1. We prove unexpected results: the Hilbert matrix operator H, as well as the modified Hilbert operator <(H)over tilde> is bounded on H-v(p,q,alpha) if and only if 0 < K-p,K-alpha,K-v < 1. 
In particular, H, as well as (H) over tilde, is bounded on the Bergman space A(p,alpha) if and only if 1 < alpha + 2 < p and is bounded on the Dirichlet space D-alpha(p) = A(1)(p,alpha) if and only if max{-1, p - 2} < alpha < 2p - 2. Our results are a substantial improvement of [11, Theorem 3.1] and of [6, Theorem 5]. (C) 2017 Elsevier Inc. All rights reserved. We consider a system of the form -Delta u = lambda(theta(1)v(+) + f (v)) in Omega; -Delta v = lambda(theta(2)u(+) + g (u)) in Omega; u = 0 = v on partial derivative Omega, where s(+) =(def) max{s, 0}, theta(1) and theta(2) are fixed positive constants, lambda is an element of R is the bifurcation parameter, and Omega subset of R-N (N >= 1) is a bounded domain with smooth boundary partial derivative Omega (a bounded open interval if N = 1). The nonlinearities f,g : R -> R are continuous functions that are bounded from below, sublinear at infinity and have semipositone structure at the origin (f (0), g(0) < 0). We show that there are two disjoint unbounded connected components of the solution set and discuss the nodal properties of solutions on these components. Finally, as a consequence of these results, we infer the existence and multiplicity of solutions for lambda in a neighborhood containing the simple eigenvalue of the associated eigenvalue problem. (C) 2017 Elsevier Inc. All rights reserved. Recently, Leray's problem of the L-2-decay of a special weak solution to the Navier-Stokes equations with nonhomogeneous boundary values was studied by the authors, exploiting properties of the approximate solutions converging to this solution. In this paper this result is generalized to the case of an arbitrary weak solution satisfying the strong energy inequality. (C) 2017 Elsevier Inc. All rights reserved. 
We consider a generalized Radon transform (GRT) that integrates a function f (x(1), x(2)) on R-2 over a family of curves x(2) = u + s phi(x(1) - c) with respect to the variable x(1), for a real-valued continuous function phi on R, u, s is an element of R and a fixed c is an element of R. We investigate the inversion of the GRT via the inversion of the regular Radon transform (RT). Depending on some conditions on f and phi, we obtain some inversion formulas and also describe a method for the numerical reconstruction of f from its GRT. Numerical results are presented to demonstrate the feasibility of the proposed method. (C) 2017 Elsevier Inc. All rights reserved. In this paper, a discontinuous non-self-adjoint (dissipative) Dirac operator with an eigenparameter-dependent boundary condition and two singular endpoints is studied. The interface conditions are imposed at the point of discontinuity. First, we pass from the considered problem to a maximal dissipative operator L-h by means of an operator-theoretic formulation. The self-adjoint dilation T-h of L-h in the space H is constructed; furthermore, the incoming and outgoing representations of T-h and a functional model are also constructed, so that, in light of Lax-Phillips theory, we derive the scattering matrix. Using the equivalence between the scattering matrix and the characteristic function, a completeness theorem on the eigenvectors and associated vectors of this dissipative operator is proved. (C) 2017 Elsevier Inc. All rights reserved. This article is a contribution to the spectral theory of so-called eventually positive operators, i.e. operators T which may not be positive but whose powers T-n become positive for large enough n. While the spectral theory of such operators is well understood in finite dimensions, the infinite dimensional case has received much less attention in the literature. 
We show that several sensible notions of "eventual positivity" can be defined in the infinite dimensional setting, and in contrast to the finite dimensional case those notions do not in general coincide. We then prove a variety of typical Perron-Frobenius type results: we show that the spectral radius of an eventually positive operator is contained in the spectrum; we give sufficient conditions for the spectral radius to be an eigenvalue admitting a positive eigenvector; and we show that the peripheral spectrum of an eventually positive operator is a cyclic set under quite general assumptions. All our results are formulated for operators on Banach lattices, and many of them do not impose any compactness assumptions on the operator. (C) 2017 Elsevier Inc. All rights reserved. In this paper, for any symplectic matrix P, the existence of subharmonic P-l-solutions of the first order non-autonomous superquadratic Hamiltonian systems is considered. Under the convex condition, the existence of infinitely many geometrically distinct P-l-solutions is proved. (C) 2017 Elsevier Inc. All rights reserved. Song derives certain multi-variable rational identities by studying torus actions on some homogeneous manifolds and applying the Atiyah-Bott-Segal-Singer Lefschetz fixed point theorem. In this paper, we give a direct proof of these rational identities by using the q-Lucas theorem. Moreover, we also give a similar new rational identity. (C) 2017 Elsevier Inc. All rights reserved. This paper investigates the bifurcation of critical periods from a cubic rigidly isochronous center under any small polynomial perturbations of degree n. It proves that for n = 3, 4 and 5, there are at most 2 and 4 critical periods induced by periodic orbits of the unperturbed cubic system respectively, and in each case this upper bound is sharp. Moreover, for any n > 5, there are at most [n-1/2] critical periods induced by periodic orbits of the unperturbed cubic system. 
An example is given to show that the upper bound in the case of n = 11 can be reached. (C) 2017 Elsevier Inc. All rights reserved. For an eco-epidemic model with disease in the prey and periodic coefficients it is conjectured in Niu et al. (2011) [5] that, when the infected prey is permanent, there is a positive periodic orbit that is globally asymptotically stable in the interior of (R+)(3). In this paper we prove the existence part of the conjecture for a large family of periodic systems. (C) 2017 Elsevier Inc. All rights reserved. There are two symmetric families of four-body co-circular central configurations, namely the kite and isosceles trapezoid. Using mutual distances as coordinates, we prove that if the four-body central configuration is an isosceles trapezoid, then the diagonals of the isosceles trapezoid cannot be perpendicular to each other. Furthermore, we show that for any four-body co-circular central configuration, the diagonals of the quadrilateral cannot be perpendicular unless the configuration is a kite. (C) 2017 Elsevier Inc. All rights reserved. In this paper, the 3-D compressible MHD equations without thermal conductivity are considered. The existence of unique local classical solutions to the initial boundary value problem with Dirichlet or Navier-slip boundary condition is established when the initial data are arbitrarily large, contain vacuum and satisfy some initial layer compatibility condition. The initial density need not be bounded away from zero and may vanish in some open set. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we obtain the Franke-Jawerth embedding property of Hajlasz-Besov and Hajlasz-Triebel-Lizorkin spaces on a metric measure space (chi, d, mu) which is Ahlfors regular with dimension Q. 
As applications, we show that, when (chi, d, mu) is doubling and satisfies an Ahlfors lower bound condition with exponent Q, then the Hajlasz-Besov space N-p,q(s) (chi) with p is an element of (Q, infinity], s is an element of (Q/p, 1] and q is an element of (0, infinity], and the Hajlasz-Triebel-Lizorkin space M-p,q(s) (chi) with p is an element of (Q, infinity), s is an element of (Q/p, 1] and q is an element of (Q/(Q+s), infinity], are algebras under pointwise multiplication and, moreover, when chi is Ahlfors Q-regular, we characterize the class of all pointwise multipliers on the Hajlasz-Triebel-Lizorkin space M-p,q(s) (chi) for p is an element of (Q, infinity), s is an element of (Q/p, 1] and q is an element of (Q/(Q+s), infinity] by its related uniform space. (C) 2017 Elsevier Inc. All rights reserved. We study meromorphic extensions of distance and tube zeta functions, as well as of geometric zeta functions of fractal strings. The distance zeta function zeta(A) (s) := integral(A delta) d(x, A)(s-N) dx, where delta > 0 is fixed and d(x, A) denotes the Euclidean distance from x to A, has been introduced by the first author in 2009, extending the definition of the zeta function zeta(L) associated with bounded fractal strings L = (l(j))(j >= 1) to arbitrary bounded subsets A of the N-dimensional Euclidean space. The abscissa of Lebesgue (i.e., absolute) convergence D(zeta(A)) coincides with D :=(dim) over bar (B) A, the upper box (or Minkowski) dimension of A. The (visible) complex dimensions of A are the poles of the meromorphic continuation of the fractal zeta function (i.e., the distance or tube zeta function) of A to a suitable connected neighborhood of the "critical line" {Re s = D}. We establish several meromorphic extension results, assuming some suitable information about the second term of the asymptotic expansion of the tube function vertical bar A(t)vertical bar as t -> 0(+), where A(t) is the Euclidean t-neighborhood of A. 
We pay particular attention to a class of Minkowski measurable sets, such that vertical bar A(t)vertical bar = t(N-D) (M + O(t(gamma))) as t -> 0(+), with gamma > 0, and to a class of Minkowski nonmeasurable sets, such that vertical bar A(t)vertical bar = t(N-D) (G(log t(-1)) + O(t(gamma))) as t -> 0(+), where G is a nonconstant periodic function and gamma > 0. In both cases, we show that zeta(A) can be meromorphically extended (at least) to the open right half-plane {Re s > D-gamma} and determine the corresponding visible complex dimensions. Furthermore, up to a multiplicative constant, the residue of zeta(A) evaluated at s = D is shown to be equal to M (the Minkowski content of A) and to the mean value of G (the average Minkowski content of A), respectively. Moreover, we construct a class of fractal strings with principal complex dimensions of any prescribed order, as well as with an infinite number of essential singularities on the critical line {Re s = D}. Finally, using an appropriate quasiperiodic version of the above construction, with infinitely many suitably chosen quasiperiods associated with a two-parameter family of generalized Cantor sets, we construct "maximally-hyperfractal" compact subsets of R-N, for N >= 1 arbitrary. These are compact subsets of R-N such that the corresponding fractal zeta functions have nonremovable singularities at every point of the critical line {Re s = D}. (C) 2017 Elsevier Inc. All rights reserved. Given trigonometric monomials A(1), A(2), A(3), A(4), such that A(1), A(3) have the same signs as sin t, and A(2), A(4) the same signs as cos t, and natural numbers n, m > 1, we study the family of Abel equations x' = (a(1)A(1)(t) + a(2)A(2)(t))x(m) + (a(3)A(3)(t) + a(4)A(4)(t))x(n), a(1), a(2), a(3), a(4) is an element of R. The center variety is the set of values a(1), a(2), a(3), a(4) such that the Abel equation has a center (every bounded solution is periodic). We prove that the codimension of the center variety is one or two. 
Moreover, it is one if and only if A(1) = A(3) and A(2) = A(4), and it is two if and only if the family has non-trivial limit cycles (different from x(t) = 0) for some values of the parameters. (C) 2017 Elsevier Inc. All rights reserved. We prove that for every nonnegative, increasing and right continuous function g on [0, n] with g(n) = 1 and g(epsilon) = 0 for some epsilon is an element of (0, n) there exists a doubling probability measure mu on [0,1](n) such that the distribution G(mu) of the lower local dimension of mu is g. (C) 2017 Published by Elsevier Inc. We are concerned with the Dirichlet problem for a class of Hessian type equations. Applying some new methods we are able to establish the C-2 estimates for an approximating problem under essentially optimal structure conditions. Based on these estimates, the existence of classical solutions is proved. (C) 2017 Elsevier Inc. All rights reserved. This paper concerns the finite element approximation of Dirichlet boundary control problems governed by elliptic equations. Unlike the existing literature, in which the standard finite element method, the mixed finite element method or the Robin penalization method is used to deal with the underlying problems, we adopt an alternative penalization approach introduced by Nitsche, called weak boundary penalization. Compared with the above methods, our discrete scheme not only maintains consistency and avoids penalization error, but can also be analyzed and computed as conveniently as Neumann boundary control problems. Based on the weak boundary penalization method, we establish a finite element approximation to the Dirichlet boundary control problems and derive the a priori error estimates for the control, state and adjoint state. Numerical experiments are provided to confirm our theoretical results. (C) 2017 Elsevier Inc. All rights reserved. 
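To illustrate the weak boundary penalization idea mentioned in the preceding abstract, the following is a sketch of the classical symmetric Nitsche formulation for a model state equation -Delta y = f in Omega with Dirichlet control y = u on the boundary; the penalty parameter gamma and mesh size h are generic, and this is the standard Nitsche method rather than the authors' exact discrete scheme:

```latex
\text{Find } y_h \in V_h \text{ such that, for all } v_h \in V_h,\\
\int_{\Omega} \nabla y_h \cdot \nabla v_h \, dx
- \int_{\partial\Omega} (\partial_n y_h)\, v_h \, ds
- \int_{\partial\Omega} (\partial_n v_h)\, y_h \, ds
+ \frac{\gamma}{h} \int_{\partial\Omega} y_h\, v_h \, ds \\
= \int_{\Omega} f\, v_h \, dx
- \int_{\partial\Omega} (\partial_n v_h)\, u \, ds
+ \frac{\gamma}{h} \int_{\partial\Omega} u\, v_h \, ds .
```

The boundary terms impose y_h ≈ u weakly instead of building the Dirichlet datum into the trial space; since the control u enters only the right-hand side, the problem can be analyzed much like a Neumann boundary control problem, which is the advantage the abstract refers to.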
In this paper, we consider a class of boundary value problems for the p-Laplacian equation (Phi(p)(u'))' + h(t)f(t,u(t),u'(t)) = 0 with integral boundary conditions u(0) - alpha u'(0) = integral(1)(0) g1(s)u(s)ds, u(1) + beta u'(1) = integral(1)(0) g2(s)u(s)ds. By using the Avery-Peterson fixed point theorem, we obtain the existence of at least three positive solutions. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we determine the radius of uniform convexity for three kinds of normalized Bessel functions of the first kind. In the cases considered the normalized Bessel functions are uniformly convex on the determined disks. Moreover, necessary and sufficient conditions are given for the parameters of the three normalized functions such that they are uniformly convex in the open unit disk. The basic tool of this study is the series representation of Bessel functions. (C) 2017 Elsevier Inc. All rights reserved. We study the weak solvability of a linear parabolic problem with a summable right-hand side and non-smooth coefficients. We consider this problem as an operator equation in a suitable Banach space. Methods from semigroup theory, fractional powers of operators and integral operators are used to establish the solvability of this operator equation. (C) 2017 Elsevier Inc. All rights reserved. In this paper we extend the results of Brandolese and Schonbek [2] to the Boussinesq system with fractional dissipation. Let Lambda(alpha) and Lambda(beta) represent the fractional Laplacian dissipation in the velocity and the temperature equations, respectively. 
We show that if the initial data (u(0), theta(0)) is an element of L-sigma(2) x (L-1 boolean AND L-2), then vertical bar vertical bar theta(t)vertical bar vertical bar <= C(1 + t)(-d/2 beta) and vertical bar vertical bar u(t)vertical bar vertical bar <= C(1 + t)(max{0,1 - d/2 beta}) if beta not equal d/2, vertical bar vertical bar u(t)vertical bar vertical bar <= Cln(2 + t) if beta = d/2; if we additionally assume integral theta(0) = 0 and theta(0) is an element of L-1(1), then vertical bar vertical bar theta(t)vertical bar vertical bar <= C(1 + t)(-d+2/2 beta) and vertical bar vertical bar u(t)vertical bar vertical bar -> 0 as t -> infinity. Compared with [2], we do not need to assume that vertical bar vertical bar theta(0)vertical bar vertical bar(1) is sufficiently small when beta is an element of (0, d+1/2). (C) 2017 Elsevier Inc. All rights reserved. We suggest a somewhat new approach to the issue of Hyers-Ulam stability. Namely, let A, B be (real or complex) linear spaces, L : A -> B be a linear operator, N := ker L, and rho(A) and rho(B) be semigauges on A and B, respectively. We say that L is HU-stable with constant K >= 0 if for each x is an element of A such that rho(B) (Lx) <= 1 there exists z is an element of N with rho(A)(z - x) <= K. With that definition we obtain quite general outcomes concerning approximate solutions to some differential equations. (C) 2017 Published by Elsevier Inc. In this paper we study a general n-dimensional mixed-type differential equation with state dependence. Many known works establish the existence of solutions under the so-called Return Condition. In this paper, without requiring the Return Condition, we prove non-local existence of solutions by using a technique of compactification and the dependence of the maximum of functions on the size of their domains. We apply our result to differential equations with iterates. (C) 2017 Elsevier Inc. All rights reserved. 
Low oxygen concentration limits the therapeutic efficacy of photodynamic therapy in treating cancer cells in hypoxia, since the cytotoxic singlet oxygen (1O2) cannot be effectively generated under this condition. To overcome this, we load a photosensitizer into perfluorocarbon nanodroplets, which have a high oxygen capacity to enrich O2 for photodynamic consumption. Under well-controlled hypoxic conditions, we test its efficacy both in vitro and in vivo. This method can be successfully used for destroying cancer cells under hypoxic conditions. (C) 2017 Published by Elsevier Inc. Spiroplasma eriocheiris, the cause of crab trembling disease, is a wall-less bacterium, related to Mycoplasmas, measuring 2.0-10.0 mu m long. It features a helical cell shape and a unique swimming mechanism that does not use flagella; instead, it moves by switching the cell helicity at a kink traveling from the front to the tail. S. eriocheiris seems to use a novel chemotactic system that is based on the frequency of reversal swimming behaviors rather than the conventional two-component system, which is generally essential for bacterial chemotaxis. To identify the genes involved in these novel mechanisms, we developed a transformation system by using an oriC plasmid harboring the tetracycline resistance gene, tetM, which is under the control of a strong promoter for an abundant protein, elongation factor-Tu. The transformation efficiency achieved was 1.6 x 10(-5) colony forming units (CFU) per 1 mu g DNA, enabling the expression of the enhanced yellow fluorescent protein (EYFP). (C) 2017 The Authors. Published by Elsevier Inc. Dysregulation of mammalian target of rapamycin (mTOR) in hepatocellular carcinoma (HCC) represents a valuable treatment target. Recent studies have developed a highly selective and potent mTOR kinase inhibitor, CZ415. Here, we showed that nM concentrations of CZ415 efficiently inhibited survival and induced apoptosis in HCC cell lines (HepG2 and Huh-7) and primary-cultured human HCC cells. 
Meanwhile, CZ415 inhibited proliferation of HCC cells, more potently than mTORC1 inhibitors (rapamycin and RAD001). Yet CZ415 was non-cytotoxic to L02 human hepatocytes. Mechanistic studies showed that CZ415 disrupted assembly of mTOR complex 1 (mTORC1) and mTORC2 in HepG2 cells. Meanwhile, activation of mTORC1 (p-S6K1) and mTORC2 (p-AKT Ser-473) was almost blocked by CZ415. In vivo studies revealed that oral administration of CZ415 significantly suppressed HepG2 xenograft tumor growth in severe combined immunodeficient (SCID) mice. Activation of mTORC1/2 was also largely inhibited in CZ415-treated HepG2 tumor tissue. Together, these results show that CZ415 blocks mTORC1/2 activation and efficiently inhibits HCC cell growth in vitro and in vivo. (C) 2017 Elsevier Inc. All rights reserved. Responding to pro-metastatic cues such as low oxygen tension, cancer cells develop several different strategies to facilitate migration and invasion. During this process, expression levels of matrix metalloproteinases (MMPs) are up-regulated so that cancer cells can more easily enter or exit the circulation. In this report we show that message levels of the transcriptional modulator MKL1 were elevated in malignant forms of ovarian cancer tissues in humans when compared to more benign forms, accompanied by a similar change in MMP2 expression. MKL1 silencing blocked hypoxia-induced migration and invasion of ovarian cancer cells (SKOV-3) in vitro. Over-expression of MKL1 activated, while MKL1 depletion repressed, MMP2 transcription in SKOV-3 cells. MKL1 was recruited to the MMP2 promoter by NF-kappa B in response to hypoxia. Mechanistically, MKL1 recruited a histone methyltransferase, SET1, and a chromatin remodeling protein, BRG1, and coordinated their interaction to alter the chromatin structure surrounding the MMP2 promoter, leading to transcriptional activation. Both BRG1 and SET1 were essential for hypoxia-induced MMP2 trans-activation. 
Finally, expression levels of SET1 and BRG1 were positively correlated with ovarian cancer malignancies in humans. Together, our data suggest that MKL1 promotes ovarian cancer cell migration and invasion by epigenetically activating MMP2 transcription. (C) 2017 Elsevier Inc. All rights reserved. The early evolution of angiosperms was marked by a number of innovations of the reproductive cycle including an accelerated fertilization process involving faster transport of sperm to the egg via a pollen tube. Fast pollen tube growth rates in angiosperms are accompanied by a hard shank-soft tip pollen tube morphology. A critical actor in that morphology is the wall-embedded enzyme pectin methylesterase (PME), which in type II PMEs is accompanied by a co-transcribed inhibitor, PMEI. PMEs convert the esterified pectic tip wall to a stiffer state in the subapical flank by pectin de-esterification. It is hypothesized that rapid and precise targeting of PME activity was gained with the origin of type II genes, which are derived and have only expanded since the origin of vascular plants. Pollen-active PMEs have yet to be reported in early-divergent angiosperms or gymnosperms. Gene expression studies in Nymphaea odorata found transcripts from four type II VGD1-like and 16 type I AtPPME1-like homologs that were more abundant in pollen and pollen tubes than in vegetative tissues. The near full-length coding sequence of one type II PME (NoPMEII-1) included at least one PMEI domain. The identification of possible VGD1 homologs in an early-diverging angiosperm suggests that the refined control of PMEs that mediate de-esterification of pectins near pollen tube tips is a conserved feature across angiosperms. The recruitment of type II PMEs into a pollen tube elongation role in angiosperms may represent a key evolutionary step in the development of faster growing pollen tubes. (C) 2017 Elsevier Inc. All rights reserved. 
Several studies have implicated estrogen and the estrogen receptor (ER) in the pathogenesis of benign prostatic hyperplasia (BPH); however, the mechanism underlying this effect remains elusive. In the present study, we demonstrated that estrogen (17 beta-estradiol, or E2)-induced activation of the G protein-coupled receptor 30 (GPR30) triggered Ca2+ release from the endoplasmic reticulum, increased the mitochondrial Ca2+ concentration, and thus induced prostate epithelial cell (PEC) apoptosis. Both E2 and the GPR30-specific agonist G1 induced a transient intracellular Ca2+ release in PECs via the phospholipase C (PLC)-inositol 1,4,5-trisphosphate (IP3) pathway, and this was abolished by treatment with the GPR30 antagonist G15. The release of cytochrome c and activation of caspase-3 in response to GPR30 activation were observed. Data generated from the analysis of animal models and human clinical samples indicate that treatment with the GPR30 agonist relieves testosterone propionate (TP)-induced prostatic epithelial hyperplasia, and that the abundance of GPR30 is negatively associated with prostate volume. On the basis of these results, we propose a novel regulatory mechanism whereby estrogen induces the apoptosis of PECs via GPR30 activation. Inhibition of this activation is predicted to lead to abnormal PEC accumulation, and to thereby contribute to BPH pathogenesis. (C) 2017 Elsevier Inc. All rights reserved. Laminins are major cell-adhesive proteins of basement membranes that interact with integrins in a divalent cation-dependent manner. Laminin-511 consists of alpha 5, beta 1, and gamma 1 chains, of which three laminin globular domains of the alpha 5 chain (alpha 5/LG1-3) and a Glu residue in the C-terminal tail of the gamma 1 chain (gamma 1-Glu1607) are required for binding to integrins. 
However, it remains unsettled whether the Glu residue in the gamma 1 tail is involved in integrin binding by coordinating the metal ion in the metal ion-dependent adhesion site of beta 1 integrin (beta 1-MIDAS), or by stabilizing the conformation of alpha 5/LG1-3. To address this issue, we examined whether alpha 5/LG1-3 contain an acidic residue required for integrin binding that is as critical as the Glu residue in the gamma 1 tail; to achieve this, we undertook exhaustive alanine substitutions of the 54 acidic residues present in alpha 5/LG1-3 of the E8 fragment of laminin-511 (LM511E8). Most of the alanine mutants possessed alpha 6 beta 1 integrin binding activities comparable with wild-type LM511E8. Alanine substitution for alpha 5-Asp3198 and Asp3219 caused mild reduction in integrin binding activity, and that for alpha 5-Asp3218 caused severe reduction, possibly resulting from conformational perturbation of alpha 5/LG1-3. When alpha 5-Asp3218 was substituted with asparagine, the resulting mutant possessed significant binding activity to alpha 6 beta 1 integrin, indicating that alpha 5-Asp3218 is not directly involved in integrin binding through coordination with the metal ion in beta 1-MIDAS. Given that substitution of gamma 1-Glu1607 with glutamine nullified the binding activity to alpha 6 beta 1 integrin, these results, taken together, support the possibility that the critical acidic residue coordinating the metal ion in beta 1-MIDAS is Glu1607 in the gamma 1 tail, and that no such residue is present in alpha 5/LG1-3. (C) 2017 Elsevier Inc. All rights reserved. We demonstrated that ETV4 is a transcriptional activator of the NANOG gene in human embryonic carcinoma NCCIT cells. The endogenous expression of NANOG and ETV4 in naive cells was significantly down-regulated upon differentiation and by shRNA-mediated knockdown of ETV4. NANOG transcription was significantly upregulated by ETV4 overexpression. 
A putative ETS binding site (EBS) is present in the region (-285 to -138) of the proximal promoter. Site-directed mutagenesis of the putative EBS ((-196)AGGATT(-191)) abolished NANOG promoter activity, and ETV4 interacted with this putative EBS both in vivo and in vitro. Our data provide the molecular details of ETV4-mediated NANOG gene expression. (C) 2017 Elsevier Inc. All rights reserved. Nitrogen (N) plays important roles as both a macronutrient and a signal in plant growth and development. However, our understanding of N signaling and/or response mechanisms in plants is still limited. Here, we show that the mitogen-activated protein kinase kinase 9 (MKK9) is involved in plant N responses in Arabidopsis by regulating the production of anthocyanins and the capacity for N acquisition under low N conditions. Transgenic plants that express a constitutively active version of MKK9 (MKK9DD) showed decreased accumulation of anthocyanins and reduced expression of key anthocyanin biosynthetic genes under low N conditions compared to plants expressing the inactive form of MKK9 (MKK9KR). The decreased anthocyanin accumulation could be due to the increased N level in the MKK9DD plants, as these plants were shown to accumulate more N and have higher expression of N acquisition-related genes under low N conditions as compared with the MKK9KR plants. Taken together, our results suggest that MKK9 plays a role in plant adaptation to low N stress by modulating both anthocyanin accumulation and N status. (C) 2017 Elsevier Inc. All rights reserved. The VWA8 gene was first identified by the Kazusa cDNA project and named KIAA0564. Based on the observation, by similarity, that the protein encoded by KIAA0564 contains a Von Willebrand Factor 8 domain, KIAA0564 was named Von Willebrand Domain-containing Protein 8 (VWA8). The function of VWA8 protein is largely unknown. The purpose of this study was to characterize the tissue distribution, cellular location, and function of VWA8. 
In mice, VWA8 protein was mostly distributed in liver, kidney, heart, pancreas and skeletal muscle, and was present as a long isoform and a shorter splice variant (VWA8a and VWA8b). VWA8 protein and mRNA were elevated in mouse liver in response to high-fat feeding. Sequence analysis suggests that VWA8 has a mitochondrial targeting sequence and domains responsible for ATPase activity. VWA8 protein was targeted exclusively to mitochondria in mouse AML12 liver cells, and this was prevented by deletion of the targeting sequence. Moreover, the VWA8 short isoform overexpressed in insect cells using a baculovirus construct had in vitro ATPase activity. Deletion of either the Walker A motif or the Walker B motif in VWA8 mostly blocked ATPase activity, suggesting that both motifs are essential to the ATPase activity of VWA8. Finally, homology modeling suggested that VWA8 may have a structure most similar to dynein motor proteins. (C) 2017 The Authors. Published by Elsevier Inc. Increasing evidence has shown that normal high density lipoprotein (HDL) can convert to dysfunctional HDL in disease states including coronary artery disease (CAD), and that these forms regulate vascular endothelial cell function differently. Long non-coding RNAs (lncRNAs) play an extensive role in various important biological processes, including endothelial cell function. However, whether lncRNAs are involved in the regulation of HDL metabolism and HDL-induced changes in vascular endothelial function remains unclear. Cultured human umbilical vein endothelial cells (HUVECs) were treated with HDL from healthy subjects and from patients with CAD and hypercholesterolemia for 24 h; the cells were then collected for lncRNA-Seq, and the expression of lncRNAs, genes and mRNAs was identified. Bioinformatic analysis was used to evaluate the relationships among lncRNAs, encoding genes and miRNAs. 
HDL from healthy subjects and from patients with CAD and hypercholesterolemia led to different expression of lncRNAs, genes and mRNAs, and further analysis suggested that the differentially expressed lncRNAs played an important role in the regulation of vascular endothelial function. Thus, HDL from patients with CAD and hypercholesterolemia could cause abnormal expression of lncRNAs in vascular endothelial cells and thereby affect vascular function. (C) 2017 Elsevier Inc. All rights reserved. Ethyl pyruvate (EP) is a stable lipophilic pyruvate derivative. Studies have demonstrated that EP shows potent anti-oxidant, anti-inflammatory and anti-coagulant effects. Inflammation and coagulation interact closely with platelet activation. However, it is unclear whether EP has anti-platelet effects. Therefore, in this study we investigated the anti-platelet effects of EP in vitro. We found that EP inhibited agonist-induced platelet aggregation, ATP release and adhesion to collagen. Flow cytometric analysis revealed that EP inhibited agonist-induced platelet PAC-1 binding, as well as P-selectin and CD40L expression. The underlying mechanism of action may involve the inhibition of platelet PI3K/Akt and Protein Kinase C (PKC) signaling pathways. Additionally, EP dose-dependently inhibited platelet PS exposure induced by a high concentration of thrombin. A lactate dehydrogenase (LDH) activity assay and mouse platelet counts implied that EP may have no toxic effect on platelets. Therefore, we are the first to report that EP has potent anti-platelet activity and attenuates platelet PS exposure in vitro, suggesting that the inhibitory effects of EP on platelets may also play important roles in improving inflammation and coagulation disorders in related animal models. (C) 2017 Elsevier Inc. All rights reserved. Lung cancer is the leading cause of cancer death worldwide. Small-cell lung cancer (SCLC) is an aggressive type of lung cancer that shows an overall 5-year survival rate below 10%. 
Although chemotherapy using cisplatin has proven effective in SCLC treatment, conventional doses of cisplatin cause adverse side effects. Photodynamic therapy (PDT), a form of non-ionizing radiation therapy, is increasingly used alone or in combination with other therapeutics in cancer treatment. Herein, we aimed to address whether low-dose cisplatin in combination with PDT can effectively induce SCLC cell death, using in vitro cultured human SCLC NCI-H446 cells and an in vivo tumor xenograft model. We found that both cisplatin and PDT showed dose-dependent cytotoxic effects in NCI-H446 cells. Importantly, co-treatment with low-dose cisplatin (1 mu M) and PDT (1.25 J/cm(2)) synergistically inhibited cell viability and cell migration. We further showed that the combined therapy induced a higher level of intracellular ROS in cultured NCI-H446 cells. Moreover, the synergistic effect of cisplatin and PDT was recapitulated in tumor xenografts, as revealed by a more robust increase in TUNEL staining (a marker of cell death) and a decrease in tumor volume. Taken together, our findings suggest that low-dose cisplatin in combination with PDT can be an effective therapeutic modality in the treatment of SCLC patients. (C) 2017 Elsevier Inc. All rights reserved. Gene transcription is a central process in biology, traditionally measured by RT-PCR, microarray, or, more recently, RNA sequencing. However, these measurements only provide a snapshot of the state of gene transcription and only represent an overall readout of the complex transcriptional networks that regulate gene expression. In this report, we describe a novel strategy to dissect endogenous gene transcription regulation in live cells by knocking in a reporter gene, EGFP, under the control of the endogenous gene promoter, using the ARID1A gene as an example. The ARID1A gene, encoding a subunit of the ATP-dependent chromatin remodeling complex SNF/SWI, has recently been identified as a tumor suppressor in multiple cancers. 
Despite studies that elucidate the mechanism of ARID1A's tumor suppressor function, little is known of the genes/events that regulate ARID1A expression. Using HEK293 cells as a model, we discovered novel aspects of ARID1A transcription regulation in response to cell cycle progression, DNA damage, and microRNAs, exemplifying the potential of our strategy in providing new insight into the mechanisms of gene transcription regulation. This strategy can be generalized to essentially any gene of interest, making it a powerful tool for the study of gene expression heterogeneity, especially in cancer cells, and a robust readout for high-throughput screening of agents that modulate gene transcription. (C) 2017 Elsevier Inc. All rights reserved. Globally, colorectal cancer (CRC) is a common cause of cancer-related deaths. The high mortality rate of patients with colon cancer is due to cancer cell invasion and metastasis. Initiation of the epithelial-to-mesenchymal transition (EMT) is essential for tumorigenesis. Peroxiredoxins (PRX1-6) have been reported to be overexpressed in various tumor tissues and implicated in tumor progression. However, the exact role of PRX5 in colon cancer, particularly in enhancing proliferation and promoting EMT properties, remains to be investigated. In this study, we constructed CRC cells stably overexpressing PRX5 and CRC cells with suppressed PRX5 expression. Our results revealed that PRX5 overexpression significantly enhanced CRC cell proliferation, migration, and invasion. On the other hand, PRX5 suppression markedly inhibited these EMT properties. PRX5 was also demonstrated to regulate the expression of two hallmark EMT proteins, E-cadherin and Vimentin, and the EMT-inducing transcription factors Snail and Slug. Moreover, a xenograft mouse model showed that PRX5 overexpression enhances tumor growth of CRC cells. 
Thus, our findings provide the first evidence in CRC that PRX5 promotes EMT properties by inducing the expression of EMT-inducing transcription factors. Therefore, PRX5 may be used as a predictive biomarker and serve as a putative therapeutic target for the development of clinical treatments for human CRC. (C) 2017 Elsevier Inc. All rights reserved. Abdominal aortic aneurysm (AAA) is relatively common in elderly patients with atherosclerosis. MURC (muscle-restricted coiled-coil protein)/Cavin-4, which modulates the caveolae function of muscle cells, is expressed in cardiomyocytes, skeletal muscle cells and smooth muscle cells. Here, we show a novel functional role of MURC/Cavin-4 in vascular smooth muscle cells (VSMCs) and AAA development. Both wild-type (WT) and MURC/Cavin-4 knockout (MURC-/-) mice subjected to periaortic application of CaCl2 developed AAAs. Six weeks after CaCl2 treatment, internal and external aortic diameters were significantly increased in MURC-/- AAAs compared with WT AAAs, which was accompanied by advanced fibrosis in the tunica media of MURC-/- AAAs. The activities of JNK and matrix metalloproteinases (MMP) -2 and -9 were increased in MURC-/- AAAs compared with WT AAAs at 5 days after CaCl2 treatment. At 6 weeks after CaCl2 treatment, MURC-/- AAAs exhibited attenuated JNK activity compared with WT AAAs. There was no difference in the activity of MMP-2 or -9 between saline and CaCl2 treatments. In MURC/Cavin-4-knockdown VSMCs, TNF alpha-induced activity of JNK and MMP-9 was enhanced compared with control VSMCs. Furthermore, WT, apolipoprotein E-/- (ApoE(-/-)), and MURC/Cavin-4 and ApoE double-knockout (MURC(-/-)ApoE(-/-)) mice were subjected to angiotensin II (Ang II) infusion. In both ApoE(-/-) and MURC(-/-)ApoE(-/-) mice infused with Ang II for 4 weeks, AAAs were promoted. The internal aortic diameter was significantly increased in Ang II-infused MURC(-/-)ApoE(-/-) mice compared with Ang II-infused ApoE(-/-) mice. 
In MURC/Cavin-4-knockdown VSMCs, Ang II-induced activity of JNK and MMP-9 was enhanced compared with control VSMCs. Our results suggest that MURC/Cavin-4 in VSMCs modulates AAA progression at the early stage via the activation of JNK and MMP-9. MURC/Cavin-4 is a potential therapeutic target against AAA progression. (C) 2017 Elsevier Inc. All rights reserved. In Saccharomyces cerevisiae, the second messenger cyclic adenosine monophosphate (cAMP) and protein kinase A (PKA) play a central role in metabolism regulation, stress resistance and cell cycle progression. To monitor cAMP levels and PKA activity in vivo in single S. cerevisiae cells, we expressed an Epac-based FRET probe and a FRET-based A-kinase activity reporter, which have proven to be useful live-cell biosensors for cAMP levels and PKA activity in mammalian cells. Regarding detection of cAMP in single yeast cells, we show that in wild-type strains the CFP/YFP fluorescence ratio increased immediately after glucose addition to derepressed cells, while no changes were observed when glucose was added to a strain that is not able to produce cAMP. In addition, we found evidence of damped oscillations in cAMP levels, at least in the SP1 strain. Regarding detection of PKA activity, we show that in wild-type strains the FRET increased after glucose addition to derepressed cells, while no changes were observed when glucose was added to either a strain that is not able to produce cAMP or a strain lacking PKA activity. Taken together, these probes are useful for following activation of the cAMP/PKA pathway in single yeast cells and over extended periods (up to one hour). (C) 2017 Elsevier Inc. All rights reserved. Nuclear receptor coactivator 6 (NCOA6) is a transcriptional coactivator and is crucial for insulin secretion and glucose metabolism in pancreatic beta-cells. However, the regulatory mechanism of beta-cell function by NCOA6 is largely unknown. 
In this study, we found that the transcript levels of nicotinamide phosphoribosyltransferase (Nampt) were decreased in islets of NCOA6(+/-) mice compared with NCOA6(+/+) mice. Moreover, NCOA6 overexpression increased the levels of Nampt transcripts in the mouse pancreatic beta-cell line NIT-1. Promoter analyses showed that transcriptional activity of the Nampt promoter was stimulated by the cooperation of sterol regulatory element binding protein-1c (SREBP-1c) and NCOA6. Additional studies using mutant promoters demonstrated that SREBP-1c activates the Nampt promoter through the sterol regulatory element (SRE), but not through the E-box. Using a chromatin immunoprecipitation assay, NCOA6 was also shown to be directly recruited to the SRE region of the Nampt promoter. Furthermore, treatment with nicotinamide mononucleotide (NMN), a product of the Nampt reaction and a key NAD+ intermediate, improves glucose-stimulated insulin secretion from NCOA6(+/-) islets. These results suggest that NCOA6 stimulates insulin secretion, at least partially, by modulating Nampt expression in pancreatic beta-cells. (C) 2017 Elsevier Inc. All rights reserved. Diabetes during pregnancy is associated with abnormal placental mitochondrial function and increased oxidative stress, which affect fetal development and offspring long-term health. Peroxisome proliferator-activated receptor-gamma coactivator-1 alpha (PGC-1 alpha) is a master regulator of mitochondrial biogenesis and energy metabolism. The molecular mechanisms underlying the regulation of PGC-1 alpha in the placenta in the context of diabetes remain unclear. The present study examined the role of microRNA-130b (miR-130b-3p) in regulating PGC-1 alpha expression and oxidative stress in a placental trophoblastic cell line (BeWo). Prolonged exposure of BeWo cells to high glucose mimicking hyperglycemia resulted in decreased protein abundance of PGC-1 alpha and its downstream factor, mitochondrial transcription factor A (TFAM). 
High glucose treatment increased the expression of miR-130b-3p in BeWo cells, as well as exosomal secretion of miR-130b-3p. Transfection of BeWo cells with a miR-130b-3p mimic reduced the abundance of PGC-1 alpha, whereas inhibition of miR-130b-3p increased PGC-1 alpha expression in response to high glucose, suggesting a role for miR-130b-3p in mediating high glucose-induced down-regulation of PGC-1 alpha expression. In addition, a miR-130b-3p antisense inhibitor increased TFAM expression and reduced 4-hydroxynonenal (4-HNE)-induced production of reactive oxygen species (ROS). Taken together, these findings reveal that miR-130b-3p down-regulates PGC-1 alpha expression in placental trophoblasts, and inhibition of miR-130b-3p appears to improve mitochondrial biogenesis signaling and protect placental trophoblast cells from oxidative stress. (C) 2017 Elsevier Inc. All rights reserved. EGFR-mutant lung adenocarcinomas contain a subpopulation of cells that have undergone epithelial-to-mesenchymal transition and can grow independently of EGFR. To kill these cancer cells, we need a novel therapeutic approach other than EGFR inhibitors. If a molecule is specifically expressed on the cell surface of such EGFR-independent EGFR-mutant cancer cells, it can be a therapeutic target. We found that a mesenchymal EGFR-independent subline derived from HCC827 cells, an EGFR-mutant lung adenocarcinoma cell line, expressed angiotensin-converting enzyme 2 (ACE2) to a greater extent than its parental cells. ACE2 was also expressed at least partially in most of the primary EGFR-mutant lung adenocarcinomas examined, and the ACE2 expression level in the cancer cells was much higher than that in normal lung epithelial cells. In addition, we developed an anti-ACE2 mouse monoclonal antibody (mAb), termed H8R64, that was internalized by ACE2-expressing cells. 
If an antibody-drug conjugate consisting of a humanized mAb based on H8R64 and a potent anticancer drug were produced, it could be effective for the treatment of EGFR-mutant lung adenocarcinomas. (C) 2017 Elsevier Inc. All rights reserved. NKG2D, an activating receptor expressed on CD8(+) T lymphocytes, serves as a co-stimulation molecule by engagement with its ligands MICA/B and ULBPs to trigger immune activation against tumors. Currently, the biological function and clinical significance of NKG2D in gastric cancer remain unexplored. This study aims to investigate the expression of NKG2D in gastric cancer in association with clinical prognosis and its biological function. Real-time PCR was used to analyze NKG2D expression in paired cancer and adjacent non-malignant tissues from 139 gastric cancer patients treated between 2007 and 2010 at the Shanghai Cancer Center. NKG2D expression showed no association with any clinical characteristic parameters. A high NKG2D level was significantly associated with better outcome (P = 0.018 for overall survival (OS), P = 0.041 for disease-free survival (DFS)). In a univariate Cox regression model, high NKG2D mRNA was associated with a 43% risk reduction in gastric cancer patients (HR = 0.57, CI 0.36-0.91, P = 0.019). A high NKG2D level displayed a significant association with longer OS in the multivariate analysis (HR = 0.59, CI 0.363-0.96, P = 0.034), independent of other prognostic factors including Lauren classification, neural infiltration, vascular/lymphatic invasion, and TNM stage. Upon co-incubation with cancer cells, NKG2D expression in CD8(+) T cells was markedly down-regulated. Functional studies suggested that blocking either NKG2D or its ligand ULBP-2 could suppress the tumor-killing activity of CD8(+) T cells. Our data showed that the NKG2D receptor could be an independent favorable prognostic indicator for gastric cancer. Furthermore, decreased NKG2D expression might be a mechanism underlying immune evasion by tumors in gastric cancer. (C) 2017 Elsevier Inc. All rights reserved. 
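As a quick arithmetic check (not part of the original study), the reported 43% risk reduction follows directly from the univariate hazard ratio, since for HR < 1 the percent risk reduction is (1 - HR) x 100. A minimal sketch:

```python
# Hypothetical sanity check of the reported effect size: a hazard ratio (HR)
# below 1 corresponds to a risk reduction of (1 - HR) x 100 percent.
# HR = 0.57 is taken from the univariate Cox model reported in the abstract.
def risk_reduction_pct(hazard_ratio: float) -> float:
    """Convert a hazard ratio into a percent risk reduction."""
    return (1.0 - hazard_ratio) * 100.0

print(round(risk_reduction_pct(0.57)))  # 43, matching the reported 43% reduction
```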
Objective: Estrogen receptor alpha 36 (ER-alpha 36), a truncated variant of ER-alpha, is different from the other nuclear receptors of the ER-alpha family. Previous findings indicate that ER-alpha 36 might be involved in cell growth, proliferation, and differentiation in carcinomas and primarily mediates non-genomic estrogen signaling. However, studies on ER-alpha 36 and cervical cancer are rare. This study aimed to determine the expression of ER-alpha 36 in cervical cancer; the role of ER-alpha 36 in 17-beta-estradiol (E2)-induced invasion, migration and proliferation of cervical cancer; and the probable molecular mechanisms involved. Methods: Immunohistochemistry and immunofluorescence were used to determine the localization of ER-alpha 36 in cervical cancer tissues and cervical cell lines. CaSki and HeLa cell lines were transfected with lentiviruses to establish stable cell lines with knockdown and overexpression of ER-alpha 36. Wound healing assay, transwell invasion assay, and EdU incorporation proliferation assay were performed to evaluate migration, invasion, and proliferation ability. The phosphorylation levels of mitogen-activated protein kinase/extracellular signal-regulated kinase (MAPK/ERK) signaling molecules were examined by western blot analysis. Results: ER-alpha 36 expression was detected in both cervical cell lines and cervical cancer tissues. Down-regulation of ER-alpha 36 significantly inhibited cell invasion, migration, and proliferation. Moreover, up-regulation of ER-alpha 36 increased the invasion, migration, and proliferation ability of CaSki and HeLa cell lines. ER-alpha 36 mediates estrogen-stimulated MAPK/ERK activation. Conclusion: ER-alpha 36 is localized on the plasma membrane and in the cytoplasm in both cervical cancer tissues and cell lines. ER-alpha 36 mediates estrogen-stimulated MAPK/ERK activation and regulates migration, invasion, and proliferation in cervical cancer cells. (C) 2017 Elsevier Inc. All rights reserved. 
Mechanisms controlling endoplasmic reticulum (ER) Ca2+ homeostasis are important regulators of the resting cytoplasmic Ca2+ concentration ([Ca2+](cyto)) and receptor-mediated Ca2+ signalling. Here we investigate the channels responsible for ER Ca2+ leak in THP-1 macrophage and human primary macrophage. In the absence of extracellular Ca2+, we employ ionomycin action at the plasma membrane to stimulate ER Ca2+ leak. Under these conditions, ionomycin elevates [Ca2+](cyto), revealing a Ca2+ leak response which is abolished by thapsigargin. Inhibition of IP3 receptors (Xestospongin C, 2-APB), ryanodine receptors (dantrolene), and the translocon (anisomycin) attenuated ER Ca2+ leak in model macrophage, with translocon inhibition also reducing resting [Ca2+](cyto). In primary macrophage, translocon inhibition blocks Ca2+ leak but does not influence resting [Ca2+](cyto). We identify a role for translocon-mediated ER Ca2+ leak in receptor-mediated Ca2+ signalling in both model and primary human macrophage, whereby the Ca2+ response to ADP (a P2Y receptor agonist) is augmented following anisomycin treatment. In conclusion, we demonstrate a role of ER Ca2+ leak via the translocon in controlling resting cytoplasmic Ca2+ in model macrophage and receptor-mediated Ca2+ signalling in model macrophage and primary macrophage. (C) 2017 Elsevier Inc. All rights reserved. The meiotic G2/M1 transition is mostly regulated by posttranslational modifications; however, the crosstalk between different posttranslational modifications is not well understood, especially in spermatocytes. Sumoylation has emerged as a critical regulatory event in several developmental processes, including reproduction. In mouse oocytes, inhibition of sumoylation caused various meiotic defects and led to aneuploidy. However, the role of sumoylation in male reproduction has only begun to be elucidated. 
Given the important role of several SUMO targets (including kinases) in meiosis, in this study the role of sumoylation was addressed by monitoring the G2/M1 transition in pachytene spermatocytes in vitro upon inhibition of sumoylation. Furthermore, to better understand the crosstalk between sumoylation and phosphorylation, the activity of several kinases implicated in meiotic progression was also assessed upon down-regulation of sumoylation. The results of the analysis demonstrate that inhibition of sumoylation with ginkgolic acid (GA) arrests the G2/M1 transition in mouse spermatocytes, preventing chromosome condensation and disassembly of the synaptonemal complex. Our results revealed that the activity of PLK1 and the Aurora kinases increased during the G2/M1 meiotic transition, but was negatively regulated by the inhibition of sumoylation. In the same experiment, the activity of c-Abl, the ERKs, and AKT was not affected or was increased after GA treatment. Both the AURKs and PLK1 appear to be "at the right place, at the right time" to, at least in part, explain the meiotic arrest obtained in the spermatocyte culture. (C) 2017 Elsevier Inc. All rights reserved. Metallothionein (MT) protein families are a class of small, universal proteins rich in cysteine residues. They are synthesized in response to heavy metal stress to sequester the toxic ions through metal-thiolate bridges. Five MT family members, namely MtnA, MtnB, MtnC, MtnD and MtnE, have been discovered and identified in Drosophila. These five MT isoforms are regulated by the metal-responsive transcription factor dMTF-1 and play differentiated but overlapping roles in the detoxification of metal ions. Previous studies have shown that Drosophila MtnB responds to copper (Cu), cadmium (Cd) and zinc (Zn). Interestingly, in this study we found that Drosophila MtnB expression also responds to elevated iron levels in the diet. 
Further investigations revealed that MtnB plays limited roles in iron detoxification, and the direct binding of MtnB to ferrous iron in vitro is also weak. The induction of MtnB by iron turns out to be mediated by iron interference with other metals, because EDTA at even a fraction of the iron concentration can suppress this induction. Indeed, in the presence of iron, zinc homeostasis is altered, as reflected by expression changes of the zinc transporters dZIP1 and dZnT1. Thus, iron-mediated MtnB induction appears to result from disrupted homeostasis of other metals such as zinc, which in turn induces MtnB expression. Metal-metal interactions may be more widespread than expected. (C) 2017 Elsevier Inc. All rights reserved. Aging of cardiac stem/progenitor cells (CSCs) impairs heart regeneration and leads to unsatisfactory outcomes of cell-based therapies. As the precise mechanisms underlying CSC aging remain unclear, the development of therapeutic strategies for elderly patients with heart failure is severely delayed. In this study, we used human cardiosphere-derived cells (CDCs), a subtype of CSC found in the postnatal heart, to identify secreted factor(s) associated with CSC aging. Human CDCs were isolated from heart failure patients of various ages (2-83 years old). Gene expression of key soluble factors was compared between CDCs derived from young and elderly patients. Among these factors, SFRP1, a gene encoding a Wnt antagonist, was significantly up-regulated in CDCs from elderly patients (>= 65 years old). sFRP1 levels were also significantly increased in CDCs whose senescent phenotype was induced by anti-cancer drug treatment. These results suggest the involvement of sFRP1 in CSC aging. We show that the administration of recombinant sFRP1 induced cellular senescence in CDCs derived from young patients, as indicated by increased levels of markers such as p16 and a senescence-associated secretory phenotype. 
In addition, co-administration of recombinant sFRP1 could abrogate the accelerated CDC proliferation induced by Wnt3A. Taken together, our results suggest that canonical Wnt signaling and its antagonist, sFRP1, regulate the proliferation of human CSCs. Furthermore, excess sFRP1 in elderly patients causes CSC aging. (C) 2017 Elsevier Inc. All rights reserved. Activation of AMP-activated protein kinase (AMPK) can efficiently protect osteoblasts from dexamethasone (Dex). Here, we aimed to induce AMPK activation through miRNA-mediated downregulation of its phosphatase, protein phosphatase 2A (PP2A). We discovered that microRNA-429 ("miR-429") targets the catalytic subunit of PP2A (PP2A-c). Significantly, expression of miR-429 downregulated PP2A-c and activated AMPK (p-AMPK alpha 1 Thr172) in human osteoblastic cells (OB-6 and hFOB1.19 lines). Remarkably, miR-429 expression alleviated Dex-induced osteoblastic cell death and apoptosis. On the other hand, miR-429-induced AMPK activation and osteoblast cytoprotection were almost abolished when AMPK alpha 1 was either silenced (by targeted shRNA) or mutated (T172A inactivation). Further studies showed that miR-429 expression in osteoblastic cells increased NADPH (nicotinamide adenine dinucleotide phosphate) content to significantly inhibit Dex-induced oxidative stress. This effect of miR-429 was again abolished by AMPK alpha 1 silencing or mutation. Together, we propose that silencing of PP2A-c by miR-429 activates AMPK and protects osteoblastic cells from Dex. (C) 2017 Elsevier Inc. All rights reserved. Hyperoxia contributes to the development of bronchopulmonary dysplasia (BPD), a chronic lung disease of human infants that is characterized by disrupted lung angiogenesis. Adrenomedullin (AM) is a multifunctional peptide with angiogenic and vasoprotective properties. AM signals via its cognate receptors, calcitonin receptor-like receptor (Calcrl) and receptor activity-modifying protein 2 (RAMP2). 
Whether hyperoxia affects the pulmonary AM signaling pathway in neonatal mice and whether AM promotes lung angiogenesis in human infants are unknown. Therefore, we tested the following hypotheses: (1) hyperoxia exposure will disrupt AM signaling during the lung development period in neonatal mice; and (2) AM will promote angiogenesis in fetal human pulmonary artery endothelial cells (HPAECs) via extracellular signal-regulated kinase (ERK) 1/2 activation. We initially determined AM, Calcrl, and RAMP2 mRNA levels in mouse lungs on postnatal days (PND) 3, 7, 14, and 28. Next, we determined the mRNA expression of these genes in neonatal mice exposed to hyperoxia (70% O-2) for up to 14 d. Finally, using HPAECs, we evaluated whether AM activates ERK1/2 and promotes tubule formation and cell migration. Lung AM, Calcrl, and RAMP2 mRNA expression increased from PND 3 and peaked at PND 14, a time period during which lung development occurs in mice. Interestingly, hyperoxia exposure blunted this peak expression in neonatal mice. In HPAECs, AM activated ERK1/2 and promoted tubule formation and cell migration. These findings support our hypotheses, emphasizing that the AM signaling axis is a potential therapeutic target for human infants with BPD. (C) 2017 Elsevier Inc. All rights reserved. Mitochondrial Ca2+ overload has long been recognized as a cell death trigger. Unexpectedly, we demonstrated a signaling complex composed of calmodulin (CaM), Arabidopsis thaliana Bcl-2-associated athanogene 5 (AtBAG5), and heat-shock cognate 70 protein (Hsc70) within Arabidopsis thaliana mitochondria that transduces mitochondrial Ca2+ elevations to suppress leaf senescence. Gain- and loss-of-function AtBAG5 mutant plants revealed that mitochondrial Ca2+ elevation significantly increases chlorophyll retention and decreases H2O2 levels in a dark-induced leaf senescence assay. 
Based on our findings, we propose a molecular mechanism in which chronic mitochondrial Ca2+ elevation reduces ROS levels and thus inhibits leaf senescence. (C) 2017 Elsevier Inc. All rights reserved. Chondroitin sulfate (CS) is a class of sulfated glycosaminoglycan (GAG) chains that consist of repeating disaccharide units composed of glucuronic acid (GlcA) and N-acetylgalactosamine (GalNAc). CS chains are found throughout the pericellular and extracellular spaces and contribute to the formation of functional microenvironments for numerous biological events. However, their structure-function relations remain to be fully characterized. Here, a fucosylated CS (FCS) was isolated from the body wall of the sea cucumber Apostichopus japonicus. Its promotional effects on neurite outgrowth were assessed using the isolated polysaccharides and the chemically synthesized FCS trisaccharide beta-D-GalNAc(4,6-O-disulfate)-(1-4)-[alpha-L-fucose(2,4-O-disulfate)-(1-3)]-beta-D-GlcA. FCS polysaccharides contained the E-type disaccharide unit GlcA-GalNAc(4,6-O-disulfate) as the major CS backbone structure and carried distinct sulfated fucose branches. Despite their relatively lower abundance of E units, FCS polysaccharides exhibited neurite outgrowth-promoting activity comparable to that of squid cartilage-derived CS-E polysaccharides, which are characterized by their predominant E units, suggesting potential roles of the fucose branch in neurite outgrowth. Indeed, the chemically synthesized FCS trisaccharide was as effective as CS-E tetrasaccharide in stimulating neurite elongation in vitro. In conclusion, FCS trisaccharide units with 2,4-O-disulfated fucose branches may provide new insights into understanding the structure-function relations of CS chains. (C) 2017 Elsevier Inc. All rights reserved. Abscisic acid (ABA)-induced physiological changes are conserved in many land plants and underlie their responses to environmental stress and pathogens. 
The PYRABACTIN RESISTANCE1/PYR1-LIKE/REGULATORY COMPONENTS OF ABA RECEPTORS (PYL)-type receptors perceive the ABA signal and initiate signal transduction. Here, we show that the genome of Brassica rapa encodes 24 putative AtPYL-like proteins. The AtPYL-like proteins in Brassica rapa (BrPYLs) can also be classified into 3 subclasses. We found that nearly all BrPYLs displayed high expression in at least one tissue. Overexpression of BrPYL1 conferred ABA hypersensitivity to Arabidopsis. Further, ABA activated the expression of an ABA-responsive reporter in Arabidopsis protoplasts expressing BrPYL1. Overall, these results suggest that BrPYL1 is a putative functional ABA receptor in Brassica rapa. (C) 2017 Elsevier Inc. All rights reserved. We have generated a humanized anti-cocaine monoclonal antibody (mAb), which is at an advanced stage of pre-clinical development. We report here in vitro binding affinity studies and in vivo pharmacokinetic and efficacy studies of the recombinant mAb. The overall aim was to characterize the recombinant antibody from each of the three highest-producing transfected clones and to select one to establish a master cell bank. In mAb pharmacokinetic studies, after injection with h2E2 (120 mg/kg iv), blood was collected from the tail tip of mice over 28 days. Antibody concentrations were quantified using ELISA. The h2E2 concentration as a function of time was fit using a two-compartment pharmacokinetic model. To test in vivo efficacy, mice were injected with h2E2 (120 mg/kg iv) and one hour later injected with an equimolar dose of cocaine. Blood and brain were collected 5 min after cocaine administration. Cocaine concentrations were quantified using LC/MS. The affinity of the antibody for cocaine was determined using a [H-3]cocaine binding assay. All three antibodies had long elimination half-lives and 2-5 nM Kd for cocaine, and prevented cocaine's entry into the brain by sequestering it in the plasma. 
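The two-compartment fit described above corresponds to a biexponential concentration-time curve, C(t) = A·exp(-αt) + B·exp(-βt), where β governs the terminal elimination half-life. A minimal sketch of such a fit follows; the parameter values and synthetic data are purely illustrative, not the study's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_compartment(t, A, alpha, B, beta):
    """Biexponential model: fast distribution phase (alpha) plus
    slow terminal elimination phase (beta)."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

# Synthetic ELISA-style concentration-time data over 28 days
# (hypothetical values, chosen only to demonstrate the fit)
rng = np.random.default_rng(0)
t = np.array([0.25, 1, 2, 4, 7, 14, 21, 28.0])   # days post-dose
true_params = (800.0, 1.2, 400.0, 0.08)          # A, alpha, B, beta
conc = two_compartment(t, *true_params) * rng.normal(1.0, 0.02, t.size)

popt, _ = curve_fit(two_compartment, t, conc, p0=(500, 1.0, 300, 0.1))
A, alpha, B, beta = popt
elim_half_life = np.log(2) / beta  # terminal half-life in days
```

The fitted `beta` recovers the slow phase, and `np.log(2) / beta` converts the rate constant to the elimination half-life reported in such studies.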
Pharmacokinetic and radioligand binding assays supported designation of the highest-producing clone, 85, as the master cell bank candidate. Overall, the recombinant h2E2 showed favorable binding properties, pharmacokinetics, and in vivo efficacy. (C) 2017 Elsevier Inc. All rights reserved. Thioredoxin reductase 1 (TXNRD1) is associated with susceptibility to acetaminophen (APAP)-induced liver damage. Methionine sulfoxide reductase A (MsrA) is an antioxidant and protein repair enzyme that specifically catalyzes the reduction of methionine S-sulfoxide residues. We have previously shown that MsrA deficiency exacerbates acute liver injury induced by APAP. In this study, we used primary hepatocytes to investigate the underlying mechanism of the protective effect of MsrA against APAP-induced hepatotoxicity. MsrA gene-deleted (MsrA(-/-)) hepatocytes showed higher susceptibility to APAP-induced cytotoxicity than wild-type (MsrA(+/+)) cells, consistent with our previous in vivo results. MsrA deficiency increased APAP-induced glutathione depletion and reactive oxygen species production. APAP treatment increased Nrf2 activation more profoundly in MsrA(-/-) than in MsrA(+/+) hepatocytes. Basal TXNRD1 levels were significantly higher in MsrA(-/-) than in MsrA(+/+) hepatocytes, while TXNRD1 depletion in both MsrA(-/-) and MsrA(+/+) cells resulted in increased resistance to APAP-induced cytotoxicity. In addition, APAP treatment significantly increased TXNRD1 expression in MsrA(+/+) hepatocytes, while no significant change was observed in MsrA(-/-) cells. Overexpression of MsrA reduced APAP-induced cytotoxicity and TXNRD1 expression levels in APAP-treated MsrA(-/-) hepatocytes. Collectively, our results suggest that MsrA protects hepatocytes from APAP-induced cytotoxicity through the modulation of TXNRD1 expression. (C) 2017 Elsevier Inc. All rights reserved. The mevalonate pathway is prevalent in eukaryotes, archaea, and a limited number of bacteria. 
This pathway yields the fundamental precursors for isoprenoid biosynthesis, i.e., isopentenyl diphosphate and dimethylallyl diphosphate. In the downstream part of the general eukaryote-type mevalonate pathway, mevalonate is converted into isopentenyl diphosphate by the sequential actions of mevalonate kinase, phosphomevalonate kinase, and diphosphomevalonate decarboxylase, while a partial lack of the putative genes of these enzymes is sometimes observed in archaeal and bacterial genomes. The absence of these genes has led to the recent discovery of modified mevalonate pathways. Therefore, we decided to investigate the mevalonate pathway of Flavobacterium johnsoniae, a bacterium of the phylum Bacteroidetes, which is reported to lack the genes for mevalonate kinase and phosphomevalonate kinase. This study provides proof of the existence of the general mevalonate pathway in F. johnsoniae, although the pathway involves kinases that are only distantly related to the known enzymes. (C) 2017 Elsevier Inc. All rights reserved. CD44 and miR-221 are upregulated in hepatocellular carcinoma (HCC) cell lines and tumors; however, a connection between the two has not been identified. As the expression of miR-221 directly correlated with CD44 in HCC cells, we hypothesized that miR-221 may directly or indirectly regulate CD44 expression. Inhibition of miR-221 with antisense in Sk-Hep-1 or SNU-449 cell lines reduced CD44 protein expression, while a miR-221 mimic increased CD44 protein levels. miR-221 antisense did not alter CD44 mRNA levels in Sk-Hep-1 or SNU-449 cells, suggesting that regulation of CD44 protein occurs post-transcriptionally. To discover miRNAs that may be involved in the miR-221 regulation of CD44, we performed miRNA profiling in SNU-449 cells treated with anti-miR-221. Several miRNAs were increased with miR-221 inhibition, including miR-708-5p, a miRNA that targets CD44. 
As miR-221 targets several regulators of the PI3K-AKT-mTOR pathway, and a link between this pathway and CD44 has been previously shown in prostate cancer, we considered that miR-221 regulation of CD44 may occur through this pathway. Inhibition of miR-221 reduced p-4EBP1, a downstream effector of the PI3K-AKT-mTOR pathway. Likewise, inhibiting the PI3K-AKT-mTOR pathway with the ATP-competitive mTOR inhibitor PP242 reduced CD44 protein in SNU-423 and SNU-449 cells without altering CD44 mRNA levels. (C) 2017 Elsevier Inc. All rights reserved. The T-cell factor/lymphoid enhancer factor (TCF/LEF; hereafter TCF) family of transcription factors are critical regulators of colorectal cancer (CRC) cell growth. Of the four TCF family members, TCF7L1 functions predominantly as a repressor of gene expression. Few studies have addressed the role of TCF7L1 in CRC, and only a handful of target genes regulated by this repressor are known. By silencing TCF7L1 expression in HCT116 cells, we show that it promotes cell proliferation and tumorigenesis in vivo by driving cell cycle progression. Microarray analysis of transcripts differentially expressed in control and TCF7L1-silenced CRC cells identified genes that control cell cycle kinetics and cancer pathways. Among these, expression of the Wnt antagonist DICKKOPF4 (DKK4) was upregulated when TCF7L1 levels were reduced. We found that TCF7L1 recruits the C-terminal binding protein (CtBP) and histone deacetylase 1 (HDAC1) to the DKK4 promoter to repress DKK4 gene expression. In the absence of TCF7L1, TCF7L2 and beta-catenin occupancy at the DKK4 promoter is stimulated and DKK4 expression is increased. These findings uncover a critical role for TCF7L1 in repressing DKK4 gene expression to promote the oncogenic potential of CRCs. (C) 2017 Elsevier Inc. All rights reserved. Huntington's disease (HD) has recently been shown to have a horizontally transmitted, prion-like pathology. 
Thus, the migration of polyglutamine-containing aggregates to acceptor cells is important for the progression of HD. These aggregates contain glyceraldehyde-3-phosphate dehydrogenase (GAPDH), which increases their intracellular transport and their toxicity. Here, we show that RX624, a derivative of hydrocortisone that binds to GAPDH, prevents the formation of aggregates of GAPDH-polyglutamine excreted into the culture medium by PC-12 rat cells expressing mutant huntingtin. RX624 was previously shown to be unable to penetrate cells; thus, its principal therapeutic action might be the inhibition of polyglutamine-GAPDH complex aggregation in the extracellular matrix. The administration of RX624 to SH-SY5Y acceptor cells incubated in conditioned medium from PC-12 cells expressing mutant huntingtin caused an approximately 20% increase in survival. This suggests that RX624 might be useful as a drug against polyglutamine pathologies, and that it could be administered exogenously without affecting target cell physiology. This protective effect was validated by the similar effect of an anti-GAPDH specific antibody. (C) 2017 Elsevier Inc. All rights reserved. Calcium-sensing receptor (CaSR) mediates pathological cardiac hypertrophy. Mitochondria maintain their function through fission and fusion, and disruption of mitochondrial dynamics is linked to various cardiac diseases. This study examined how inhibition of CaSR by the inhibitor Calhex231 affected mitochondrial dynamics in a hypertensive rat model. Spontaneously hypertensive rats (SHRs) and Wistar Kyoto (WKY) rats were used in this study. Cardiac function and blood pressure were evaluated at the end of the study. SHRs showed increases in the ratio of heart weight to body weight and in the levels of CaSR; all of these increases were suppressed by Calhex231. Additionally, Calhex231 treatment of SHRs changed the expression of proteins involved in mitochondrial dynamics. 
Our results demonstrate that CaSR activation induces cardiomyocyte apoptosis through the mitochondrial dynamics-mediated apoptotic pathway in hypertensive hearts. (C) 2017 Elsevier Inc. All rights reserved. Sulfoquinovosyl diacylglycerol (SQDG) is present in the membranes of cyanobacteria and their descendants, plastids, at species-dependent levels. We investigated the physiological significance of the intrinsic SQDG content in the cyanobacterium Synechococcus elongatus PCC 7942 using a mutant in which the genes for SQDG synthesis, sqdB and sqdX, were overexpressed. The mutant showed a 1.3-fold higher content of SQDG (23.6 mol% relative to total cellular lipids, cf. 17.1 mol% in the control strain), with much less pronounced effects on the other lipid classes. Simultaneously observed were 1.6- to 1.9-fold enhanced mRNA levels for the genes responsible for the synthesis of lipids other than SQDG, as if to compensate for the SQDG overproduction. Meanwhile, the mutant showed no impairment of cell growth; however, cell length was increased (6.1 +/- 2.3, cf. 3.8 +/- 0.8 mu m in the control strain). Consistent with this, a wide range of genes responsible for cell division were 1.6-2.4-fold more highly expressed in the mutant. These results suggest that a regulatory mechanism for lipid homeostasis functions in the mutant, and that SQDG must be kept from surpassing its intrinsic content in S. elongatus to repress the abnormal expression of cell division-related genes and thereby permit normal cell division. (C) 2017 Elsevier Inc. All rights reserved. Transforming growth factor-beta 2 (TGF-beta 2) induces endothelial-mesenchymal transition (EndoMT) and autophagy in a variety of cells. Previous studies have indicated that activation of autophagy might decrease TGF-beta 2-induced EndoMT. However, the precise role remains unclear. 
In the present study, we found that TGF-beta 2 could induce EndoMT and autophagy in human retinal microvascular endothelial cells (hRMECs). Activation of autophagy by rapamycin or trehalose reduced the expression of Snail, demonstrating a role of autophagy in regulating Snail production by both transcriptional and post-transcriptional mechanisms. Co-immunoprecipitation (CoIP) demonstrated that LC3 coimmunoprecipitated with Smad3, and western blot showed that the autophagy inducers rapamycin and trehalose decreased the phosphorylation level of Smad3. Therefore, our results demonstrate that autophagy counteracts the EndoMT process triggered by TGF-beta 2 by decreasing the phosphorylation level of Smad3. (C) 2017 Elsevier Inc. All rights reserved. To investigate septic lung injuries and their possible relief by carbon monoxide (CO), rats were intraperitoneally (i.p.) administered water or the water-soluble CO-releasing molecule CORM (30 mg/kg body weight), followed by the successive administration of PBS or lipopolysaccharide (LPS, 15 mg/kg body weight, 6 h). The results in four experimental groups (control, LPS, LPS + CORM, CORM; n = 3 or 4 in each group) were examined. Histological examination revealed intravascular aggregation of erythrocytes in the lungs of the LPS group, and serological analysis showed a significant increase in D-dimer in the LPS group. Both the aggregation and the D-dimer increase were ameliorated in the LPS + CORM group, suggesting that LPS-induced DIC in the lung is ameliorated by CORM. Proteomic as well as immunoblot analyses revealed that the levels of annexin A2 (AnxA2) were significantly decreased in the LPS group but were at control levels in the LPS + CORM group. Concordant with the levels of AnxA2, the levels of both LC3 and collagen VI (COL VI) were decreased in the LPS group but not in the LPS + CORM group. 
Given the established roles of AnxA2 in fibrinolysis as well as intracellular vesicle trafficking, AnxA2 down-regulation may play an important role in the pathogenesis of septic lung injuries. (C) 2017 Elsevier Inc. All rights reserved. O-GlcNAc transferase (OGT) catalyzes the addition of O-GlcNAc to certain serine or threonine residues on a wide variety of cytosolic and nuclear proteins and regulates cellular activities such as signaling and transcription. Although there is emerging evidence that OGT plays important roles in breast cancer metastasis, the underlying mechanism is not fully understood. In this study, we demonstrated that upregulation of OGT correlates with breast cancer cell invasion. Over-expression of OGT stimulates cell invasion, while OGT silencing exhibits the opposite effect. OGT is further identified as a target of microRNA-24 (miR24). miR24 down-regulates OGT expression and subsequently suppresses cell invasion. Re-expression of OGT significantly rescues miR24-mediated repression of invasion. Furthermore, our data showed that FOXA1 is subject to O-GlcNAcylation, which destabilizes the FOXA1 protein and promotes breast cancer cell invasion. In conclusion, our results demonstrate that miR24 inhibits breast cancer cell invasion by targeting OGT and reducing FOXA1 stability. These results also indicate that OGT might be a potential target for the diagnosis and therapy of breast cancer metastasis. (C) 2017 Elsevier Inc. All rights reserved. The endoplasmic reticulum (ER)-resident lectin chaperones calnexin (CNX) and calreticulin (CRT) assist the folding of nascent glycoproteins. Their association with ERp57, a member of the PDI family of proteins (PDIs), which promote disulfide bond formation in unfolded proteins, has been well documented. Recent studies have provided evidence that other PDIs may also interact with CNX and CRT. 
Accordingly, it seems possible that the ER provides a repertoire of CNX/CRT-PDI complexes in order to facilitate the refolding of various glycoproteins. In this study, we examined the ability of PDIs to interact with CNX. Among them, ERp29 was shown to interact with CNX, similarly to ERp57. Judging from the dissociation constant, its ability to interact with CNX was similar to that of ERp57. Results of further analyses using a CNX mutant imply that ERp29 and ERp57 recognize the same domain of CNX, although their modes of interaction with CNX might differ somewhat. (C) 2017 Elsevier Inc. All rights reserved. BACKGROUND: Patients with erythematous skin are likely to receive a diagnosis of cellulitis; however, the accuracy of this diagnosis is only approximately 33%. The diagnosis of cellulitis should be made only after a thorough evaluation of all possible differential diagnoses. Cellulitis may be a primary process (a superficial spreading infective process involving only the epidermis and dermis) or a secondary (reactive) process incited by a subcutaneous process such as an abscess, tenosynovitis, necrotizing fasciitis, or osteomyelitis. CASE PRESENTATION: A 50-year-old man was admitted to a general hospital with a diagnosis of cellulitis. He was initially treated with systemic antibiotics without improvement. Following consultation with a wound management physician, the patient received a diagnosis of a pretibial abscess and was treated with surgical evacuation and postoperative systemic antibiotic therapy guided by tissue cultures. The postoperative wound was successfully treated with inelastic compression therapy. CONCLUSIONS: This case demonstrates the potential for misdiagnosis when evaluating erythematous skin. Furthermore, concluding that the erythema is due to a primary cellulitis may result in monotherapy with systemic antimicrobial agents. 
In such cases, making a correct diagnosis through a skillful and complete physical examination of the patient, coupled with appropriate investigations, will lead to the best possible outcome. A comprehensive treatment approach may include systemic antimicrobials as well as surgical options and compression therapy. KEYWORDS: cellulitis, inelastic compression, infrared thermometry, Levine culture technique, lipodermatosclerosis, venolymphedema, wound management BACKGROUND: Amish patients have a demonstrated preference for traditional herbal remedies over modern medical interventions such as skin grafting. One such remedy is a mixture of Burn & Wound Ointment (B & W Ointment; Holistic Acres, LLC; Newcomerstown, Ohio) and steeped burdock leaves. Although both have demonstrated some antimicrobial and wound healing properties, burdock and/or the combination of B & W Ointment and burdock has never been studied to determine its purported ability to reduce pain, prevent infection, and accelerate wound healing. METHODS: A retrospective chart review was performed on 6 Amish patients treated with the salve and burdock leaves instead of skin grafting following complex traumatic wounds, to determine whether the traditional treatment incurred any patient harm. RESULTS: The time to wound epithelialization and healing complications were noted, among other data points. Time to full epithelialization ranged from 1 to 7 months. Time to full wound healing was proportional to wound size. CONCLUSIONS: Although the treatment presented here is unconventional, it did not cause harm to the patients studied. OBJECTIVE: To determine whether blue light (405 nm) could inhibit the growth of Trichophyton mentagrophytes without using a photosensitizing material as part of the treatment protocol. DESIGN: Basic physiologic randomized trial using laboratory specimens (T mentagrophytes). 
INTERVENTIONS/METHODS: Plated on a growth medium, T mentagrophytes was exposed to 3 to 5 administrations of blue light at 20 J/cm2 over 28 hours. Following 7 days of incubation, colony-forming units were counted and compared with nonirradiated controls. RESULTS: The study found that 3, 4, and 5 administrations of blue light produced significant inhibition of T mentagrophytes (P < .05); 4 and 5 applications produced the greatest inhibition of growth (84.7% and 93.6% kill rates, respectively). CONCLUSIONS: The application of 405-nm light at a dose of 20 J/cm2 is an effective in vitro inhibitor of T mentagrophytes. To give results similar to those seen when a photosensitizing material is included, 3 to 5 applications of this wavelength and dose condition delivered over 28 hours are likely needed. BACKGROUND: Fast and stable wound closure is important, especially for the extended and unstable wounds found after burn injuries. Growth factors can regulate a variety of cellular processes, including those involved in wound healing. Growth differentiation factor 5 (GDF-5) can accelerate fibroblast cell migration, cell proliferation, and collagen synthesis, which are essential for wound healing. Nevertheless, no standardized evaluation of the effect of GDF-5 on the healing of full-thickness wounds has been published to date. METHODS: Five full-thickness skin defects were created on the backs of 6 minipigs. Three wounds were treated with GDF-5 at different concentrations with the help of a gelatin-collagen carrier, and 2 wounds served as the control group. The first was treated with the gelatin carrier and an Opsite film (Smith & Nephew, Fort Worth, Texas), and the other was treated solely with an Opsite film, which was placed over all wounds and renewed every second day. RESULTS: Growth differentiation factor 5 accelerated wound closure (10.91 [SD, 0.99] days) compared with treatment with the carrier alone (11.3 [SD, 1.49] days) and control wounds (13.3 [SD, 0.94] days). 
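The kill rates reported in the blue-light study above are percent reductions in colony-forming units (CFU) relative to nonirradiated controls. As a sketch, the CFU counts below are hypothetical, chosen only to illustrate the calculation; they are not the study's data.

```python
def percent_inhibition(cfu_control: float, cfu_treated: float) -> float:
    """Kill rate: percent reduction in colony-forming units vs. control."""
    return 100.0 * (1.0 - cfu_treated / cfu_control)

# Hypothetical counts that would reproduce the reported kill rates
rate_4_doses = percent_inhibition(1000, 153)  # about 84.7%
rate_5_doses = percent_inhibition(1000, 64)   # about 93.6%
```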
Epidermal cell counts of wounds treated with GDF-5 revealed a higher number of cells compared with the control group. In addition, mean epidermal thickness was significantly increased in GDF-5-treated wounds compared with the control wounds. CONCLUSIONS: Because of its ability to improve skin quality, GDF-5 should be considered when developing composite biomaterials for wound healing. OBJECTIVE: Negative-pressure wound therapy (NPWT) is the most modern and sophisticated method of temporary abdominal closure. The aim of the study was to determine significant predictors of mortality in patients with NPWT. SETTING: University Clinical Centre Maribor, Slovenia. MATERIALS AND METHODS: The authors performed a retrospective cohort study of all patients treated with NPWT between January 1, 2011, and December 31, 2014. RESULTS: In the univariate analysis, the type of wound closure, more than 7 NPWT changes, the total days with NPWT, and time to wound closure were significantly associated with death of the patient. In the multivariate analysis, only more than 7 NPWT changes was found to be a significant predictor of death (P = .038). CONCLUSIONS: Negative-pressure wound therapy is the method of choice for the treatment of the open abdomen when there is a clear indication. However, clinicians should try all measures to remove the NPWT system and close the abdomen as soon as possible, because prolonged use is associated with significantly higher mortality. BACKGROUND: A new polyurethane foam dressing impregnated with 3% povidone-iodine (Betafoam; Genewell, Seoul, Korea) was recently developed based on the hypothesis that its physical properties, including improved moisture-retention capacity and antimicrobial activity, are at least as good as those achieved with current foam dressings that contain silver, but with reduced cost and cytotoxicity to host cells. 
The purpose of this in vitro study was to evaluate the efficacy of Betafoam by comparing its physical properties, antimicrobial activity, and cytotoxicity with those of 3 silver foam dressings (Allevyn-Ag [Smith & Nephew, Hull, United Kingdom]; Mepilex-Ag [Molnlycke Health Care, Gothenburg, Sweden]; and PolyMem-Ag [Ferris MFG Corp, Burr Ridge, Illinois]) used worldwide. METHODS: This study measured each dressing's pore size, fluid absorption time, fluid absorption capacity, fluid retention capacity, antimicrobial activity against Staphylococcus aureus and Pseudomonas aeruginosa, and cytotoxicity to mouse fibroblasts. RESULTS: Betafoam had the smallest pore size, the fastest fluid absorption time, and the greatest fluid absorption and retention capacities among the tested foam dressings. Antimicrobial activity was not significantly different among the dressings. However, Betafoam also demonstrated the lowest cytotoxicity to the fibroblasts. CONCLUSIONS: Betafoam may provide not only desirable rapid regulation of exudation but also antimicrobial activity with minimal cytotoxicity to host cells, which are key requirements for wound healing. Given the current reimbursement structure, the avoidance of a surgical site infection (SSI) is crucial. Although many risk factors are associated with the formation of an SSI, a proactive and interprofessional approach can help modify some of them. Postoperative strategies also can be applied to help prevent an SSI. If an SSI becomes a chronic wound, there are recommended guidelines and strategies that can foster healing. The current study, involving 219 teachers and 15,292 students, examined the relationship between teacher participation in a sustained professional development intervention designed to improve the quantity and quality of guided inquiry-based instruction in middle school science classrooms and subsequent student academic growth. 
Utilizing a quasi-experimental design, the growth scores of students of participating and non-participating teachers were compared to a benchmark measure established by a virtual comparison group (VCG) of similarly matched students. The results indicate that for all three MAP tests (Scientific Practices, Science Concepts, Science Composite), the students of participating teachers had significantly higher than expected growth relative to the VCG when compared to students of non-participants. In addition, students of teachers who participated in the PD intervention consistently exceeded the growth expectations of the benchmark VCG by up to 82%. This study supports prior research findings that inquiry-based instruction helps improve students' achievement relative to scientific practices and also provides evidence of increasing students' conceptual knowledge. This study focused on the requirements that chemical representations should meet in textbooks in order to enhance conceptual understanding. Specifically, the purpose of this study was to evaluate the chemical representations present in 7 secondary Lebanese chemistry textbooks. To achieve this purpose, an instrument adapted from Gkitzia, Salta, and Tzougraki (Chemistry Education Research and Practice, 12, 5-14, 2011) was used. This instrument depends on 5 basic criteria: (a) type or level, (b) surface features, (c) relatedness to text, (d) existence and properties of a caption, and (e) degree of correspondence between representations comprising a multiple representation. The results of the study revealed that the chemical representations used in the selected textbooks focus on the macro level, with either implicit or ambiguous labels. Moreover, the selected textbooks use very few multiple, hybrid, or mixed representations. In addition, most chemical representations are accompanied by problematic captions or no captions at all. 
Recommendations for textbook writers and future research are discussed in light of these findings. Competitions are discussed as a measure to foster students' interest, especially for highly gifted and talented students. In the current study, participants of a cognitive school competition in science were compared to non-participants of the same age group (14-15) who either did not participate in any competition or who participated in a non-cognitive sports competition. The study focused on goal orientations and competence beliefs and analyzed outcomes as a foundation for further improvements of enrichment measures and competitions with regard to fostering students' interest especially in science. The results showed considerable differences (and some unexpected similarities) between groups: Science competition participants were more learning goal oriented, had fewer performance-avoidance goals, and showed less work avoidance than non-participants. Social self-concept was higher but was moderated by GPA. Considerable gender differences were found as well. These findings are discussed with regard to further research and possibilities for improvement of science competitions. A physics textbook for the 8th grade was analyzed, in particular the section on the interaction between electric current and magnetic field. The textbook is written in the Macedonian language, but is translated into Albanian, Serbian, and Turkish, which provides an opportunity to influence a larger population of children, in a larger ethnic area. Errors are found from both a didactic and a physical point of view. The textbook's authors introduce many potential sources of misconceptions. The questions at the end of the chapter are of low level, according to Bloom's taxonomy. Most are at the first and second levels; third-level questions appear only rarely. The proposed experimental activities do not follow the safety precautions and are therefore dangerous to perform.
Risk assessment of the laboratory activities is proposed. The main purpose of this study was to investigate the effects of cooperative learning based on conceptual change approach instruction on ninth-grade students' understanding of chemical bonding concepts compared to traditional instruction. Seventy-two ninth-grade students from two intact chemistry classes taught by the same teacher in a public high school participated in the study. The classes were randomly assigned as the experimental and control group. The control group (N = 35) was taught by traditional instruction while the experimental group (N = 37) was taught with cooperative learning based on conceptual change approach instruction. The Chemical Bonding Concept Test (CBCT) was used as a pre- and post-test to assess students' understanding of chemical bonding concepts. After the treatment, interviews were conducted with students to obtain further information about their responses. Moreover, students from the experimental group were interviewed about their perceptions of the cooperative work experience. The results from ANCOVA showed that cooperative learning based on conceptual change approach instruction led to better acquisition of scientific conceptions related to chemical bonding concepts than traditional instruction. Interview results demonstrated that the students in the experimental group had better understanding and fewer misconceptions in chemical bonding concepts than those in the control group. Moreover, interviews about the treatment indicated that it supported students' learning and increased their learning motivation and their social skills. Explaining appears to dominate primary teachers' understanding of mathematical reasoning when it is not confused with problem solving.
Drawing on previous literature on mathematical reasoning, we generate a view of the critical aspects of reasoning that may assist primary teachers when designing and enacting tasks to elicit and develop mathematical reasoning. The task used in this study of children's reasoning is a number commonality problem. We analysed written and verbal samples of reasoning gathered from children in grades 3 and 4 from three primary schools in Australia and one elementary school in Canada to map the variation in their reasoning. We found that comparing and contrasting was a critical aspect of forming conjectures when generalising in this context, an action not specified in frameworks for generalising in early algebra. The variance in children's reasoning elicited through this task also illuminated the difference between explaining and justifying. For pre-service teachers (PSTs) who have been exposed to traditional approaches, teacher education courses can be a revelatory experience in their development as educators. This study explores whether Canadian upper elementary/lower secondary (grades 4-10) PSTs change their beliefs about mathematics teaching as a result of taking a mathematics methods course and how the course influenced these beliefs. Surveys were used to measure participants' mathematics beliefs, and results show that PSTs' beliefs moved to favor reform-based approaches. Qualitative data complemented the survey results, suggesting that experiencing new approaches and having the opportunity to apply them in practice are important to their development as mathematics teachers. When establishing connections among representations of associated mathematical concepts, students encounter different difficulties and successes along the way. The purpose of this study was to uncover information about and gain greater insight into how students process connections.
Pre-calculus students were observed and interviewed while performing a task that required connections among graphical and algebraic representations of a polynomial relation. Their reasoning and processes on the task were examined. This revealed more detailed information about the nature of their connections and misconceptions. The study reveals different types of student connections among graphical and algebraic representations. Recent research on the phenomenon of improper proportional reasoning focused on students' understanding of elementary functions and their external representations. So far, the role of basic function properties in students' concept images of functions has remained unclear. We add to this research line by investigating how accurate students are in connecting functions to their corresponding properties and how this accuracy depends on function types and representations. A large group of 10th graders evaluated the correctness of statements about the general properties and behavior of different function types, each represented in a graphical, formulaic, or tabular mode. Results show that students succeeded rather well in making the right connections between properties and functions. Errors depended not only on the type of function for which the properties were evaluated but also on the kind of representation in which the function was presented. These results highlight the importance of function properties in students' concept images of functions and suggest positive effects of making these properties explicit to students. The Hong Kong Education Bureau recommends that primary school pupils' mathematical achievement be enhanced via collaborative discussions engendered by group work. This pedagogic change may be hindered by Confucian heritage classroom practices and Western-dominated group work approaches that predominate in Hong Kong.
To overcome these obstacles, we introduced a relational approach to group work in a quasi-experimental study. Our sample included 20 teachers randomly allocated to experimental (12) and control (8) conditions and their 504 mathematics pupils (aged 9-10). The relational approach focused on the development of peer relationships in a culturally appropriate manner and was implemented over 7 months. Pupils were pre-/post-tested for mathematical achievement and systematically observed, and the teachers were assessed for subject knowledge and pre-/post-tested for pedagogic efficacy. Analysis of covariance (ANCOVA) and hierarchical linear modeling (HLM) results show enhanced mathematical achievement, supported by improved peer-based communication skills and time-on-task for the experimental pupils. Experimental teachers raised their pedagogic efficacy. Results indicate the potential of the relational approach for boosting academic achievement via enhanced child-peer-teacher interaction and the need to reassess the role of peer-based latent collectivist learning in Confucian heritage classrooms. Neutrophils are short-lived leukocytes that migrate to sites of infection as part of the acute immune response, where they phagocytose, degranulate, and form neutrophil extracellular traps (NETs). During NET formation, the nuclear lobules of neutrophils disappear and the chromatin expands and, accessorized with neutrophilic granule proteins, is expelled. NETs can be pathogenic in, for example, sepsis, cancer, and autoimmune and cardiovascular diseases. Therefore, the identification of inhibitors of NET formation is of great interest. Screening of a focused library of natural-product-inspired compounds by using a previously validated phenotypic NET assay identified a group of tetrahydroisoquinolines as new NET formation inhibitors. 
This compound class opens up new avenues for the study of cellular death through NET formation (NETosis) at different stages, and might inspire new medicinal chemistry programs aimed at NET-dependent diseases. The cationic porphyrin 5,10,15,20-tetrakis(diisopropyl-guanidine)-21H,23H-porphine (DIGPor) selectively binds to DNA containing O-6-methylguanine (O-6-MeG) and inhibits the DNA repair enzyme O-6-methylguanine-DNA methyltransferase (MGMT). The O-6-MeG selectivity and MGMT inhibitory activity of DIGPor were improved by incorporating Zn(II) into the porphyrin. The resulting metal complex (Zn-DIGPor) potentiated the activity of the DNA-alkylating drug temozolomide in an MGMT-expressing cell line. To the best of our knowledge, this is the first example of DNA-targeted MGMT inhibition. The range of secondary metabolites (SMs) produced by the rice pathogen Fusarium fujikuroi is quite broad. Several polyketides, nonribosomal peptides and terpenes have been identified. However, no products of dimethylallyltryptophan synthases (DMATSs) have been elucidated, although two putative DMATS genes are present in the F. fujikuroi genome. In this study, the in vivo product derived from one of the DMATSs (DMATS1, FFUJ_09179) was identified with the help of the software MZmine2. Detailed structure elucidation showed that this metabolite is a reversely N-prenylated tryptophan with a rare form of prenylation. Further identified products probably resulted from side reactions of DMATS1. The genes adjacent to DMATS1 were analyzed and showed no influence on the biosynthesis of the product. Microtubule-stabilizing agents (MSAs) are widely used in chemotherapy. Using X-ray crystallography we elucidated the detailed binding modes of two potent MSAs, (+)-discodermolide (DDM) and the DDM-paclitaxel hybrid KS-1-199-32, in the taxane pocket of β-tubulin. The two compounds bind in a very similar hairpin conformation, as previously observed in solution.
However, they stabilize the M-loop of β-tubulin differently: KS-1-199-32 induces an M-loop helical conformation that is not observed for DDM. In the context of the microtubule structure, both MSAs connect the β-tubulin helices H6 and H7 and loop S9-S10 with the M-loop. This is similar to the structural effects elicited by epothilone A, but distinct from paclitaxel. Together, our data reveal differential binding mechanisms of DDM and KS-1-199-32 on tubulin. The use of synthetic biomarkers is an emerging technique to improve disease diagnosis. Here, we report a novel design strategy that uses analyte-responsive acetaminophen (APAP) to expand the catalogue of analytes available for synthetic biomarker development. As proof-of-concept, we designed hydrogen peroxide (H2O2)-responsive APAP (HR-APAP) and succeeded in H2O2 detection with cellular and animal experiments. In fact, for blood samples following HR-APAP injection, we demonstrated that the plasma concentration ratio [APAP+APAP conjugates]/[HR-APAP] accurately reflects in vivo differences in H2O2 levels. We anticipate that our practical methodology will be broadly useful for the preparation of various synthetic biomarkers. Isoprenoid biosynthesis is an important area for anti-infective drug development. One isoprenoid target is (E)-1-hydroxy-2-methyl-but-2-enyl 4-diphosphate (HMBPP) reductase (IspH), which forms isopentenyl diphosphate and dimethylallyl diphosphate from HMBPP in a 2H(+)/2e(-) reduction. IspH contains a 4Fe-4S cluster, and in this work, we first investigated how small molecules bound to the cluster by using HYSCORE and NRVS spectroscopies. The results of these, as well as other structural and spectroscopic investigations, led to the conclusion that, in most cases, ligands bound to IspH 4Fe-4S clusters by η(1) coordination, forming tetrahedral geometries at the unique fourth Fe, ligand side chains preventing further ligand (e.g., H2O, O2) binding.
Based on these ideas, we used in silico methods to find drug-like inhibitors that might occupy the HMBPP substrate binding pocket and bind to Fe, leading to the discovery of a barbituric acid analogue with a K-i value of approximately 500 nM against Pseudomonas aeruginosa IspH. Biophysical studies were undertaken to investigate the binding and release of short interfering ribonucleic acid (siRNA) from lyotropic liquid crystalline lipid nanoparticles (LNPs) by using a quartz crystal microbalance (QCM). These carriers are based on phytantriol (Phy) and the cationic lipid DOTAP (1,2-dioleoyloxy-3-(trimethylammonium)propane). The nonlamellar phase LNPs were tethered to the surface of the QCM chip for analysis based on biotin-neutravidin binding, which enabled the controlled deposition of siRNA-LNP complexes with different lipid/siRNA charge ratios on a QCM-D crystal sensor. The binding and release of biomolecules such as siRNA from LNPs was demonstrated to be reliably characterised by this technique. Essential physicochemical parameters of the cationic LNP/siRNA lipoplexes, such as particle size, lyotropic phase behaviour, cytotoxicity, gene silencing and uptake efficiency, were also assessed. The SAXS data show that when the pH was lowered to 5.5 the structure of the lipoplexes did not change, thus indicating that the acidic conditions of the endosome were not a significant factor in the release of siRNA from the cationic lipidic carriers. Pyrazinamide (PZA), an essential constituent of short-course tuberculosis chemotherapy, binds weakly but selectively to Sirtuin 6 (SIRT6). Despite the structural similarities between nicotinamide (NAM), PZA, and pyrazinoic acid (POA), these inhibitors modulate SIRT6 by different mechanisms and through different binding sites, as suggested by saturation transfer difference (STD) NMR.
Available experimental evidence, such as that derived from crystal structures and kinetic experiments, has been of only limited utility in elucidating the mechanistic details of sirtuin inhibition by NAM or other inhibitors. For instance, crystallographic structural analysis of sirtuin binding sites does not help us understand important differences in binding affinities among sirtuins or capture details of such a dynamic process. Hence, STD NMR was utilized throughout this study. Our results not only agreed with the binding kinetics experiments but also gave a qualitative insight into the binding process. The data presented herein suggested some details about the geometry of the binding epitopes of the ligands in solution with the apo- and holoenzyme. Recognition that SIRT6 is affected selectively by PZA, an established clinical agent, suggests that the rational development of more potent and selective NAM surrogates might be possible. These derivatives might be accessible by employing the malleability of this scaffold to assist in the identification by STD NMR of the motifs that interact with the apo- and holoenzymes in solution. The lipases/acyltransferases homologous to CpLIP2 of Candida parapsilosis efficiently catalyze acyltransfer reactions in lipid/water media with high water activity (a(W)>0.9). Two new enzymes of this family, CduLAc from Candida dubliniensis and CalLAc8 from Candida albicans, were characterized. Despite 82% sequence identity, the two enzymes have significant differences in their catalytic behaviors. In order to understand the roles played by the different subdomains of these proteins (main core, cap and C-terminal flap), chimeric enzymes were designed by rational exchange of the cap and C-terminal flap between CduLAc and CalLAc8. The results show that the cap region plays a significant role in substrate specificity; the main core was found to be the most important part of the protein for acyltransfer ability.
Similar exchanges were made with CAL-A from Candida antarctica, but only the C-terminal exchange was successful. Yet, the role of this domain was not clearly elucidated, other than that it is essential for activity. Two features of meso-aryl-substituted expanded porphyrins suggest suitability as theranostic agents. They have excellent absorption in the near-infrared (NIR) region, and they offer the possibility of introducing multiple fluorine atoms at structurally equivalent positions. Here, hexaphyrin (hexa) was synthesized from 2,6-bis(trifluoromethyl)-4-formyl benzoate and pyrrole and evaluated as a novel expanded porphyrin with the above features. Under NIR illumination, hexa showed intense photothermal and weak photodynamic effects, which were most likely due to its low-lying excited states, close in energy to that of singlet oxygen. The sustained photothermal effect caused ablation of cancer cells more effectively than the photodynamic effect of indocyanine green (a clinical dye). In addition, hexa showed potential for use in the visualization of tumors by F-19 magnetic resonance imaging (MRI), because of the multiple fluorine atoms. Our results strongly support the utility of expanded porphyrins as theranostic agents in both photothermal therapy and F-19 MRI. Polyphenylene dendrimers (PPDs) represent a unique class of macromolecules based on their monodisperse and shape-persistent nature. These characteristics have enabled the synthesis of a new genre of patched surface dendrimers, where their exterior can be functionalized with a variety of polar and nonpolar substituents to yield lipophilic binding sites in a site-specific way. Although such materials are capable of complexing biologically relevant molecules, show high cellular uptake in various cell lines, and exhibit low to no toxicity, there is minimal understanding of the driving forces behind these characteristics.
We investigated whether it is the specific chemical functionalities, relative quantities of each moiety, or the patched surface patterning on the dendrimers that more significantly influences their behavior in biological media. Case adaptation is a challenging phase of case-based reasoning (CBR) for recommendation of a matched case solution. Our proposed knowledge-based recommendation system analyzes the combination of visual and textual information in a CBR medical system. In this paper, a case-based reasoner uses medical expressions in a textual analysis to create word association profiles. The Case-based Learning Assistant System (DePicT CLASS) finds significant references and learning materials by utilizing a profile of word associations according to the problem description. This research proposes a new adaptation mechanism based on substitution, abstraction, and composition for collaborative recommendation in medical vocational educational training. The DePicT CLASS adaptation mechanism combines value comparison based on requested word association profiles with manual adaptation based on user collaborative recommendation. In the adaptation process of the system, attract rate and adapt rate are defined and utilized for evaluating the adaptation results. Therefore, a recommendation is a combination of references and learning materials with the highest-valued keyword association strength from the most similar cases. (C) 2017 Elsevier B.V. All rights reserved. Some approaches to intelligence state that the brain works as a memory system which stores experiences to reflect the structure of the world in a hierarchical, organized way. Case-Based Reasoning (CBR) is well suited to test this view. In this work we propose a CBR-based learning methodology to build a set of nested behaviors in a bottom-up architecture. To cope with complexity-related CBR scalability problems, we propose a new 2-stage retrieval process.
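The 2-stage retrieval process just mentioned can be pictured as a cheap coarse pre-filter that prunes the case base, followed by an exact nearest-neighbour search on the surviving shortlist. The sketch below is a hypothetical minimal illustration only; the function names, the single-feature coarse filter, and the Euclidean distance are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of 2-stage case retrieval (not the paper's code).
# Stage 1: prune the case base with a cheap single-feature comparison.
# Stage 2: run the full distance computation only on the shortlist.

def euclidean(case, query):
    """Full (stage-2) distance over all query features."""
    return sum((case[f] - query[f]) ** 2 for f in query) ** 0.5

def two_stage_retrieve(case_base, query, coarse_key, distance, k=1, shortlist=10):
    """case_base: list of feature dicts; returns the k closest cases."""
    # Stage 1: keep the `shortlist` cases closest on one cheap feature.
    coarse = sorted(case_base, key=lambda c: abs(c[coarse_key] - query[coarse_key]))
    candidates = coarse[:shortlist]
    # Stage 2: exact nearest-neighbour search on the shortlist only.
    return sorted(candidates, key=lambda c: distance(c, query))[:k]

cases = [{"x": 1.0, "y": 2.0}, {"x": 5.0, "y": 1.0}, {"x": 0.9, "y": 2.1}]
best = two_stage_retrieve(cases, {"x": 1.0, "y": 2.0}, "x", euclidean)
```

The pay-off is that the expensive stage-2 distance is computed on a constant-size shortlist rather than the whole case base, which is the usual motivation for staged retrieval in large CBR systems.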
We have tested our framework by training a set of cooperative/competitive reactive behaviors for Aibo robots in a RoboCup environment. (C) 2017 Elsevier B.V. All rights reserved. In this paper we evaluate the use of the machine learning algorithms Support Vector Machines (SVM), K-Nearest Neighbors (KNN) and Classification and Regression Trees (CART) to identify non-spontaneous saccades in clinical electrooculography tests. We propose a modification to an adaptive threshold estimation algorithm for detecting signal impulses without the need for any manually pre-established parameters. Data mining tasks such as feature selection and model tuning were performed, obtaining very efficient models using only 3 attributes: amplitude deviation, absolute response latency and relative latency. The models were evaluated with signals recorded from subjects affected by Spinocerebellar Ataxia type 2 (SCA2). Results obtained by the algorithm show accuracies over 98%, recalls over 98% and precisions over 95% for the three models evaluated. (C) 2017 Elsevier B.V. All rights reserved. The paper investigates the application of several feature selection methods to the identification of the most important genes in autism disorder. The study is based on gene expression microarray data. The applied methods analyze the importance of genes on the basis of different principles of selection. The most important step is to fuse the results of these selections into a common set of genes that are best associated with autism. These genes may be treated as the biomarkers of this disorder and used in early prediction of autism. The paper proposes and compares three different methods of such fusion: purity of the clusterization space, application of a genetic algorithm, and a random forest in the role of integrator. The numerical experiments are concerned with the identification of the most important biomarkers and their application in autism recognition.
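The fusion step described above combines several independent gene selections into one consensus set. The paper integrates the selections via cluster-space purity, a genetic algorithm, and a random forest; the sketch below instead illustrates the general idea with a simpler Borda-style rank aggregation, which is an assumption for illustration only, not the paper's method.

```python
# Illustrative fusion of several independent gene rankings into a consensus
# set via Borda counting (a stand-in for the paper's purity/GA/forest fusion).

def fuse_rankings(rankings, top_k):
    """rankings: list of gene lists, each ordered best-first.
    Returns the top_k genes most consistently ranked high."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, gene in enumerate(ranking):
            # A gene earns more points the higher it is ranked.
            scores[gene] = scores.get(gene, 0) + (n - pos)
    # Sort by total score (ties broken alphabetically for determinism).
    return sorted(scores, key=lambda g: (-scores[g], g))[:top_k]

# Three hypothetical selection methods, each proposing its own ordering:
selections = [["g1", "g2", "g3"], ["g2", "g1", "g4"], ["g2", "g3", "g1"]]
consensus = fuse_rankings(selections, top_k=2)  # -> ["g2", "g1"]
```

Whatever integrator is used, the principle is the same: genes nominated consistently across independent selection criteria are more trustworthy biomarker candidates than genes nominated by any single method.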
They show that the applied fusion of many independent selection methods leads to a significant improvement in the autism recognition rate. (C) 2017 Elsevier B.V. All rights reserved. This paper proposes a supervised filter method for evolutionary multi-objective feature selection for classification problems in high-dimensional feature space, which is evaluated by comparison with wrapper approaches for the same application. The filter method based on a set of label-aided utility functions is compared with wrapper approaches using the accuracy and generalization properties in the effective searching of the most adequate subset of features through an evolutionary multi-objective optimization scheme. The target application corresponds to a brain computer interface (BCI) classification task based on linear discriminant analysis (LDA) classifiers, where the properties of multi-resolution analysis (MRA) for signal analysis in temporal and spectral domains have been used to extract features from electroencephalogram (EEG) signals. The results, corresponding to a dataset obtained from the databases of the BCI Laboratory of the University of Essex, UK, including ten subjects with three different imagery movements, have allowed us to evaluate the advantages and drawbacks of the different approaches with respect to time consumption, accuracy and generalization capabilities. (C) 2017 Elsevier B.V. All rights reserved. Brain-Computer Interface (BCI) systems analyze brain signals to generate control commands for computer applications or external devices. Utilized as an alternative communication channel, BCIs have the potential to assist people with severe motor disabilities to interact with their environment and to participate in daily life activities. Handicapped people from all age groups could benefit from such BCI technologies.
Although some papers have previously reported slightly worse BCI performance by older subjects, in many studies BCI systems were tested with young subjects only. In the presented paper, age-associated differences in BCI performance were investigated. We compared accuracy and speed of a steady-state visual evoked potential (SSVEP)-based BCI spelling application controlled by participants of two different equally sized age groups. Twenty subjects (eleven female and nine male) participated in this study; each age group consisted of ten subjects, ranging from 19 to 27 years and from 64 to 76 years. Our results confirm that elderly people may have a deteriorated information transfer rate (ITR). The mean (SD) ITR of the young age group was 27.36 (6.50) bit/min while the elderly people achieved a significantly lower ITR of 16.10 (5.90) bit/min. The average time window length associated with the signal classification was usually larger for the participants of advanced age. These findings show that the subject age must be taken into account during the development of SSVEP-based applications. (C) 2017 The Authors. Published by Elsevier B.V. The paper deals with the problem of a neural network-based robust state and actuator fault estimator design for non-linear discrete-time systems. It starts from a review of recent developments in the area of robust estimators and observers for non-linear discrete-time systems and proposes a less restrictive procedure for designing a neural network-based observer. The proposed approach guarantees a predefined disturbance attenuation level and convergence of the observer, as well as unknown input decoupling and state and actuator fault estimation. The main advantage of the design procedure is its simplicity. The paper presents an observer design procedure that is reduced to solving a set of linear matrix inequalities.
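The information transfer rates reported for the SSVEP speller above are conventionally computed with Wolpaw's formula, which combines the number of selectable targets N, the classification accuracy P, and the time per selection T. A minimal sketch follows; the example parameter values are illustrative, not those of the study.

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate in bit/min.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:  # the P*log2(P) terms vanish at P = 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / selection_time_s)

# Illustrative values only: a 4-target speller at 90% accuracy, 3 s/selection.
rate = itr_bits_per_min(4, 0.90, 3.0)   # about 27.45 bit/min
```

The formula also makes the paper's observation concrete: a longer classification time window (larger T) directly scales the ITR down, which is one route by which the older group's longer windows produce lower bit rates.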
The final part of the paper presents an illustrative example concerning an application of the proposed approach to the multi-tank system benchmark. (C) 2017 Elsevier B.V. All rights reserved. An enormous effort has been made in recent years towards the recognition of human activity based on wearable sensors. Despite the wide variety of proposed systems, most existing solutions have in common that they operate solely on predefined settings and constrained sensor setups. Real-world activity recognition applications and users rather demand more flexible sensor configurations dealing with potential adverse situations such as defective or missing sensors. In order to provide interoperability and reconfigurability, heterogeneous sensors used in wearable activity recognition systems must be fairly abstracted from the actual underlying network infrastructure. This work presents MIMU-Wear, an extensible ontology that comprehensively describes wearable sensor platforms consisting of mainstream magnetic and inertial measurement units (MIMUs). MIMU-Wear describes the capabilities of MIMUs such as their measurement properties and the characteristics of wearable sensor platforms including their on-body location. A novel method to select an adequate replacement for a given anomalous or nonrecoverable sensor is also presented in this work. The proposed sensor selection method is based on the MIMU-Wear Ontology and builds on a set of heuristic rules to infer the candidate replacement sensors under different conditions. Then, queries are iteratively posed to select the most appropriate MIMU sensor for the replacement of the defective one. An exemplary application scenario is used to illustrate some of the potential of MIMU-Wear for supporting seamless operation of wearable activity recognition systems. (C) 2017 Elsevier B.V. All rights reserved. Automated parameter search methods are commonly used to optimize neuron models.
A more challenging task is to fit models of neural systems since the model response is determined by both intrinsic properties of neurons and the neural wiring and architecture of the network. Neural records of cells in the visual system are often analyzed in terms of the cell's receptive field and its temporal response. This type of data requires a finer point-by-point comparison of response traces between the simulated output and the recorded data. To address these issues, we applied a genetic algorithm optimization in conjunction with a multiobjective fitness function and a population-based error metric. Two different models of the early stages in the visual system were fitted to electrophysiological recordings and results from a modeling study, respectively. The first one is a model of cone photoreceptors and horizontal cells that reproduces adaptation to the mean light intensity in the retina. A multiobjective fitness function based on the normalized root-mean-square error (NRMSE) and a shape error descriptor captures high-frequency oscillations in the impulse response to uniform white flashes. The second one is a large-scale model of the thalamocortical system that accounts for the slow rhythms observed during sleep. An error metric of the population neural activity is used in this case. We argue that the optimization framework proposed in this paper could serve as a useful tool for parameter fitting of neuron models and large-scale models in the visual system pathway. (C) 2017 Elsevier B.V. All rights reserved. The aggregation of preferences (expressed in the form of rankings) from multiple experts is a well-studied topic in a number of fields. The Kemeny ranking problem aims at computing a consensus ranking having minimal total distance to the experts' rankings. However, it assumes that these rankings will be complete, i.e., all elements are explicitly ranked by the expert.
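The distance minimized in the Kemeny ranking problem is standardly the Kendall tau distance: the number of item pairs on which two rankings disagree. A minimal sketch of the objective for complete rankings (the simple case, before any partial-ranking extension):

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Number of item pairs ordered differently by two complete rankings."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(r1, 2)
        # A discordant pair: a before b in one ranking, b before a in the other.
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
    )

def kemeny_cost(candidate, expert_rankings):
    """Total pairwise disagreement of a candidate consensus with all experts."""
    return sum(kendall_tau(candidate, r) for r in expert_rankings)

experts = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
# "a b c" disagrees with the experts on 0 + 1 + 1 = 2 pairs in total.
cost = kemeny_cost(["a", "b", "c"], experts)
```

Finding the candidate with minimal `kemeny_cost` is NP-hard in general, which is why metaheuristics such as the ACO approach described here are attractive for instances of realistic size.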
This assumption may not hold when, for instance, an expert ranks only the top-K items of interest, thus creating a partial ranking. In this paper we formalize the weighted Kemeny ranking problem for partial rankings, an extension of the Kemeny ranking problem that is able to aggregate partial rankings from multiple experts when only a limited number of relevant elements are explicitly ranked (top-K), and this number may vary from one expert to another (top-K-i). Moreover, we introduce two strategies to quantify the weight of each partial ranking. We cast this problem within the realm of combinatorial optimization and lean on the successful Ant Colony Optimization (ACO) metaheuristic algorithm to arrive at high-quality solutions. The proposed approach is evaluated through a real-world scenario and 190 synthetic datasets from www.PrefLib.org. The experimental evidence indicates that the proposed ACO-based solution is capable of significantly outperforming several evolutionary approaches that proved to be very effective when dealing with the Kemeny ranking problem. (C) 2017 Elsevier B.V. All rights reserved. We consider a simple language for writing causal action theories, and postulate several properties for the state transition models of these theories. We then consider some possible embeddings of these causal action theories in some other action formalisms, and their implementations in logic programs with answer set semantics. In particular, we propose to consider what we call permissible translations from these causal action theories to logic programs. We identify two sets of properties, and prove that for each set, there is only one permissible translation, under strong equivalence, that can satisfy all properties in the set. We also show that these two sets of conditions are minimal in that removing any condition from each of them will result in multiple permissible mappings.
Furthermore, as it turns out, for one set, the unique permissible translation is essentially the same as Balduccini and Gelfond's translation from Gelfond and Lifschitz's action language B to logic programs. For the other, it is essentially the same as Lifschitz and Turner's translation from the action language C to logic programs. This work provides a new perspective on understanding, evaluating and comparing action languages by using sets of properties instead of examples. The results in this paper provide a characterization of two representative action languages B and C in terms of permissible mappings from our causal action theories to logic programs. It will be interesting to see if other action languages can be similarly characterized, and whether new action formalisms can be defined using different sets of properties. (C) 2017 Elsevier B.V. All rights reserved. Motivated by recent progress on pricing in the AI literature, we study marketplaces that contain multiple vendors offering identical or similar products and unit-demand buyers with different valuations on these vendors. The objective of each vendor is to set the price of its product to a fixed value so that its profit is maximized. The profit depends on the vendor's price itself and the total volume of buyers that find the particular price more attractive than the price of the vendor's competitors. We model the behavior of buyers and vendors as a two-stage full-information game and study a series of questions related to the existence, efficiency (price of anarchy) and computational complexity of equilibria in this game. To overcome situations where equilibria do not exist or exist but are highly inefficient, we consider the scenario where some of the vendors are subsidized in order to keep prices low and buyers highly satisfied. (C) 2017 Elsevier B.V. All rights reserved. A known limitation of many diagnosis algorithms is that the number of diagnoses they return can be very large. 
This is both time-consuming and not very helpful from the perspective of a human operator: presenting hundreds of diagnoses to a human operator (charged with repairing the system) is meaningless. In various settings, including decision support for a human operator and automated troubleshooting processes, it is sufficient to be able to answer a basic diagnostic question: is a given component faulty? We propose a way to aggregate an arbitrarily large set of diagnoses to return an estimate of the likelihood that a given component is faulty. The resulting mapping of components to their likelihood of being faulty is called the system's health state. We propose two metrics for evaluating the accuracy of a health state and show that an accurate health state can be found without finding all diagnoses. An empirical study explores the question of how many diagnoses are needed to obtain an accurate enough health state, and an online stopping criterion is proposed. (C) 2017 Elsevier B.V. All rights reserved. In physical reasoning, humans are often able to carry out useful reasoning based on radically incomplete information. One physical domain that is ubiquitous both in everyday interactions and in many kinds of scientific applications, where reasoning from incomplete information is very common, is the interaction of containers and their contents. We have developed a preliminary knowledge base for qualitative reasoning about containers, expressed in a sorted first-order language of time, geometry, objects, histories, and actions. We have demonstrated that the knowledge suffices to justify a number of commonsense physical inferences, based on very incomplete knowledge. (C) 2017 Elsevier B.V. All rights reserved. DLN is a recent nonmonotonic description logic, designed for satisfying independently proposed knowledge engineering requirements, and for removing some recurrent drawbacks of traditional nonmonotonic semantics.
In this paper we study the logical properties of DLN and illustrate some of the relationships between the KLM postulates and the characteristic features of DLN, including its novel way of dealing with unresolved conflicts between defeasible axioms. Moreover, we fix a problem affecting the original semantics of DLN and accordingly adapt the reduction from DLN inferences to classical inferences. Throughout the paper, we use various versions of the KLM postulates to deepen the comparison with related work, and illustrate the different tradeoffs between opposite requirements adopted by each approach. (C) 2017 Published by Elsevier B.V. The partner units problem is an acknowledged hard benchmark problem for the logic programming community with various industrial application fields like CCTV surveillance or railway safety systems. Whereas many complexity results exist for the optimization version of the problem, complexity for the decision variant, which from a practical point of view is more important, is largely unknown. In this article we show that the partner units decision problem is NP-complete in general and also for various subproblems of industrial importance. (C) 2017 Elsevier B.V. All rights reserved. Model checking is the best-known and most successful approach to formally verifying that systems satisfy specifications, expressed as temporal logic formulae. In this article, we develop the theory of equilibrium checking, a related but distinct problem. Equilibrium checking is relevant for multi-agent systems in which system components (agents) are assumed to be acting rationally in pursuit of delegated goals, and is concerned with understanding what temporal properties hold of such systems under the assumption that agents select strategies in equilibrium. The formal framework we use to study this problem assumes agents are modelled using REACTIVE MODULES, a system modelling language that is used in a range of practical model checking systems.
Each agent (or player) in a REACTIVE MODULES game is specified as a nondeterministic guarded command program, and each player's goal is specified with a temporal logic formula that the player desires to see satisfied. A strategy for a player in a REACTIVE MODULES game defines how that player selects enabled guarded commands for execution over successive rounds of the game. For this general setting, we investigate games in which players have goals specified in Linear Temporal Logic (in which case it is assumed that players choose deterministic strategies) and in Computation Tree Logic (in which case players select nondeterministic strategies). For each of these cases, after formally defining the game setting, we characterise the complexity of a range of problems relating to Nash equilibria (e.g., the computation or the verification of existence of a Nash equilibrium or checking whether a given temporal formula is satisfied on some Nash equilibrium). We then go on to show how the model we present can be used to encode, for example, games in which the choices available to players are specified using STRIPS planning operators. (C) 2017 Elsevier B.V. All rights reserved. We build on the awareness-motivation-capability (AMC) framework of competitive dynamics research to examine how a signal of a rival's innovation, in the form of research and development (R&D) intensity, may influence a focal firm's product actions. We argue that a rival's R&D intensity increases a focal firm's awareness of a competitive threat and thus its motivation to react by increasing its product actions. However, this competitive impact is conditional on the focal firm's size and performance relative to the rival, as well as the strategic homogeneity of the two. We use the AMC framework to analyze such moderating effects. (C) 2017 Elsevier Inc. All rights reserved. 
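The equilibrium checking abstract above turns on verifying Nash equilibria of games between rational agents. As a hedged, minimal illustration of the core equilibrium condition only (a toy finite two-player strategic game with an invented payoff matrix, not the REACTIVE MODULES setting or temporal-logic goals), the following sketch enumerates pure-strategy profiles and keeps those where neither player can gain by a unilateral deviation:

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Return pure-strategy Nash equilibria of a two-player game.

    payoffs[(i, j)] = (u1, u2): utilities when player 1 plays i
    and player 2 plays j.
    """
    rows = {i for i, _ in payoffs}   # player 1's strategies
    cols = {j for _, j in payoffs}   # player 2's strategies
    equilibria = []
    for i, j in product(rows, cols):
        u1, u2 = payoffs[(i, j)]
        # (i, j) is an equilibrium if no unilateral deviation helps either player.
        if all(payoffs[(k, j)][0] <= u1 for k in rows) and \
           all(payoffs[(i, k)][1] <= u2 for k in cols):
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma (invented payoffs): mutual defection is the unique
# pure-strategy equilibrium, although (C, C) would make both better off.
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(pure_nash_equilibria(pd))  # [('D', 'D')]
```

Equilibrium checking then asks which properties (temporal ones, in the work above) hold across the outcomes that survive this filter rather than across all possible runs.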
Factors promoting loyalty are of great interest to both academics and practitioners because consumer loyalty is a notable predictor of business success. This study identifies the congruency between consumer values and the goals of corporate social responsibility (CSR) activities and corporate ethical standards as the two main determinants of CSR quality and commitment. It further investigates how consumer perceptions of CSR shaped by these two factors increase loyalty. The results of structural equation modeling analysis (N = 931) reveal that higher ethical standards lead consumers to perceive that the company is committed to its CSR activities. The company's CSR commitment induces greater satisfaction with and trust in the company and its services, which then ultimately encourages consumers to remain loyal. (C) 2017 Elsevier Inc. All rights reserved. Positive buyer-supplier relationships rely on a set of underlying behavioral expectations held by individuals. These 'norms' regulate partner behaviors through a set of implicit (dis)incentives. Despite the importance of norms, few studies consider their role in relationship decline. Drawing on an in-depth ethnography, this study focuses on norms at the inter-personal level and at the inter-firm level to uncover how these subtle social rules affect relationship decline. The study identifies three key phases of relationship decline: unawareness, divergence and degeneration. The study also considers the role of individuals' bounded reliability and its contribution to norms violations. We identify two new elements (perceptual inconsistencies and divergent schema) that appear active early in relationship decline and that contribute to other elements of bounded reliability. The findings yield a theoretically grounded, empirically informed framework of relationship decline, with direct relevance to complex buyer-supplier relationships, particularly in capital- and technology-intensive industries. (C) 2017 Elsevier Inc.
All rights reserved. Downsizing is a common organizational practice, yet research on the outcomes of downsizing has produced mixed findings. To contribute to this debate, we use an organizational change perspective to investigate whether the large-scale changes inherent in downsizing set firms on a negative path that is difficult to overcome and ultimately increases the likelihood of bankruptcy. Additionally, we investigate what factors, if any, can mitigate this likelihood. To do so, we build on the resource-based view to suggest that valuable resources can reduce the likelihood that downsizing will lead to bankruptcy. We find support for our theorizing across a sample of publicly traded firms. Our findings suggest that downsizing firms are significantly more likely to declare bankruptcy than firms that do not engage in downsizing and that intangible resources help to mitigate this likelihood. We do not, however, find support for the role of physical and financial resources in preventing bankruptcy. (C) 2017 Elsevier Inc. All rights reserved. Interest rate and exchange rate are two important macroeconomic variables that exert considerable effects on the stock market. In this study, we investigate whether variations in interest and exchange rates induce herding behavior in the Chinese stock market. Empirical results indicate that interest rate increases and Chinese currency (CNY) depreciation induce herding, and this phenomenon is mainly manifested in down markets. Moreover, the herding level of the highest idiosyncratic volatility quintile portfolio is twice that of the lowest quintile portfolio, which we consider evidence of intentional herding. This result is consistent with those of previous studies, which report that retail investors prefer and overweight lottery-type stocks.
Finally, we investigate the effects of monetary policy announcements and extreme exchange rate volatility on herding because these events elicit considerable public attention and may trigger collective behavior in the aggregate market. (C) 2017 Elsevier Inc. All rights reserved. This research demonstrates that consumers react differently to donations emphasizing a company's effort invested in charitable actions, as opposed to those highlighting its ability to carry out those actions. Our results show that consumers rate the brands that adopt an effort-oriented donation strategy more favorably than those that use an ability-oriented strategy (study 1). Further, this effect is moderated by consumers' perceived psychological distance (made salient by construal level priming or donation proximity). The findings converge to show that congruency between donation framing and primed psychological distance leads to more favorable brand evaluations and greater purchase intentions. Findings of this research contribute to the corporate social responsibility literature and have important marketing research and managerial implications. (C) 2017 Published by Elsevier Inc. This paper analyses the relationship between the presence of financial experts on audit committees and the levels of insolvency risk in the banking sector. The main contribution is the introduction of banking sector regulation and ethical policies as moderators of this relationship. By using a sample of 159 banks from different countries for the period 2004-2010, empirical results suggest that the presence of financial experts on audit committees is useful to reduce insolvency risk, supporting the monitoring advantage hypothesis of financial expertise. This relationship is stronger when banking sector regulation is weaker and also in banks with stronger policies against unethical practices. 
These findings suggest that financial expertise substitutes for regulation and complements ethical policies in reducing insolvency risk. (C) 2017 Elsevier Inc. All rights reserved. Despite widespread theoretical and practical interest in advertising engagement, scholars and practitioners share little consensus as to what it is and how it can be measured. Guided by the theories of immersion and presence, this research investigates the experiential nature of advertising engagement in the television advertising context. Using survey data (N = 1,115 cases) on thirteen TV advertisements aired during two Super Bowl broadcasts, a definition of the construct is developed and a parsimonious, reliable and valid four-item scale for measuring experiential TV advertising engagement is produced. As conceptualized, TV advertising engagement is an experience independent of its antecedents and consequences, in which the viewer is psychologically immersed in and present with a TV advertisement. These conceptual dimensions are reflected in the four items of the produced scale. (C) 2017 Elsevier Inc. All rights reserved. The purpose of this paper is to develop and test a theoretical model which posits that the influence of personal values on sustainable consumption behaviour is moderated by the cultural and consumption context in which the relationship is studied. Data is collected using survey questionnaires, conducted both online and offline, with a diverse population; a total of 526 responses are used for assessing validity and reliability by applying PLS-based structural equation modelling. The paper identifies a fresh set of value dimensions that drive sustainable consumption practices. It is further seen that attitude is more likely to moderate the relationship for internally oriented values than externally oriented values. Thus, the paper significantly extends the previous research on the relationship between values and sustainable consumption behaviour.
The findings of this paper have significant implications for practitioners who wish to sell sustainable products in different cultural contexts. (C) 2017 Elsevier Inc. All rights reserved. The objective of this article is to examine a set of ways to influence consumer behavior toward making more environmentally friendly choices. We conducted three different studies to investigate (1) what consumers think would influence their behavior, (2) how several question-based verbal influence strategies nudge consumer behavior in one direction or another, and (3) how question-based written influence strategies influence consumer behavior. The findings reveal a discrepancy between what consumers think would influence behavior and what actually does influence it. In addition, under all verbal and written experimental conditions, influence strategies led to consumer change toward environmentally friendly offerings compared with alternative non-environmentally friendly offerings. The discussion highlights possible explanations for the results, managerial implications, the study's limitations, and suggestions for future research, with a special emphasis on research into factors that can change consumer behavior. (C) 2017 Elsevier Inc. All rights reserved. Although institutional environments are important determinants of transaction costs in IJV management and performance, prior studies have paid limited attention to their impacts on partner opportunism. Building on institutional theory, this study examines how the characteristics of the host country government affect IJV foreign partner opportunism. The authors posit that host government resource dependence and policy uncertainty increase foreign partner opportunism, and their impacts are constrained by formal (i.e., contract specificity) and informal (i.e., shared vision) governance mechanisms, respectively.
The empirical results from a primary survey of IJVs show that contract specificity is effective in curtailing the effect of resource dependence on foreign partner opportunism. In contrast, shared vision weakens the effect of policy uncertainty on foreign partner opportunism. These findings provide important research and managerial implications for how to manage foreign partner opportunism in IJVs. (C) 2017 Published by Elsevier Inc. This paper investigates the performance changes of independent hotels due to the presence of nearby branded hotels in Texas. The moderating effects of these performance spillovers are also examined. Evidence from empirical analysis shows the existence and moderate significance of spillover effects from branded to independent hotels. Further analyses indicate that younger and higher-class independent hotels benefit significantly from performance spillovers from branded hotels. Higher-class branded hotels generate the vast majority of spillovers for their independent peers in the vicinity. Moreover, between the two types of branded hotels, franchised hotels generate the vast majority of spillovers, whereas contributions from chain-operated hotels are negligible. Suggestions are provided to independent hotels on how to improve their performance through spillovers from branded hotels. (C) 2017 Elsevier Inc. All rights reserved. The present research describes the development of the multi-dimensional and context-sensitive Consumer Motivation Scale (CMS). Based on an integrative perspective on consumer motivation, studies in economics, marketing, and psychology are reviewed. Three overarching "master goals" are identified - gain, hedonic, and normative - which make up the foundation for the proposed scale.
Across three studies, and a variety of consumption contexts, a multi-dimensional goal structure is explored, confirmed, and validated - consisting of the three gain sub-goals Value for Money, Quality and Safety; the two hedonic sub-goals Stimulation and Comfort; as well as the two normative sub-goals Ethics and Social Acceptance. The resulting 34-item measure is integrative, multi-dimensional, applicable to a wide range of settings, and takes individual and situational variability into account; it should prove useful in standard marketing research and in the development of tailored marketing strategies and the segmentation of consumer groups, settings, or products. (C) 2017 Elsevier Inc. All rights reserved. Firms strive to develop innovation capabilities that help them achieve competitive advantage in the marketplace. This paper shows that managers can contribute to firms' innovation capabilities by involving themselves directly. Based on a unique multi-source (shareholder letters, COMPUSTAT, and World Bank Database) dataset covering 335 firms over nine years, empirical analysis reveals that top managers' innovativeness makes them more likely to adopt an exploration orientation over an exploitation orientation in innovation. This relative-exploration orientation is a key mediator that can transform top managers' innovativeness into better financial performance, and the effectiveness of this mediating role is contingent on a firm's resources and the industry environment. (C) 2017 Elsevier Inc. All rights reserved. Although social network analysis (SNA) offers an increasingly insightful perspective on the relational and structural properties of organizational activity, discourse on how to manage and coordinate its application is relatively scarce.
Aimed largely at applied network analysts, this paper presents a greater understanding of how SNA has been previously discussed in management studies, what the main points are and where these issues can be addressed prior to and during the research process to ensure network data are efficiently managed, analyzed and interpreted. Engaging with several practical concerns associated with SNA - including network boundary specification, data reliability, context of inquiry and network visualizations - a viable framework is developed that is accessible to managers, consultants or researchers in facilitating the structuring, collection, handling and analysis of network data. The discussion illustrates the relevance of this perspective for both a practitioner and a theoretical audience. (C) 2017 Elsevier Inc. All rights reserved. This paper analyzes the shareholder wealth effects of corporate prosecution settlements in the U.S. from 2001 to 2014. We focus on the relative monetary size of the settlement and on deferred prosecution and non-prosecution agreements in contrast to traditional plea agreements. The results show that the settlement of criminal prosecution leads to positive shareholder wealth effects, which may be due to the resolution of any remaining uncertainty with respect to the total settlement amount and lower-than-expected settlement costs. Stockholders generally view the announcement of plea agreements more positively than the announcement of deferred prosecution and non-prosecution agreements. The likelihood of a certain agreement type is strongly dependent on the crime committed. Moreover, larger firms with better board-related governance structures that have not been criminally prosecuted prior to the settlement are more likely to avoid a criminal conviction. (C) 2017 Elsevier Inc. All rights reserved. This study explores the origins and benefits of value quantification capabilities in industrial markets.
After polling 131 US industrial sales and account managers, this study finds, first, that value quantification capabilities improve firm performance but not individual sales manager performance. Second, in stable markets, the effect of value quantification capabilities on firm performance is stronger than in dynamic markets. Third, the study finds that the following psychological traits are positively related to the individual value quantification capability: risk taking and creativity, sales manager questioning style, customer-oriented selling, and cross-functional collaboration. This study suggests that value quantification capabilities benefit firm performance especially in stable markets; it explores attitudinal and behavioural traits underlying value quantification capabilities, and it highlights the need for further studies exploring the circumstances under which value quantification capabilities improve individual sales manager performance. (C) 2017 Elsevier Inc. All rights reserved. The choice and implementation of pricing strategy is often described as an optimization problem where the firm chooses the most profitable pricing strategy given certain external determinants. Contrary to this notion, recent research indicates that the pricing of products is a costly and complex activity, and that firms may differ in their capability to implement pricing strategies. This case study of industrial pricing strategy in the European packaging industry examines how different assets and routines are involved in the implementation of pricing strategy. The study particularly highlights the role of individual judgment, human capital and commercial experience for the implementation of pricing strategy in markets that, because of customization, are subject to high levels of uncertainty. (C) 2017 Elsevier Inc. All rights reserved. Consumer price promotions account for more than half of many manufacturers' marketing budgets, and require a significant time investment to manage.
Amidst the considerable research on price promotions, little academic attention has been paid to how manufacturers and retailers make price-promotion decisions. Based on in-depth interviews with a broad range of managers, this study investigates factors that influence price-promotion decisions in durable and consumer goods industries. Findings suggest that (1) intuition and untested assumptions are the main inputs into these decisions; (2) practitioners lack solid empirical evidence to guide their actions, and their beliefs are often in stark contrast with academic knowledge about the effectiveness of price promotions; and (3) price promotions are typically not evaluated against the objectives according to which they were justified, impeding appropriate feedback for future decisions. Research priorities are outlined to advance evidence-based decision-making in this area. (C) 2017 Elsevier Inc. All rights reserved. The Steadily Increasing Discount pricing strategy pits product scarcity against a future discount and forces consumers to make a choice between cost savings and the potential risk of missing the purchase opportunity. Dual non-student samples provide insight into the regret associated with this decision. The first study finds that product scarcity increases both action regret (purchase) and inaction regret (non-purchase), while the level of discount only influences inaction regret. In study two, while the individual characteristics of materialism and price consciousness both impact the decision to buy, only materialism influences purchase decision regret. Theoretically, the results reverse the omission bias, demonstrating that regret from inaction is more salient than regret from action in this purchase situation. The studies underscore the high-risk, high-reward nature of multi-period pricing for managers. While firms control product availability and discount levels, they cannot control their customers' personality traits.
Therefore, they should make every effort to understand their customers before embarking on such a strategy. (C) 2017 Published by Elsevier Inc. This research examines whether spatial differences in presentation of comparative price promotions (vertical vs. horizontal) affect consumers' assessment of price discounts. Results show that when comparative price promotions are presented horizontally, consumers take longer to compute the monetary discount and are less accurate than when such prices are presented vertically. This suggests that cognitive constraints exhibit a larger detrimental effect on performing computations when prices are presented horizontally than vertically. In addition, a constraint on visual resources impacts vertical presentations more while a constraint on verbal resources influences price computations that are presented horizontally. (C) 2017 Elsevier Inc. All rights reserved. This article introduces multi-product price response maps for various value pricing applications in competitive situations. The maps are based on the direct elicitation of individual willingness to pay (WTP) as a range for competing products; they reveal an individual's or market's choice probability for a focal product, at its own and competing products' prices. Transforming the price response into profit, revenue, or unit sold maps supports optimal pricing decisions. The maps are also useful for optimizing profit differences from the closest competitor and for portfolio pricing. Managers can use a consumer indecisiveness map, gained from the WTP range data, to devise complementary marketing measures at prices where consumer uncertainty is high. The illustration of this approach uses two empirical examples, featuring two or more competing consumer goods, and demonstrates the predictive and external validity of these proposed maps. (C) 2017 Elsevier Inc. All rights reserved. 
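The multi-product price response maps above are built from individually elicited willingness to pay (WTP). As a minimal sketch of that logic (using point WTP values rather than ranges, invented respondent data, and a surplus-maximizing choice rule as simplifying assumptions), the share of respondents choosing a focal product at a candidate price can be computed as:

```python
def choice_probability(focal_price, competitor_prices, wtp):
    """Share of respondents choosing the focal product at focal_price.

    wtp: list of dicts, one per respondent, mapping product name -> WTP.
    Each respondent picks the product with the highest nonnegative surplus
    (WTP minus price), or buys nothing if every surplus is negative.
    """
    prices = {"focal": focal_price, **competitor_prices}
    chosen = 0
    for respondent in wtp:
        surpluses = {p: respondent[p] - prices[p] for p in prices}
        best = max(surpluses, key=surpluses.get)
        if surpluses[best] >= 0 and best == "focal":
            chosen += 1
    return chosen / len(wtp)

# Hypothetical panel: three respondents, one competing product.
panel = [{"focal": 10, "rival": 8},
         {"focal": 6,  "rival": 9},
         {"focal": 12, "rival": 12}]
print(choice_probability(9, {"rival": 8}, panel))  # 1/3 of respondents
```

Evaluating this function over a grid of focal and competitor prices yields a response map; multiplying each cell by margin or price gives the profit or revenue maps the article describes.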
Value-based pricing has the potential to improve differentiation, profitability, and value creation for industrial firms and their customers. However, while most of the pricing research considers the ways organizations set or get value-based prices, only a few studies consider how individual managers influence the pricing process and what prevents them from setting and getting value-based prices. This is of critical concern, since it is not just organizations, but individuals within organizations who make pricing decisions, and their decision-making is influenced by institutional pressures such as socially prescribed norms, rationalized meanings, and beliefs about profitable approaches to pricing. This study addresses this gap in the current knowledge by adopting a micro-foundations perspective on pricing, and focusing on the barriers that individual managers encounter when implementing value-based pricing. Drawing on a single case study in a global industrial firm, and from interviews with 24 managers, this study identifies 11 individually, organizationally, and externally induced barriers to value-based pricing. The study also sheds light on the potential sensegiving strategies for overcoming these barriers. (C) 2017 Elsevier Inc. All rights reserved. Circular RNAs (circRNAs) are a novel type of endogenous noncoding RNA gaining research interest in recent years. Despite this increase in interest, the mechanism of circRNAs in the pathogenesis of multiple cardiovascular diseases, particularly myocardial fibrosis, is rarely reported. In the following study, the expression profiles and potential mechanisms of circRNAs in mouse myocardial fibrosis models in vitro are investigated. Previous research examining circRNA expression profiles of diabetic db/db mouse myocardium using circRNA microarray found that 43 circRNAs were abnormally expressed, including 24 up-regulated circRNAs and 19 down-regulated circRNAs.
Furthermore, circRNA_010567 was markedly up-regulated in diabetic mouse myocardium and cardiac fibroblasts (CFs) treated with Ang II. Bioinformatics analysis predicted that circRNA_010567 sponges miR-141 and that miR-141 directly targets TGF-beta 1, which was validated by dual-luciferase assay. Subsequently, functional experiments revealed that circRNA_010567 silencing could up-regulate miR-141 and down-regulate TGF-beta 1 expression, and suppress fibrosis-associated protein secretion in CFs, including Col I, Col III and alpha-SMA. Results demonstrate that the circRNA_010567/miR-141/TGF-beta 1 axis plays an important regulatory role in the diabetic mouse myocardial fibrosis model. The present study characterizes a new function of circRNA in the pathogenesis of myocardial fibrosis in a diabetic mouse model, providing novel insight for circRNA-miRNA-mRNA in cardiovascular disease. (C) 2017 Elsevier Inc. All rights reserved. Tissue kallikrein and kallikrein-related peptidases (KLKs) form the largest group of serine proteases in the human genome, sharing many structural and functional characteristics. Multiple alternative transcripts have been reported for most human KLK genes, while many of them are aberrantly expressed in various malignancies, thus possessing significant prognostic and/or diagnostic value. Alternative splicing of cancer-related genes is a common cellular mechanism accounting for cancer cell transcriptome complexity, as it affects cell cycle control, proliferation, apoptosis, invasion, and metastasis. In this study, we describe the identification and molecular cloning of eight novel transcripts of the human KLK10 gene using 3' rapid amplification of cDNA ends (3' RACE) and next-generation sequencing (NGS), as well as their expression analysis in a wide panel of cell lines, originating from several distinct cancerous and normal tissues.
Bioinformatic analysis revealed that the novel KLK10 transcripts contain new alternative splicing events between already annotated exons as well as novel exons. In addition, investigation of their expression profile in a wide panel of cell lines was performed with nested RT-PCR using variant-specific pairs of primers. Since many KLK mRNA transcripts possess clinical value, these newly discovered alternatively spliced KLK10 transcripts appear to be new potential biomarkers for diagnostic and/or prognostic purposes or targets for therapeutic strategies. (C) 2017 Published by Elsevier Inc. Hepatocellular carcinoma (HCC) represents the third leading cause of cancer-related deaths globally. Although 5-Fluorouracil (5-FU) is used as the first-choice treatment for advanced HCC, it exerts poor efficacy and is associated with acquired and intrinsic resistance. Sphingosine kinases (Sphk) 1 and 2 play tumour-promoting roles in different cancer types including HCC and thus represent promising pharmacological targets. In the present study, we have investigated for the first time the anticancer efficacy and underlying molecular mechanisms of combined administration of 5-FU and the dual Sphk1/Sphk2 inhibitor SKI-II (4-[[4-(4-chlorophenyl)-1,3-thiazol-2-yl]amino]phenol) in HepG2 hepatocellular carcinoma cells. Here, we report that co-administration of 5-FU and SKI-II at low sub-toxic concentrations of 20 mu M and 5 mu M, respectively, synergistically inhibits cell proliferation, markedly reduces cell migration and clonogenic survival, and increases apoptosis induction in HepG2 cells. Additional Western blot analyses have shown that possible mechanisms underlying enhanced sensitivity to 5-FU induced by dual Sphk 1/2 inhibition could include abrogation of FAK-regulated IGF-1R activity and down-regulation of osteopontin expression, culminating in the inhibition of NF-kappa B activity and its downstream signalling mediated by sirtuin 1 and p38 MAPK.
Our results clearly show that pharmacological blockade of both Sphk isoforms represents a promising strategy to boost the anti-tumour efficacy of 5-FU and provide a rationale for further in vivo studies into the possible use of the SKI-II inhibitor as an adjunct to 5-FU treatment in HCC. (C) 2017 Elsevier Inc. All rights reserved. Ovarian endometrial cysts give rise to some types of ovarian cancer, and iron is considered one factor in carcinogenesis. In contrast, hypoxia is associated with progression, angiogenesis, metastasis, and resistance to therapy in cancer. We investigated hypoxia-induced perturbation of iron homeostasis in terms of labile iron, iron deposition, and iron regulatory proteins (IRPs) in ovarian endometrial cysts. Iron deposition, expression of IRPs, and a protein marker of hypoxia in human ovarian endometrial cysts were analyzed histologically. The concentration of free iron and the pO(2) level of the cyst fluid of human ovarian cysts (n = 9) were measured. The expression of IRP2 under hypoxia was investigated in vitro by using Ishikawa cells as a model of endometrial cells. Iron deposition and the expression of IRP2 and carbonic anhydrase 9 (CA9) were strong in endometrial stromal cells in the human ovarian endometrial cysts. The average concentration of free iron in the cyst fluid was 8.1 +/- 2.9 mg/L, and the pO(2) was 22.4 +/- 5.2 mmHg. A cell-based study using Ishikawa cells revealed that IRP2 expression was decreased by an overload of Fe(II) under normoxia but remained unchanged under hypoxia even in the presence of excess Fe(II). An increase in the expression of IRP2 causes upregulation of intracellular iron as a response to iron deficiency, whereas the protein is degraded under iron-rich conditions. We found that iron-rich regions existed in ovarian endometrial cysts concomitantly with a high level of expression of IRP2, a protein that should generally be decomposed upon an overload of iron.
We revealed that an insufficient level of oxygen in the cysts is the main factor in the unusual stabilization of IRP2 against iron-mediated degradation, which provides aberrant uptake of iron in ovarian endometrial stromal cells and can potentially lead to carcinogenesis. (C) 2017 Elsevier Inc. All rights reserved. Brown adipose tissue (BAT) is critical for mammals' survival in cold environments. Uncoupling protein 1 (UCP1) is responsible for the non-shivering thermogenesis in BAT. The pig is economically important as meat-producing livestock. However, whether BAT, or more precisely the UCP1 protein, exists in the pig remains a controversy. The objective of this study was to ascertain whether the pig has UCP1 protein. In this study, we used the rapid amplification of cDNA ends (RACE) technique to obtain the UCP1 mRNA 3' end sequence, and confirmed that only exons 1 and 2 of the UCP1 gene are transcribed in the pig. We then cloned pig UCP1 gene exons 1 and 2 and expressed the UCP1 protein from the truncated pig gene using E. coli BL21. We used the expressed pig UCP1 protein as antigen for antibody production in a rabbit. We could not detect any UCP1 protein expression in different pig adipose tissues with the specific pig UCP1 antibody, while the antibody could detect the cloned pig UCP1 as well as the mouse adipose UCP1 protein. This result shows that exons 1 and 2 of the pig UCP1 gene are transcribed but not translated in pig adipose tissue. Furthermore, we detected no uncoupled respiration in isolated pig adipocytes. Thus, these results unequivocally demonstrate that the pig has no UCP1 protein. Our results resolve the controversy of whether pigs have brown adipose tissue. (C) 2017 Elsevier Inc. All rights reserved. T-type calcium channels are prominently expressed in primary nociceptive fibers and well characterized in pain processes.
Although itch and pain share many similarities, including primary sensory fibers, the function of T-type calcium channels in acute itch has not been explored. We investigated whether T-type calcium channels expressed within primary sensory fibers of mouse skin, especially the Ca(v)3.2 subtype, are involved in chloroquine-, endothelin-1- and histamine-evoked acute itch using pharmacological, neuronal imaging and behavioral analyses. We found that locally pre-blocking all three subtypes of T-type calcium channels in the peripheral afferents of the skin inhibited acute itch and pain behaviors, while selectively blocking the Ca(v)3.2 channel in the skin peripheral afferents only inhibited acute pain but not acute itch. These results suggest that the T-type Ca(v)3.1 or Ca(v)3.3 channels, but not the Ca(v)3.2 channel, have an important role in acute itch processing, and their distinctive roles in modulating acute itch are worthy of further investigation. (C) 2017 Elsevier Inc. All rights reserved. Excessive ultraviolet (UV) radiation induces injuries to retinal pigment epithelium (RPE) cells (RPEs) and retinal ganglion cells (RGCs), causing retinal degeneration. Cyclophilin D (Cyp-D)-dependent mitochondrial permeability transition pore (mPTP) opening mediates UV-induced cell death. In this study, we show that a novel Cyp-D inhibitor, compound 19, efficiently protected RPEs and RGCs from UV radiation. Compound 19-mediated cytoprotection requires Cyp-D, as it failed to further protect RPEs/RGCs from UV when Cyp-D was silenced by targeted shRNAs. Compound 19 almost completely blocked UV-induced p53-Cyp-D mitochondrial association, mPTP opening and subsequent cytochrome C release. Further studies showed that compound 19 inhibited UV-induced reactive oxygen species (ROS) production, lipid peroxidation and DNA damage. Together, compound 19 protects RPEs and RGCs from UV radiation, possibly via silencing the Cyp-D-regulated intrinsic mitochondrial death pathway.
Compound 19 could be a lead compound for treating UV-associated retinal degeneration diseases. (C) 2017 Elsevier Inc. All rights reserved. Deconjugation of ubiquitin- and/or ubiquitin-like-modified substrates is essential to maintain a sufficient pool of free ubiquitin within the cell. Deubiquitinases (DUBs) play a key role in this process. DUBs also play several important regulatory roles in cellular processes. However, our knowledge of their developmental roles is limited. The report here aims to study their potential roles in craniofacial development. Based on a previous genome-wide study in 2009, we selected 36 DUBs for morpholino (MO) knockdown in this study, followed by Alcian blue cartilage staining of 5 days post-fertilization (dpf) larvae to investigate facial development. The results classified the tested DUBs into three groups, in which 28% showed an unchanged phenotype (Class 1); 22% showed mild changes on the branchial arches (Class 2A); 31% had malformation of the branchial arches and ethmoid plate (Class 2B); and 19% had severe changes in most of the facial structures (Class 3). Lastly, we used the uchl3 morphant as an example to show that our screening data could be useful for further functional studies. To summarize, we identified new craniofacial developmental roles of 26 DUBs in the zebrafish. (C) 2017 Elsevier Inc. All rights reserved. Brusatol, isolated from brucea, has been shown to exhibit anticancer effects on various kinds of human malignancies. However, the role that brusatol plays in pancreatic cancer is little known. Brusatol was found to inhibit growth and induce apoptosis in both PATU-8988 and PANC-1 cells by decreasing the expression level of Bcl-2 and increasing the expression levels of Bax and cleaved Caspase-3. We then found that activation of JNK and p38 MAPK and inactivation of NF-kappa B and Stat3 are related to the potential pro-apoptotic signaling pathways.
Notably, SP600125 not only abrogated the JNK activation caused by brusatol, but also reversed the p38 activation and the decrease of Bcl-2, as SB203580 did. Besides, SP600125 and SB203580 also reversed the inactivation of NF-kappa B and Stat3. Furthermore, BAY 11-7082 and S3I-201 indeed had similar effects as brusatol on the expression of Phospho-Stat3 and Bcl-2. In summary, we conclude that brusatol inhibits growth and induces apoptosis in pancreatic cancer, and we infer that brusatol exerts its anticancer effect via the JNK/p38 MAPK/NF-kappa B/Stat3/Bcl-2 signaling pathway. (C) 2017 Published by Elsevier Inc. The current study aimed to understand the role of a novel, highly selective, orally active, non-peptide Angiotensin II type 2 receptor (AT2R) agonist, Compound 21 (C21), and its potential additive effect with Telmisartan on apoptosis and underlying post-translational modifications in a non-genetic murine model of type 2 diabetic nephropathy (T2DN). An experimental model of T2DN was developed by administering low-dose Streptozotocin to high-fat-diet-fed male Wistar rats, followed by their treatment with Telmisartan, C21 or their combination. Our results demonstrated that the C21 and Telmisartan combination attenuated metabolic and renal dysfunction, renal morphological and micro-architectural aberrations and hemodynamic disturbances in type 2 diabetic rats. The anti-apoptotic and anti-inflammatory effects of Telmisartan were significantly accentuated by C21, as indicated by the expression of apoptotic markers (Parp1, Caspase 8, Caspase 7, cleaved PARP and cleaved Caspase 3) and NF-kappa B mediated inflammatory molecules like interleukin 6, tumour necrosis factor alpha, monocyte chemoattractant protein 1 and vascular cell adhesion molecule 1.
C21 was found to improve Telmisartan-mediated reversal of histone H3 acetylation at lysines 14 and 27 and the expression of the histone acetyltransferase p300/CBP-associated factor (PCAF), which is also known to regulate NF-kappa B activity and the DNA damage response. C21 in combination with Telmisartan markedly mitigates caspase-mediated apoptosis and NF-kappa B signalling in the T2D kidney, which could be partially attributed to its influence on PCAF-mediated histone H3 acetylation. Hence, further research should be done to develop this combination to treat T2DN. (C) 2017 Elsevier Inc. All rights reserved. Thoracic ossification of the ligamentum flavum (TOLF) is a unique disease with ectopic ossification, and is a major cause of thoracic spinal stenosis and myelopathy. However, the underlying etiology remains largely unknown. In this study, the ligamentum flavum was systematically analyzed in TOLF patients by using comprehensive iTRAQ-labeled quantitative proteomics. Among 1285 detected proteins, 282 proteins were identified as differentially expressed. The Gene Ontology (GO) functional annotation of the proteins covered three aspects: biological process, molecular function, and cellular component. The function clustering analysis revealed that ten of the above proteins are related to inflammation, such as tumor necrosis factor (TNF). This finding was subsequently validated by ELISA, which indicated that serum TNF-alpha of TOLF patients was significantly higher compared with the control group. To address the effect of TNF-alpha on ossification-related gene expression, we purified and cultured primary cells from the thoracic ligamentum flavum of patients with TOLF. TNF-alpha was then used to stimulate the cells. RNA was isolated and analyzed by RT-PCR.
Our results showed that TNF-alpha was able to induce the expression of the osteoblast-specific transcription factor Osterix (Osx) in ligamentum flavum cells, suggesting that it can promote osteoblast differentiation. In addition, the Osx downstream osteoblast genes OCN and ALP were also activated by TNF-alpha. This is the first proteomic study to identify inflammation factors such as TNF-alpha involved in the ossified ligamentum flavum in TOLF, which may contribute to a better understanding of the cause of TOLF. (C) 2017 Elsevier Inc. All rights reserved. Tyrosinase-catalyzed L-tyrosine oxidation is a key step in melanogenesis, and intense melanin formation is often a problem in chemotherapies or food preservation. Here we report that methyl cinnamate, one of the constituents characterized from the mycelium and sporocarp of the American matsutake mushroom Tricholoma magnivelare, inhibits both enzymatic and cellular melanin formation. Methyl cinnamate inhibited mushroom tyrosinase-catalyzed L-tyrosine oxidation, while the oxidation of L-3,4-dihydroxyphenylalanine (L-DOPA) was not inhibited. In subsequent cellular assays, methyl cinnamate significantly suppressed melanogenesis of murine B16-F10 melanoma cells without affecting cell growth. However, methyl 3-phenylpropionate, a dihydro-derivative of methyl cinnamate, did not inhibit melanogenesis, indicating that the double bond in the enone moiety is a key Michael reaction acceptor for eliciting the activity. In addition, a rather rare chlorinated benzaldehyde derivative, 3,5-dichloro-4-methoxybenzaldehyde, isolated from the same source, was found to show potent cytotoxicity; the chlorine atom reduced tyrosinase inhibitory activity but enhanced cytotoxicity. Our findings suggest that methyl cinnamate is a novel melanogenesis inhibitor from natural sources. (C) 2017 Elsevier Inc. All rights reserved. Omega-3 (omega-3) polyunsaturated fatty acids (PUFAs) are known to have strong anti-inflammatory effects.
In the present study, we investigated the protective effects of omega-3 PUFAs on experimentally induced murine colitis. Intrarectal administration of 2.5% 2,4,6-trinitrobenzene sulfonic acid (TNBS) caused inflammation in the colon of wild type mice, but this was less severe in fat-1 transgenic mice that constitutively produce omega-3 PUFAs from omega-6 PUFAs. The intraperitoneal administration of docosahexaenoic acid (DHA), a representative omega-3 PUFA, was also protective against TNBS-induced murine colitis. In addition, endogenously formed and exogenously introduced omega-3 PUFAs attenuated the production of malondialdehyde and 4-hydroxynonenal in the colon of TNBS-treated mice. The effective protection against inflammatory and oxidative colonic tissue damage in fat-1 and DHA-treated mice was associated with suppression of NF-kappa B activation and cyclooxygenase-2 expression and with elevated activation of Nrf2 and upregulation of its target gene, heme oxygenase-1. Taken together, these results provide a mechanistic basis for the protective action of omega-3 PUFAs against experimental colitis. (C) 2017 Published by Elsevier Inc. Although mast cells are traditionally thought to function as effector cells in allergic responses, they have increasingly been recognized as important regulators of various immune responses. Mast cells mature locally; thus, tissue-specific influences are important for promoting mast cell accumulation and survival in the skin and the gastrointestinal tract. In this study, we determined the effects of keratinocytes on mast cell accumulation during Th17-mediated skin inflammation. We observed increases in dermal mast cells in imiquimod-induced psoriatic dermatitis in mice, accompanied by the expression of epidermal stem cell factor (SCF), a critical mast cell growth factor. Similar to mouse epidermal keratinocytes, SCF was highly expressed in the human HaCaT keratinocyte cell line following stimulation with IL-17.
Further, keratinocytes promoted mast cell proliferation following stimulation with IL-17 in vitro. However, the effects of keratinocytes on mast cells were significantly diminished in the presence of anti-CD117 (stem cell factor receptor) blocking antibodies. Taken together, our results revealed that the Th17-mediated inflammatory environment promotes mast cell accumulation through keratinocyte-derived SCF. (C) 2017 Elsevier Inc. All rights reserved. Cell scattering of epithelial carcinoma cells is one of the critical events in tumorigenesis. Cells losing epithelial cohesion detach from aggregated epithelial cell masses and may migrate to vital organs through metastasis. The present study investigated the molecular mechanism by which squamous cell carcinoma cells grow scattered at the early phase of transformation while maintaining the epithelial phenotype. We studied YD-10B cells, which are established from a human oral squamous cell carcinoma, because the cells grow scattered without the development of E-cadherin junctions (ECJs) under routine culture conditions despite the high expression of functional E-cadherin. The functionality of their E-cadherin was demonstrated in that YD-10B cells developed ECJs, transiently or persistently, when they were cultured on substrates coated with a low amount of fibronectin or to confluence. The phosphorylation of JNK was up-regulated in YD-10B cells compared with that in human normal oral keratinocyte cells or human squamous cell carcinoma cells that grew aggregated along with well-organized ECJs. The suppression of JNK activity induced the aggregated growth of YD-10B cells concomitant with the development of ECJs. These results indicate for the first time that inherently up-regulated JNK activity induces the scattered growth of oral squamous cell carcinoma cells through down-regulating the development of ECJs despite the expression of functional E-cadherin, a hallmark of the epithelial phenotype.
(C) 2017 Elsevier Inc. All rights reserved. The miR-17-92 cluster is overexpressed in hematological malignancies including chronic myeloid leukemia (CML). However, its roles and mechanisms in regulating BCR-ABL induced leukemogenesis remain unclear. In this study, we demonstrated that genomic depletion of miR-17-92 inhibited BCR-ABL induced leukemogenesis by using a mouse model of transplantation of BCR-ABL transduced hematopoietic stem cells. Furthermore, we identified that miR-19b targets A20 (TNFAIP3). A20 overexpression results in inactivation of NF-kappa B activity, including decreased phosphorylation of P65 and I kappa B alpha, which induces apoptosis and inhibits proliferation and cell cycle progression in CML CD34(+) cells. Thus we proved that miR-17-92 is a critical contributor to CML leukemogenesis via targeting A20 and activating NF-kappa B signaling. These findings indicate that miR-17-92 will be an important resource for developing novel treatment strategies for CML and better understanding long-term disease control. (C) 2017 Elsevier Inc. All rights reserved. The mammalian alpha/beta hydrolase domain (ABHD) family of proteins has emerged as a key regulator of lipid metabolism, and its members are found to be associated with human diseases. Human alpha/beta-hydrolase domain containing protein 11 (ABHD11) has recently been predicted as a potential biomarker for human lung adenocarcinoma. In silico analyses of the ABHD11 protein sequence revealed the presence of a conserved lipase motif GXSXG. However, the role of ABHD11 in lipid metabolism is not known. To understand the biological function of ABHD11, we heterologously expressed human ABHD11 in the budding yeast Saccharomyces cerevisiae. In vivo [C-14]acetate labeling of cellular lipids in yeast cells overexpressing ABHD11 showed a decrease in triacylglycerol content. Overexpression of ABHD11 also altered the molecular species of triacylglycerol in yeast. Similar activity was observed for its yeast homolog, Ygr031w.
The role of the conserved lipase motif in the hydrolase activity was proven by mutation of the conserved amino acid residues of the GXSXG motif. Collectively, our results demonstrate that human ABHD11 and its yeast homolog YGR031W have a pivotal role in lipid metabolism. (C) 2017 Elsevier Inc. All rights reserved. Flooding is a principal stress that limits plant productivity. The sensing of low oxygen levels (hypoxia) plays a critical role in the signaling pathway that functions in plants in flooded environments. In this study, to investigate hypoxia response mechanisms in Arabidopsis, we identified three hypoxia-related genes and subjected one of these genes, Arabidopsis thaliana HYPOXIA-INDUCED GENE DOMAIN 1 (AtHIGD1), to molecular characterization including gene expression analysis and intracellular localization of the encoded protein. AtHIGD1 was expressed in various organs but was preferentially expressed in developing siliques. Confocal microscopy of transgenic plants harboring eGFP-tagged AtHIGD1 indicated that AtHIGD1 is localized to mitochondria. Importantly, plants overexpressing AtHIGD1 exhibited increased resistance to hypoxia compared to wild type. Our results represent the first report of a biological function for an HIGD protein in plants and indicate that AtHIGD1 is a mitochondrial protein that plays an active role in mitigating the effects of hypoxia on plants. (C) 2017 Elsevier Inc. All rights reserved. Dysfunctional coagulation aggravates clinical outcome in patients with sepsis. The aim of this study was to define the role of Rac1 in the formation of platelet-derived microparticles (PMPs) and thrombin generation (TG) in abdominal sepsis. Male C57BL/6 mice underwent cecal ligation and puncture (CLP). Scanning electron microscopy and flow cytometry were used to quantify PMPs. TG was determined by use of a fluorimetric assay.
It was found that CLP increased Rac1 activity in platelets, which was abolished by administration of the Rac1 inhibitor NSC23766. Sepsis-induced TG in vivo was reflected by a reduced capacity of plasma from septic animals to generate thrombin ex vivo. Administration of NSC23766 increased peak and total TG in plasma from CLP mice, indicating that Rac1 regulates sepsis-induced formation of thrombin. The number of circulating PMPs was markedly elevated in animals with abdominal sepsis. Treatment with NSC23766 significantly decreased formation of PMPs in septic mice. Platelet activation in vitro caused release of numerous MPs. Notably, NSC23766 abolished PMP formation in activated platelets in vitro. These findings suggest that Rac1 regulates PMP formation and TG in sepsis and that inhibition of Rac1 activity could be a useful target to inhibit dysfunctional coagulation in abdominal sepsis. (C) 2017 Elsevier Inc. All rights reserved. Amino acid biosynthesis has emerged as a source of new drug targets, as many bacterial strains auxotrophic for amino acids fail to proliferate under in vivo conditions. Branched-chain amino acids (BCAAs) are important for Mycobacterium tuberculosis (Mtb) survival, and strains deficient in their biosynthesis were attenuated for growth in mice. Threonine dehydratase (IlvA) is a pyridoxal-5-phosphate (PLP) dependent enzyme that catalyzes the first step in isoleucine biosynthesis. The MRA_1571 gene of Mycobacterium tuberculosis H37Ra (Mtb-Ra), annotated as coding for IlvA, was cloned and expressed, and the protein was purified. The purified protein was subsequently used to develop an enzyme assay and to study its biochemical properties. Also, an E. coli BL21 (DE3) IlvA knockout (E. coli-Delta ilvA) was developed and genetically complemented with an Mtb-Ra ilvA expression construct (pET32a-ilvA) to make the complemented E. coli strain (E. coli-Delta ilvA + pET32a-ilvA). The E. coli-Delta ilvA strain showed growth failure in minimal medium, but growth restoration was observed in E.
coli-Delta ilvA + pET32a-ilvA. E. coli-Delta ilvA growth was also restored in the presence of isoleucine. IlvA localization studies detected its distribution in the cell wall and membrane fractions, with a relatively minor presence in the cytosolic fraction. Maximum IlvA expression was observed at 72 h in wild type (WT) Mtb-Ra infecting macrophages. Also, the Mtb-Ra IlvA knockdown (KD) strain showed reduced survival in macrophages compared to the WT and complemented (KDC) strains. (C) 2017 Elsevier Inc. All rights reserved. Axonal branching is a fundamental requirement for sending electrical signals to multiple targets. However, despite the importance of axonal branching in neural development and function, the molecular mechanisms that control branch formation are poorly understood. Previous studies have hardly addressed the intracellular signaling cascade of axonal bifurcation characterized by growth cone splitting. Recently we reported that DISCO interacting protein 2 (DIP2) regulates bifurcation of mushroom body axons in Drosophila melanogaster. The DIP2 mutant displays ectopic bifurcations in alpha/beta neurons. Taking advantage of this phenomenon, we tried to identify genes involved in branch formation by comparing the transcriptome of wild type with that of DIP2 RNAi flies. After the microarray analysis, Glaikit (Gkt), a member of the phospholipase D superfamily, was identified as a downstream target of DIP2 by RNAi against gkt and qRT-PCR experiments. Single cell MARCM analysis of the gkt mutant phenocopied the ectopic axonal branches observed in the DIP2 mutant. Furthermore, a genetic analysis between gkt and DIP2 revealed that gkt potentially acts in parallel with DIP2. In conclusion, we identified a novel gene underlying the axonal bifurcation process. (C) 2017 The Authors. Published by Elsevier Inc. Recent studies have shown that dopamine plays an important role in several types of cancer by inhibiting cell growth and invasion via dopamine receptors (DRs), such as dopamine receptor D2.
However, the roles of DR agonists in cancer cell growth and invasion remain unclear. In our study, we found that apomorphine (APO), one of the most commonly prescribed DR agonists, inhibited TNF-alpha-induced matrix metalloprotease-9 (MMP-9) expression and cell invasion in MCF-7 human breast carcinoma cells through DR-independent pathways. Further mechanistic studies demonstrated that APO suppresses TNF-alpha-induced transcription of MMP-9 by inhibiting activator protein-1 (AP-1), a well-described transcription factor. This is achieved via extracellular signal-regulated kinases 1 and 2 (ERK1/2). Our study has demonstrated that APO targets human MMP-9 in a DR-independent fashion in MCF-7 cells, suggesting that APO is a potential anticancer agent that can suppress the metastatic progression of cancer cells. (C) 2017 Elsevier Inc. All rights reserved. The incidence of diseases associated with a high-sugar diet has increased in the past years, and numerous studies have focused on the effect of high sugar intake on obesity and metabolic syndrome. However, how a high-sugar diet influences gut homeostasis is still poorly understood. In this study, we used Drosophila melanogaster as a model organism and supplemented a culture medium with 1 M sucrose to create a high-sugar condition. Our results indicate that a high-sugar diet promoted differentiation of intestinal stem cells through upregulation of the JNK pathway and downregulation of the JAK/STAT pathway. Moreover, the number of commensal bacteria decreased in the high-sugar group. Our data suggest that a high-caloric diet disrupts gut homeostasis and highlight Drosophila as an ideal model system to explore gastrointestinal disease. (C) 2017 Elsevier Inc. All rights reserved. alpha B-crystallin (alpha BC) is a small heat shock protein. Mutations in the alpha BC gene are linked to alpha-crystallinopathy, a hereditary myopathy histologically characterized by intracellular accumulation of protein aggregates.
The disease-causing R120G alpha BC mutant, harboring an arginine-to-glycine replacement at position 120, is an aggregate-prone protein. We previously showed that aggregation of the R120G mutant in HeLa cells was prevented by enforced expression of alpha BC on the endoplasmic reticulum (ER). To elucidate the molecular nature of this preventive effect on the R120G mutant, we isolated proteins binding to ER-anchored alpha BC (TM alpha BC). The ER transmembrane protein CLN6 was identified as a TM alpha BC binder. CLN6 knockdown in HeLa cells attenuated the anti-aggregate activity of TM alpha BC against the R120G mutant. Conversely, CLN6 overexpression enhanced the activity, indicating that CLN6 operates as a downstream effector of TM alpha BC. CLN6 physically interacted with the R120G mutant and repressed its aggregation in HeLa cells even when TM alpha BC was not co-expressed. Furthermore, the antagonizing effect of CLN6 on the R120G mutant was compromised upon treatment with a lysosomal inhibitor, suggesting that CLN6 requires the intact autophagy-lysosome system to prevent the R120G mutant from aggregating. We hence conclude that CLN6 is not only a molecular entity of the anti-aggregate activity conferred by the ER manipulation using TM alpha BC, but also serves as a potential target of therapeutic interventions. (C) 2017 Elsevier Inc. All rights reserved. Ischemic stroke is one of the major causes of adult morbidity. Recent studies have shown that over-activated microglial cells play a critical role in aggravating cerebral oxygen glucose deprivation/reoxygenation (OGD/R) damage by releasing excessive inflammatory cytokines. However, the mechanisms involved are not yet clear. Long non-coding RNAs (lncRNAs) have been reported to participate in many complicated biological processes. Our understanding of the relationship between lncRNAs and OGD/R injury is largely limited.
In this study, we demonstrated that the lncRNA Gm4419 functioned as a crucial mediator in the activation of the NF-kappa B signaling pathway, causing neuroinflammation damage during OGD/R. Gm4419 was abnormally up-regulated in OGD/R-treated microglial cells. We found that the high level of Gm4419 promoted the phosphorylation of I kappa B alpha by physically associating with I kappa B alpha and therefore led to increased nuclear NF-kappa B levels for the transcriptional activation of TNF-alpha, IL-1 beta and IL-6. In addition, we also demonstrated that knockdown of Gm4419 functioned as an NF-kappa B inhibitor in OGD/R microglial cells, showing that down-regulation of Gm4419 had a protective role against OGD/R injury. In summary, Gm4419 is required for microglial cell OGD/R injury through the activation of NF-kappa B signaling. Thus, Gm4419 appears to be a promising therapeutic target for ischemic stroke. (C) 2017 Elsevier Inc. All rights reserved. Type 1 diabetes (T1D) is a chronic autoimmune disease in which the pancreatic beta-cells fail to produce insulin. In addition to such change in the endocrine function, the exocrine function of the pancreas is altered as well. To understand the molecular basis of the changes in both endocrine and exocrine pancreatic functions due to T1D, the proteome profiles of the pancreas of control and diabetic mice were compared using two-dimensional gel electrophoresis (2D-GE), and the differentially expressed proteins were identified by electrospray ionization liquid chromatography-tandem mass spectrometry (ESI-LC-MS/MS). Among several hundred protein spots analyzed, the expression levels of 27 protein spots were found to be up-regulated while those of 16 protein spots were down-regulated due to T1D.
We were able to identify 23 up-regulated and 9 down-regulated protein spots and classified them by bioinformatic analysis into different functional categories: (i) exocrine enzymes (or their precursors) involved in the metabolism of proteins, lipids, and carbohydrates; (ii) chaperone/stress response; and (iii) growth, apoptosis, amino acid metabolism or energy metabolism. Several proteins were found to be present in multiple forms, possibly resulting from proteolysis and/or post-translational modifications. Succinate dehydrogenase [ubiquinone] flavoprotein subunit, the major catalytic subunit of succinate dehydrogenase (SDH), was found to be one of the proteins whose expression was increased in T1D mouse pancreata. Since altered expression of a protein can modify its functional activity, we tested and observed that the activity of SDH, a key metabolic enzyme, was increased in T1D mouse pancreata as well. The potential role of the altered expression of different proteins in T1D-associated pathology in mouse is discussed. (C) 2016 Published by Elsevier Inc. This paper focuses on group decision making with hesitant fuzzy preference relations (HFPRs). To derive a consistent ranking order, a new multiplicative consistency concept for HFPRs is introduced that considers all information offered by the decision makers. Its main feature is that it neither adds values to hesitant fuzzy elements nor disregards any information provided by the decision makers. To judge the multiplicative consistency of HFPRs, 0-1 mixed programming models are constructed. Under the assumption that the values in hesitant fuzzy elements are independently and uniformly distributed, the hesitant fuzzy priority weight vector is derived from multiplicative consistent reciprocal preference relations and their probabilities.
Meanwhile, several consistency-based 0-1 mixed models for estimating missing values in incomplete HFPRs are constructed that can address situations in which ignored objects exist. Considering consensus in group decision making, a distance-measure-based consensus index is defined, and a method for improving the group consensus is provided to address situations in which the consensus requirement is not satisfied. Then, a distance measure between any two HFPRs is introduced and used to define the weights of the decision makers. Furthermore, a multiplicative consistency and consensus based interactive algorithm for group decision making with HFPRs is developed. Finally, a multi-criteria group decision making problem with HFPRs is offered to show the concrete application of the procedure, and a comparative analysis is also provided. (C) 2017 Elsevier B.V. All rights reserved. The concurrence of randomness and imprecision widely exists in real-world problems. To describe aleatory and epistemic uncertainty in a single framework and take more information into account, in this paper we propose the concept of the probabilistic dual hesitant fuzzy set (PDHFS) and define the basic operational laws of PDHFSs. For the purpose of applications, we also develop the basic aggregation operator for PDHFSs and give general procedures for information fusion. Next, we propose a visualization method based on the entropy of PDHFSs so as to analyze the aggregated information and improve the final evaluation results. The proposed method is then applied to risk evaluation. A case study of Arctic geopolitical risk evaluation is presented to illustrate its validity and effectiveness. Finally, we discuss the advantages and limitations of the PDHFS in detail. (C) 2017 Elsevier B.V. All rights reserved.
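As a concrete illustration of the PDHFS idea above, the following minimal Python sketch scores a probabilistic dual hesitant fuzzy element as expected membership minus expected non-membership. The element representation (lists of degree-probability pairs) and the score function itself are illustrative assumptions for exposition, not necessarily the operators defined in the paper.

```python
def pdhfe_score(memberships, nonmemberships):
    """Score a probabilistic dual hesitant fuzzy element.

    memberships / nonmemberships: lists of (degree, probability) pairs,
    with the probabilities in each list summing to 1. The score used here
    is an illustrative choice: expected membership minus expected
    non-membership, so higher means "more supported".
    """
    expected_mu = sum(g * p for g, p in memberships)
    expected_nu = sum(e * q for e, q in nonmemberships)
    return expected_mu - expected_nu


# Example: membership hesitates between 0.6 and 0.8 with equal probability,
# non-membership is 0.1 with certainty.
score = pdhfe_score([(0.6, 0.5), (0.8, 0.5)], [(0.1, 1.0)])
```

Ranking alternatives by such a score is one simple way to compare aggregated PDHFS evaluations before applying the entropy-based visualization the abstract describes.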
This paper demonstrates an innovative data analysis for weather using Cloud Computing, integrating both system and application Data Science services to investigate extreme weather events. Identifying five existing projects with ongoing challenges, our aim is to process, analyze and visualize the collected data, study the implications and report meaningful findings. We demonstrate the use of Cloud Computing technologies, MapReduce and optimization techniques to simulate temperature distributions and analyze weather data. Two major cases are presented. The first case focuses on forecasting temperatures by studying trends in the historical data of Sydney, Singapore and London and comparing the historical and forecasted temperatures. The second case uses a five-step MapReduce process for numerical data analysis and an eight-step process for visualization, which is used to analyze and visualize temperature distributions in the United States before, during and after the polar vortex, as well as in the United Kingdom during and after the flood. Experiments involving up to 100 nodes, on both Cloud and non-Cloud infrastructures, compared performance with and without optimization; performance improved by 20% to 30% for up to 60 nodes in the Cloud. Results, discussion and comparisons are presented. We justify our research contributions and explain thoroughly in the paper how the three goals can be met: (1) forecasting temperatures of three cities based on evaluating trends in the historical data; (2) using five-step MapReduce to achieve shorter execution time on the Cloud; and (3) using eight-step MapReduce with optimization to achieve data visualization of temperature distributions on US and UK maps. (C) 2017 Elsevier B.V. All rights reserved. In this paper, we propose a low-rank representation with symmetric constraint (LRRSC) method for robust subspace clustering.
Given a collection of data points approximately drawn from multiple subspaces, the proposed technique can simultaneously recover the dimension and members of each subspace. LRRSC extends the original low-rank representation algorithm by integrating a symmetric constraint into the low-rankness property of high-dimensional data representation. The symmetric low-rank representation, which preserves the subspace structures of high-dimensional data, guarantees weight consistency for each pair of data points, so that highly correlated data points of subspaces are represented together. Moreover, it can be efficiently calculated by solving a convex optimization problem. We provide a proof for minimizing the nuclear-norm regularized least squares problem with a symmetric constraint. The affinity matrix for spectral clustering can be obtained by further exploiting the angular information of the principal directions of the symmetric low-rank representation. This is a critical step towards evaluating the memberships between data points. We also develop the eLRRSC algorithm, which improves the scalability of the original LRRSC by exploiting its closed-form solution. Experimental results on benchmark databases demonstrate the effectiveness and robustness of LRRSC and its variant compared with several state-of-the-art subspace clustering algorithms. (C) 2017 Published by Elsevier B.V. Incorporating social network information and contexts to improve recommendation performance has been drawing considerable attention recently. However, a majority of existing social recommendation approaches suffer from the following problems: (1) They only employ individual trust among users to optimize prediction solutions in user latent feature space or in user-item rating space; thus, they exhibit low recommendation accuracy. (2) They use decision trees to perform context-based user-item subgrouping; thus, they can only handle categorical contexts.
(3) They have difficulty coping with the data sparsity problem. To solve these problems, and to model recommender systems accurately and realistically, we propose a social matrix factorization method that optimizes the prediction solution in both the user latent feature space and the user-item rating space using the individual trust among users. To further improve recommendation performance and alleviate the data sparsity problem, we propose a context-aware enhanced model based on the Gaussian mixture model (GMM). Experiments on two real datasets (one sparse and one dense) show that our proposed method outperforms state-of-the-art social matrix factorization and context-aware recommendation methods in terms of prediction accuracy. (C) 2017 Elsevier B.V. All rights reserved. Fuzzy rule-based systems (FRBSs) are a common alternative for applying fuzzy logic to different areas and real-world problems. The schemes and algorithms used to generate these types of systems imply that their performance can be analyzed from different points of view, not only model accuracy. Any model, including a fuzzy model, needs to be sufficiently accurate, but other perspectives, such as interpretability, are also relevant for FRBSs. Thus, the Accuracy-Interpretability trade-off arises as a challenge for fuzzy systems, as current approaches are able to generate FRBSs with different trade-offs. Here, rule Relevance is added to Accuracy and Interpretability for a better trade-off in FRBSs. These three factors are involved in this approach to perform rule selection using a multi-objective evolutionary algorithm. The proposal has been tested and compared on nine datasets, with two linguistic and two scatter fuzzy algorithms, four measures of interpretability and two rule relevance formulations. The results have been analyzed from different views of Interpretability, Accuracy and Relevance, and statistical tests have shown that significant improvements have been achieved.
On the other hand, the Relevance-based role of fuzzy rules has been examined, showing that rules with low Relevance play an important role in the trade-off, while some rules with high Relevance must sometimes be removed to reach an adequate trade-off. (C) 2017 Elsevier B.V. All rights reserved. Malignant Focal Liver Lesion (FLL) is a major cause of primary liver cancer. In most existing Computer Aided Diagnosis (CAD) systems for FLLs, machine learning and data mining methods have been widely applied to classify liver CT images for diagnostic decision making. However, these strategies of automatic decision support depend on data-driven classification methods and may lead to risky diagnoses on uncertain medical cases. To tackle this drawback, we aim to integrate the objective judgments from classification algorithms with the subjective judgments from human expert experience, and propose a data-human-driven Three-way Decision Support method for FLL diagnosis. The methodology of three-way decision support is motivated by Three-way Decision (3WD) theory. It tri-partitions the FLL medical records into certain benign, certain malignant and uncertain cases. The certain cases are automatically classified by decision rules, and the challenging uncertain cases are carefully diagnosed by human experts. Therefore, the three-way decision support method balances the risk and efficiency of decision making well. The workflow of three-way decision support for FLL diagnosis includes the stages of semantic feature extraction, three-way rule mining and decision cost optimization. Extensive experiments demonstrate that the proposed three-way decision support method is effective in handling uncertain medical cases, while at the same time achieving precise classification of FLLs to support liver cancer diagnosis. (C) 2017 Elsevier B.V. All rights reserved.
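The tri-partition at the heart of the three-way decision support described above can be sketched in a few lines of Python. The threshold values below are illustrative placeholders (in 3WD theory they are derived from decision costs), not the ones mined in the paper:

```python
def three_way_decide(p_malignant, alpha=0.8, beta=0.2):
    """Tri-partition a medical record by its estimated probability of malignancy.

    Records with high confidence either way are decided automatically;
    everything in between is deferred to a human expert.
    alpha and beta are illustrative thresholds (0 <= beta < alpha <= 1).
    """
    if p_malignant >= alpha:
        return "certain malignant"   # positive region: accept the diagnosis
    if p_malignant <= beta:
        return "certain benign"      # negative region: reject the diagnosis
    return "uncertain"               # boundary region: defer to an expert
```

Cases labelled "uncertain" are the ones routed to a radiologist, which is how this style of method trades a little automation for a large reduction in diagnostic risk.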
This paper incorporates the regularization strategy of kernel-based extreme learning machines (ELM) to improve the performance of a neuro-fuzzy learning machine. The proposed learning machine, the regularized extreme learning adaptive neuro-fuzzy inference system (R-ELANFIS), has the advantages of reduced randomness, reduced computational complexity and better generalization. The parameters of the fuzzy layer of R-ELANFIS are randomly selected, incorporating explicit knowledge representation using fuzzy membership functions. The parameters of the linear neural layer are determined by solving a constrained optimization problem in a regularized framework. Simulations on regression problems show that R-ELANFIS achieves similar or better generalization performance compared to well-known kernel-based regression methods and ELM-based neuro-fuzzy systems. The proposed method can also be applied to multi-class classification problems. (C) 2017 Elsevier B.V. All rights reserved. Human Learning Optimization (HLO) is an emerging meta-heuristic with promising potential, inspired by human learning mechanisms. Although binary algorithms like HLO can be directly applied to mixed-variable problems that contain both continuous values and discrete or Boolean values, the search efficiency and performance of such algorithms may be significantly degraded by "the curse of dimensionality" caused by the binary coding strategy, especially when the continuous parameters of a problem require high accuracy. Therefore, this paper extends HLO and proposes a novel hybrid-coded HLO (HcHLO) framework to tackle mixed-coded problems more efficiently and effectively, in which real-coded parameters are optimized by a new continuous HLO (CHLO) based on the linear learning mechanism of humans, and the other variables are handled by the binary learning operators of HLO.
Finally, HcHLO is adopted to solve 14 benchmark problems and its performance is compared with that of recent meta-heuristic algorithms. The experimental results show that the proposed HcHLO achieves the best-known overall performance so far on the test problems, which demonstrates the validity and superiority of HcHLO. (C) 2017 Published by Elsevier B.V. Recently, there has been growing interest in social network analysis. Graph models for social network analysis usually assume a deterministic graph with fixed weights for its edges or nodes. As the activities of users in online social networks change with time, however, this assumption is too restrictive because of the uncertainty, unpredictability and time-varying nature of such real networks. The existing network measures and network sampling algorithms for complex social networks are designed basically for deterministic binary graphs with fixed weights. This results in the loss of much of the information about network behavior contained in the time-varying edge weights, so that such measures and samples are not appropriate for unveiling the important natural properties of the original network embedded in its varying edge weights. In this paper, we suggest that stochastic graphs, in which the weights associated with the edges are random variables, can be a suitable model for complex social networks. Once the network model is chosen to be a stochastic graph, every aspect of the network, such as paths, cliques, spanning trees, network measures and sampling algorithms, should be treated stochastically. In particular, the network measures should be reformulated and new network sampling algorithms must be designed to reflect the stochastic nature of the network. In this paper, we first define some network measures for stochastic graphs, and then we propose four sampling algorithms based on learning automata for stochastic graphs.
In order to study the performance of the proposed sampling algorithms, several experiments are conducted on real and synthetic stochastic graphs. The performance of these algorithms is studied in terms of Kolmogorov-Smirnov D statistics, relative error, Kendall's rank correlation coefficient and relative cost. (C) 2017 Elsevier B.V. All rights reserved. In this paper, bifurcation of limit cycles is considered for planar cubic-order systems with an isolated nilpotent critical point. Normal form theory is applied to compute the generalized Lyapunov constants and to prove the existence of at least 9 small-amplitude limit cycles in the neighborhood of the nilpotent critical point. In addition, the method of double bifurcation of nilpotent focus is used to show that such systems can have 10 small-amplitude limit cycles near the nilpotent critical point. These are new lower bounds on the number of limit cycles in planar cubic-order systems near an isolated nilpotent critical point. Moreover, a set of center conditions is obtained for such cubic systems. (C) 2017 Elsevier Inc. All rights reserved. Management of a portfolio that includes an illiquid asset is an important problem of modern mathematical finance. One way to model illiquidity, among others, is to build an optimization problem and assume that one of the assets in the portfolio cannot be sold until a certain finite, infinite or random moment of time. This approach gives rise to a number of models that are actively studied at the moment. Working in Merton's optimal consumption framework with continuous time, we consider an optimization problem for a portfolio with an illiquid, a risky and a risk-free asset. Our goal in this paper is to carry out a complete Lie group analysis of the PDEs describing the value function and the investment and consumption strategies for a portfolio with an illiquid asset that is sold at an exogenous random moment of time with a prescribed liquidation time distribution.
Problems of this type lead to three-dimensional nonlinear Hamilton-Jacobi-Bellman (HJB) equations. Such equations are not only tedious for analytical methods but are also quite challenging from a numerical point of view. To reduce the three-dimensional problem to a two-dimensional one, or even to an ODE, one usually uses some substitutions, yet the methods used to find such substitutions are rarely discussed by the authors. We find the admitted Lie algebra for a broad class of liquidation time distributions in the cases of HARA and log utility functions and formulate corresponding theorems for all these cases. We use the Lie algebras found to obtain reductions of the studied equations. Several similar substitutions were used in earlier papers, whereas others are, to our knowledge, new. This method gives us the possibility to provide a complete set of non-equivalent substitutions and reduced equations. (C) 2017 The Authors. Published by Elsevier Inc. We describe min-max formulas for the principal eigenvalue of a V-drift Laplacian defined by a vector field V on a geodesic ball of a Riemannian manifold N. Then we derive comparison results for the principal eigenvalue with that of a spherically symmetric model space endowed with a radial vector field, under pointwise comparison of the corresponding radial sectional and Ricci curvatures, and of the radial components of the vector fields. These results generalize the known case V = 0. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we consider the positive steady states for reaction-diffusion-advection competition models in the whole space with a spatially periodic structure. Under the spatially periodic setting, we establish sufficient conditions for the existence of positive steady states of this model, respectively, by investigating the sign of the principal eigenvalue for some linearized eigenvalue problems.
As an application, a Lotka-Volterra reaction-diffusion-advection model for two competing species in a spatially periodic environment is considered. Finally, some numerical simulations are presented to explore the dynamical behaviors. (C) 2017 Elsevier Inc. All rights reserved. In this note, we use recent zero duality results arising from the Monotropic Programming problem to analyze the consistency of the convex feasibility problem in Hilbert spaces. We characterize consistency in terms of the lower semicontinuity of the infimal convolution of the associated support functions. (C) 2017 Elsevier Inc. All rights reserved. We show that certain terminating ${}_{6}\phi_{5}$ series can be factorized into a product of two ${}_{3}\phi_{2}$ series. As applications we prove a summation formula for a product of two q-Delannoy numbers along with some congruences for sums involving q-Delannoy numbers. This confirms three recent conjectures of the second author. (C) 2017 Elsevier Inc. All rights reserved. A two-level atom coupled to the radiation field is studied. First principles in physics suggest that the coupling function, representing the interaction between the atom and the radiation field, behaves like $|k|^{-1/2}$ as the photon momentum k tends to zero. Previous results on the non-existence of ground state eigenvalues suggest that in the most general case binding does not occur in the spin-boson model, i.e., the minimal energy of the atom-photon system is not an eigenvalue of the energy operator. Hasler and Herbst have shown [12], however, that under the additional hypothesis that the coupling function be off-diagonal - which is customary to assume - binding does indeed occur. In this paper an alternative proof of binding in the case of off-diagonal coupling is given, i.e., it is proven that, if the coupling function is off-diagonal, the ground state energy of the spin-boson model is an eigenvalue of the Hamiltonian.
We develop a multiscale method that can be applied in the situation we study, with the help of a key symmetry operator which we use to demonstrate that the most singular terms appearing in the multiscale analysis vanish. (C) 2017 Elsevier Inc. All rights reserved. We prove that every representation of the Cuntz algebra $O_N$ on a separable Hilbert space $H$ arises from a pure isometry $V$ whose wandering space $H \ominus \mathrm{im}\,V$ has dimension $N$. We identify the permutative representations in this construction. (C) 2017 Elsevier Inc. All rights reserved. Let $X$ be a Banach space, $T$ a compact Hausdorff space and $C(T)$ the real Banach space of all continuous functions on $T$ endowed with the supremum norm. We show that if there exists a standard $\varepsilon$-isometric embedding $f : X \to C(T)$, then there exist a nonempty closed subset $S \subset T$ and a linear isometric embedding $g : X \to \overline{\mathrm{span}}(f(X)|_S) \subset C(S)$, defined as $g(u) = \lim_{n\to\infty} f(2^n u)|_S / 2^n$ for each $u \in X$, satisfying $\|f(u)|_S - g(u)\| \le 4\varepsilon$ for all $u \in X$. Making use of this result and the well-known simultaneous extension operator $E : C(S) \to C(T)$, we also prove that the existence of a standard $\varepsilon$-isometric embedding $f : X \to C(T)$ implies the existence of a linear isometric embedding $E \circ g : X \to \overline{\mathrm{span}}(E(f(X)|_S)) \subset C(T)$ whenever $T$ is metrizable. These conclusions generalize several well-known results. For any compact Hausdorff space (resp. compact metric space) $T$, we further obtain that if $g(X)$ is complemented in $\overline{\mathrm{span}}(f(X)|_S)$ (resp. $E \circ g(X)$ is complemented in $\overline{\mathrm{span}}(E(f(X)|_S))$), then the standard $\varepsilon$-isometric embedding $f : X \to C(T)$ is stable. (C) 2017 Elsevier Inc. All rights reserved.
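The limit formula in the $\varepsilon$-isometric embedding result above admits a quick sanity check of why the resulting map is norm-preserving. It assumes the usual definition of a standard $\varepsilon$-isometry, namely $f(0) = 0$ and $\big|\,\|f(u) - f(v)\| - \|u - v\|\,\big| \le \varepsilon$ for all $u, v$; the source does not restate this definition, so it is an assumption here:

```latex
% Taking v = 0 in the epsilon-isometry condition gives
% | \|f(2^n u)\| - 2^n \|u\| | \le \varepsilon, hence
\[
  \left\| \frac{f(2^{n} u)}{2^{n}} \right\|
  \in \left[ \|u\| - \frac{\varepsilon}{2^{n}},\;
             \|u\| + \frac{\varepsilon}{2^{n}} \right]
  \xrightarrow[n \to \infty]{} \|u\| .
\]
```

The doubling factor $2^{-n}$ is what damps the fixed additive error $\varepsilon$, so the limit map, when it exists, is exactly isometric in norm.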
We investigate spectral properties of Markov semigroups in von Neumann algebras and of their dual semigroups in a fairly general setting which assumes only the abelianness of the semigroups and the positivity of the maps in question. In particular, we analyse various properties of the spectral subspaces, and relations between the spectra of the Markov semigroup and its dual semigroup. In our analysis, we make extensive use of ergodic and quasi-ergodic projections, which seems to be a new but quite fruitful approach. (C) 2017 Elsevier Inc. All rights reserved. The viscous and inviscid aggregation equation with Newtonian potential models a number of different physical systems and has close analogs in 2D incompressible fluid mechanics. We consider a slight generalization of these equations in the whole space, establishing well-posedness of the viscous and inviscid equations, spatial decay of the viscous solutions, and the convergence of viscous solutions to the inviscid solution as the viscosity goes to zero. (C) 2017 Elsevier Inc. All rights reserved. In this paper we study $\ell_1$-like properties for some Lipschitz-free spaces. The main result states that, under some natural conditions, the Lipschitz-free space over a proper metric space linearly embeds into an $\ell_1$-sum of finite-dimensional subspaces of itself. We also give a sufficient condition for a Lipschitz-free space to have the Schur property, the 1-Schur property and the 1-strong Schur property, respectively. We finish by studying these properties on a new family of examples, namely the Lipschitz-free spaces over metric spaces originating from p-Banach spaces, for p in (0,1). (C) 2017 Elsevier Inc. All rights reserved. With the aid of Hamiltonian structures for the constrained modified KP flow, we show that a coupled Ramani equation is a bi-Hamiltonian system with a local Hamiltonian operator $\bar{P}_2$ and a symplectic operator. After a reduction, we recover the bi-Hamiltonian formalism of the Ramani equation.
Furthermore, we obtain the explicit bi-Hamiltonian structure for the Drinfeld-Sokolov hierarchies of $C_3^{(1)}$ using the local Hamiltonian operator $\bar{P}_2$. (C) 2017 Elsevier Inc. All rights reserved. We study the specification property and infinite topological entropy for two specific types of linear operators: translation operators on weighted Lebesgue function spaces and weighted backward shift operators on sequence F-spaces. It is known, from the work of Bartoll, Martinez-Gimenez, Murillo-Arcila, and Peris, that for weighted backward shift operators, the existence of a single non-trivial periodic point is sufficient for specification. We show this also holds for translation operators on weighted Lebesgue function spaces. This implies, in particular, that for these operators, the specification property is equivalent to Devaney chaos. We show that these forms of chaos (unsurprisingly) imply infinite topological entropy, but that (surprisingly) the converse does not hold. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we study the nonuniform sampling and reconstruction problem in shift-invariant subspaces of mixed Lebesgue spaces. We first show that shift-invariant subspaces of the mixed Lebesgue spaces $L^{p,q}(\mathbb{R}^{d+1})$ can be well-defined. Then we show that the sampling problem in shift-invariant subspaces of mixed Lebesgue spaces is well-posed. Finally, the nonuniform samples $\{f(x_j, y_k) : j, k \in J\}$ of a function $f$ belonging to a shift-invariant subspace of a mixed Lebesgue space are considered, and we give a fast reconstruction algorithm that allows exact reconstruction of $f$ as long as the sampling set $X = \{(x_j, y_k) : j, k \in J\}$ is sufficiently dense. (C) 2017 Elsevier Inc. All rights reserved. Takeuchi's generalized complete elliptic integrals related to generalized trigonometric functions are of importance in the computation of the generalized pi and in the elementary proof of Ramanujan's cubic transformation.
In this paper we generalize some well-known results on the classical complete elliptic integrals to the case of Takeuchi's generalized complete elliptic integrals. We obtain sharp monotonicity and convexity results for combinations of these functions, as well as functional inequalities. (C) 2017 Elsevier Inc. All rights reserved. We establish the linear stability of an electron equilibrium for an electrostatic and collisionless plasma in interaction with a wall. The equilibrium we focus on is called, in plasma physics, a Debye sheath. Specifically, we consider a two-species (ions and electrons) Vlasov-Poisson-Ampere system in a bounded, one-dimensional geometry. The interaction between the plasma and the wall is modeled by original boundary conditions: on the one hand, ions are absorbed by the wall while electrons are partially re-emitted; on the other hand, the electric field at the wall is induced by the accumulation of charged particles at the wall. These boundary conditions ensure compatibility with the Maxwell-Ampere equation. A global existence, uniqueness and stability result for the linearized system is proven. The main difficulty lies in the fact that (due to the absorbing boundary conditions) the equilibrium is a discontinuous function of the particle energy, which results in a linearized system that contains a degenerate transport equation at the border. (C) 2017 Elsevier Inc. All rights reserved. It is well known that the nonlinear Schrodinger (NLS) equation is a very important integrable equation. Ablowitz and Musslimani introduced and investigated an integrable nonlocal NLS equation through the inverse scattering transform. Very recently, we proposed an integrable nonlocal modified Korteweg-de Vries (mKdV) equation, which can also be found in the papers of Ablowitz and Musslimani. We have constructed the Darboux transformation and soliton solutions for the nonlocal mKdV equation. In this paper, we investigate the nonlocal mKdV equation further.
We will give its exact solutions, including soliton and breather solutions, through the inverse scattering transformation. These solutions have some new properties, which differ from those of the mKdV equation. (C) 2017 Elsevier Inc. All rights reserved. In the scalar setting, the power functions $|x|^{\gamma}$, for $-1 < \gamma < 1$, are the canonical examples of $A_2$ weights. In this paper, we study two types of power functions in the matrix setting, with the goal of obtaining canonical examples of $A_2$ matrix weights. We first study Type 1 matrix power functions, which are $n \times n$ matrix functions whose entries are of the form $a|x|^{\gamma}$. Our main result characterizes when these power functions are $A_2$ matrix weights, and has two extensions to Type 1 power functions of several variables. We also study Type 2 matrix power functions, which are $n \times n$ matrix functions whose eigenvalues are of the form $a|x|^{\gamma}$. We find necessary conditions for these to be $A_2$ matrix weights and give an example showing that even nice functions of this form can fail to be $A_2$ matrix weights. (C) 2017 Published by Elsevier Inc. Let X be an abstract L-space and let Y be any Banach space. Motivated by a classical result of T. J. Abatzoglou that describes smooth points of the space of operators on a Hilbert space, we give a characterization of very smooth points in the space of operators from X to Y. We also show that a similar result can be proved when Y is an abstract M-space such that the set of extreme points of the dual unit ball is a weak*-compact set. (C) 2017 Elsevier Inc. All rights reserved. In the present paper, we introduce a measure of the non-univalency of an analytic function, and we use it to find the best approximation of an analytic function by a locally univalent normalized analytic function. (C) 2017 Elsevier Inc. All rights reserved. Let G = (V, E) be a connected finite graph.
In this short paper, we reinvestigate the Kazdan-Warner equation $\Delta u = c - h e^{u}$ with $c < 0$ on G, where h, a function defined on V, is known. Grigor'yan, Lin and Yang [3] showed that if the Kazdan-Warner equation has a solution, then $\bar h$, the average value of h, is negative. Conversely, if $\bar h < 0$, then there exists a number $c_h < 0$ such that the Kazdan-Warner equation is solvable for every $c_h < c < 0$ and not solvable for $c < c_h$. Moreover, if $h \le 0$ and $h \not\equiv 0$, then $c_h = -\infty$. Inspired by Chen and Li's work [1], we naturally ask: is the Kazdan-Warner equation solvable for $c = c_h$? In this paper, we answer the question affirmatively. We show that if $c_h = -\infty$, then $h \le 0$ and $h \not\equiv 0$. Moreover, if $c_h > -\infty$, then there exists at least one solution to the Kazdan-Warner equation with $c = c_h$. (C) 2017 Elsevier Inc. All rights reserved. In this paper we study the structure, weak continuity and approximability properties of the integer multiplicity rectifiable currents carried by the graphs of maximal semi-monotone set-valued maps on an n-dimensional convex domain. In particular, we give an enhanced version of the approximation theorem for the subgradients of semiconvex functions. (C) 2017 Elsevier Inc. All rights reserved. In Part 1, we developed a new technique based on Lipschitz pushforwards for proving the jump set containment property $\mathcal{H}^{m-1}(J_u \setminus J_f) = 0$ for solutions u of total variation denoising. We demonstrated that the technique also applies to Huber-regularised TV. Now, in this Part 2, we extend the technique to higher-order regularisers. We are not quite able to prove the property for total generalised variation (TGV) based on the symmetrised gradient for the second-order term. We show that the property holds under three conditions: First, the solution u is locally bounded.
Second, the second-order variable w is of locally bounded variation, w is an element of BV_loc(Omega; R^m), instead of just bounded deformation, w is an element of BD(Omega). Third, w does not jump on J(u) parallel to it. The second condition can be achieved for non-symmetric TGV. Both the second and the third condition can be achieved if we change the Radon (or L^1) norm of the symmetrised gradient Ew into an L^p norm, p > 1, in which case Korn's inequality holds. On the positive side, we verify the jump set containment property for second-order infimal convolution TV (ICTV) in dimension m = 2. We also study the limiting behaviour of the singular part of Du as the second parameter of TGV^2 goes to zero. Unsurprisingly, it vanishes, but in numerical discretisations the situation looks quite different. Finally, our work additionally includes a result on TGV-strict approximation in BV(Omega). (C) 2017 Elsevier Inc. All rights reserved. In this study, a proof is given that if a non-Archimedean Köthe space Lambda is generated by an infinite matrix B = (b_n^k)_{k,n in N} such that b_n^k <= b_{n+1}^k for all k, n in N and, for each k, l in N, there exists m in N such that b_n^{k+1} >= b_{n+l}^k for n >= m, then there exists a continuous operator T : Lambda -> Lambda that has no nontrivial closed invariant subspaces. (C) 2017 Elsevier Inc. All rights reserved. In this paper we describe stability properties of the sine-Gordon breather solution. These properties are first described by suitable variational elliptic equations, which also imply that the stability problem reduces in some sense to (i) the study of the spectrum of explicit linear systems, and (ii) the understanding of how bad directions (if any) can be controlled using low-regularity conservation laws. We then discuss spectral properties of a fourth-order linear matrix system.
Using numerical methods, we confirm that all spectral assumptions leading to the H^2 x H^1 stability of sine-Gordon breathers are numerically satisfied, even in the ultra-relativistic, singular regime. (C) 2017 Elsevier Inc. All rights reserved. In this note, we prove that many von Neumann algebras can be generated by two unitary operators u and v with u^2 = v^3 = 1. As applications, three important classes of II_1 factors possessing this property are those with Property Gamma, those with a Cartan masa, and non-prime ones. (C) 2017 Elsevier Inc. All rights reserved. This paper concerns regularity criteria for the three-dimensional Navier-Stokes equations. By establishing a more subtle estimate of the crucial convective term in the Navier-Stokes equations, we substantially improve the results of [4] and [10]. (C) 2017 Elsevier Inc. All rights reserved. Basil (Ocimum basilicum L.), a medicinal plant of the Lamiaceae family, is used in traditional medicine; its essential oil is a rich source of phenylpropanoids. Methylchavicol and methyleugenol are the most important constituents of basil essential oil. Drought stress is proposed to alter the essential oil composition and the expression levels of the genes involved in its biosynthesis. In the current investigation, an experiment based on a completely randomized design (CRD) with three replications was conducted in the greenhouse to study the effect of drought stress on the expression levels of genes involved in the phenylpropanoid biosynthesis pathway in O. basilicum cv. Keshkeni luvelou. The genes studied were chavicol O-methyl transferase (CVOMT), eugenol O-methyl transferase (EOMT), cinnamate 4-hydroxylase (C4H), 4-coumarate CoA ligase (4CL), and cinnamyl alcohol dehydrogenase (CAD). The effect of drought stress on the essential oil compounds and their relationship with the expression levels of the studied genes was also investigated. Plants were subjected to levels of 100%, 75%, and 50% of field capacity (FC) at the 6-8 leaf stage.
Essential oil compounds were identified by gas chromatography/mass spectrometry (GC-MS) at the flowering stage, and the levels of gene expression were determined by real-time PCR in plant leaves at the same stage. Results showed that drought stress increased the amounts of methylchavicol, methyleugenol, beta-myrcene and alpha-bergamotene. The maximum amounts of these compounds were observed at 50% FC. Real-time PCR analysis revealed that severe drought stress (50% FC) increased the expression levels of CVOMT and EOMT by about 6.46- and 46.33-fold, respectively, whereas that of CAD remained relatively unchanged. The expression levels of 4CL and C4H were reduced under drought stress conditions. Our results also demonstrated that changes in the expression levels of CVOMT and EOMT are significantly correlated with methylchavicol (r = 0.94, P <= 0.05) and methyleugenol (r = 0.98, P <= 0.05) content. Thus, drought stress probably increases the methylchavicol and methyleugenol content, in part, through increasing the expression levels of CVOMT and EOMT. (C) 2017 Elsevier Ltd. All rights reserved. The genus Hypoxylon, a member of the family Xylariaceae, is known to produce secondary metabolites of significant chemical diversity. Moreover, the compounds isolated can also be used as chemotaxonomic characters for differentiation between the two sections, sect. Annulata and sect. Hypoxylon. In our continuing chemical screening programme for novel compounds, the crude extracts of H. fendleri BCC32408 gave significant chemical profiles in HPLC analyses. Thus, a chemical investigation of these crude extracts was carried out.
The investigation led to the isolation of ten previously undescribed compounds, including three terphenylquinones (fendleryls A-C), one terphenyl (fendleryl D), and six novel drimane-phthalide-type lactone/isoindolinone derivatives (fendlerinines A-F), along with seven known compounds (2-O-methylatromentin, rickenyl E, atromentin, rickenyls C-D, (+)-ramulosin, and O-hydroxyphenyl acetic acid). The chemical structures were determined on the basis of spectroscopic analyses, including 1D and 2D NMR and high-resolution mass spectrometry, as well as chemical transformations. In addition, the isolated compounds were assessed for antimicrobial activity, including antimalarial (against Plasmodium falciparum, K-1 strain), antifungal (against Candida albicans), and antibacterial (against Bacillus cereus) activities. Cytotoxicity of these compounds against both cancerous (KB, MCF-7, NCI-H187) and non-cancerous (Vero) cells was also evaluated. (C) 2017 Elsevier Ltd. All rights reserved. Erucastrum canariense Webb & Berthel. (Brassicaceae) is a wild crucifer that grows in rocky soils in salt- and water-stressed habitats, notably in the Canary Islands and similar environments. Abiotic stress induced by copper chloride triggered formation of a phytoalexin and galacto-oxylipins in E. canariense, whereas wounding induced galacto-oxylipins but not phytoalexins. Analysis of the metabolite profiles of leaves of E. canariense, followed by isolation and structure determination, afforded the phytoalexin erucalexin, the phytoanticipin indolyl-3-acetonitrile, the galacto-oxylipins arabidopsides A, C, and D, and the oxylipin 12-oxophytodienoic acid. In addition, arabidopsides A and D were also identified in extracts of leaves of Nasturtium officinale R. Br. (C) 2017 Elsevier Ltd. All rights reserved. Plant sterols have become well known to promote cardiovascular health through the reduction of low-density lipoprotein cholesterol in the blood.
Plant sterols also have anti-inflammatory, anti-cancer, antioxidative and anti-atherogenic activities. Microalgae have the potential to become a useful alternative source of plant sterols, with several species reported to have higher concentrations than current commercial sources. In order to increase phytosterol production and optimise culture conditions, the high sterol producer Pavlova lutheri was treated with different dosages (50-250 mJ m^-2) of UV-C radiation and several concentrations (1-500 mu mol/L) of hydrogen peroxide (H2O2), and the sterol contents were quantified for two days after the treatments. The contents of malondialdehyde (MDA) and the activity of superoxide dismutase (SOD), as indicators of cell membrane damage by lipid peroxidation and of repair of oxidative stress, respectively, were measured. Higher levels of SOD and MDA were observed in the treated biomass when compared to the controls. Total sterols increased in P. lutheri due to UV-C radiation (at 100 mJ m^-2) but not in response to H2O2 treatment. Among the nineteen sterol compounds identified in P. lutheri, poriferasterol, epicampesterol, methylergostenol, fungisterol, dihydrochondrillasterol, and chondrillasterol increased due to UV-C radiation. Therefore, UV-C radiation can be a useful tool to boost industrial phytosterol production from P. lutheri. (C) 2017 Elsevier Ltd. All rights reserved. A recent publication describes an enzyme from the vanilla orchid Vanilla planifolia with the ability to convert ferulic acid directly to vanillin. The authors propose that this represents the final step in the biosynthesis of vanillin, which is then converted to its storage form, glucovanillin, by glycosylation. The existence of such a "vanillin synthase" could enable biotechnological production of vanillin from ferulic acid using a "natural" vanilla enzyme.
The proposed vanillin synthase exhibits high identity to cysteine proteases and is identical at the protein sequence level to a protein identified in 2003 as being associated with the conversion of 4-coumaric acid to 4-hydroxybenzaldehyde. We here demonstrate that the recombinant cysteine protease-like protein, whether expressed in an in vitro transcription-translation system, E. coli, yeast, or plants, is unable to convert ferulic acid to vanillin. Rather, the protein is a component of an enzyme complex that preferentially converts 4-coumaric acid to 4-hydroxybenzaldehyde, as demonstrated by purification of this complex and peptide sequencing. Furthermore, RNA sequencing provides evidence that this protein is expressed in many tissues of V. planifolia irrespective of whether or not they produce vanillin. On the basis of our results, V. planifolia does not appear to contain a cysteine protease-like "vanillin synthase" that can, by itself, directly convert ferulic acid to vanillin. The pathway to vanillin in V. planifolia is yet to be conclusively determined. (C) 2017 Elsevier Ltd. All rights reserved. Six previously undescribed C17-guaianolides, a previously undescribed guaianolide alkaloid, and two previously undescribed guaianolides, as well as 10 known guaianolides, were obtained from an ethanol extract of Ainsliaea yunnanensis Franch. The chemical structures of all previously undescribed sesquiterpenoids were determined by extensive NMR spectroscopic analysis in combination with a modified Mosher's method. All isolates were screened in vitro for inhibitory effects against nitric oxide release in RAW 264.7 macrophages stimulated by LPS. Zaluzanin C remarkably inhibited the production of nitric oxide, with an IC50 value of 6.54 mu M. (C) 2017 Elsevier Ltd. All rights reserved. The genus Swartzia is a member of the tribe Swartzieae, whose genera constitute the living descendants of one of the early branches of the papilionoid legumes.
Legume lectins comprise one of the main families of structurally and evolutionarily related carbohydrate-binding proteins of plant origin. However, these proteins have been poorly investigated in Swartzia and, to date, only the lectin from S. laevicarpa seeds (SLL) has been purified. Moreover, no sequence information is known for lectins of any member of the tribe Swartzieae. In the present study, partial cDNA sequences encoding L-type lectins were obtained from developing seeds of S. simplex var. grandiflora. The amino acid sequences of the S. simplex grandiflora lectins (SSGLs) were only moderately related to the known primary structures of legume lectins, with sequence identities not greater than 50-52%. The SSGL sequences were more closely related to amino acid sequences of papilionoid lectins from members of the tribes Sophoreae and Dalbergieae and from the Cladratis and Vataireoid clades, which constitute, with other taxa, the first branching lineages of the subfamily Papilionoideae. The three-dimensional structures of two representative SSGLs (SSGL-A and SSGL-E) were predicted by homology modeling using templates that exhibit the characteristic beta-sandwich fold of the L-type lectins. Molecular docking calculations predicted that SSGL-A is able to interact with D-galactose, N-acetyl-D-galactosamine and alpha-lactose, whereas SSGL-E is probably a non-functional lectin due to two mutations in the carbohydrate-binding site. Using molecular dynamics simulations followed by density functional theory calculations, the binding free energies of the interaction of SSGL-A with GalNAc and alpha-lactose were estimated as -31.7 and -47.5 kcal/mol, respectively. These findings give insights into the carbohydrate-binding specificity of SLL, which binds to immobilized lactose but is not retained in a matrix containing D-GalNAc as ligand. (C) 2017 Elsevier Ltd. All rights reserved.
Laser desorption/ionization mass spectrometry imaging (LDI-MSI) with a gold nanoparticle-enhanced target (AuNPET) was used for visualization of small molecules in the rhubarb stalk (Rheum rhabarbarum L.). Analysis was focused on the spatial distribution of biologically active compounds found in rhubarb species. The detected compounds belong to a very wide range of chemical classes, such as anthraquinone derivatives and their glucosides, stilbenes, anthocyanins, flavonoids, polyphenols, organic acids, chromenes, chromanones, chromone glycosides and vitamins. The analysis of the spatial distribution of these compounds in the rhubarb stalk with the nanoparticle-rich surface of the AuNPET target plate was performed without an additional matrix and with minimal sample preparation steps. (C) 2017 Elsevier Ltd. All rights reserved. Phytochemical investigation of the roots of Spergularia marginata led to the isolation of four previously undescribed triterpenoid saponins, a known one, and one spinasterol glycoside. Their structures were established by extensive NMR and mass spectrometric techniques as 3-O-beta-D-glucuronopyranosyl echinocystic acid 28-O-alpha-L-arabinopyranosyl-(1 -> 2)-alpha-L-rhamnopyranosyl-(1 -> 3)-beta-D-xylopyranosyl-(1 -> 4)-alpha-L-rhamnopyranosyl-(1 -> 2)-alpha-L-arabinopyranosyl ester, 3-O-beta-D-glucopyranosyl-(1 -> 3)-beta-D-glucuronopyranosyl echinocystic acid 28-O-alpha-L-arabinopyranosyl-(1 -> 2)-alpha-L-rhamnopyranosyl-(1 -> 3)-beta-D-xylopyranosyl-(1 -> 4)-alpha-L-rhamnopyranosyl-(1 -> 2)-alpha-L-arabinopyranosyl ester, 3-O-beta-D-glucopyranosyl-(1 -> 4)-3-O-sulfate-beta-D-glucuronopyranosyl echinocystic acid 28-O-alpha-L-arabinopyranosyl-(1 -> 2)-alpha-L-rhamnopyranosyl-(1 -> 3)-beta-D-xylopyranosyl-(1 -> 4)-alpha-L-rhamnopyranosyl-(1 -> 2)-alpha-L-arabinopyranosyl ester, and 3-O-beta-D-glucopyranosyl-(1 -> 4)-beta-D-glucuronopyranosyl 21-O-acetyl acacic acid.
Their cytotoxicity was evaluated against two human cancer cell lines, SW480 and MCF-7. The most active compound showed cytotoxicity with IC50 values of 14.2 +/- 0.8 mu M (SW480) and 18.7 +/- 0.8 mu M (MCF-7). (C) 2017 Elsevier Ltd. All rights reserved. This study describes the identification of very long chain polyunsaturated fatty acids (VLCPUFAs) in three strains of dinoflagellates (Amphidinium carterae, Cystodinium sp., and Peridinium aciculiferum). The strains were cultivated, and their lipidomic profiles were obtained by high-resolution mass spectrometry in positive and negative electrospray ionization (ESI) modes on an Orbitrap instrument. Hydrophilic interaction liquid chromatography (HILIC/ESI) was used to separate the major lipid classes of the three genera of dinoflagellates by neutral loss scanning showing the ion [M + H - 28:8]^+, where 28:8 is octacosaoctaenoic acid, and by precursor ion scanning of ions at m/z 407, which corresponds to the acyl anion of the 28:8 acid (C27H39COO^-). Based on these analyses, it was found that of the more than a dozen lipid classes present in the total lipids, only two classes of neutral lipids, i.e. major triacylglycerols and minor diacylglycerols, contain VLCPUFAs. In polar lipids, VLCPUFAs were identified only in phosphatidic acid (PA) and phosphatidylcholine (PC) or in their lyso-forms (LPA and LPC). Further analysis of individual lipid classes by reversed-phase high-performance liquid chromatography (RP-HPLC) showed the presence of triacylglycerols (TAGs) containing VLCPUFAs, i.e. molecular species of the sn-28:7/28:8/28:8, sn-26:7/28:7/28:8, or sn-26:7/28:8/28:8 types. These are the longest and most unsaturated TAGs yet isolated from a natural source. In the case of PA and PC, tandem MS identified sn-28:8/16:0-PA and sn-28:8/16:0-PC and the corresponding lyso-forms (28:8-LPC and 28:8-LPA).
All these results indicate that TAGs containing VLCPUFAs are biosynthesized in dinoflagellates in the same manner as in higher eukaryotic organisms, which means that PA, after conversion to DAG, serves as a precursor in the biosynthesis of other phospholipids, e.g. PC, and, after further acylation, also of TAG. (C) 2017 Elsevier Ltd. All rights reserved. Thirteen previously undescribed 5,6,7,8-tetrahydro-2-(2-phenylethyl)chromones, named tetrahydrochromones A-M, together with nine known ones, were isolated from artificial agarwood (induced by holing) originating from Aquilaria sinensis (Lour.) Gilg. The structures of these compounds were unambiguously determined based on extensive NMR spectroscopic analyses, and the absolute configurations were resolved by CD analyses, X-ray crystallography, chemical methods and Mosher's method. Tetrahydrochromones A, B, K-M, and oxidoagarochromone A exhibited inhibitory activity against AChE, with percentage inhibition ranging from 17.5% to 47.9% (with tacrine as the positive control; inhibition ratio: 66.7%) when tested at 50 mu g/mL. Tetrahydrochromones A-E and F-J feature one methoxy and three hydroxy groups linked to the cyclohexene ring rather than the usual four hydroxy groups, and tetrahydrochromones K-M represent the first examples of 7,8-epoxy tetrahydrochromones. (C) 2017 Elsevier Ltd. All rights reserved. Plants that are able to hyperaccumulate heavy metals show increased concentrations of these metals in their leaf tissue. However, little is known about the concentrations of heavy metals and of organic defence metabolites in the phloem sap of these plants in response to either heavy metal amendment of the soil or biotic challenges such as aphid infestation. In this study, we investigated the effects of heavy metal exposure and of aphid infestation on phloem exudate composition of the metal hyperaccumulator species Arabidopsis halleri L. O'Kane & Al-Shehbaz (Brassicaceae).
The concentrations of elements and of organic defence compounds, namely glucosinolates, were measured in phloem exudates of young and old (mature) leaves of plants challenged either by amendment of the soil with cadmium and zinc and/or by infestation with the generalist aphid Myzus persicae. Metal amendment of the soil led to increased concentrations of Cd and Zn, but also of two other elements and one indole glucosinolate, in phloem exudates. This enhanced defence in the phloem sap of heavy metal-hyperaccumulating plants can thus potentially act as effective protection against aphids, as predicted by the elemental defence hypothesis. Aphid infestation also caused enhanced Cd and Zn concentrations in phloem exudates. This result provides the first evidence that metal-hyperaccumulating plants can increase heavy metal concentrations tissue-specifically in response to attack by phloem-sucking herbivores. Overall, the concentrations of most elements, including the heavy metals, and of glucosinolates were higher in phloem exudates of young leaves than in those of old leaves. This defence distribution highlights that the optimal defence theory, which predicts more valuable tissue to be better defended, is applicable to both inorganic and organic defences. (C) 2017 Elsevier Ltd. All rights reserved. Drawing comparisons between students' alternative solution strategies to a single mathematics problem is a powerful yet challenging instructional practice. We examined 80 preservice teachers' lesson designs when they were asked to design a short lesson given a problem and two student solutions, one correct and one incorrect. These micro-teaching events were videotaped and coded, revealing that fewer than half of the participants (43%) made any explicit comparison or contrast between the two solution strategies. Those who did were still not likely to use additional support strategies to draw students' attention to key elements of the comparison.
Further, correlations suggest that participants' mathematical content knowledge may be related to whether participants showed contrasting cases, but not to whether they used specific pedagogical cues to support those comparisons. While these micro-teaching events differ from the interactive constraints of a classroom, they reveal that participants did not immediately orient toward differing student solutions as a discussion opportunity, and that future instruction on contrasting cases must highlight the utility of this practice. Prior research shows that representational competencies that enable students to use graphical representations to reason and solve tasks are key to learning in many science, technology, engineering, and mathematics domains. We focus on two types of representational competencies: (1) sense making of connections, by verbally explaining how different representations map to one another, and (2) perceptual fluency, which allows students to quickly and effortlessly use perceptual features to make connections among representations. Because these different competencies are acquired via different types of learning processes, they require different types of instructional support: sense-making activities and fluency-building activities. In a prior experiment, we showed benefits of combining sense-making activities and fluency-building activities. In the current work, we test how to combine these two forms of instructional support, specifically, whether students should first work on sense-making activities or on fluency-building activities. This comparison allows us to investigate whether sense-making competencies enhance students' acquisition of perceptual fluency (sense-making-first hypothesis) or whether perceptual fluency enhances students' acquisition of sense-making competencies (fluency-first hypothesis). We conducted a lab experiment with 74 students from grades 3-5 working with an intelligent tutoring system for fractions.
We assessed learning processes and learning outcomes related to representational competencies and domain knowledge. Overall, our results support the sense-making-first hypothesis but not the fluency-first hypothesis. This article presents the use of personal professional theories (PPTs) in Dutch higher vocational education. PPTs are internalised bodies of formal and practical knowledge and convictions that professionals use to direct their behaviour. With the aid of high-quality representations of students' PPTs, teachers can access, monitor, and support the professional development of students. Two qualitatively equivalent techniques for representing PPTs are (computer-supported) concept mapping and interviewing. The article reports on a study of the effects of combining these techniques to determine (1) whether this results in higher-quality representations and (2), if so, whether technique order makes a difference. The study was conducted in two very different vocational domains: accountancy, with 29 participants, and teacher education, with 20 participants. The results of a counterbalanced quasi-experiment with two factors (i.e. domain and order) show in both domains that combining the techniques improves quality but that the order in which the techniques are applied does not matter. This order independence has practical importance, as the combination of first conducting a computer-supported analysis of a student-generated concept map and subsequently discussing the results with the student fosters learning and is well suited to educational practice. A critical assumption made in Kapur's (Instr Sci 40:651-672, 2012) productive failure design is that students have the necessary prerequisite knowledge resources to generate and explore solutions to problems before learning the targeted concept. Through two quasi-experimental studies, we interrogated this assumption in the context of learning a multilevel biological concept, monohybrid inheritance.
In the first study, students were either provided or not provided with prerequisite micro-level knowledge prior to the generation phase. Findings suggested that students do not necessarily have adequate prior knowledge resources, especially at the micro-level, to generate representations and solution methods for a multilevel concept such as monohybrid inheritance. The second study examined how this prerequisite knowledge provision influenced how much students learned from the subsequent instruction. Although the prerequisite knowledge provision helped students generate and explore the biological phenomenon at the micro- and macro-levels, the provision seemingly did not confer a further learning advantage on these students. Instead, they had learning gains similar to those of students without the provision, and further reported lower lesson engagement and greater mental effort during the subsequent instruction. The ability to simultaneously assess airline operations, economics, and emissions would help evaluate progress toward the reduction of aviation's environmental impact as outlined in the NASA Environmentally Responsible Aviation program. Furthermore, assessment of aircraft utilization by airlines would guide future policies and investment decisions on the technologies most urgently required. This paper describes the development of the Fleet-Level Environmental Evaluation Tool, a computational simulation tool developed to assess the impact of new aircraft concepts and technologies on aviation's environmental emissions and noise. This tool uses an aircraft allocation model that represents the airlines' profit-seeking operational decisions as a mixed-integer programming problem. The allocation model is embedded in a system-dynamics framework that mimics the economics of airline operations, models airlines' decisions regarding retirement and acquisition of aircraft, and estimates market demand growth.
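The profit-seeking allocation problem described above can be illustrated with a toy mixed-integer program. All routes, aircraft types, and numbers below are hypothetical, and the tiny instance is solved by brute-force enumeration rather than by the tool's actual solver; it is only a sketch of the problem structure (integer trip counts, fleet and demand constraints, profit objective).

```python
from itertools import product

# Toy fleet-allocation MIP (hypothetical numbers): choose how many daily
# round-trips each aircraft type flies on each route to maximize profit,
# subject to fleet-availability and seat-demand constraints.
routes = ["A-B", "A-C"]
demand = {"A-B": 450, "A-C": 300}            # seats demanded per day
seats = {"small": 100, "large": 200}         # seats per round-trip
profit = {("small", "A-B"): 8.0, ("small", "A-C"): 6.0,   # $k per round-trip
          ("large", "A-B"): 14.0, ("large", "A-C"): 9.0}
max_trips = {"small": 6, "large": 3}         # trips/day the fleet can supply

keys = [(t, r) for t in seats for r in routes]
ranges = [range(max_trips[t] + 1) for (t, r) in keys]

best_profit, best_plan = -1.0, None
for combo in product(*ranges):               # enumerate integer decisions
    x = dict(zip(keys, combo))
    # Fleet constraint: total trips per type cannot exceed availability.
    if any(sum(x[(t, r)] for r in routes) > max_trips[t] for t in seats):
        continue
    # Demand constraint: do not fly more seats than are demanded per route.
    if any(sum(x[(t, r)] * seats[t] for t in seats) > demand[r] for r in routes):
        continue
    p = sum(x[k] * profit[k] for k in keys)
    if p > best_profit:
        best_profit, best_plan = p, x

print(best_plan, best_profit)
```

A production tool would hand this same structure, with thousands of variables, to a mixed-integer programming solver; exhaustive enumeration is only viable at this toy scale.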
This paper describes the development of the Fleet-Level Environmental Evaluation Tool to use a single large airline to represent the operations of all airlines in the United States aviation market. The paper also demonstrates the tool's capabilities using scenarios assessing the effects of new-technology aircraft and biofuels on aviation's emissions. Hypersonic vehicle design and simulation require low-order models. Modeling of hypersonic vehicles is complicated by complex interactions between aerodynamic heating, heat transfer, structural dynamics, and aerodynamics in the hypersonic regime. This work focuses on the development of efficient modal solutions for the structural dynamics of hypersonic vehicle structures under transient thermal loads. The problem is outlined, and the aerothermoelastic coupling mechanisms are described. A previously developed reduced-order time-domain aerothermoelastic simulation framework is used as the starting point for this study. This paper focuses on two main modeling areas: 1) a surrogate modeling technique is employed for directly updating the generalized stiffness matrix and thermal loads based on the transient temperature distribution, and 2) basis augmentation techniques are employed in order to obtain more accurate solutions for the structural dynamic response. The techniques studied are described and applied to a representative hypersonic vehicle all-movable lifting surface. This paper summarizes the results of investigations into the development of parametric waverider geometry models, with emphasis on their efficiency in terms of their ability to cover a large feasible design space with a sufficiently small number of design variables to avoid the "curse of dimensionality."
The work presented here is focused on the parameterization of idealized waverider forebody geometries that provide the baseline shapes upon which more sophisticated and realistic hypersonic aircraft geometries can be built. Three different aspects of rationalizing the decisions behind the parametric geometry models developed using the osculating cones method are considered. Initially, three different approaches to the design method itself are discussed. Each approach provides direct control over different aspects of the geometry for which very specific shapes would be more complex to obtain indirectly, thus enabling the geometry to more efficiently meet any related design constraints. Then, a number of requirements and limitations are investigated that affect the available options for the parametric design-driving curves of the inverse design method. Finally, the performance advantages that open up with increasing flexibility of the design-driving curves in the context of a design optimization study are estimated. This allows one to reduce the risk of overparameterizing the geometry model, while still enabling a variety of meaningful shapes. Although the osculating cones method has mainly been used here, most of the findings also apply to other similar inverse design algorithms. This paper seeks to quantify the uncertainty associated with atmospheric conditions when propagating shaped pressure disturbances from a vehicle flying at supersonic speeds. A discrete adjoint formulation is used to obtain sensitivities of boom metrics to atmospheric inputs such as temperature, wind, and relative humidity profiles in addition to deterministic inputs such as the near-field pressure distribution. This study uses a polynomial chaos theory approach to couple these adjoint-derived gradients with uncertainty quantification to enable robust design by using gradient-based optimization techniques. 
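The polynomial chaos approach named above can be sketched in one dimension; the response function and all numbers below are hypothetical, not from the paper. A smooth function of a standard normal input is projected onto probabilists' Hermite polynomials via Gauss-Hermite quadrature, and the output mean and standard deviation are read directly off the expansion coefficients.

```python
import math
import numpy as np

def f(x):
    # Hypothetical smooth "boom metric" response of an uncertain input.
    return np.exp(0.3 * x) + 0.1 * x ** 2

order = 6
# numpy's hermgauss targets the physicists' weight exp(-t^2); substituting
# x = sqrt(2) t and normalizing gives quadrature for the N(0, 1) density.
t, w = np.polynomial.hermite.hermgauss(40)
x = np.sqrt(2.0) * t
w = w / np.sqrt(np.pi)                        # weights now sum to 1

# Probabilists' Hermite polynomials He_n (orthogonal under N(0,1), norm n!),
# built via the recurrence He_{n+1} = x He_n - n He_{n-1}.
He = [np.ones_like(x), x.copy()]
for n in range(1, order):
    He.append(x * He[n] - n * He[n - 1])

# Spectral projection: c_n = E[f(X) He_n(X)] / n!
fx = f(x)
coeffs = [float(np.sum(w * fx * He[n])) / math.factorial(n)
          for n in range(order + 1)]

# Mean is c_0; variance is sum_{n>=1} n! c_n^2.
pc_mean = coeffs[0]
pc_std = math.sqrt(sum(math.factorial(n) * coeffs[n] ** 2
                       for n in range(1, order + 1)))

# Cross-check against brute-force Monte Carlo sampling.
rng = np.random.default_rng(0)
mc = f(rng.standard_normal(200_000))
print(pc_mean, pc_std, mc.mean(), mc.std())
```

The Monte Carlo check illustrates why the spectral approach is attractive for robust design: a seven-term expansion built from 40 model evaluations reproduces the moments that direct sampling needs hundreds of thousands of evaluations to estimate, and the coefficients can be differentiated for gradient-based optimization.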
The effectiveness of this approach is demonstrated on an axisymmetric body of revolution and a low-boom concept. Results show that the mean and standard deviation of sonic-boom loudness are simultaneously reduced using robust optimization. Unlike conventional optimization approaches, the robust optimization approach has the added benefit of generating probability distributions of the sonic-boom metrics under atmospheric uncertainty. An inverse low-boom design method using a reversed equivalent area A_e,r based on off-body pressure distributions is effective because this method captures three-dimensional effects in the flowfield. In this paper, a robust low-boom design method using A_e,r is proposed that considers off-track sonic boom loudness. Computing costs are reduced by applying multipole analysis, which allows the capture of three-dimensional effects even when an off-body location is close to the body. In terms of robustness, it is difficult to set feasible target A_e,r distributions over the whole boom carpet. Unfeasible targets may lead to results contrary to the design intent. Thus, the target is imposed only on the undertrack A_e,r distribution. In addition, the second derivatives of A_e,r, which are directly related to the F-function, are controlled over the whole boom carpet. As a result, the second derivatives are successfully controlled as intended by free-form deformation and genetic-algorithm-based optimization. The undertrack A_e,r meets the target with acceptable deviation as well. Compared with the undertrack low-boom design, the robust design reduces the maximum perceived level over the whole boom carpet by 1.5 dB. Because of the skidding that occurs when a heavy aircraft's main landing gear tires touch down, a number of ideas have been patented since the 1940s to improve tire safety and decrease the substantial wear and smoke produced during every landing by spinning the gear wheels before touchdown.
In this paper, a coupled structural-thermal transient analysis in ANSYS has been used to model a single wheel main landing gear as a mass-spring system. This model has been chosen to analyze the wheel's dynamic behavior and tire tread temperature during the short period from static to a matching free-rolling velocity in which the wheel is forced to accelerate by the friction between the tire and ground. The tire contact surface temperature has been calculated for both the initially static and prespun wheels in order to compare the temperature levels for different initial rotations and to validate the use of the prespinning technique. The reduction of fleetwide environmental impacts through vehicle technologies and concepts is a global imperative. Fleet-level environmental impact modeling is governed by computational resource limitations and the tradeoff between breadth and depth. Unlike screening-fidelity models, detailed models typically cannot support probabilistic analysis of forecast parameters (i.e., operations, fleet mix, technology-aircraft market penetration, etc.) due to their high run time. Moreover, both levels of fidelity currently rely on the selection of a small set of point-design technology aircraft, precluding the simultaneous assessment of technology-aircraft environmental performance and forecast parameters. In this paper, a rapid, integrated, and interdependent fleet-level environmental modeling framework that addresses these shortcomings is presented. The model consists of several screening-fidelity modeling enablers developed in prior work, combined with parameterized joint probability distributions to bridge the gap between aircraft and fleet-level probabilistic assessment. Copulas are used to enforce a prescribed dependence between two probability distributions and facilitate the characterization of complex and uncertain interactions in fleet-level assessment.
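A Gaussian copula, one common way to enforce the prescribed dependence mentioned above, can be sketched as follows. The marginals (fleet growth, technology market penetration) and the correlation value are illustrative assumptions, not the framework's calibrated inputs:

```python
import math, random

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_pairs(rho, n, seed=0):
    """Draw n pairs of dependent uniforms (u1, u2) via a Gaussian copula:
    correlate two standard normals, then push each through the normal CDF."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((std_normal_cdf(z1), std_normal_cdf(z2)))
    return pairs

# Map the dependent uniforms through hypothetical marginal inverse CDFs:
# fleet growth ~ Uniform(1%, 5%), market penetration ~ Triangular(0, 0.4, 1).
def growth_icdf(u):
    return 0.01 + 0.04 * u

def penetration_icdf(u):
    # Inverse CDF of Triangular(a=0, mode=0.4, b=1).
    return math.sqrt(0.4 * u) if u < 0.4 else 1.0 - math.sqrt(0.6 * (1.0 - u))

samples = [(growth_icdf(u1), penetration_icdf(u2))
           for u1, u2 in gaussian_copula_pairs(0.7, 5000)]
```

The dependence structure is thus specified separately from the marginals, which is exactly the property that makes copulas convenient for characterizing uncertain interactions between forecast parameters.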
This paper explains the building blocks of the framework in detail, followed by their conceptual structuring. Finally, notional simplified use cases of the model are presented to demonstrate the different types of interdependencies that can be assessed probabilistically. Governmental organizations are currently developing standards for civil unmanned aircraft to operate safely in the national airspace. A key requirement for aircraft certification is reliability assessment. Traditional reliability assessment methods make assumptions that are overly restrictive when applied to unmanned aircraft. This paper presents a step-by-step, model-based, reliability assessment method that is tailored for unmanned aircraft. In particular, this paper investigates the effects of stuck actuator faults (a common failure mode in electromechanical actuators) on the overall reliability. Several candidate actuator architectures, with different numbers of controllable surfaces, are compared to gain insight into the effect of actuator placement on reliability. It is assumed that a fault detection algorithm is available and affected by known rates of false alarms and missed detections. The overall reliability is shown to be dependent on several parameters, including hardware quality, fault detection performance, mission profile, flight envelope, and operating point. In addition to being an analysis tool, the method can help understand aircraft design tradeoffs. This paper presents an improved Neumann expansion method for calculating statistical moments of either a structural response or system inverse matrix under stochastic uncertainties in system properties and loads. The essentials of the proposed method are the successive matrix inversion and partial bivariate dimension reduction method that enable efficient multidimensional probability integration. 
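The inverse Neumann expansion underlying the successive matrix inversion can be sketched on a toy 2 x 2 system. The nominal stiffness, the perturbation realization, and the load below are illustrative; the series converges when the spectral radius of K0^-1 dK is below one:

```python
def inv2(A):
    """Analytic inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def neumann_solve(K0, dK, f, terms=20):
    """Approximate (K0 + dK)^-1 f via the Neumann series
    u = sum over i of (-K0^-1 dK)^i applied to K0^-1 f."""
    K0inv = inv2(K0)
    u = matvec(K0inv, f)
    term = list(u)
    for _ in range(terms - 1):
        term = [-t for t in matvec(K0inv, matvec(dK, term))]
        u = [a + b for a, b in zip(u, term)]
    return u

K0 = [[4.0, 1.0], [1.0, 3.0]]    # nominal stiffness
dK = [[0.4, 0.0], [0.0, -0.3]]   # one realization of the random fluctuation
f = [1.0, 2.0]
u = neumann_solve(K0, dK, f)
```

Since only K0 is ever inverted, each new random realization dK costs a handful of matrix-vector products, which is the efficiency the expansion-based methods above exploit.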
The successive matrix inversion computes an exact realization of a stochastic system response, or an inverse matrix, by using the concept of inverse Neumann expansion. In characterizing the stochastic response, multidimensional probability integration is solved for statistical moments and covariance matrices. The existing univariate decomposition method is highly efficient at approximating the multidimensional integration. However, due to the missing interaction effects, the univariate decomposition method can lead to large errors. In this study, a partial bivariate decomposition method is proposed to capture bivariate interaction effects approximately with a marginal increase in computational cost. Several numerical examples, including representative structural problems, are discussed to demonstrate the accuracy and efficiency of the proposed method compared with other existing methods. It is found that the proposed method provides better accuracy with significant computational cost savings when random fluctuations make locally limited changes in a system. This paper investigates the aerodynamic performance of a semiflexible membrane wing concept using two-way coupled fluid-structure interaction simulations. The sailwing concept consists of a rigid leading-edge mast, ribs, tensioned edge cables, and membranes forming the upper- and lower-wing surfaces. For membrane structures like the sailwing, the exact shape of the membrane surface in the absence of external loading depends on the prestress levels of both membranes and supporting edge cables. A form-finding analysis is used for calculating the equilibrium shape of prestressed membrane surfaces with no external loading. The exact shape of the sailwing in operating condition depends on the pressure distribution on the membrane's surface, which itself depends on the wing's shape and the structural parameters. For faster design space exploration, the fluid problem is modeled using an unsteady vortex panel method.
Among all the factors influencing the fluid-structure interaction problem, the role of the prestress level is specifically investigated. A nonlinear dynamic analysis of the structural problem is performed using the finite element method. The influence of the wing's flexibility on its aerodynamic performance and structural response is examined by changing the prestress level in the membranes. Aerodynamic characteristics of the membrane wing are compared with an equivalent conventional rigid wing. A higher lift slope for the membrane wing and improvement of the lift coefficient due to trailing-edge flexibility are observed. The design of plate lines as a ground-mounted device for wake-vortex decay enhancement is investigated in this work. The most important design parameters, the aspect ratio and plate distance, are analyzed for the wake vortices generated by two aircraft: the A340 as well as the A380. Large-eddy simulations are used to simulate the wake-vortex evolution in ground proximity for different parameter combinations. Fully rolled-up wake vortices are initialized using a Lamb-Oseen vortex model resembling the characteristics of the two aircraft. Using the stochastic kriging method, estimates of the performance and the respective probabilistic envelopes are given for the design-parameter region spanned by the large-eddy simulations. The vortex circulation averaged over the rapid decay phase is taken as the objective function. The large-eddy-simulation parameters are selected in the vicinity of the expected optimum. An optimal parameter combination can be localized in the A340 case, as well as in the A380 case. For both cases, statistical relevance is provided. Moreover, it can be deduced that the optimal parameters for the A380 are also well suited for smaller aircraft like an A340.
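The kriging estimate used in the plate-line study above can be sketched with a one-dimensional ordinary-kriging predictor. The correlation parameter and the sample values below are hypothetical stand-ins for the large-eddy-simulation data:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def corr(x1, x2, theta=2.0):
    """Gaussian correlation function with a fixed (assumed) hyperparameter."""
    return math.exp(-theta * (x1 - x2) ** 2)

def kriging_predict(xs, ys, x):
    """Ordinary kriging: solve the bordered system [[R, 1], [1^T, 0]] for the
    weights and Lagrange multiplier, then predict as the weighted sum of ys."""
    n = len(xs)
    A = [[corr(xs[i], xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [corr(xi, x) for xi in xs] + [1.0]
    w = gauss_solve(A, b)[:n]
    return sum(wi * yi for wi, yi in zip(w, ys))

# Hypothetical samples: normalized plate spacing vs. averaged circulation metric.
xs = [0.0, 0.5, 1.0]
ys = [1.00, 0.62, 0.80]
pred = kriging_predict(xs, ys, 0.25)
```

The predictor interpolates the simulation samples exactly, which is why kriging is a natural fit when each sample is an expensive large-eddy simulation; the same linear system also yields the variance used for the probabilistic envelopes (not shown here).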
Particular design issues in tiltrotors produce wings that are thick and highly loaded, and so separation and early-onset buffet can be problematic, and vortex generators are commonly used to alleviate these issues. In this work, the design of vortex generators for the elimination of separation is considered using a viscous flowfield simulation. A large design space of rectangular vane-type vortex generators is sampled and simulated, and a radial-basis-function surrogate model is implemented to model the full design space. An efficient adaptive sampling approach for improved design space sampling has also been developed that balances the properties of space filling, curvature capture, and optimum locating. This approach has been tested on the design of a vortex generator on a highly loaded infinite wing, with a representative tiltrotor airfoil section, using a five-dimensional design space. The design of the vortex generators using this approach shows that elimination of the separation is possible while simultaneously reducing the drag of the wing with optimized vortex generators, compared to the clean wing. This paper examines the application of formation flight to micro air vehicles with regard to possible power savings. Results of an experimental investigation on echelon formations using low-aspect-ratio (AR = 2) flat plate rectangular wings at low Reynolds number (Re = 35,000) are presented. One-, two-, and three-wing configurations are tested in a low-speed wind tunnel. To quantify the power savings by lift enhancement and drag reduction, the aerodynamic loads acting on each wing are measured using specific balances while the trailing wings of the formation are being traversed laterally and vertically in fine steps. In addition, the flowfields of the wing wakes are measured using particle image velocimetry. 
The force and flowfield measurements show that the optimal positions for lift enhancement occur at a slight spanwise overlap between the leading and trailing wings. This paper describes a method for determining an optimal control allocation function for aircraft with an unconventional control surface setup (i.e., that does not consist of a conventional elevator, rudder, and ailerons). A typical application of this control mixing would be to impart conventional handling qualities to an unconventional unmanned aerial vehicle, which will enable a pilot to fly the unmanned vehicle manually during flight testing. The mixing can also be used as a backup mode to recover the unmanned aerial vehicle manually, should any sensor failures occur during flight testing. Furthermore, the allocation can be used to simplify the inner control loops of an unmanned aerial vehicle autopilot or stability augmentation system. The control allocation design process was formulated as a multi-objective optimization problem. A methodology was proposed to define the constraints, which can be customized for a particular aircraft or application. This paper presents a mesh generation strategy that facilitates the numerical simulation of ice accretion on realistic aircraft configurations by automating the deformation of surface and volume meshes in response to the evolving ice shape. The discrete surface evolution algorithm is based on a face-offsetting strategy that uses an eigenvalue decomposition to determine 1) the nodal offset direction and 2) a null space in which the quality of the surface mesh is improved via point redistribution. A fast algebraic technique is then used to propagate the computed surface deformations into the surrounding volume mesh.
Due to inherent limitations in the icing model employed here, there is no intent to present a tool to predict three-dimensional ice accretions but, instead, to demonstrate a meshing strategy for surface evolution and mesh deformation that is appropriate for aircraft icing applications. In this context, sample results are presented for a complex glaze-ice accretion on a rectangular-planform wing with a constant GLC-305 airfoil section and rime-ice accretion on a swept, tapered wing (also with a GLC-305 cross section). This meshing strategy is demonstrated to be robust and largely automatic, except for severe deformations. An approach to modeling longitudinal airplane aerodynamics during unsteady maneuvers was developed for a micro air vehicle at angles of attack well past stall under unsteady conditions, including dynamic stall as might be experienced in perching maneuvers. To gather unsteady micro air vehicle flight data, an offboard motion tracking system was used to capture free-flight trajectories of a micro air vehicle with a weight of 14.44 g (0.509 oz) and a wingspan of 37.47 cm (14.75 in.), operating at a nominal Reynolds number of 25,000. The measured trajectories included nominal gliding flight as well as mild-to-aggressive stalls. For the most aggressive stall case, the maximum lift coefficient reached a value near 2.5. The new model derived from the test data relied on a so-called separation parameter that modeled the aerodynamic lag during rapid changes in the angle of attack, and it thereby captured the effects of dynamic stall seen in the lift, drag, and moment coefficient data. Results from the model were shown for flights that covered a range of conditions from quasi-steady low angle-of-attack flight to aggressive stalls with angles of attack approaching 90 deg. This paper presents the modeling of the performance of small propellers used for vertical takeoff and landing micro aerial vehicles operating at low Reynolds numbers and in oblique flow.
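The separation-parameter lag model described in the stall-modeling abstract above is reminiscent of a Goman-Khrabrov-style first-order lag, which can be sketched as follows. The static separation curve, stall angle, time constant, and lift closure are illustrative assumptions, not the model identified from the flight data:

```python
import math

def static_sep(alpha, a_star=math.radians(15), k=10.0):
    """Static separation-point state sigma0 in (0, 1), 1 = fully attached;
    a smooth drop around a hypothetical stall angle a_star."""
    return 1.0 / (1.0 + math.exp(k * (alpha - a_star)))

def simulate_lag(alphas, dt, tau=0.1):
    """First-order lag: tau * dsigma/dt + sigma = sigma0(alpha), integrated
    with forward Euler. Lift uses a Goman-Khrabrov-style closure
    CL = 2*pi*alpha*((1 + sqrt(sigma)) / 2)**2 (an assumed form)."""
    sigma = static_sep(alphas[0])
    out = []
    for a in alphas:
        sigma += dt / tau * (static_sep(a) - sigma)
        cl = 2.0 * math.pi * a * ((1.0 + math.sqrt(sigma)) / 2.0) ** 2
        out.append((sigma, cl))
    return out

# Rapid pitch-up through stall at 40 deg/s: the lagged sigma stays above the
# static value, mimicking the transient lift overshoot of dynamic stall.
dt = 0.001
alphas = [math.radians(5 + 40 * i * dt) for i in range(500)]
hist = simulate_lag(alphas, dt)
```

The single lag state is what lets such models capture the hysteresis loops seen in the lift, drag, and moment data during rapid angle-of-attack changes.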
The blade element momentum theory, vortex lattice method, and momentum theory for oblique flow are used to predict propeller performance. For validation, the predictions for a commonly used propeller for vertical takeoff and landing micro aerial vehicles are compared to a set of wind-tunnel experiments. Both the blade element momentum theory and vortex lattice method succeed in predicting correct trends of the forces and moments acting upon the propeller shaft, although accuracy decreases significantly in oblique flow. For the dataset analyzed here, combining the available data of the propeller in purely axial flow with the momentum theory for oblique flow and applying a correction factor for the wake skew angle results in more accurate performance estimates at all elevation angles. High-fidelity aerodynamic shape optimization based on the Reynolds-averaged Navier-Stokes equations is used to optimize the aerodynamic performance of a conventional tube-and-wing design, a hybrid wing-body, and a novel lifting-fuselage concept for regional-class aircraft. Trim-constrained drag minimization is performed on a hybrid wing-body design, with an optimized conventional design serving as a performance reference. The optimized regional-class hybrid wing-body yields no drag savings when compared with the conventional reference aircraft. Starting from the optimized hybrid wing-body, an exploratory optimization with significant geometric freedom is then performed, resulting in a novel shape with a slender lifting fuselage and distinct wings. Based on this exploratory result, a new regional-class lifting-fuselage configuration is designed and optimized. With a span constrained by code "C" gate limits and having the same wing-only span as the conventional reference aircraft, this new design produces up to 10% lower drag than the reference aircraft. The effect of structural weight uncertainties, cruise altitude, and stability requirements are also examined. 
Rotor blades experience very high centrifugal forces that can be used to pump air to the outboard region of the blade through an internal duct, which can be used for flow control. Analysis or design of such systems requires accurate prediction capability. To validate current Reynolds-averaged Navier-Stokes simulation methodologies, an experiment was performed using a rotating pipe, and simulation results were compared to the measured data. A quasi-one-dimensional code was also compared to experiment as a lower-order simulation tool for faster solutions. The test and simulations include several combinations of steady inlet and exit conditions as well as an unsteady inlet valve operation at several rotational speeds. The quasi-one-dimensional code showed good correlation for steady inlet and exit conditions with boundary conditions obtained from experiment. Navier-Stokes methods also showed good agreement with measured data for pressure and mass flow rate at most conditions, while properly capturing complex flow features including separation, secondary swirl flow, and tip-flow interactions. The kinetic-eddy simulation and the Spalart-Allmaras turbulence models were tested to examine solution sensitivity under the complex flow environment. The two turbulence models showed similar results, except when the inlet valve was closed, in which case the kinetic-eddy simulation model showed better correlation. An uncertainty-based approach is undertaken to deal with multipoint wing aerostructural optimization. The flight points are determined by the quadruple set of parameters: Mach number, cruise altitude, carried payload, and flight range. From this set, the payload and range are modeled as probabilistically uncertain based on U.S. flight data for the operations of an A320 aircraft. The fuel burn is selected as the performance metric to optimize. Structural failure criteria, aileron efficiency, and field performance considerations are formulated as constraints. 
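Nonintrusive propagation of the uncertain payload and range described above can be sketched with a small tensor-grid quadrature. The Breguet-style fuel-burn surrogate and the normal input distributions are illustrative stand-ins for the coupled aerostructural solver and the data-driven distributions of the study:

```python
import math

def fuel_burn(payload_kg, range_km, K=25000.0, empty_kg=42000.0):
    """Breguet-style surrogate (an assumed stand-in for the aerostructural
    solver plus mission analysis): W_fuel = W_landing * (exp(R/K) - 1)."""
    return (empty_kg + payload_kg) * (math.exp(range_km / K) - 1.0)

# Probabilists' Gauss-Hermite nodes/weights (n=3) for normal inputs.
NODES = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def expected_fuel(mu_p, sd_p, mu_r, sd_r):
    """Nonintrusive propagation: tensor-grid quadrature over independent
    normal payload and range; the model is only ever called, never modified."""
    total = 0.0
    for wp, xp in zip(WEIGHTS, NODES):
        for wr, xr in zip(WEIGHTS, NODES):
            total += wp * wr * fuel_burn(mu_p + sd_p * xp, mu_r + sd_r * xr)
    return total

e_fuel = expected_fuel(15000.0, 3000.0, 1500.0, 400.0)
d_fuel = fuel_burn(15000.0, 1500.0)  # single-point deterministic design value
```

Because fuel burn is convex in range, the expected fuel burn exceeds the value at the mean flight point, which is precisely why optimizing the deterministic design point alone can mislead.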
The wing is parametrized by its planform, airfoil sections, and structural thickness. The analysis disciplines consist of an aerostructural solver and a surrogate-based mission analysis. For the optimization task, a gradient-based algorithm is used in conjunction with coupled adjoint methods and an analytical fuel-burn sensitivity formula. Another key enabler is a cost-effective nonintrusive uncertainty propagator that allows optimization of an aircraft with legacy analysis codes within a computational budget. This paper presents an aerostructural perspective on the potential benefits of wingletted wings in comparison to planar wings of the same projected span. There is no consensus in the current literature on the efficiency gains possible from winglets. The present paper takes a step further toward understanding the tradeoffs in the design of wingletted wings using high-fidelity numerical optimization based on both purely aerodynamic and fully coupled aerostructural analysis. The high-fidelity analysis in both cases uses the Euler equations to model the flow along with a friction drag estimate based on the surface area. Three configurations are considered: winglet-up, winglet-down, and planar. The results show that downward winglets produce a greater drag reduction than upward winglets for two reasons. First, the downward winglet moves the tip vortex further away from the wing from a purely aerodynamic standpoint. Second, the winglet-down configuration has a higher projected span at the deflected state due to the structural deflections. This indicates that fully coupled high-fidelity aerostructural optimization is required to quantify the benefits of winglets properly. The winglet-down configuration can reduce the total drag by up to 2% at the same weight as the optimal planar counterpart. Lambda wing configurations are known to possess challenging flight characteristics at transonic Mach numbers.
This study tests various methods for increasing the maneuverability and stability of the SACCON, a generic lambda wing configuration. Several control concepts are presented and tested to determine their effects on the lateral-directional and longitudinal stability of the aircraft. The first category of control concepts uses smooth surface deformations of the wing tips to eliminate control surface gaps found with traditional flap-based systems. The second category uses panel deflections on the upper surface of the aircraft. Each concept is simulated at Mach numbers in the range of 0.5 to 0.85 using the TAU code of the German Aerospace Center (DLR), with subsequent wind tunnel tests taking place in the Transonic Wind Tunnel of the German-Dutch Wind Tunnels (DNW-TWG) in Göttingen, Germany. The main flow features created by each control device and the Mach number effects are discussed, as are their effects on the aircraft's aerodynamic performance. The smooth wing-tip deformations isolate the control derivatives and are best able to enhance lateral-directional stability. The upper surface devices create a more complicated response but offer more immediate application due to their reliance on firmly established technology. Experimental work has shown that the flowfield around a wing-body configuration can be successfully modified with a short Kutta edge tail, so named because, by controlling the rear stagnation point, the circulation about the body can be effectively modified. The precise nature of the Kutta edge-body interaction was not considered; rather, only the global flowfield effects were. Therefore, the purpose of this study was to investigate the lift potential of low-drag bodies with Kutta edges, by numerically solving the flowfield around two low-drag bodies selected from literature. The drag and lift of the bodies were compared with experimental and numerical results in literature with good agreement.
The geometries, computational grids, and boundary conditions of the two benchmark cases were then modified by adding short Kutta edges, for aftbody deflection angles of 2, 4, 6, and 8 deg at Reynolds numbers of 1.2 × 10^6 and 10^7. Both of the low-drag bodies showed similar average increases in lift and in pressure drag with the addition of the Kutta edge at increasing deflection angles. Though the configuration study is not yet complete, the results indicate a design space where there is potential for improvement in flight efficiency. Power-required flight testing has traditionally been done using the classic power-independent-of-weight, velocity-independent-of-weight method. This method is well suited for manned aircraft but has some shortcomings when applied to unmanned systems that are remotely piloted and/or operating on small test ranges. Maximum-likelihood estimation was used to identify the lift and drag models of two test aircraft using data acquired through acceleration and deceleration runs. The calculated power-required data were reduced using the classic technique to remove weight and air density effects. Flight-test data were collected using a Raspberry Pi single-board computer because of its low cost, low weight, and good performance. Data points using the classic flight-test method were also collected for comparison against the dynamic method. Because of the small test range used, a third of all points collected using the classic method had to be eliminated because of assumption violations. The flight-test results show good agreement between the deceleration runs and the classic method. The acceleration runs generally had a poorer overall fit and a larger spread due to excessive sensor noise caused by imbalances in the motor/prop combination. Overall, the dynamic power analysis was much easier to perform and required significantly less time and airspace.
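The identification of a power-required model from flight data like the acceleration and deceleration runs above can be sketched with an ordinary least-squares fit of the classic two-term model P(V) = A·V³ + B/V (parasite plus induced power). This linear-in-coefficients fit is a simplified stand-in for the maximum-likelihood estimation used in the study, and the coefficients below are synthetic:

```python
def fit_power_model(vels, powers):
    """Least-squares fit of P(V) = A*V**3 + B/V via the 2x2 normal equations.
    The model is linear in (A, B), so no iteration is needed."""
    s11 = sum(v ** 6 for v in vels)
    s12 = sum(v ** 2 for v in vels)          # v**3 * (1/v) cross term
    s22 = sum(1.0 / v ** 2 for v in vels)
    b1 = sum(p * v ** 3 for v, p in zip(vels, powers))
    b2 = sum(p / v for v, p in zip(vels, powers))
    det = s11 * s22 - s12 * s12
    A = (b1 * s22 - b2 * s12) / det
    B = (s11 * b2 - s12 * b1) / det
    return A, B

# Synthetic accel/decel sweep generated from known (assumed) coefficients.
A_true, B_true = 0.012, 5000.0
vels = [10.0 + i for i in range(15)]
powers = [A_true * v ** 3 + B_true / v for v in vels]
A_fit, B_fit = fit_power_model(vels, powers)
```

For these synthetic coefficients, the recovered model also yields the minimum-power speed (B/(3A))^0.25, about 19.3 m/s, illustrating the kind of performance quantity the identified model provides without flying dedicated stabilized points.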
The utilization of fuel as a heat sink can enable the design of higher-performance aircraft through the reduction of heat loads. A conceptual-design-phase energy model for the cross section of a wing with an internal fuel tank is developed for computing the rate of heat rejection in flight. The calculations focus on physical dependencies rather than empirical correlations, solving the conservation of energy equation using results from previously solved conservation of mass and momentum equations. A series of control volumes and various temperature profiles are employed to model the thermal boundary layer. The surface boundary conditions include an isothermal surface as well as an unheated starting length to approximate a heated fuel tank. An implicit root-finding method is verified and used in solving the resulting system of equations. Results are compared to empirical correlations for validation purposes involving laminar and turbulent flow test cases over a flat plate, various NACA airfoils, as well as a C-130 Hercules. Two temperature profiles are considered and compared throughout the verification and validation process. The results show reasonable agreement with empirical correlations, and thus show great promise for use in conceptual design. A rationally based methodology is proposed to derive a new mass loss empirical model for supercooled large droplets. The numerical results from the ONERA two-dimensional trajectory solver and the experimental results from the NASA Papadakis supercooled large droplet database are combined to get both the impinging and the deposited mass flow rates at each point of the test model. These data are used to derive a new model for the collection efficiency. The model clearly separates the influence of the kinetic energy, which is the dominant effect close to the leading edge, from the influence of the angle of incidence, which is the most influential parameter close to the impingement limits.
Moreover, the model can be used for both supercooled large droplets and small droplets. The accurate, direct measurement of thrust or impulse is one of the most critical elements of electric thruster characterization, and it is one of the most difficult measurements to make. This paper summarizes recommended practices for the design, calibration, and operation of pendulum thrust stands, which are widely recognized as the best approach for measuring micronewton- to millinewton-level thrust and micronewton-second-level impulse bits. The fundamentals of pendulum thrust stand operation are reviewed, along with the implementation of hanging pendulum, inverted pendulum, and torsional balance configurations. The methods of calibration and recommendations for calibration processes are presented. Sources of error are identified, and methods for data processing and uncertainty analysis are discussed. This review is intended to be the first step toward a recommended practices document to help the community produce high-quality thrust measurements. Accurate control and measurement of propellant flow to a thruster is one of the most basic and fundamental requirements for operation of electric propulsion systems, whether they be in the laboratory or on flight spacecraft. Hence, it is important for the electric propulsion community to have a common understanding of typical methods for flow control and measurement. This paper addresses the topic of propellant flow primarily for the gaseous propellant systems that have dominated laboratory research and flight application over the last few decades, although other types of systems are also briefly discussed. Although most flight systems have employed a type of pressure-fed flow restrictor for flow control, both thermal-based and pressure-based mass flow controllers are routinely used in laboratories.
Fundamentals and theory of operation of these types of controllers are presented, along with sources of uncertainty associated with their use. Methods of calibration and recommendations for calibration processes are presented. Finally, details of uncertainty calculations are presented for some common calibration methods and for the linear fits to calibration data that are commonly used. Presented in this work is a guide to standardize the methods of performing correct and accurate Langmuir probe measurements within the plasma environment produced by an electric propulsion device. The construction, installation, operation, and data interpretation of the full family of Langmuir probes are addressed, including single Langmuir probes, double Langmuir probes, and triple Langmuir probes. Traditional time-averaged operation as well as high-speed operation are discussed, as are the various pressure and external field limitations of Langmuir probes. This recommended practice is established from the vast collection of preexisting Langmuir probe theory and operational experience within the plasma environment of electric propulsion thrusters. Focus is placed on the application of Langmuir probes in the most common electric propulsion devices, including Hall, ion, and pulsed thrusters. These electric propulsion devices all create flowing plasmas that require special probe construction and data analysis that is not needed in many other types of plasmas where Langmuir probes are commonly employed. The recommended practices are presented in a concise handbook style, deferring all theoretical derivations and details to the referenced works in which they originate. Electrostatic analyzers are used in electric propulsion to measure the energy per unit charge E/q distribution of ion and electron beams: in the downstream region of thrusters, for example. This paper serves to give an overview of the most fundamental, yet most widely used, types of electrostatic analyzer designs.
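A standard single-Langmuir-probe analysis step, extracting the electron temperature from the exponential retardation region of the current-voltage trace, can be sketched as follows. The synthetic 2 eV trace is illustrative, and ion-current subtraction is assumed to have been done already:

```python
import math

def electron_temp_eV(volts, currents):
    """Estimate Te (eV) from the electron-retardation region of a single
    Langmuir probe trace: ln(I_e) = V/Te + const, so Te = 1/slope.
    Assumes points lie below the plasma potential and that the ion current
    has already been subtracted."""
    ys = [math.log(i) for i in currents]
    n = len(volts)
    mx = sum(volts) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(volts, ys)) / \
            sum((x - mx) ** 2 for x in volts)
    return 1.0 / slope

# Synthetic trace for a hypothetical 2 eV plasma.
Te_true = 2.0
volts = [-10.0 + 0.5 * k for k in range(20)]
currents = [1e-3 * math.exp(v / Te_true) for v in volts]
Te_est = electron_temp_eV(volts, currents)
```

In flowing electric propulsion plasmas, the recommended practices add corrections (orientation, sheath expansion, pressure effects) on top of this basic slope extraction.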
Analyzers are grouped into two classifications: 1) mirror-type analyzers, and 2) deflector-type analyzers. Common mirror-type analyzers are the parallel-plate mirror analyzer and the cylindrical mirror analyzer. For deflector-type analyzers, a generalized toroidal type is first described and the commonly used cylindrical deflector and spherical deflector analyzers are discussed as special cases. The procedure for energy resolution calculations of electrostatic analyzers is described, which is a common way of comparing analyzers. Ion energy distributions from a spherical deflector analyzer are presented, comparing variations in 1) particle energy, 2) particle angle, 3) the entrance and exit geometry of the analyzer, and 4) sector angle using numerical calculations. Inductive magnetic field probes (also known as B-dot probes and sometimes as B probes or magnetic probes) are often employed to perform field measurements in electric propulsion applications where there are time-varying fields. Magnetic field probes provide the means to measure these magnetic fields and can even be used to measure the plasma current density indirectly through the application of Ampere's law. Measurements of this type can yield either global information related to a thruster and its performance or detailed local data related to the specific physical processes occurring in the plasma. The available literature is condensed into an accessible set of rules, guidelines, and techniques to standardize the performance and presentation of future B-dot probe measurements. The electric propulsion community has been implored to establish and implement a set of universally applicable test standards during the research, development, and qualification of electric propulsion systems. Facility-to-facility and, more importantly, ground-to-flight variability in performance can result in large design margins in application or in aversion to mission infusion.
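The basic B-dot probe reduction, recovering B(t) by integrating the induced voltage V = N·A·dB/dt, can be sketched as follows. Offset removal before integration is a standard step; the probe geometry and the 100 kHz test field below are illustrative:

```python
import math

def integrate_bdot(times, volts, n_turns, area):
    """Recover B(t) from a B-dot probe signal: V = N*A*dB/dt, so
    B(t) = (1/(N*A)) * cumulative trapezoidal integral of V.
    A DC offset is removed first so it does not integrate into a drift."""
    off = sum(volts) / len(volts)
    v = [x - off for x in volts]
    b, out = 0.0, [0.0]
    for k in range(1, len(times)):
        b += 0.5 * (v[k] + v[k - 1]) * (times[k] - times[k - 1]) / (n_turns * area)
        out.append(b)
    return out

# Synthetic check: 10-turn, 1 cm^2 probe in a 100 kHz field of 10 mT amplitude.
N, A, B0, w = 10, 1.0e-4, 0.01, 2.0 * math.pi * 1.0e5
times = [k * 1.0e-8 for k in range(2001)]
volts = [N * A * w * B0 * math.cos(w * t) for t in times]
b_trace = integrate_bdot(times, volts, N, A)
```

Residual offsets still integrate into a slow drift, which is why the recommended practices emphasize offset handling and calibration of the effective N·A product.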
Performance measurements and life testing under appropriate conditions can be costly and lengthy. Measurement practices must be consistent, accurate, and repeatable. Additionally, the measurements must be universally transportable across facilities throughout development, qualification, spacecraft integration, and on-orbit operation. A recommended practice for making pressure measurements, pressure diagnostics, and calculating effective pumping speeds with justification is presented. Ion thruster plumes are simulated under a recently developed computer framework inside a vacuum chamber at various background pressures for single- and triple-thruster configurations. Momentum and charge-exchange collisions occurring between neutral and ion species, as well as the induced electric field due to ions, are modeled using multiple adaptive mesh refinement meshes to study the spatial variance of neutrals and ions while analyzing the effect of vacuum chamber conditions. Comparisons between vacuum chamber and space simulations show that the spatial distributions of the neutral densities are different in the far downstream region of the ion thruster. The change in the background density is also found to affect the spatial distributions of the charge-exchange ions in the domain. The triple-thruster cases show that the background density levels are approximately proportional to the number of thrusters being operated. The adaptive mesh refinement capability is shown to be able to capture the confinement effect of the flow in the vacuum chamber. Research into hot corrosion and its preventative measures in gas turbine engines has focused almost exclusively on sodium sulfate since the early 1950s. However, current gas turbine engines on U.S. Department of Defense aircraft are exhibiting degradation due to hot corrosion following operations in locations where sodium sulfate is not present.
In light of this, questions raised by previous hot corrosion research are reviewed with regard to U.S. Department of Defense operations. Initial lab results suggest calcium sulfate as a likely alternative initiator of hot corrosion in environments, or at temperatures, where sodium-sulfate-initiated hot corrosion is not probable. The tabulated premixed conditional moment closure model has shown the capability to model turbulent, premixed methane flames with detailed chemistry and reasonable run times in a Reynolds-averaged Navier-Stokes formulation. The tabulated premixed conditional moment closure model is a table lookup combustion model that tabulates species, reaction rates, and thermodynamic data for use by the computational-fluid-dynamics code during run time. In this work, the tabulated premixed conditional moment closure model is extended to unsteady Reynolds-averaged Navier-Stokes. The new model is validated against particle image velocimetry and laser Raman measurements of an enclosed turbulent reacting methane jet from the German Aerospace Center. The flame's reaction progress variable, its variance, and the scalar dissipation rate are calculated by the computational fluid dynamics in three dimensions. These three parameters are used to index detailed species information from the table for use by the computational-fluid-dynamics code. The scalar dissipation is used to account for the effects of the small-scale mixing, whereas a presumed shape beta function probability density function is used to account for the effects of large-scale turbulence on the reaction rates. Velocity, temperature, and major species are compared to the experimental data. Accurate predictions of the velocity fields were obtained, but accurate predictions of scalar quantities were limited by the adiabatic assumption of the tabulated premixed conditional moment closure model. 
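The presumed-shape beta-PDF treatment of large-scale turbulence described above can be sketched as follows: the mean reaction rate at a given mean progress variable and variance is a PDF-weighted integral of the instantaneous tabulated rate. The rate function below is a hypothetical stand-in for a table entry, not the model's actual chemistry.

```python
import math

def beta_pdf(c, mean, var):
    """Beta PDF of the progress variable, parameterized by mean and variance
    (requires 0 < var < mean*(1 - mean))."""
    g = mean * (1.0 - mean) / var - 1.0
    a, b = mean * g, (1.0 - mean) * g
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * c ** (a - 1.0) * (1.0 - c) ** (b - 1.0)

def pdf_weighted_rate(rate, mean, var, n=2000):
    """Midpoint-rule quadrature of the PDF-weighted rate; midpoints avoid
    the integrable singularities at c = 0 and c = 1."""
    return sum(rate((i + 0.5) / n) * beta_pdf((i + 0.5) / n, mean, var)
               for i in range(n)) / n

# Hypothetical tabulated rate peaking mid-flame:
rate = lambda c: c * (1.0 - c)
mean_rate = pdf_weighted_rate(rate, mean=0.5, var=0.05)
```

In the model proper, the lookup is indexed by the CFD-supplied progress variable, its variance, and the scalar dissipation rate; this sketch shows only the PDF-averaging step.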
Combustion stability characteristics of a turbulent diffusion flame established between a center jet of gaseous oxygen and coflowing jets of gaseous hydrogen blended with different amounts of gaseous methane are studied in a rectangular combustor operating under atmospheric pressure conditions. A compression driver, mounted near the injector, is used to acoustically excite the flame from a transverse direction. Resulting flame perturbations are studied using OH* chemiluminescence imaging, dynamic pressure measurements, and high-speed flow visualizations. Both steady-state perturbations and perturbations as the acoustically forced flames transition from one fuel blend to another are studied. Simultaneous measurements of pressure oscillations and heat release oscillations are used to obtain local Rayleigh indices showing locations that drive or dampen the instability. Transient measurements associated with real-time in situ methane blending are used to obtain timescales associated with the suppression process. Interesting intermittencies in heat release oscillations driven by acoustic forcing and local hydrodynamics are explored. Heat release oscillations, which drive combustion instability, are substantially reduced when gaseous methane is blended with gaseous hydrogen while holding ignition characteristics relatively constant. The reduction appears to result from a lowering of the density difference between the propellant streams (the fuel-oxidizer density ratio) upon methane dilution. The approach could potentially be used in shear-coaxial combustors, where instability from similar flame-acoustic interactions is common. Effective fault detection and identification methods are crucial in gas turbine maintenance. To characterize gas turbine performance in fault symptom states precisely and to reduce the individual differences between gas turbines, a novel performance deviation model based on real-life operation data of gas turbines is proposed in this paper. 
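The local Rayleigh index used in the combustion-stability study above is, in its simplest discrete form, the time average of the product of pressure and heat-release fluctuations: positive where the flame drives the acoustic field, negative where it damps it. The signals below are synthetic sinusoids for illustration, not measured data.

```python
import math

def rayleigh_index(p_fluct, q_fluct, dt):
    """Time-averaged product of pressure and heat-release fluctuations:
    positive values indicate driving of the instability, negative damping."""
    duration = len(p_fluct) * dt
    return sum(p * q for p, q in zip(p_fluct, q_fluct)) * dt / duration

# Synthetic 500 Hz oscillations sampled over five full periods.
n, dt = 1000, 1e-5
ts = [i * dt for i in range(n)]
p = [math.sin(2 * math.pi * 500.0 * t) for t in ts]
q_inphase = [0.5 * math.sin(2 * math.pi * 500.0 * t) for t in ts]     # drives
q_antiphase = [-0.5 * math.sin(2 * math.pi * 500.0 * t) for t in ts]  # damps
ri_drive = rayleigh_index(p, q_inphase, dt)
ri_damp = rayleigh_index(p, q_antiphase, dt)
```

Evaluating this quantity pointwise over simultaneous pressure and chemiluminescence fields yields the spatial maps of driving and damping regions referred to above.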
A backpropagation neural network is adopted to establish the performance deviation model. Performance deviation values calculated by the model are regarded as fault signatures of the gas turbines. To enhance the accuracy of the fault diagnosis, a multikernel support vector machine is employed in the fault classification experiment. A comparison experiment demonstrated the accuracy of the fault diagnosis method based on the performance deviation model and the multikernel support vector machine. This paper quantifies the changes in turbine performance due to manufacturing tolerances and profile degradation of the blade-tip region during engine operation. An extensive numerical study was conducted on a modern high-pressure turbine rotor, operating at high-Reynolds-number and high-subsonic outlet conditions. The stochastic collocation method was used to investigate the effects of the geometrical variability of a single squealer tip design. The variation of three geometrical characteristics was investigated: tip clearance size, squealer depth, and rim corner radius. Three-dimensional Reynolds-averaged Navier-Stokes simulations were performed to predict the aerodynamic and heat transfer characteristics. The results highlight the influence of the combined geometrical variability on the overtip flow structures, heat transfer signatures, loss development, and downstream flow characteristics. This study measures the robustness of squealer tip designs to changes in their geometry and demonstrates the necessity of anticipating the consequences of these unavoidable tip geometry variabilities at an early stage of turbine design optimization. Dilution jets in a gas turbine combustor are used to oxidize remaining fuel from the main flame zone in the combustor and to homogenize the temperature field upstream of the turbine section through highly turbulent mixing. The high-momentum injection generates high levels of turbulence and very effective turbulent mixing. 
However, mean flow distortions and large-scale turbulence can persist into the turbine section. In this study, a dilution hole configuration was scaled from a rich-burn-quench-lean-burn combustor and used in conjunction with a linear vane cascade in a large-scale, low-speed wind tunnel. Mean and turbulent flowfield data were obtained at the vane leading edge with the use of high-speed particle image velocimetry to help quantify the effect of the dilution jets in the turbine section. The dilution hole pattern was shifted (clocked) for two positions such that a large dilution jet was located directly upstream of a vane or in between vanes. Time-averaged results show that the large dilution jets have a significant impact on the magnitude and orientation of the flow entering the turbine. Turbulence levels of 40% or greater were observed approaching the vane leading edge, with integral length scales of approximately 40% of the dilution jet diameter. Incidence angle and turbulence level were dependent on the position of the dilution jets relative to the vane. High-amplitude combustion instabilities are a destructive and pervasive problem in gas-turbine combustors. Although much research has focused on measuring the characteristics of these instabilities, there are still many remaining questions about the fluid-mechanic mechanisms that drive the flame oscillations. In particular, a variety of complex disturbance mechanisms arise during velocity-coupled instabilities excited by transverse acoustic modes. The resulting disturbance field has two components: the acoustic-velocity fluctuation from both the incident transverse acoustic field and the excited longitudinal field near the nozzle, and the vortical-velocity fluctuations arising from acoustic excitation of hydrodynamic instabilities in the flow. 
In this research, the relative contribution of these two components was explored using proper orthogonal decomposition as a methodology for decomposing the velocity-disturbance field. Although proper orthogonal decomposition is successful at decomposing these two components at certain nonreacting conditions, it fails at reacting conditions. These results show the significant interaction of velocity-disturbance modes under reacting conditions and the limitations of the proper-orthogonal-decomposition technique for extracting velocity-disturbance information. The performance of different constraints for the rate-controlled constrained-equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of hypervelocity ground testing facilities and reentry conditions. Although the RCCE approach has been used widely in the past, its application to non-combustion-based reacting flows is rare; the flows being investigated in this work do not contain species that can easily be classified as reactants and/or products. The effectiveness of different constraints is investigated before running a full computational simulation, and new constraints not reported in the existing literature are introduced. A constraint based on the enthalpy of formation is shown to work well for the two gas models used for flows that involve both shocks and steady expansions. Stereoselective high-performance liquid chromatographic and subcritical fluid chromatographic separations of 19 N-Fmoc proteinogenic amino acid enantiomers were carried out by using quinidine-based zwitterionic and anion-exchanger-type chiral stationary phases Chiralpak ZWIX(-) and QD-AX. For optimization of retention and enantioselectivity, the ratio of bulk solvent components (MeOH/MeCN, H2O/MeOH, or CO2/MeOH) and the nature and concentration of the acid and base additives (counter- and co-ions) were systematically varied. 
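Proper orthogonal decomposition of the kind applied to the velocity-disturbance fields above is commonly computed from the singular value decomposition of a mean-subtracted snapshot matrix. The sketch below uses a synthetic two-mode field, not the study's PIV data, and assumes numpy is available.

```python
import numpy as np

def pod(snapshots):
    """POD of an (n_points, n_snapshots) data matrix via the SVD of the
    mean-subtracted snapshots. Returns spatial modes, relative modal
    energies, and temporal coefficients."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    modes, sing_vals, vt = np.linalg.svd(fluct, full_matrices=False)
    energies = sing_vals**2 / np.sum(sing_vals**2)
    coeffs = np.diag(sing_vals) @ vt  # one row of time coefficients per mode
    return modes, energies, coeffs

# Synthetic field: one energetic coherent mode, a weak one, and noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 60)
field = (3.0 * np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * t))
         + 0.3 * np.outer(np.sin(4 * np.pi * x), np.sin(6 * np.pi * t))
         + 0.01 * rng.standard_normal((200, 60)))
modes, energies, coeffs = pod(field)  # energies[0] dominates
```

On well-separated synthetic modes the decomposition is clean; the study's point is that under reacting conditions the acoustic and vortical contributions mix and no longer separate into distinct POD modes this way.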
The effect of column temperature on the enantioseparation was investigated, and thermodynamic parameters were calculated from the van't Hoff plots (ln α vs. 1/T). The thermodynamic parameters revealed that the enantioseparations were enthalpy-driven. The elution sequence was determined in all cases and, with the exception of Fmoc-Cys(Trt)-OH, it was identical on both chiral stationary phases, whereby the L-enantiomers eluted before the D-enantiomers. The enantioselective potential of two polysaccharide-based chiral stationary phases for analysis of chiral, structurally diverse, biologically active compounds was evaluated in supercritical fluid chromatography using a set of 52 analytes. The chiral selectors immobilized on 2.5 μm silica particles were tris-(3,5-dimethylphenylcarbamate) derivatives of cellulose or amylose. The influence of the polysaccharide backbone, different organic modifiers, and different mobile phase additives on retention and enantioseparation was monitored. Conditions for fast baseline enantioseparation were found for the majority of the compounds. The success rates of baseline and partial enantioseparation with the cellulose-based chiral stationary phase were 51.9% and 15.4%, respectively. Using the amylose-based chiral stationary phase, we obtained 76.9% baseline enantioseparations and 9.6% partial enantioseparations of the tested compounds. The best results on the cellulose-based chiral stationary phase were achieved particularly with propan-2-ol and a mixture of isopropylamine and trifluoroacetic acid as organic modifier and additive to CO2, respectively. Methanol and the basic additive isopropylamine were preferred on the amylose-based chiral stationary phase. The complementary enantioselectivity of the cellulose- and amylose-based chiral stationary phases allows separation of the majority of the tested structurally different compounds. The separation systems were found to be directly applicable for analyses of biologically active compounds of interest. 
The enantioresolution and determination of the enantiomeric purity of 32 new xanthone derivatives, synthesized in enantiomerically pure form, were investigated on a (S,S)-Whelk-O1 chiral stationary phase (CSP). Enantioselectivity and resolution (α and RS) with values ranging from 1.41 to 6.25 and from 1.29 to 17.20, respectively, were achieved. Elution was in polar organic mode with acetonitrile/methanol (50:50 v/v) as mobile phase and, generally, the (R)-enantiomer was the first to elute. The enantiomeric excess (ee) for all synthesized xanthone derivatives was higher than 99%. All the enantiomeric pairs were enantioseparated, even those without an aromatic moiety linked to the stereogenic center. Computational studies for molecular docking were carried out to perform a qualitative analysis of the enantioresolution and to explore the chiral recognition mechanisms. The in silico results were consistent with the chromatographic parameters and elution orders. The interactions between the CSP and the xanthone derivatives involved in the chromatographic enantioseparation were elucidated. A few new L-threitol-based lariat ethers incorporating a monoaza-15-crown-5 unit were synthesized starting from diethyl L-tartrate. These macrocycles were used as phase transfer catalysts in asymmetric Michael addition reactions under mild conditions to afford the adducts, in a few cases in good to excellent enantioselectivities. The addition of 2-nitropropane to trans-chalcone and the reaction of diethyl acetamidomalonate with β-nitrostyrene resulted in the chiral Michael adducts in good enantioselectivities (90% and 95%, respectively). The substituents of chalcone had a significant impact on the yield and enantioselectivity in the reaction of diethyl acetoxymalonate. The highest enantiomeric excess (ee) values (99% ee) were measured in the case of 4-chloro- and 4-methoxychalcone. 
The phase transfer catalyzed cyclopropanation reaction of chalcone and benzylidene-malononitriles using diethyl bromomalonate as the nucleophile (MIRC reaction) was also developed. The corresponding chiral cyclopropane diesters were obtained in moderate to good (up to 99%) enantioselectivities in the presence of the threitol-based crown ethers. Enantiomeric H-1 and C-13 NMR signal separation behaviors of various α-amino acids and DL-tartrate were investigated by using the samarium(III) and neodymium(III) complexes with (S,S)-ethylenediamine-N,N'-disuccinate as chiral shift reagents. A relatively smaller concentration ratio of the lanthanide(III) complex to substrates was suitable for the neodymium(III) complex compared with the samarium(III) one, striking a balance between relatively greater signal separation and broadening. To clarify the difference in the signal separation behavior, the chemical shifts of α-protons for fully bound D- and L-alanine (δb(D) and δb(L)) and their adduct formation constants (Ks) were obtained for both metal complexes. Preference for D-alanine was similarly observed for both complexes, while it was revealed that the difference between the δb(D) and δb(L) values is the significant factor determining the enantiomeric signal separation. The neodymium(III) and samarium(III) complexes can be used complementarily for higher and lower concentration ranges of substrates, respectively, because the neodymium(III) complex gives the larger difference between the δb(D) and δb(L) values with greater signal broadening compared to the samarium(III) complex. Enantiomeric thalidomide undergoes various kinds of biotransformations, including chiral inversion, hydrolysis, and enzymatic oxidation, which result in several metabolites, thereby adding to the complexity in the understanding of the nature of thalidomide. To decipher this complexity, we analyzed the multidimensional metabolic reaction networks of thalidomide and related molecules in vitro. 
Characteristic patterns in the amounts of various metabolites of thalidomide and related molecules generated during a combination of chiral inversion, hydrolysis, and hydroxylation were observed using liquid chromatography-tandem mass spectrometry and chiroptical spectroscopy. We found that monosubstituted thalidomide derivatives exhibited different time-dependent metabolic patterns compared with thalidomide. We also revealed that monohydrolyzed and monohydroxylated metabolites of thalidomide were likely generated mainly by C-5 oxidation of thalidomide and subsequent ring opening of the hydroxylated metabolite. Since chirality was conserved in most of these metabolites during metabolism, they had the same chirality as that of nonmetabolized thalidomide. Our findings will contribute toward understanding the significant pharmacological effects of the multiple metabolites of thalidomide and its derivatives. (+)-R,R-D-84 ((+)-R,R-4-(2-benzhydryloxyethyl)-1-(4-fluorobenzyl)piperidin-3-ol) is a promising pharmacological tool for the dopamine transporter (DAT), due to its high affinity and selectivity for this target. In this study, an analytical method to ascertain the enantiomeric purity of this compound was established. For this purpose, a high-performance liquid chromatographic (HPLC) method based on a cellulose-derived chiral stationary phase (CSP) was developed. The method was characterized concerning its specificity, linearity, and range. It was shown that the method is suitable to determine an enantiomeric excess of up to 99.8%. With only a few adjustments, this analytical CSP-HPLC method is also well suited to separate (+)-R,R-D-84 from its enantiomer on a semipreparative scale. S-naproxen was produced by enantioselective hydrolysis of racemic naproxen methyl ester using immobilized lipase. The lipase was immobilized on chitosan beads, on chitosan beads activated by glutaraldehyde, and on Amberlite XAD7. 
In order to find an appropriate support for the hydrolysis reaction of racemic naproxen methyl ester, the conversion and enantioselectivity for all carriers were compared. In addition, the effects of the volumetric ratio of the two phases in different organic solvents, addition of cosolvent and surfactant, optimum pH and temperature, reusability, and the inhibitory effect of methanol were investigated. The optimum volumetric ratio of the two phases was defined as 3:2 of aqueous phase to organic phase. Various water miscible and water immiscible solvents were examined. Finally, isooctane was chosen as the organic solvent, while 2-ethoxyethanol was added as a cosolvent in the organic phase of the reaction mixture. The optimum reaction conditions were determined to be 35 degrees C, pH 7, and 24 h. Addition of Tween-80 to the organic phase increased the accessibility of the immobilized enzyme to the reactant. The optimum organic phase composition was a 3:7 volumetric ratio of 2-ethoxyethanol to isooctane with 0.1% (v/v) Tween-80. The best conversion and enantioselectivity of the enzyme immobilized on chitosan beads activated by glutaraldehyde were 0.45 and 185, respectively. Chiral solid membranes of cellulose, sodium alginate, and hydroxypropyl-β-cyclodextrin were prepared for chiral dialysis separations. After optimizing the membrane material concentrations, the membrane preparation conditions, and the feed concentrations, enantiomeric excesses of 89.1%, 42.6%, and 59.1% were obtained for mandelic acid on the cellulose membrane, p-hydroxy phenylglycine on the sodium alginate membrane, and p-hydroxy phenylglycine on the hydroxypropyl-β-cyclodextrin membrane, respectively. To study the optical resolution mechanism, chiral discrimination by membrane adsorption, solid phase extraction, membrane chromatography, high-pressure liquid chromatography, and ultrafiltration were performed. 
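The conversion (0.45) and enantioselectivity (E = 185) figures reported for the immobilized-lipase resolution above are related to measurable enantiomeric excesses through the standard kinetic-resolution expressions of Chen et al.; a small sketch with purely illustrative numbers follows.

```python
import math

def e_value_from_substrate(c, ee_s):
    """Enantioselectivity E from conversion c and substrate ee (Chen et al.)."""
    return math.log((1 - c) * (1 - ee_s)) / math.log((1 - c) * (1 + ee_s))

def e_value_from_product(c, ee_p):
    """Enantioselectivity E from conversion c and product ee (Chen et al.)."""
    return math.log(1 - c * (1 + ee_p)) / math.log(1 - c * (1 - ee_p))

# Illustrative values (not the paper's measurements): a resolution stopped
# at 40% conversion with 99% product ee is highly selective.
E = e_value_from_product(0.40, 0.99)
```

Because E couples conversion and ee nonlinearly, a high E permits high product ee only while conversion stays below roughly 50%, which is why kinetic resolutions are typically stopped near that point.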
All of the experimental results showed that the first adsorbed enantiomer was not the enantiomer that first permeated the membrane. The crystal structures of mandelic acid and p-hydroxy phenylglycine are racemic compounds. We suggest that the chiral separation mechanism of the solid membrane is adsorption-association-diffusion, which is able to explain the optical resolution of the enantioselective membrane. This is also the first report in which solid membranes of sodium alginate and hydroxypropyl-β-cyclodextrin were used in the chiral separation of p-hydroxy phenylglycine. This classroom-based study explored links among second language (L2) learners' interaction mindsets, interactional behaviors, and L2 development in the context of peer interaction. While peer interaction research has revealed that certain interactional behaviors (e.g., receiving corrective feedback and engaging in collaborative interaction) assist L2 learning, it is as yet unknown why some learners exhibit these interactional behaviors or how learners' affective states impact their L2 development. The participants were two Grade 10 English as a foreign language classes in Chile (N = 53). Three data sets were collected: (a) interaction mindset data based on pretask interviews with focus groups from each class (n = 10); (b) interaction data pertaining to the communicative tasks of the focus groups; and (c) L2 development data from both classes consisting of oral and written production tests focusing on grammar and lexis. Results indicated that L2 development was mediated by learners' interaction mindsets, which in turn affected their interactional behaviors. This study used a dynamic approach to explore bidirectional sequential relations between the real-time language use of teachers and students in naturalistic early elementary science lessons. It also compared experienced teachers (n = 22) with novice teachers (n = 8) with respect to such relations. 
Verbal interactions were transcribed and coded at the utterance level for syntactic complexity, lexical density of content word use, and open-ended teacher questions. Sequential analyses provided evidence for the existence of a bidirectional relationship, meaning that both teachers and students were sensitive to each other's use of complex and dense language. In addition, the use of open-ended teacher questions was related to complex and dense student utterances. Comparisons between experienced teachers and novice teachers revealed that sequential patterns were stronger in the case of experienced teachers, suggesting that there were more flexible adaptation processes in this group. We investigated the influence of nonparental caregivers, such as foreign domestic helpers (FDH), on the home language spoken to the child and its implications for vocabulary and word reading development in Cantonese- and English-speaking bilingual children. Using data collected from ages 5 to 9, we analyzed Chinese vocabulary, Chinese character recognition, English vocabulary, and English word reading among 194 native Cantonese-speaking children in Hong Kong with English-speaking FDHs (n = 46), children with Cantonese-speaking FDHs (n = 32), and children with no FDHs who were spoken to in Cantonese (n = 116). Multilevel modeling results showed potential advantages in initial English vocabulary and disadvantages in initial Chinese character recognition among children in the English-speaking FDH group, with no evidence for compounding or diminished costs or benefits over time. Results are discussed in relation to both theoretical and practical aspects of home language and literacy development. This study applied systematic meta-analytic procedures to summarize findings from experimental and quasi-experimental investigations into the effectiveness of using the tools and techniques of corpus linguistics for second language learning or use, here referred to as data-driven learning (DDL). 
Analysis of 64 separate studies representing 88 unique samples reporting sufficient data indicated that DDL approaches result in large overall effects for both control/experimental group comparisons (d = 0.95) and for pre/posttest designs (d = 1.50). Further investigation of moderator variables revealed that small effect sizes were generally tied to small sample sizes. Research has barely begun in some key areas, and durability/transfer of learning through delayed posttesting remains an area in need of further investigation. Although DDL research demonstrably improved over the period investigated, further changes in practice and reporting are recommended. Taking a psycholinguistic orientation within task-based language teaching scholarship, this study investigated the effects of mode (oral vs. written) and task complexity on second language (L2) performance. The participants were 78 Catalan/Spanish learners of English as a foreign language. Half of the participants performed the simple and complex versions of an argumentative, instruction-giving task orally, the other half did it in writing. The comparison of the participants' oral and written performance revealed that speakers produced more idea units but that writers achieved higher scores for subordination, mean length of analysis-of-speech units, lexical diversity, extended idea units, and time on task. As for the effects of task complexity, the participants' written production showed more variation between the complex and the simple versions of the task. These findings are interpreted in light of task modality effects in L2 learning and discussed in relation to task complexity theory and research. This study investigated how standard and substandard varieties of first language (L1) Dutch affect grammatical gender assignments to nouns in second language (L2) German. 
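The pooled effect sizes (d) reported in the DDL meta-analysis above follow the standard Cohen's d construction for control/experimental comparisons: the difference in group means divided by the pooled standard deviation. A stdlib sketch with made-up posttest scores follows.

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: mean difference over the pooled (sample) standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled

# Hypothetical posttest scores for experimental vs. control learners:
experimental = [78, 85, 90, 82, 88, 84]
control = [70, 75, 80, 72, 78, 73]
d = cohens_d(experimental, control)
```

By the usual benchmarks (0.2 small, 0.5 medium, 0.8 large), the reported d = 0.95 and d = 1.50 both count as large effects; the moderator finding above is the familiar caution that small samples tend to produce unstable effect-size estimates.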
While German distinguishes between masculine, feminine, and neuter gender, the masculine-feminine distinction has nearly disappeared in Standard Dutch. Many substandard Belgian Dutch varieties, however, still mark this distinction, making them more akin to German than Standard Dutch in this respect. Seventy-one Belgian and 104 Netherlandic speakers of Dutch with varying levels of German proficiency assigned gender-marked German articles to German nouns with Dutch cognates; these gender assignments were then compared to the cognates' gender in the standard and substandard L1 varieties. While the gender assignments of both Belgian and Dutch participants were strongly influenced by the cognates' Standard Dutch gender, the Belgians' responses showed, at best, weak traces of the masculine-feminine distinction in substandard Belgian Dutch. Possible reasons for this weak substandard variety influence are discussed. This study aimed to advance research on first and second language future-time expression in Spanish and to demonstrate the strengths of combining functionalist, concept-oriented approaches (e.g., Andersen, 1984; Bardovi-Harlig, 2000; Shirai, 1995; von Stutterheim & Klein, 1987) with variationist approaches. The study targeted 140 participants (120 English-speaking learners of Spanish of varying proficiency and 20 native speakers of Spanish) who completed an oral task responding to eight prompts (e.g., describe tus planes para este fin de semana 'describe your plans for this weekend'). Results from cross-tabulations and multinomial regressions indicated gradual inclusion of new linguistic and social variables as learner proficiency increased and demonstrated the value of considering both group and individual behavior. Findings were discussed in relation to stages of acquisition of future-time expression. 
Several previous studies have explored nursing students' perceptions of clinical learning at hospitals and in other health care facilities, but there are few studies exploring nursing students' perceptions of clinical learning in the ambulance service. Therefore, the aim of this study was to explore nursing students' perceptions of learning nursing skills in the ambulance service. An inductive qualitative study design with two focus group interviews and content analysis was used. Two themes were identified. The first theme, professional skills, included: Assessment, Prioritizing and initiating care, and Medical treatment and evaluation of interventions. The second theme, a holistic approach to care, included: Cultural, social, and ethical aspects of caring, Decision-making in collaboration with patients, and Care provided in the patients' home. Conclusion: The ambulance service provides a learning environment where the students face a multifaceted picture of health and illness. This learning environment helps nursing students to learn independently how to use professional nursing skills and how to care by employing a holistic approach. However, further research is needed to explore if and how this knowledge about nursing and caring in the ambulance service is useful when working as a Registered Nurse in other health care settings. (C) 2017 Elsevier Ltd. All rights reserved. Introduction: Simulation creates the possibility to experience acute situations during nursing education which cannot easily be achieved in clinical settings. Aim: To describe how nursing students learn acute care of patients through simulation exercises, based on observation and debriefing. Design: The study was designed as an observational study inspired by an ethnographic approach. Method: Data were collected through observations and interviews. Data were analyzed using an interpretive qualitative content analysis. Results: Nursing students created space for reflection when needed. 
There was a positive learning situation when suitable patient scenarios were presented. Observations and discussions with peers gave the students opportunities to identify their own need for knowledge, while also identifying existing knowledge. Reflections could confirm or reject their preparedness for clinical practice. The importance of working in a structured manner in acute care situations became apparent. However, negative feedback to peers was avoided, which led to a loss of learning opportunity. Conclusion: High fidelity simulation training as a method plays an important part in the nursing students' learning. The teacher also plays a key role by asking difficult questions and guiding students towards accurate knowledge. This makes it possible for the students to close knowledge gaps, leading to improved patient safety. (C) 2017 Elsevier Ltd. All rights reserved. Students who leave pre-registration nurse education having failed to complete remain a concern for higher education institutions. This study identified factors influencing completion using a retrospective cohort analysis to map student characteristics at entry against Year 3 completion data. The study was set in a nursing faculty in a higher education institution in northern England. Data were collected between 2009 and 2014 with five cohorts of students participating (n = 807). Multinomial logistic regression was used to model the dependent variable Progression Outcome, with categories of completion and non-completion (for academic and non-academic reasons). Predictors included cohort, programme, branch, gender, age on entry, ethnic group, disability status, domicile, change of home postcode, change of term-time postcode, entry qualifications, previous experience of caring, and dependents. Age on Entry and Domicile, or alternatively Dependents and Domicile, emerged as statistically significant (p < 0.05) in the multivariable analysis. 
Older students were less likely to be lost from the programme, as were students who lived locally at all times and those with dependents. There is currently little reliable, consistent information on nursing student attrition, progression, and completion. This study contributes to the evidence base by identifying some of the factors that may contribute to successful programme completion. (C) 2017 Published by Elsevier Ltd. This randomized controlled trial, conducted in a UK university nursing department, compared student nurses' performance during a simulated cardiac arrest. Eighteen teams of four students were randomly assigned to one of three scenarios: 1) no family witness; 2) a "quiet" family witness; and 3) a family witness displaying overt anxiety and distress. Each group was assessed by observers for a range of performance outcomes (e.g. calling for help, time to starting cardiopulmonary resuscitation), and simulation manikin data on the depth and timing of three cycles of compressions. Groups without a distressed family member present performed better in the early part of the basic life support algorithm. Approximately a third of compressions assessed were of appropriate pressure. Groups with a distressed family member present were more likely to perform compressions with low pressure. Groups with no family member present were more likely to perform compressions with too much pressure. Timing of compressions was better when there was no family member present. Family presence appears to have an effect on subjectively and objectively measured performance. Further study is required to see how these findings translate into the registered nurse population, and how experience and education modify the impact of family member presence. (C) 2017 Elsevier Ltd. All rights reserved. 
Accelerated nursing programs are gaining momentum as a means of career transition into the nursing profession for mature age learners in an attempt to meet future healthcare workforce demands in Australia. With a gap in the literature on the readiness for practice of graduates from accelerated nursing programs at the Masters level, the purpose of this study was to evaluate the effectiveness of the program based on graduates' preparedness for practice and graduate outcomes. Using a descriptive, exploratory design, an online survey was used to explore graduate nurses' perceptions of their readiness for clinical practice. Forty-nine graduates from a nursing Masters program at an Australian university completed the survey, defining readiness for practice as knowledge of self-limitations and seeking help, autonomy in basic clinical procedures, exhibiting confidence, possessing theoretical knowledge and practicing safe care. Graduates perceived themselves as adequately prepared to work as beginner practitioners, and their perception of readiness for clinical practice was largely positive. The majority of participants agreed that the program had prepared them for work as a beginner practitioner, with respondents stating that they felt adequately prepared in most areas relating to clinical practice. This suggests that the educational preparation was adequate and effective in achieving program objectives. (C) 2017 Elsevier Ltd. All rights reserved. It has been acknowledged that the traditional lecture format is a familiar teaching methodology and that there is still much to be learnt from using it in classroom-based lectures. Whilst the first author was a postgraduate student undertaking a programme in Nurse Education at a university in the Republic of Ireland, poetry was used to challenge undergraduate nursing students' attitudes towards older persons in a large group format. The students were in Year 3 of the Bachelor of Nursing Science General and Intellectual Disability Programmes.
Feedback obtained from the students comprised three main themes: Aids Recall of Information; Enriched Learning Experiences; and Challenges Attitudes to Person Centred Care. Thus, the paper aims to evaluate the use of poetry, for the first time as a novice teacher, as an engaging teaching strategy within a lecture format for drawing out nursing students' attitudes towards older persons, with a focus on supporting them in embracing key care skills in the clinical setting. This paper should offer other student educationalists the opportunity to see the value of poetry as a teaching strategy and provide practical tips on its use within the classroom. (C) 2017 Elsevier Ltd. All rights reserved. Increasing demands for clinical placements have forced tertiary institutions to look for alternative placements for third year nursing students. While Prison Health Services provide an opportunity for nursing students to engage in the care of offender populations with significant chronic illnesses, there has been little evaluation of such placements. Third year undergraduate nurses (18/46) participated in a mixed methods study to provide evidence-based research on students' perceptions of clinical placements in Prison Health Services. Quantitative and qualitative data were collected via an anonymous survey and individual interviews. Whilst the majority of students valued the opportunity to increase their knowledge and clinical skills and felt supported by preceptors, challenges included being psychologically ill-prepared for the physical and emotional aspects of placement, and witnessing poor attitudes and behaviours of staff, which impacted on the quality of their experience. Recommendations include changes to orientation programs and the introduction of simulation to help students feel better prepared and supported during placements in prison settings.
Refining the selection process for placements in this setting will also help to ensure student suitability for clinical placement in Prison Health Services. (C) 2017 Elsevier Ltd. All rights reserved. Team-Based Learning (TBL) is a teaching strategy designed to promote problem solving, critical thinking and effective teamwork and communication skills; attributes essential for safe healthcare. The aim was to explore postgraduate student perceptions of the role of TBL in shaping learning style, team skills, and professional and clinical behaviours. An exploratory descriptive approach was selected. Critical care students were invited to provide consent for the use for research purposes of written reflections submitted for course work requirements. Reflections on whether and how TBL influenced their learning style, teamwork skills and professional behaviours during classroom learning and clinical practice were analysed for content and themes. Of 174 students, 159 participated. Analysis revealed three themes: Deep Learning, the adaptations students made to their learning that resulted in mastery of specialist knowledge; Confidence, in knowledge, problem solving and rationales for practice decisions; and Professional and Clinical Behaviours, including positive changes in their interactions with colleagues and patients described as patient advocacy, multidisciplinary communication skills and peer mentorship. TBL facilitated a virtuous cycle of feedback encouraging deep learning that increased confidence. Increased confidence improved deep learning that, in turn, led to the development of professional and clinical behaviours characteristic of high quality practice. (C) 2017 Elsevier Ltd. All rights reserved. Background: The visual arts, including concept maps, have been shown to be effective tools for facilitating student learning. However, the use of concept maps in nursing education has been under-explored.
Objectives: The aim of this study was to explore how students develop concept maps and what these concept maps consist of, and their views on the use of concept maps as a learning activity in a PBL class. Design: A qualitative approach consisting of an analysis of the contents of the concept maps and interviews with students. Settings: The study was conducted in a school of nursing in a university in Hong Kong. Participants: A total of 38 students who attended the morning session (20 students) and afternoon session (18 students) of a nursing problem-based learning class. Methods: The students in both the morning and afternoon classes were allocated into four groups (4-5 students per group). Each group was asked to draw two concept maps based on a given scenario, and then to participate in a follow-up interview. Two raters individually assessed the concept maps, and then discussed their views with each other. Results: Among the concept maps that were drawn, four were selected. The four core features of those maps were: a) the integration of informative and artistic elements; b) the delivery of sensational messages; c) the use of images rather than words; and d) a three-dimensional and movable design. Both raters were concerned about how informative the presentation was, the composition of the elements, and the ease of comprehension, and appreciated the three-dimensional presentation and effective use of images. From the results of the interviews, the pros and cons of using concept maps were discerned. Conclusions: This study demonstrated how concept maps could be implemented in a PBL class to boost the students' creativity and to motivate them to learn. It suggests the use of concept maps as an initiative to motivate students to learn, participate actively, and nurture their creativity. To conclude, this study explored an alternative way for students to make presentations and pioneered the use of art-based concept maps to facilitate student learning.
(C) 2017 Elsevier Ltd. All rights reserved. Home-based cardiac rehabilitation (CR) programs improve health outcomes for people diagnosed with heart disease. Mentoring of patients by nurses trained in CR has been proposed as an innovative model of cardiac care. Little is known, however, about the experience of mentors facilitating such programs and adapting to this new role. The aim of this qualitative study was to explore nurse mentors' perceptions of their role in the delivery of a home-based CR program for rural patients unable to attend a hospital or outpatient CR program. Seven nurses mentored patients by telephone, providing patients with education, psychosocial support and lifestyle advice during their recovery. An open-ended survey was administered to mentors by email, and findings revealed that mentors perceived their role to be integral to the success of the program. Nurses were satisfied with the development of their new role as patient mentors. They believed their collaborative skills, knowledge and experience in coronary care, timely support and guidance of patients during their recovery, and use of innovative audiovisual resources improved the health outcomes of patients not able to attend traditional programs. Cardiac nurses in this study perceived that they were able to successfully transition from their normal work practices in hospital to mentoring patients in their homes. Crown Copyright (C) 2017 Published by Elsevier Ltd. All rights reserved. Novels are one humanities resource available to educators in health disciplines to support student reflection on their own professional practice and therapeutic relationships with patients. An interdisciplinary team, including nurses, a physician, and an English instructor, carried out an interpretive study of the use of a novel by clinical nursing instructors in an undergraduate practicum course.
Students placed in assisted living or long term care facilities for the elderly were expected to read a contemporary work, Exit Lines, by Joan Barfoot, which is set in a comparable facility. The objective was to increase understanding of the meanings that participants ascribed to the novel reading exercise in relation to their development as student nurses. Adopting a hermeneutic approach, we used dialogue throughout the study to elicit perspectives among participants and the interdisciplinary research team. Major themes that emerged included the students' tacit awareness of epistemological plurality in nursing, and the consequent importance of cultivating a capacity to move thoughtfully between different points of view and ways of knowing. (C) 2017 Elsevier Ltd. All rights reserved. Grading of practice is a mandatory element of programmes leading to registration as a midwife in the United Kingdom, required by the Nursing and Midwifery Council. This validates the importance of practice by placing it on an equal level with academic work, contributing to degree classification. This paper discusses a scoping project undertaken by the Lead Midwives for Education group across the 55 Higher Education Institutions in the United Kingdom which deliver pre-registration midwifery programmes. A questionnaire was circulated and practice tools shared, enabling exploration of the application of the standards and collation of the views of the Lead Midwives. Timing and the individuals involved in practice assessment varied, as did the components and the credit weighting applied to practice modules. Sign-off mentor confidence in awarding a range of grades had increased over time, and mentors seemed positive about the value given to practice and their role as professional gatekeepers. Grading was generally felt to be more robust and meaningful than pass/refer. It also appeared that practice grading may contribute to an enhanced student academic profile.
A set of guiding principles is being developed with the purpose of enhancing consistency in the application of the professional standards across the United Kingdom. (C) 2016 Elsevier Ltd. All rights reserved. The aim of this paper was to explore the mentoring experiences of new graduate midwives working in midwifery continuity of care models in Australia. Most new graduates find employment in hospitals and undertake a new graduate program rotating through different wards. A limited number of new graduate midwives were found to be working in midwifery continuity of care. The new graduate midwives in this study were mentored by more experienced midwives. Mentoring in midwifery has been described as being concerned with confidence building through a personal relationship. A qualitative descriptive study was undertaken and the data were analysed using continuity of care as a framework. We found that having a mentor was important; knowing the mentor made it easier for the new graduate to call their mentor at any time. The new graduate midwives had respect for their mentors, and the support helped build their confidence in transitioning from student to midwife. With the expansion of midwifery continuity of care models in Australia, mentoring should be provided for transition midwives working in this way. Crown Copyright (C) 2016 Published by Elsevier Ltd. All rights reserved. Many higher education institutions have adopted mentoring programs for students as a means of providing support, improving learning and enhancing the student experience. The aim of this project was to improve midwifery students' experience by offering a peer mentoring program to commencing students to assist with the transition to university life and the rigours of the midwifery program. This paper reports the evaluation of this specific mentoring program and the ongoing development and implementation of a sustainable program within an Australian university.
A survey design was adopted to gather feedback from mentees to evaluate whether the peer mentoring program enhanced the first year midwifery student experience and to ascertain how the program could be further developed. Fifty-five students engaged with the peer mentors and completed the questionnaire regarding the mentoring program. Particularly valuable was the positive impact that mentoring had on midwifery student confidence, managing the demands of the program and staying motivated when the program requirements were challenging. The success of this program rested largely with mentoring students sharing their own experiences and providing reassurance that other students could also succeed. (C) 2015 Elsevier Ltd. All rights reserved. Midwifery education plays an important role in educating graduates about engaging in continuous professional development (CPD), but there is a lack of empirical research analysing student midwives' awareness of CPD beyond graduation. We aimed to explore student midwives' awareness of the need to become lifelong learners and to map their knowledge of CPD activities available after graduation. Therefore, forty-seven reflective documents, written in the last week of the student midwives' training programme, were analysed thematically. Content analysis confirmed student midwives' awareness of the importance of CPD before graduation. They mentioned different reasons for future involvement in CPD and described both formal and informal CPD activities. Respondents were especially aware of the importance of knowledge, to a lesser degree of skills training, and still less of the potential value of the Internet for individual and collective learning. Respondents perceived a need for a mandatory preceptorship. Supporting learning guides were highly valued and the importance of reflection on CPD was well-established. This could have resulted from an integrated reflective learning strategy during education.
Conclusion: Undergraduate midwives are aware of the importance of CPD and the interplay of formal and informal learning activities. Virtual learning requires special attention to overcome CPD challenges. (C) 2015 Elsevier Ltd. All rights reserved. New strategies to potentially improve drug safety and efficacy are emerging from allosteric programs. Biased allosteric modulators can be designed with high subtype selectivity and defined receptor signaling end-points; however, selecting the most meaningful parameters for optimization can be perplexing. Historically, "potency hunting" at the expense of physicochemical and pharmacokinetic optimization has led to numerous tool compounds with excellent pharmacological properties but no path to drug development. Conversely, extensive physicochemical and pharmacokinetic screening with only post hoc bias and allosteric characterization has led to inefficacious compounds or compounds with on-target toxicities. This field is rapidly evolving with new mechanistic understanding, changes in terminology, and novel opportunities. The intent of this digest is to summarize current understanding and debates within the field. We aim to discuss, from a medicinal chemistry perspective, the parameter choices available to drive SAR. (C) 2017 The Author(s). Published by Elsevier Ltd. Several novel oxygenated polyunsaturated lipid mediators biosynthesized from n-3 docosapentaenoic acid were recently isolated from murine inflammatory exudates and human primary cells. These compounds belong to a distinct family of specialized pro-resolving mediators, and display potent in vivo anti-inflammatory and pro-resolution effects. The endogenously formed specialized pro-resolving mediators have attracted great interest as lead compounds in drug discovery programs towards the development of new classes of drugs that dampen inflammation without interfering with the immune response.
Detailed information on the chemical structures, cellular functions and distinct biosynthetic pathways of specialized pro-resolving lipid mediators is central to understanding these biological actions. Herein, the isolation, structural elucidation, biosynthetic pathways, total synthesis and bioactions of the n-3 docosapentaenoic acid derived mediators PD1(n-3 DPA) and MaR1(n-3 DPA) are discussed. In addition, a brief discussion of a novel family of mediators derived from n-3 docosapentaenoic acid, termed 13-series resolvins, is included. (C) 2017 Elsevier Ltd. All rights reserved. Eight new steroidal saponins, trillikamtosides K-R (1-8), along with three known analogues, were isolated from the whole plants of Trillium kamtschaticum. Their structures were unambiguously established by interpretation of spectroscopic data (MS and NMR) and chemical methods. Compound 1 had a rare aglycone featuring a 16-oxaandrost-5-en-3-ol-17-one skeleton, which was reported for the first time. The isolated saponins were tested for cytotoxicity against HCT116 cells, and trillikamtoside R (8) was found to show the most potent cytotoxic effect, with an IC50 value of 4.92 mu M. (C) 2017 Elsevier Ltd. All rights reserved. Naturally occurring flavonoids co-exist as glycoside conjugates, which dominate aglycones in their content. To unveil the structure-activity relationship of a naturally occurring flavonoid, we investigated the effects of the glycosylation of naringenin on the inhibition of enzyme systems related to diabetes (protein tyrosine phosphatase 1B (PTP1B) and alpha-glycosidase) and on glucose uptake in the insulin-resistant state. Among the tested naringenin derivatives, prunin, a single-glucose-containing flavanone glycoside, potently inhibited PTP1B with an IC50 value of 17.5 +/- 2.6 mu M. Naringenin, which lacks a sugar moiety, was the weakest inhibitor compared to the reference compound, ursolic acid (IC50: 5.4 +/- 0.30 mu M).
In addition, prunin significantly enhanced glucose uptake in a dose-dependent manner in insulin-resistant HepG2 cells. Regarding the inhibition of alpha-glucosidase, naringenin exhibited more potent inhibitory activity (IC50: 10.6 +/- 0.49 mu M) than its glycosylated forms and the reference inhibitor, acarbose (IC50: 178.0 +/- 0.27 mu M). Among the glycosides, only prunin (IC50: 106.5 +/- 4.1 mu M) was more potent than the positive control. A molecular docking study revealed that prunin had lower binding energy and higher binding affinity than glycosides with higher numbers of H-bonds, suggesting that prunin is the best fit to the PTP1B active site cavity. Therefore, in addition to the number of H-bonds present, possible factors affecting the protein binding and PTP1B inhibition of flavanones include their fit to the active site, hydrogen-bonding affinity, van der Waals interactions, H-bond distance, and H-bond stability. Furthermore, this study clearly depicted the association of the intensity of bioactivity with the arrangement and character of the sugar moiety on the flavonoid skeleton. (C) 2017 Elsevier Ltd. All rights reserved. The endothelin axis, and in particular the two receptor subtypes, ETA and ETB, are under investigation for the treatment of various diseases such as pulmonary arterial hypertension, fibrosis, renal failure and cancer. Previous work in our lab has shown that 1,3,6-trisubstituted-4-oxo-1,4-dihydroquinoline-2-carboxylic acid derivatives exhibit noteworthy endothelin receptor antagonist activity. A series of analogues with modifications centered around position 6 of the heterocyclic quinolone core and replacement of the aryl carboxylic acid group with an isosteric tetrazole ring was designed and synthesized to further optimize the structure-activity relationship. The endothelin receptor antagonist activity was determined by in vitro Forster resonance energy transfer (FRET) using GeneBLAzer (R) assay technology.
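The IC50 values quoted throughout these abstracts are typically estimated by fitting a four-parameter logistic (Hill) curve to dose-response measurements. A minimal sketch with SciPy, using synthetic data generated around an assumed true IC50 of 17.5 uM (all numbers here are illustrative, not the reported assay data):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, slope):
    # Four-parameter logistic: response falls from `top` to `bottom`
    # as concentration rises past the IC50
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Synthetic dose-response data for a hypothetical inhibitor (true IC50 = 17.5 uM)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300])  # uM
rng = np.random.default_rng(0)
resp = hill(conc, 100, 0, 17.5, 1.0) + rng.normal(0, 2, size=conc.size)

# Fit the curve; p0 gives rough starting guesses for top, bottom, IC50, slope
popt, _ = curve_fit(hill, conc, resp, p0=[100, 0, 10, 1])
top, bottom, ic50, slope = popt
print(f"fitted IC50 = {ic50:.1f} uM")
```

With replicate measurements, the covariance matrix returned by `curve_fit` also yields a standard error on the IC50, which is how "+/-" ranges like 17.5 +/- 2.6 uM are reported.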
The most potent member of this series exhibited ETA receptor antagonist activity in the subnanomolar range, with an IC50 value of 0.8 nM, and was 1000-fold selective for the ETA receptor compared to the ETB receptor. Its activity and selectivity profile resembles that of the most recently approved drug, macitentan. (C) 2017 Elsevier Ltd. All rights reserved. Docetaxel is a commonly used chemotherapeutic drug for patients with late stage prostate cancer. However, serious side effects and drug resistance limit its clinical success. Brefeldin A is a 16-membered macrolide antibiotic from the mangrove-derived fungus Aspergillus sp. (9Hu), which exhibited potent cytotoxicity against human cancer cells. In the present study, we determined the effect of brefeldin A on docetaxel-induced growth inhibition and apoptosis in human prostate cancer PC-3 cells. Brefeldin A in combination with docetaxel inhibited the growth of PC-3 cells in monolayer and in three-dimensional cultures. The combination also potently stimulated apoptosis in PC-3 cells as determined by propidium iodide staining and morphological assessment. Mechanistic studies showed that growth inhibition and apoptosis in PC-3 cells treated with brefeldin A and docetaxel were associated with a decrease in the level of Bcl-2. The present study indicates that combining brefeldin A with docetaxel may represent a novel approach for improving the efficacy of docetaxel, and Bcl-2 may serve as a target for brefeldin A to enhance the effects of docetaxel chemotherapy. (C) 2017 Elsevier Ltd. All rights reserved. Copalic acid, one of the diterpenoid acids in copaiba oil, inhibited the chaperone function of alpha-crystallin and heat shock protein 27 kD (HSP27). It also showed potent activity in decreasing an HSP27 client protein, the androgen receptor (AR), which makes it useful in prostate cancer treatment or prevention.
To develop potent drug candidates that decrease the AR level in prostate cancer cells, additional copalic acid analogs were synthesized. Using the level of AR as the readout, 15 of the copalic acid analogs were screened, and two compounds were much more potent than copalic acid. The compounds also dose-dependently inhibited AR-positive prostate cancer cell growth. Furthermore, they inhibited the chaperone activity of alpha-crystallin as well. Published by Elsevier Ltd. This letter describes the further chemical optimization of the 5-amino-thieno[2,3-c]pyridazine series (VU0467154/VU0467485) of M-4 positive allosteric modulators (PAMs), developed via iterative parallel synthesis, culminating in the discovery of the non-human primate (NHP) in vivo tool compound VU0476406 (8p). VU0476406 is an important in vivo tool compound to enable translation of pharmacodynamics from rodent to NHP, and while data related to a Parkinson's disease model have been reported with 8p, this is the first disclosure of the optimization and discovery of VU0476406, as well as its detailed pharmacology and DMPK properties. (C) 2017 Elsevier Ltd. All rights reserved. 1,3-Dipolar cycloaddition between a chiral nitrone and N-substituted maleimides afforded unprecedented enantiopure spiro-fused heterocycles in good yields with high enantio- and diastereoselectivity. The reaction took place on the less hindered face of the nitrone. The resulting heterocycles were screened for their in vitro antioxidant properties, and the results revealed that potent antioxidant activity was generally recorded for compounds 3g and 3e. The in vitro antibacterial activities of these two compounds were also investigated, and the results demonstrated the strong potential of compound 3g against all the tested bacteria. Molecular properties were analyzed and showed good oral drug candidate-like properties, suggesting that these compounds could be exploited as potential antioxidant and antimicrobial agents.
Finally, the preliminary results obtained from this investigation attempted to clarify whether the structurally different side chains of the active compounds interfere with their biological properties. (C) 2017 Elsevier Ltd. All rights reserved. Piperlongumine (PL) is a natural alkaloid with broad biological activities. Twelve analogues with non-substituted benzyl rings or heterocycles were designed and synthesized in this work. Most of the compounds showed better anticancer activities than the parent PL without apparent toxicity in normal cells. Elevation of cellular ROS levels was one of the main anticancer mechanisms of these compounds. Cell apoptosis and cell cycle arrest for the best compound, ZM90, were evaluated, and a mechanism of action similar to that of PL was demonstrated. The SAR was also characterized, providing useful directions for further optimization of PL compounds. (C) 2017 Elsevier Ltd. All rights reserved. Designing drug candidates exhibiting polypharmacology is one of the strategies adopted by medicinal chemists to address multifactorial diseases. Metabolic disease is one such multifactorial disorder, characterized by hyperglycaemia, hypertension and dyslipidaemia among others. In this paper we report a new class of molecular framework combining the pharmacophoric features of DPP4 inhibitors with those of ACE inhibitors to afford potent dual inhibitors of DPP4 and ACE. (C) 2017 Elsevier Ltd. All rights reserved. In the current study, seven compounds (i.e. 1-7) were found to be novel activators of the N-epsilon-acetyl-lysine deacetylation reaction catalyzed by human histone deacetylase 8 (HDAC8).
When assessed with the commercially available HDAC8 peptide substrate Fluor-de-Lys (R)-HDAC8, which harbors the unnatural 7-amino-4-methylcoumarin (AMC) residue immediately C-terminal to the N-epsilon-acetyl-lysine residue to be deacetylated, our compounds exhibited activation potency comparable to that of TM-2-51, the strongest HDAC8 activator reported in the current literature. However, when assessed with an AMC-less peptide substrate derived from the native HDAC8 non-histone substrate protein Zinc finger protein ZNF318, our compounds were all able to activate the HDAC8 deacetylation reaction, whereas TM-2-51 was not. Our compounds also appeared to be largely selective for HDAC8 over other classical HDACs. Moreover, treatment with the strongest activator among our compounds (i.e. 7) was found to decrease the KM of the above AMC-less HDAC8 substrate, while nearly maintaining the k(cat) of the HDAC8-catalyzed deacetylation of this substrate. (C) 2017 Elsevier Ltd. All rights reserved. Clinical studies have revealed that diabetic retinopathy is a multifactorial disorder. Moreover, studies also suggest that ALR2 and PARP-1 co-occur in retinal cells, making them appropriate targets for the treatment of diabetic retinopathy. To find dual inhibitors of ALR2 and PARP-1, structure-based design was carried out in parallel for both target proteins. A series of novel thiazolidine-2,4-dione (TZD) derivatives was therefore rationally designed and synthesized, and their in vitro inhibitory activities against ALR2 and PARP-1 were evaluated. The experimental results showed that compounds 5b and 5f, with 2-chloro and 4-fluoro substitutions, showed biochemical activities in the micromolar and submicromolar range (IC50 1.34-5.03 mu M) against both targeted enzymes. The structure-activity relationship elucidated for these novel inhibitors against both enzymes provides new insight into the binding mode of the inhibitors to the active sites of the enzymes.
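The kinetic observation in the HDAC8 abstract, a decreased KM with a nearly unchanged k(cat), implies a higher observed rate at sub-saturating substrate, since catalytic efficiency k(cat)/KM rises. A quick Michaelis-Menten calculation illustrates this; all constants below are invented for illustration, not measured values:

```python
def mm_rate(kcat, km, s, e_total=1.0):
    """Michaelis-Menten initial rate: v = kcat * [E] * [S] / (KM + [S])."""
    return kcat * e_total * s / (km + s)

# Hypothetical kinetic constants for an HDAC8-like enzyme
kcat = 5.0          # s^-1 (assumed, unchanged by the activator)
km_basal = 200.0    # uM without activator (assumed)
km_activated = 50.0 # uM with activator: KM decreased, kcat maintained
s = 20.0            # uM substrate, well below both KM values

v_basal = mm_rate(kcat, km_basal, s)
v_activated = mm_rate(kcat, km_activated, s)
# Lowering KM at constant kcat raises catalytic efficiency (kcat/KM),
# and therefore the observed rate at sub-saturating [S]
print(v_basal, v_activated)
```

This is why an activator that improves substrate binding (lower KM) can speed up deacetylation even when the chemical step (k(cat)) is untouched.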
The positive results of the biochemical assay suggest that these compounds may be further optimized and utilized for the treatment of diabetic retinopathy. (C) 2017 Elsevier Ltd. All rights reserved. Oxytocin (OT) is a neuropeptide involved in a wide variety of physiological actions, both peripherally and centrally. Many human studies have revealed the potential of OT to treat autism spectrum disorders and schizophrenia. OT interacts with the OT receptor (OTR) as well as the vasopressin 1a and 1b receptors (V1aR, V1bR) as an agonist, and agonistic activity at V1aR and V1bR may have a negative impact on the therapeutic effects of OTR agonism in the CNS. An OTR-selective agonistic peptide, FE 202767, in which the structural differences from OT are a sulfide bond instead of a disulfide bond and an N-alkylglycine replacement for Pro at position 7, has been reported. However, the effects of amino acid substitutions in OT have not been comprehensively investigated to compare OTR, V1aR, and V1bR activities. This led us to obtain a new OTR-selective analog by comprehensive amino acid substitution of OT and replacement of the disulfide bond. A systematic amino acid scan (Ala, Leu, Phe, Ser, Glu, or Arg) of desamino OT (dOT) at positions 2, 3, 4, 5, 7, and 8 revealed tolerance for substitution at positions 7 and 8. Further detailed study showed that trans-4-hydroxyproline (trans-Hyp) at position 7 and gamma-methylleucine [Leu(Me)] at position 8 were markedly effective for improving receptor selectivity without decreasing potency at the OTR. Subsequently, combining these amino acid substitutions with replacement of the disulfide bond of dOT analogs with a sulfide bond (carba analog) or an amide bond (lactam analog) yielded several promising analogs, including carba-1-[trans-Hyp(7),Leu(Me)(8)]dOT (14), with a higher potency (7.2 pM) at the OTR than that of OT and marked selectivity (>10,000-fold) over V1aR and V1bR.
Hence, we investigated comprehensive modification of OT and obtained new OT analogs that exhibited high potency at the OTR with marked selectivity. These OTR-selective agonists could be useful for investigating OTR-mediated effects on psychiatric disorders. (C) 2017 Elsevier Ltd. All rights reserved. Putative dual action compounds (DACs 3a-d) based on azabicyclo[5.3.0]decane (ABD) Smac mimetic scaffolds linked to Zn2+-chelating 2,2'-dipicolylamine (DPA) through their 4 position are reported and characterized. Their synthesis, their target affinity (cIAP1 BIR3, Zn2+) in cell-free assays, their pro-apoptotic effects, and their cytotoxicity in tumor cells with varying sensitivity to Smac mimetics are described. A limited influence of Zn2+ chelation on the in vitro activity of DPA-substituted DACs 3a-d was sometimes perceptible, but did not lead to strong cellular synergistic effects. In particular, the linker connecting DPA with the ABD scaffold seems to influence cellular Zn2+ chelation, with longer lipophilic linkers/DAC 3c being the optimal choice. (C) 2017 Elsevier Ltd. All rights reserved. Biologically active Knoevenagel condensates (1-14) of the diarylheptanoids 1,7-bis(3-methoxy-4-hydroxyphenyl)hepta-1,7-diene-3,5-dione and 1,7-bis(3-ethoxy-4-hydroxyphenyl)hepta-1,7-diene-3,5-dione were synthesized and structurally characterized. Compounds 1-14 exhibited cytotoxicity against colon carcinoma cells, and their antiproliferative effect was associated with a significant decrease of multidrug resistance proteins. One of the underlying mechanisms of these effects is the reduction of intracellular and extracellular SOD enzymes by compounds 1, 12 and 14, which renders the tumor cells more vulnerable to oxidative stress. (C) 2017 Elsevier Ltd. All rights reserved. Flavonoids, stilbenes, and chalcones are plant secondary metabolites that often possess diverse biological activities, including anti-inflammatory, anti-cancer, and anti-viral activities.
The wide range of bioactivities poses a challenge in identifying their targets. Here, we studied a set of synthetically generated flavonoids and chalcones to evaluate their biological activity, and compared similarly substituted flavonoids and chalcones. Substituted chalcones, but not flavonoids, showed inhibition of viral translation without significantly affecting viral replication in cells infected with hepatitis C virus (HCV). We suggest that the chalcones used in this study inhibit the mammalian target of rapamycin (mTOR) pathway by ablating phosphorylation of ribosomal protein 6 (rps6), as well as of the kinase necessary for phosphorylating rps6 in Huh7.5 cells (pS6K1). In addition, selected chalcones showed inhibition of growth in Ishikawa, MCF7, and MDA-MB-231 cells, resulting in IC50 values of 1-6 mu g/mL. When similarly substituted flavonoids were used against the same set of cancer cells, we did not observe any inhibitory effect. Together, we report that chalcones show potential for anti-viral and anti-cancer activities compared to similarly substituted flavonoids. (C) 2017 Elsevier Ltd. All rights reserved. The unique properties of polyoxometalates, such as molecular polarity, redox potential, surface charge distribution, shape and acidity, influence their recognition of targeted biological macromolecules. Using PM-19 (K7PTi2W10O40) as a lead compound, a series of novel pyridinium polyoxometalates (A(7)PTi(2)W(10)O(40)), which had not been reported in the literature, were designed and synthesized. The evaluation was conducted using the single-cycle pseudovirus infection assay (TZM-bl assay), and the CCK-8 method was used to determine cytotoxicity. The results indicated that the designed pyridinium polyoxometalates had lower toxicity to TZM-bl cells and showed higher inhibitory activity against HIV-1. (C) 2017 Published by Elsevier Ltd. 
A series of new nopinone-based thiosemicarbazone derivatives was designed and synthesized as potent anticancer agents. All compounds were characterized by H-1 NMR, C-13 NMR, and HR-MS analyses. In the in vitro anticancer assays, most derivatives showed considerable cytotoxic activity against three human cancer cell lines (MDA-MB-231, SMMC-7721 and HeLa). Among them, compound 4i exhibited the most potent antitumor activity against the three cancer cell lines, with IC50 values of 2.79 +/- 0.38, 2.64 +/- 0.17 and 3.64 +/- 0.13 mu M, respectively. Furthermore, cell cycle analysis indicated that compound 4i caused cell cycle arrest of MDA-MB-231 cells at the G2/M phase. The Annexin V-FITC/7-AAD dual staining assay also revealed that compound 4i induced early apoptosis of MDA-MB-231 cells. (C) 2017 Elsevier Ltd. All rights reserved. Ebola virus is one of the most threatening pathogens in the world, with a mortality rate as high as 90%. To date, there are no licensed therapeutic drugs or preventive vaccines for Ebola hemorrhagic fever. Favipiravir, a novel antiviral drug mainly used for the treatment of influenza, has now been demonstrated to have a curative effect in treating Ebola virus infection. In this review, we present an overview of recent progress on the treatment of Ebola virus disease with Favipiravir and describe its possible mechanism. Moreover, we give a brief summary of other related treatment strategies against Ebola. (C) 2017 Elsevier Ltd. All rights reserved. A series of new 1,2,3-triazolo-phenanthrene hybrids has been synthesized by employing the Cu(I)-catalyzed azide-alkyne cycloaddition (CuAAC) reaction. These compounds were evaluated for their in vitro cytotoxic potential against various human cancer cell lines, viz. lung (A549), prostate (PC-3 and DU145), gastric (HGC-27), cervical (HeLa), triple-negative breast (MDA-MB-231, MDA-MB-453) and breast (BT-549, 4T1) cells. 
Among the tested compounds, 7d displayed the highest cytotoxicity against DU145 cells, with an IC50 value of 1.5 +/- 0.09 mu M. Further, cell cycle analysis showed that it blocks the G0/G1 phase of the cell cycle in a dose-dependent manner. In order to determine the effect of the compound on cell viability, phase contrast microscopy, AO/EB, DAPI, DCFDA and JC-1 staining studies were performed. These studies clearly indicated that compound 7d inhibited the proliferation of DU145 cells. Relative viscosity measurements and molecular docking studies indicated that these compounds bind to DNA by intercalation. (C) 2017 Elsevier Ltd. All rights reserved. COX-2 is an inducible enzyme mediating inflammatory responses. Selective targeting of COX-2 is useful for developing anti-inflammatory agents devoid of ulcerogenic activity. Herein, we report the design and synthesis of a series of pyrazoles and pyrazolo[1,2-a]pyridazines with selective COX-2 inhibitory activity and in vivo anti-inflammatory effect. Both series were accessed through acid-catalyzed, ultrasound-assisted reactions. The most active compounds in this study are two novel molecules, 11 and 16, showing promising selectivity and IC50 values of 16.2 and 20.1 nM, respectively. These compounds were also docked into the crystal structure of the COX-2 enzyme (PDB ID: 3LN1) to understand their mode of binding. Finally, Mulliken charges and electrostatic surface potentials were calculated for both compound 11 and celecoxib using a DFT method to gain insight into the molecular determinants of activity. These results could lead to the development of novel COX-2 inhibitors with improved selectivity. (C) 2017 Elsevier Ltd. All rights reserved. We report the discovery and hit-to-lead optimization of a structurally novel indazole series of CYP11B2 inhibitors. 
Benchmark compound 34 from this series displays potent inhibition of CYP11B2, high selectivity versus related steroidal and hepatic CYP targets, and lead-like physical and pharmacokinetic properties. On the basis of these and other data, the indazole series was progressed to lead optimization for further refinement. (C) 2017 Elsevier Ltd. All rights reserved. Described herein is a facile and efficient methodology for the synthesis of Morusin scaffolds and Morusignin L scaffolds 4-9 and 12 via a novel three-step approach (Michael addition or prenylation, followed by two cyclization steps), using a rapid, microwave-accelerated cyclization as the key step. Furthermore, their biological activities have been preliminarily demonstrated by in vitro evaluation of anti-osteoporosis activity. Morusin, Morusignin L and the newly synthesized compounds 5b, 6a, 8e and 8f exhibited the highest potency, especially at 10(-5) mol/L (P < 0.01), and showed good in vitro anti-osteoporosis activity, with the commercially available standard drug Ipriflavone as a positive control. The mechanisms underlying the anti-osteoporosis effects of these compounds may involve the inhibition of TRAP enzyme activity and bone resorption in osteoclasts, and the promotion of osteoblast proliferation in vitro. The results indicated that Morusin scaffolds and Morusignin L scaffolds may be useful leads for further anti-osteoporosis activity screening. (C) 2017 Elsevier Ltd. All rights reserved. Muchimangins are benzophenone-xanthone hybrid polyketides produced by Securidaca longepedunculata. However, their biological activities have not been fully investigated, since they are minor constituents of this plant. To evaluate the potential of muchimangins as antibacterial agent candidates, five muchimangin analogs were synthesized from 2,4,5-trimethoxydiphenyl methanol and the corresponding xanthones, utilizing p-toluenesulfonic acid monohydrate for Bronsted acid catalysis. 
The antibacterial assays against the Gram-positive bacteria Staphylococcus aureus and Bacillus subtilis and the Gram-negative bacteria Klebsiella pneumoniae and Escherichia coli revealed that the muchimangin analogs (+/-)-1,3,6,8-tetrahydroxy-4-(phenyl-(2',4',5'-trimethoxyphenyl)methyl)-xanthone (1), (+/-)-1,3,6-trihydroxy-4-(phenyl-(2',4',5'-trimethoxyphenyl)methyl)-xanthone (2), and (+/-)-1,3-dihydroxy-4-(phenyl(2',4',5'-trimethoxyphenyl)methyl)-xanthone (3) showed significant activities against S. aureus, with MIC values of 10.0, 10.0, and 25.0 mu M, respectively. Analogs (+/-)-1 and (+/-)-2 also exhibited antibacterial activities against B. subtilis, with MIC values of 50.0 and 12.5 mu M, respectively. Furthermore, (+)-3 enhanced the antibacterial activity against S. aureus, with a MIC value of 10 mu M. (C) 2017 Elsevier Ltd. All rights reserved. An imbalance between bone resorption by osteoclasts and bone formation by osteoblasts can cause bone loss and bone-related disease. In a previous search for natural products that increase osteogenic activity, we found that 5,6-dehydrokawain (1) from Alpinia zerumbet promotes osteoblastogenesis. In this study, we synthesized and evaluated a series of 5,6-dehydrokawain analogs. Our structure-activity relationship studies revealed that alkylation at the para or meta position of the aromatic ring of 1 promotes osteogenic activity. Among the analogs we synthesized, (E)-6-(4-ethylstyryl)-4-methoxy-2H-pyran-2-one (14) and (E)-6-(4-butylstyryl)-4-methoxy-2H-pyran-2-one (21) both significantly up-regulated Runx2 and Osterix mRNA expression at 10 mu M. These osteogenic activities could be mediated by bone morphogenetic protein (BMP) signaling and activation of the p38 MAPK signaling pathway. Compounds 14 and 21 also inhibited RANKL-induced osteoclast differentiation of RAW264 cells. 
These results indicated that the novel 5,6-dehydrokawain analogs not only increase osteogenic activity but also inhibit osteoclast differentiation, and could be potential lead compounds for the development of anti-osteoporosis agents. (C) 2017 Elsevier Ltd. All rights reserved. The first synthesis of octapeptin C4 was achieved using a combination of solid-phase synthesis and off-resin cyclisation. Octapeptin C4 displayed antibiotic activity against multi-drug-resistant, NDM-1 and polymyxin-resistant Gram-negative bacteria, with moderate activity against Staphylococcus aureus. The linear analogue of octapeptin C4 was also prepared, which showed reduced activity. (C) 2017 The Authors. Published by Elsevier Ltd. The sulfinic acid analog of aspartic acid, cysteine sulfinic acid, introduces a sulfur atom that perturbs the acidity and oxidation properties of aspartic acid. Cysteine sulfinic acids are often introduced into peptides and proteins by oxidation of cysteine, but this method is limited because all cysteine residues are oxidized, and cysteine residues are often oxidized further to sulfonic acids. To provide the foundation for the specific incorporation of cysteine sulfinic acids into peptides and proteins, we synthesized a 9-fluorenylmethyloxycarbonyl (Fmoc) benzothiazole sulfone analog. Oxidation conditions to generate the sulfone were examined, and oxidation of the Fmoc-protected sulfide (3) with NbC in hydrogen peroxide provided the corresponding sulfone (4) in the highest yield and purity. Reduction with sodium borohydride generated the cysteine sulfinic acid (5), suggesting that this approach may be an efficient method to incorporate a cysteine sulfinic acid into biomolecules. A model tripeptide bearing a cysteine sulfinic acid was synthesized using this approach. Future studies are aimed at using this method to incorporate cysteine sulfinic acids into peptide hormones and proteins for use in the study of biological function. (C) 2017 Elsevier Ltd. All rights reserved. 
In this study, we synthesized an Azo-py phosphoramidite, featuring azobenzene and pyrene units, as a novel fluorescent and isomeric (trans- and cis-azobenzene) material, which we incorporated into an i-motif DNA sequence. We then monitored the structural dynamics and changes in fluorescence as the modified DNA sequences transformed from single strands at pH 7 to i-motif quadruplex structures at pH 3. After incorporating Azo-py into the 4A loop position of an i-motif sequence, dramatic changes in fluorescence occurred as the DNA structures changed from single strands to i-motif quadruplex structures. Interestingly, the cis form of Azo-py induced a more stable i-motif structure than did the trans form, as confirmed by circular dichroism spectra and melting temperature data. The absorption and fluorescence signals of these Azo-py-incorporated i-motif systems exhibited switchable and highly correlated signaling patterns. Such isomeric structures based on Azo-py might find applications in biology, where stable i-motif quadruplex structures could be controlled with switchable fluorescence signaling. (C) 2017 Elsevier Ltd. All rights reserved. Introduction of a Michael acceptor on a flexible scaffold derived from pan-FGFR inhibitors has successfully yielded a novel series of highly potent FGFR4 inhibitors with selectivity over FGFR1. Due to reduced lipophilicity and aromatic ring count, this series demonstrated improved solubility and permeability. However, plasma instability and fast metabolism limited its potential for in vivo studies. Efforts have been made to address these problems, which led to the discovery of compound (-)-11, with improved stability, CYP inhibition, and good activity/selectivity, for further optimization. (C) 2017 Elsevier Ltd. All rights reserved. 
By combining advantageous sequences from the Alchemia and Sanofi methods for the synthesis of Fondaparinux, a more efficient and practical strategy for the synthesis of the corresponding protected pentasaccharide was developed. The protected pentasaccharide was smoothly converted into Fondaparinux in overall high yield (1%). (C) 2017 Elsevier Ltd. All rights reserved. The astacin proteases meprin alpha and beta are emerging drug targets for the treatment of disorders such as kidney failure, fibrosis or inflammatory bowel disease. However, only a few inhibitors of both proteases have been reported to date. Starting from NNGH as the lead structure, a detailed elaboration of the structure-activity relationship of meprin beta inhibitors was performed, leading to compounds with activities in the lower nanomolar range. Considering the preference of meprin beta for acidic residues in the P1' position, the compounds were optimized accordingly. Acidic modifications induced potent inhibition and >100-fold selectivity over other structurally related metalloproteases such as MMP-2 or ADAM10. (C) 2017 Elsevier Ltd. All rights reserved. We report the design and synthesis of a series of BACE1 inhibitors incorporating mono- and bicyclic 6-substituted 2-oxopiperazines as novel P1' and P2' ligands and isophthalamide derivatives as P2-P3 ligands. Among the mono-substituted 2-oxopiperazines, inhibitor 5a, with N-benzyl-2-oxopiperazine and isophthalamide, showed potent BACE1 inhibitory activity (K-i = 2 nM). Inhibitor 5g, with N-benzyl-2-oxopiperazine and a substituted indole-derived P2 ligand, showed a reduction in potency. The X-ray crystal structure of 5g-bound BACE1 was determined and used to design a set of disubstituted 2-oxopiperazines and bicyclic derivatives that were subsequently investigated. Inhibitor 6j, with an oxazolidinone derivative, showed BACE1 inhibitory activity of 23 nM and a cellular EC50 of 80 nM. (C) 2017 Elsevier Ltd. All rights reserved. 
We report on P2X(7) receptor antagonists based on a lead adamantyl-cyanoguanidine-aryl moiety. We have investigated the importance of the central cyanoguanidine moiety by replacing it with urea, thiourea or guanidine moieties. We have also investigated the linker length between the central moiety and the aryl portion. All compounds were assessed for their inhibitory potency in a pore-formation dye uptake assay at the P2X(7) receptor. None of the compounds showed improved potency, illustrating the importance of the cyanoguanidine moiety in this chemotype. (C) 2017 Elsevier Ltd. All rights reserved. The 1,2,3,4-tetrahydroacridine derivative tacrine was the first drug approved to treat Alzheimer's disease (AD). It is known to act as a potent cholinesterase inhibitor. However, tacrine was removed from the market due to hepatotoxicity concerns, as it undergoes metabolism to toxic quinone methide species through the cytochrome P450 enzyme CYP1A2. Despite these challenges, tacrine serves as a useful template in the development of novel multi-targeting anti-AD agents. In this regard, we sought to evaluate the risk of hepatotoxicity in a series of C9-substituted tacrine derivatives that exhibit cholinesterase inhibition properties. The hepatotoxic potential of the tacrine derivatives was evaluated using recombinant cytochrome P450 (CYP) CYP1A2 and CYP3A4 enzymes. Molecular docking studies were conducted to predict their binding modes and potential risk of forming hepatotoxic metabolites. Tacrine derivatives 1 (N-(3,4-dimethoxybenzyl)-1,2,3,4-tetrahydroacridin-9-amine) and 2 (6-chloro-N-(3,4-dimethoxybenzyl)-1,2,3,4-tetrahydroacridin-9-amine), which possess a C9 3,4-dimethoxybenzylamino substituent, exhibited weak binding to the CYP1A2 enzyme (1, IC50 = 33.0 mu M; 2, IC50 = 8.5 mu M) compared to tacrine (CYP1A2 IC50 = 1.5 mu M). 
Modeling studies show that the presence of a bulky 3,4-dimethoxybenzylamino C9 substituent prevents the orientation of the 1,2,3,4-tetrahydroacridine ring close to the heme-iron center of CYP1A2, thereby reducing the risk of forming hepatotoxic species. Crown Copyright (C) 2017 Published by Elsevier Ltd. All rights reserved. Resveratrol (RVT) is a stilbene with a protective effect on the cardiovascular system; however, drawbacks including low bioavailability and fast metabolism limit its efficacy. In this work we describe new resveratrol derivatives with nitric oxide (NO)-releasing properties, the ability to inhibit platelet aggregation, and in vivo antithrombotic effects. Compounds 4a-f were able to release NO in vitro, at levels ranging from 24.1% to 27.4%. All compounds (2a-f and 4a-f) inhibited platelet aggregation using ADP, collagen and arachidonic acid as agonists. The most active compound (4f) showed reduced bleeding time compared to acetylsalicylic acid (ASA) and protected up to 80% against in vivo thromboembolic events. These findings suggest that the hybrid resveratrol-furoxan 4f is a novel lead compound able to prevent platelet aggregation and thromboembolic events. (C) 2017 Elsevier Ltd. All rights reserved. Phenoxodiol is an isoflavene with potent anti-tumor activity. In this study, a series of novel mono- and di-substituted phenoxodiol-thiosemicarbazone hybrids was synthesized via the condensation reaction between phenoxodiol and thiosemicarbazides. The in vitro anti-proliferative activities of the hybrids were evaluated against the neuroblastoma SK-N-BE(2)C, the triple-negative breast cancer MDA-MB-231, and the glioblastoma U87 cancer cell lines. The mono-substituted hybrids exhibited potent anti-proliferative activity against all three cancer cell lines, while the di-substituted hybrids were less active. 
Selected mono-substituted hybrids were further investigated for their cytotoxicity against normal MRC-5 human lung fibroblast cells, which identified two hybrids with superior selectivity for cancer cells over normal cells as compared to phenoxodiol. This suggests that mono-substituted phenoxodiol-thiosemicarbazone hybrids have promising potential for further development as anti-cancer agents. (C) 2017 Elsevier Ltd. All rights reserved. Leishmaniases are infectious diseases caused by parasites of the genus Leishmania that affect 12 million people in 98 countries, mainly in Africa, Asia, and Latin America. Effective treatments for these diseases are urgently needed. In this study, we present a computer-aided approach to investigate a set of 32 recently synthesized chalcone and chalcone-like compounds as potential antileishmanial agents. As a result, the nine most promising compounds and three potentially inactive compounds were experimentally evaluated against Leishmania infantum amastigotes and mammalian cells. Four compounds exhibited EC50 values in the range of 6.2-10.98 mu M. In addition, two compounds, LabMol-65 and LabMol-73, exhibited cytotoxicity in macrophages at concentrations > 50 mu M, resulting in better selectivity than the standard drug amphotericin B. These two compounds also demonstrated low cytotoxicity and high selectivity towards Vero cells. The results of target fishing followed by homology modeling and docking studies suggest that these chalcone compounds may act in Leishmania through their interaction with cysteine proteases, such as procathepsin L. Finally, we have provided structural recommendations for designing new antileishmanial chalcones. (C) 2017 Elsevier Ltd. All rights reserved. A novel antifungal strategy targeting the inhibition of calcineurin is described. To develop a calcineurin-based inhibitor of pathogenic fungi, analogs of FK506 were synthesized that were able to permeate mammalian but not fungal cells. 
The antagonists in combination with FK506 were not immunosuppressive and retained antifungal activity in A. fumigatus. To reduce the dosage burden of the antagonist, murine oral PK was improved by an order of magnitude relative to previous FK506 antagonists. (C) 2017 Elsevier Ltd. All rights reserved. Overexpression of the CREB-binding protein (CBP), a bromodomain-containing transcription coactivator involved in a variety of cellular processes, has been observed in several types of cancer with a correlation to aggressiveness. We have screened a library of nearly 1500 fragments by high-throughput docking into the CBP bromodomain followed by binding energy evaluation using a force field with electrostatic solvation. Twenty of the 39 fragments selected by virtual screening are positive in one or more ligand-observed nuclear magnetic resonance (NMR) experiments. Four crystal structures of the CBP bromodomain in complex with in silico screening hits validate the pose predicted by docking. Thus, the success ratio of the high-throughput docking procedure is 50% or 10%, depending on whether one considers validation by ligand-observed NMR spectroscopy or by X-ray crystallography, respectively. Compounds 1 and 3 show favorable ligand efficiency in two different in vitro binding assays. The structure of the CBP bromodomain in complex with the brominated pyrrole 1 suggests fragment growing by Suzuki coupling. (C) 2017 Elsevier Ltd. All rights reserved. This letter describes the synthesis and structure-activity relationship (SAR) studies of structurally novel M-4 antagonists, based on a 4,6-disubstituted core, identified from a high-throughput screening campaign. A multi-dimensional optimization effort enhanced potency at both human and rat M-4 (IC(50)s < 300 nM), with no substantial species differences noted. 
Moreover, CNS penetration proved attractive for this series (brain:plasma Kp,uu = 0.87), while other DMPK attributes were addressed in the course of the optimization effort, providing low in vivo clearance in rat (CLp = 5.37 mL/min/kg). Surprisingly, this series displayed pan-muscarinic antagonist activity across M1-5, despite the absence of the prototypical basic or quaternary amine moiety, thus offering a new chemotype from which to develop a next generation of pan-muscarinic antagonist agents. (C) 2017 Elsevier Ltd. All rights reserved. Using the enzymatic transglycosylation reaction, beta-D-ribo- and 2'-deoxyribofuranosides of 2-amino-5,6-difluorobenzimidazole have been synthesized. 2-Amino-5,6-difluorobenzimidazole riboside proved to exhibit selective antiviral activity (selectivity index >32) against a wild-type strain of herpes simplex virus type 1, as well as towards virus strains that are resistant to acyclovir, cidofovir, and foscarnet. We believe that this compound might be used for the treatment of herpes infections in cases where acyclovir is not effective. (C) 2017 Elsevier Ltd. All rights reserved. A series of novel carbohydrate-modified antitumor compounds was designed based on glucose transporter 1 (GLUT1) and evaluated for anticancer activity against four cancer cell lines. The ribose derivatives (compounds 9 and 10) exhibited modest inhibitory activity. Compound 9 significantly inhibited the migration of A549 cells and induced A549 cell apoptosis in a concentration-dependent manner. Moreover, compound 9 blocked A549 cells at the G0/G1 phase. The cellular uptake studies suggested that ribose-modified compound 9 could be taken up through GLUT1 in the A549 cell line. (C) 2017 Elsevier Ltd. All rights reserved. In this work, a glutamic acid-linked paclitaxel (PTX) dimer (Glu-PTX2) with a high PTX content of 88.9 wt% was designed and synthesized. 
Glu-PTX2 could self-assemble into nanoparticles (Glu-PTX2 NPs) in aqueous solution, increasing the water solubility of PTX. Glu-PTX2 NPs were characterized by electron microscopy and dynamic light scattering, exhibiting spherical morphology and favorable structural stability in aqueous media. Glu-PTX2 NPs could be internalized by cancer cells, as revealed by confocal laser scanning microscopy, and exerted potent cytotoxicity. It is envisaged that Glu-PTX2 NPs could be an alternative formulation for PTX, and that such amino acid-linked drug dimers could also be applied to other therapeutic agents. (C) 2017 Elsevier Ltd. All rights reserved. We previously reported a facile preparation method for 3-substituted 2,6-difluoropyridines, which are easily converted to 2,3,6-trisubstituted pyridines by nucleophilic aromatic substitution with good regioselectivity and yield. In this study, we demonstrate the synthetic utility of 3-substituted 2,6-difluoropyridines in drug discovery via their application in the synthesis of various 2,3,6-trisubstituted pyridines, including macrocyclic derivatives, as novel protein kinase C theta inhibitors in moderate to good yields. This synthetic approach is useful for the preparation of 2,3,6-trisubstituted pyridines, which are a popular scaffold for drug candidates and biologically attractive compounds. (C) 2017 Elsevier Ltd. All rights reserved. AIDS-related cancers are malignancies with low incidence in healthy people that mostly affect subjects who are already immunocompromised. The connection between HIV/AIDS and these cancers has not yet been established, but a weakened immune system is certainly the main cause. We envisaged the possibility of screening a small library of compounds synthesized in our laboratory against opportunistic tumors mainly associated with HIV infection, such as Burkitt's lymphoma. From cellular assays and gene expression analysis we identified two promising compounds. 
These derivatives exhibit the required dual action, inhibiting HIV replication in human TZM-bl cells infected with HIV-1 NL4.3 and showing cytotoxic activity against human colon HT-29 and breast adenocarcinoma MCF-7 cells. In addition, preclinical in vitro absorption, distribution, metabolism, and excretion studies highlighted a satisfactory pharmacokinetic profile. (C) 2017 Elsevier Ltd. All rights reserved. SHAPE chemistry (selective 2'-hydroxyl acylation analyzed by primer extension) was developed to specifically target flexible nucleotides (often unpaired nucleotides), independently of their purine or pyrimidine nature, for RNA secondary structure determination. However, to the best of our knowledge, the structure of the 2'-O-acylation products has never been confirmed by NMR or X-ray data. We have carried out the acylation reactions between cNMPs and NMIA under SHAPE chemistry conditions and identified the acylation products using standard NMR spectroscopy and LC-MS/MS experiments. For cAMP and cGMP, the major acylation product is the 2'-O-acylated compound (> 99%). A trace amount of N-acylated cAMP has also been identified by LC-UV-MS2. For cCMP, the isolated acylation products are composed of 96% 2'-O-acylated, 4% N,O-diacylated, and a trace amount of N-acylated compounds. In addition, the characterization of the major 2'-O-acylated compound by NMR showed slight differences in the conformation of the acylated sugar between the three cyclic nucleotides. This interesting result should help explain some unexpected reactivity in SHAPE chemistry. (C) 2017 Elsevier Ltd. All rights reserved. A series of 1-((2-hydroxynaphthalen-1-yl)(phenyl)methyl)pyrrolidin-2-one derivatives was synthesized by an efficient iodine-catalyzed domino reaction involving various aromatic aldehydes, 2-pyrrolidinone and beta-naphthol, and the structures were elucidated by FTIR, H-1 NMR, C-13 NMR, and HRMS. 
Subsequently, they were evaluated for cytotoxicity against breast cancer (MCF-7) and colon cancer (HCT116) cell lines. In the cytotoxicity assays, inhibition was notably high against the MCF-7 cell line, at 79% (4c) and 83% (4f), with IC50 values of 1.03 mu M (4c) and 0.98 mu M (4f). Compounds 4a, 4e, 4k-m, and 4q were found to be inactive, and the rest showed moderate activity. In order to gain more insight into the binding mode and inhibitor binding affinity, compounds 4a-q were docked into the active site of phosphoinositide 3-kinase (PI3K) (PDB ID: 4JPS), a crucial regulator of apoptosis or programmed cell death. The results suggested that hydrophobic interactions in the binding pocket of PI3K account for the affinity of the most favourably binding ligands (4c and 4f; inhibitory constants (Ki) of 66.22 nM and 107.39 nM). The SAR studies demonstrated that the most potent compounds are 4c and 4f, which could be developed into selective PI3K inhibitors with the potential to treat various cancers. (C) 2017 Elsevier Ltd. All rights reserved. A class of novel pyrimidine derivatives bearing diverse conformationally restricted azabicyclic ether/amine moieties was designed, synthesized and evaluated for GPR119 agonist activity against type 2 diabetes. Most compounds exhibited hEC(50) values superior to that of the endogenous lipid oleoylethanolamide (OEA). Analogs with 2-fluoro substitution in the aryl ring showed more potent GPR119 activation than those without fluorine. In particular, compound 27m, synthesized from an endo-azabicyclic alcohol, showed the best EC50 value (1.2 nM) and good agonistic activity (112.2% max) as a full agonist. (C) 2017 Elsevier Ltd. All rights reserved. In this paper, we present the results of a ligand- and structure-based virtual screen targeting LRRK2, a kinase that has been implicated in Parkinson's disease. 
For the ligand-based virtual screen, the structures of 12 competitor compounds were used as queries for a variety of 2D and 3D searches. The structure-based virtual screen relied on homology models of LRRK2, as no X-ray structure is currently available in the public domain. From the virtual screening, 662 compounds were purchased, of which 35 showed IC50 values below 10 mu M against wild-type and/or mutant LRRK2 (a hit rate of 5.3%). Of these 35 hits, four were deemed to have potential for medicinal chemistry follow-up. (C) 2017 The Authors. Published by Elsevier Ltd. Developing efficient controlled-release systems for insecticides can facilitate their better use. We describe here the first example of photo-controlled release of an insecticide, achieved by covalently linking fipronil to a photoresponsive coumarin. The resulting coumarin-fipronil (CF) precursor undergoes cleavage to release free fipronil in the presence of blue light (420 nm) or sunlight. Photophysical studies of CF showed that it exhibits strong fluorescence. CF had no obvious activity against mosquito larvae in the dark, but it could be activated by light inside the larvae. Fipronil released from CF by blue-light irradiation in vitro retained its activity against armyworm (Mythimna separata), with an LC50 value of 24.64 mu mol L-1. This photocaged molecule provides an alternative delivery method for fipronil. (C) 2017 Published by Elsevier Ltd. The involvement of the phosphoinositide 3-kinases (PI3Ks) in several diseases, especially in the oncology area, has singled them out as one of the most explored families of therapeutic targets of the last two decades. Many different inhibitor classes with diverse selectivity profiles within the PI3K family have been developed by industry and academia. In the present manuscript we report a further exploration of our lead PI3K inhibitor ETP-46321 (Martinez Gonzalez et al., 2012)(1) through the application of a conformational restriction strategy. 
For that purpose we have successfully synthesized novel tricyclic imidazo[1,2-a]pyrazine derivatives as PI3K inhibitors. This new class of compounds has enabled exploration of the solvent-accessible region within PI3K and resulted in the identification of molecule 8q, which has the best PI3K alpha/delta isoform selectivity profile in vitro and promising in vivo PK data. (C) 2017 Elsevier Ltd. All rights reserved. Three series of pyrazolo[3,4-d]pyrimidine derivatives were synthesized and evaluated as RET kinase inhibitors. Compounds 23a and 23c showed significant activity both in the biochemical and the BaF3/CCDC6-RET cell assays. Compound 23c was found to significantly inhibit RET phosphorylation and down-stream signaling in BaF3/CCDC6-RET cells, confirming its potent cellular RET-targeting profile. Unlike other RET inhibitors, whose equal potency against KDR is associated with severe toxicity, 23c did not show significant KDR inhibition even at a concentration of 1 mu M. These results demonstrate that 23c is a potent and selective RET inhibitor. (C) 2017 Elsevier Ltd. All rights reserved. Based on our previous results and literature precedent, a series of 2-anilinopyridinyl-benzothiazole Schiff bases were rationally designed by performing molecular modeling experiments on selected molecules. The binding energies of the docked molecules were better than that of E7010, and the Schiff base with a trimethoxy group on the benzothiazole moiety, 4y, was the best. This was followed by the synthesis of a series of the designed molecules by a convenient synthetic route and evaluation of their anticancer potential. Most of the compounds showed significant growth inhibition against the tested cell lines, and compound 4y exhibited good antiproliferative activity, with a GI(50) value of 3.8 mu M specifically against the DU145 cell line. 
In agreement with the docking results, 4y exerted cytotoxicity by disrupting microtubule dynamics, inhibiting tubulin polymerization through effective binding in the colchicine domain, comparable to E7010. The detailed binding modes of 4y with the colchicine binding site of tubulin were studied by molecular docking. Furthermore, 4y induced apoptosis, as evidenced by biological studies including mitochondrial membrane potential, caspase-3, and Annexin V-FITC assays. (C) 2017 Elsevier Ltd. All rights reserved. SAR in the previously described spirocyclic ROMK inhibitor series was further evolved from lead 4 by modification of the spirocyclic core and identification of novel right-side pharmacophores. In this process, it was discovered that the spiropyrrolidinone core with the carbonyl group alpha to the spirocenter was preferred for potent ROMK activity. Efforts aimed at decreasing hERG affinity within the series led to the discovery of multiple novel right-hand pharmacophores, including 3-methoxythiadiazole, 2-methoxypyrimidine, and pyridazinone. The most promising candidate is pyridazinone analog 32, which showed an improved functional hERG/ROMK potency ratio and preclinical PK profile. In vivo evaluation of 32 demonstrated blood pressure lowering effects in the spontaneously hypertensive rat model. (C) 2017 Elsevier Ltd. All rights reserved. We present a practical synthesis of both enantiomers of 1,2,3,4-tetrahydroisoquinoline derivative IPPAM-1 (1), which is a positive allosteric modulator (PAM) of the prostacyclin receptor (IP) and a candidate for treatment of pulmonary arterial hypertension without the side effects caused by IP agonists. Assay of cAMP production by CHO-K1 cells stably expressing human IP clearly demonstrated that the IPPAM activity resides exclusively in the R-form of 1. (C) 2017 Elsevier Ltd. All rights reserved. Mirror-image screening using D-proteins is a powerful approach to provide mirror-image structures of chiral natural products for drug screening. 
During the course of our screening study for novel MDM2-p53 interaction inhibitors, we identified that NPD6878 (R-(-)-apomorphine) inhibited both the native L-MDM2-L-p53 interaction and the mirror-image D-MDM2-D-p53 interaction at equipotent doses. In addition, both enantiomers of apomorphine showed potent inhibitory activity against the native MDM2-p53 interaction. In this study, we investigated the inhibitory mechanism of both enantiomers of apomorphine against the MDM2-p53 interaction. Achiral oxoapomorphine, formed from the chiral apomorphines under aerobic conditions, served as the reactive species, forming a covalent bond at Cys77 of MDM2 and thereby inhibiting the binding to p53. (C) 2017 Elsevier Ltd. All rights reserved. Microbial transformation of ursolic acid (1) by Bacillus megaterium CGMCC 1.1741 was investigated and yielded five metabolites identified as 3-oxo-urs-12-en-28-oic acid (2); 1 beta,11 alpha-dihydroxy-3-oxo-urs-12-en-28-oic acid (3); 1 beta-hydroxy-3-oxo-urs-12-en-28,13-lactone (4); 1 beta,3 beta,11 alpha-trihydroxyurs-12-en-28-oic acid (5) and 1 beta,11 alpha-dihydroxy-3-oxo-urs-12-en-28-O-beta-D-glucopyranoside (6). Metabolites 3, 4, 5 and 6 are new natural products. Their nitric oxide (NO) production inhibitory activity was assessed in lipopolysaccharide (LPS)-stimulated RAW 264.7 cells. Compounds 3 and 4 exhibited significant activities, with IC50 values of 1.243 and 1.711 mu M, respectively. A preliminary structure-activity relationship is also discussed. (C) 2017 Elsevier Ltd. All rights reserved. A new eudesmane sesquiterpenoid (1) and a new homologue of virginiae butanolide E (2), along with butyl isobutyl phthalate (3), were isolated from the actinomycete Lentzea violacea strain AS08, obtained from the north-western Himalayas, by applying the modified one strain-many compounds (OSMAC) method. 
The structures of the new compounds were elucidated by extensive spectroscopic analyses, including 1D and 2D NMR along with HR-ESI-MS and FT-IR data. Herein, a distinctive method was added for inspecting the secretory profile of the strain by quantification of the extract value of the cell-free supernatant in different types of culture media, followed by HPLC profiling of the respective extracts, which revealed a highly altered metabolic profile of the strain and formed the basis for the selection of media. Compounds 1 and 2 showed moderate activity against Gram-negative (MIC similar to 32-64 mu g ml(-1)) compared with Gram-positive bacterial pathogens. Compound 1 exhibited significant activity against human cancer cell lines (IC50 similar to 19.2 mu M). (C) 2017 Elsevier Ltd. All rights reserved. As part of a quest for backups to the antitubercular drug pretomanid (PA-824), we investigated the unexplored 6-nitro-2,3-dihydroimidazo[2,1-b][1,3]-thiazoles and related oxazoles. The nitroimidazothiazoles were prepared in high yield from 2-bromo-4-nitroimidazole via heating with substituted thiiranes and diisopropylethylamine. Equivalent examples of these two structural classes provided broadly comparable MICs, with 2-methyl substitution and extended aryloxymethyl side chains preferred, albeit S-oxidised thiazoles were ineffective against tuberculosis. Favourable microsomal stability data for a biaryl thiazole (45) led to its assessment in an acute Mycobacterium tuberculosis mouse model, alongside the corresponding oxazole (48), but the latter proved to be more efficacious. In vitro screening against kinetoplastid diseases revealed that nitroimidazothiazoles were inactive against leishmaniasis but showed interesting activity, superior to that of the nitroimidazooxazoles, against Chagas disease. Overall, "thio-delamanid" (49) is regarded as the best lead. (C) 2017 Elsevier Ltd. All rights reserved. 
Steroids are important components of cell membranes and are involved in several physiological functions. A diphenylmethane (DPM) skeleton has recently been suggested to act as a mimetic of the steroid skeleton. However, difficulties are associated with efficiently introducing different substituents into the two phenyl rings of the DPM skeleton, and, thus, further structural development based on the DPM skeleton has been limited. We herein developed an efficient synthetic method for introducing different substituents into the two phenyl rings of the DPM skeleton. We also synthesized DPM-based estrogen receptor (ER) modulators using our synthetic method and evaluated their ER transcriptional activities. (C) 2017 Elsevier Ltd. All rights reserved. A series of substituted tricyclic 4,4-dimethyl-3,4-dihydrochromeno[3,4-d]imidazole derivatives have been synthesized and their mPGES-1 biological activity is disclosed in detail. Structure-activity relationship (SAR) optimization provided inhibitors with excellent mPGES-1 potency and low to moderate potency in the A549 cell PGE(2) release assay. Among the mPGES-1 inhibitors studied, 7, 9 and 111 provided excellent selectivity over COX-2 (>200-fold) and >70-fold selectivity over COX-1, except for 111, which exhibited dual mPGES-1/COX-1 activity. Furthermore, the tested mPGES-1 inhibitors demonstrated good metabolic stability in liver microsomes, high plasma protein binding (PPB) and no significant inhibition of clinically relevant CYP isoforms. In addition, the selected mPGES-1 tool compounds 9 and 111 showed good in vivo pharmacokinetic profiles and oral bioavailability (%F = 33 and 85). Additionally, the representative mPGES-1 tool compounds 9 and 111 revealed moderate in vivo efficacy in the LPS-induced thermal hyperalgesia guinea pig pain model. (C) 2017 Elsevier Ltd. All rights reserved. 
Chemical investigations of the MeOH extract of air-dried flowers of the Australian tree Angophora woodsiana (Myrtaceae) yielded two new beta-triketones, woodsianones A and B (1, 2), and nine known beta-triketones (3-11). Woodsianone A is a beta-triketone-sesquiterpene adduct, and woodsianone B is a beta-triketone epoxide derivative. The structures of the new and known compounds were elucidated from analysis of 1D/2D NMR and MS data. The relative configurations of the compounds were determined from analysis of H-1-H-1 coupling constants and ROESY correlations. All compounds (1-11) had antiplasmodial activity against the chloroquine-sensitive strain 3D7. The known compound rhodomyrtone (5) and the new compound woodsianone B (2) showed moderate antiplasmodial activities against the 3D7 strain (1.84 mu M and 3.00 mu M, respectively) and the chloroquine-resistant strain Dd2 (4.00 mu M and 2.53 mu M, respectively). (C) 2017 Elsevier Ltd. All rights reserved. Targeted therapy is unavailable for treating patients with triple-negative breast cancer (TNBC), which accounts for approximately 15% of all breast cancers. Overexpression of epidermal growth factor receptor (EGFR) is observed in approximately 30-60% of TNBCs. Therefore, developing novel strategies for inhibiting EGFR signaling is required. In the present study, a natural compound library was screened to identify molecules that target TNBCs that overexpress EGFR. Picrasidine G (PG), a naturally occurring dimeric alkaloid produced by Picrasma quassioides, decreased the viability of the MDA-MB-468 cell line (TNBC, EGFR+) compared with other breast cancer cell lines. PG treatment increased markers of apoptosis, including chromatin condensation, the sub-G1 population, cleavage of caspase-3 and cleavage of poly(ADP-ribose) polymerase (PARP). PG inhibited the phosphorylation of signal transducer and activator of transcription 3 (STAT3) and inhibited transcription of the STAT3 target gene encoding survivin. 
Further, PG inhibited EGF-induced STAT3 phosphorylation but not interleukin-6 (IL-6)-induced STAT3 phosphorylation. These results suggest that PG may contribute to the development of targeted therapy for patients with EGFR-overexpressing TNBC. (C) 2017 Elsevier Ltd. All rights reserved. In an effort to identify novel anti-inflammatory compounds, a series of flavone derivatives were synthesized and biologically evaluated for their inhibitory effects on the production of nitric oxide (NO) and prostaglandin E-2 (PGE(2)), representative pro-inflammatory mediators, in LPS-induced RAW 264.7 cells. Their structure-activity relationship was also investigated. In particular, we found that compound 3g displayed more potent inhibitory activity on PGE(2) production, similar inhibitory activity on NO production, and weaker cytotoxicity than luteolin, a natural flavone known as a potent anti-inflammatory agent. (C) 2017 Elsevier Ltd. All rights reserved. A structure-activity relationship has been developed around the meridianin scaffold for inhibition of Dyrk1a. The compounds have been focussed on inhibition of the kinase Dyrk1a, as a means to retain the transcription factor NFAT in the nucleus. NFAT up-regulates genes responsible for the induction of a slow, oxidative skeletal muscle phenotype, which may be an effective treatment for diseases where exercise capacity is compromised. The SAR showed that while strong Dyrk1a binding was possible with the meridianin scaffold, the compounds had no effect on NFAT localisation; however, by moving from the indole to a 6-azaindole scaffold, both potent Dyrk1a binding and increased NFAT residence time in the nucleus were obtained, properties not observed with the reported Dyrk1a inhibitors. One compound was shown to be effective in an ex vivo muscle fiber assay. 
The increased biological activity is thought to arise from the added interaction between the azaindole nitrogen and the lysine residue in the back pocket. (C) 2017 Elsevier Ltd. All rights reserved. Using structure-based drug design, novel aminobenzisoxazoles were designed and synthesized as coagulation factor IXa inhibitors. Highly selective inhibition of FIXa over FXa was demonstrated. The anticoagulation profiles of selected compounds were evaluated by aPTT and PT tests. In vitro ADMET and pharmacokinetic (PK) profiles were also evaluated. (C) 2017 Elsevier Ltd. All rights reserved. Using fragment-based and structure-based drug discovery strategies, a series of novel Sortilin inhibitors has been identified. The inhibitors are based on the N-substituted 1,2,3-triazol-4-one/ol heterocyclic template. X-ray crystallography shows that the 1,2,3-triazol-4-one/ol acts as a carboxylic acid isostere, making a bi-dentate interaction with an arginine residue of Sortilin, an interaction which has not been previously characterised for this heterocycle. (C) 2017 Elsevier Ltd. All rights reserved. Hepatitis C virus (HCV) NS5B RNA-dependent RNA polymerase (RdRp) plays a central role in virus replication. NS5B has no functional equivalent in mammalian cells and, as a consequence, is an attractive target for selective inhibition. This paper describes the discovery of a novel family of HCV NS5B non-nucleoside inhibitors inspired by the bioisosterism between sulfonamide and phosphonamide. Systematic structural optimization in this new series led to the identification of IDX375, a potent non-nucleoside inhibitor that is selective for genotypes 1a and 1b. The structure and binding domain of IDX375 were confirmed by an X-ray co-crystallisation study. (C) 2017 Elsevier Ltd. All rights reserved. Two series of novel 1,2,3-benzotriazin-4-one derivatives containing thiourea and acylthiourea moieties were designed and synthesized. 
The bioassay results showed that most of the test compounds had good nematicidal activity against M. incognita at a concentration of 10.0 mg L-1 in vivo. Compounds A13, A17 and B3 showed excellent nematicidal activity against the second-stage juveniles of the root-knot nematode, with inhibition rates of 51.3%, 58.3% and 51.3%, respectively, at a concentration of 1.0 mg L-1. This suggests that the structure of 1,2,3-benzotriazin-4-one derivatives containing thiourea and acylthiourea could be optimized further. (C) 2016 Elsevier Ltd. All rights reserved. Frontline nurses encounter operational failures (OFs), or breakdowns in system processes, that hinder care, erode quality, and threaten patient safety. Previous research has relied on external observers to identify OFs; nurses have been passive participants in the identification of system failures that impede their ability to deliver safe and effective care. To better understand frontline nurses' direct experiences with OFs in hospitals, we conducted a multi-site study within a national research network to describe the rate and categories of OFs detected by nurses as they provided direct patient care. Data were collected by 774 nurses working in 67 adult and pediatric medical-surgical units in 23 hospitals. Nurses systematically recorded data about OFs encountered during 10 work shifts over a 20-day period. In total, nurses reported 27,298 OFs over 4,497 shifts, a rate of 6.07 OFs per shift. The highest rate of failures occurred in the category of Equipment/Supplies, and the lowest rate occurred in the category of Physical Unit/Layout. No differences in OF rate were detected based on hospital size, teaching status, or unit type. Given the scale of this study, we conclude that OFs are frequent and varied across system processes, and that organizations may readily obtain crucial information about OFs from frontline nurses. 
Nurses' detection of OFs could provide organizations with rich, real-time information about system operations to improve organizational reliability. (c) 2017 Wiley Periodicals, Inc. Insomnia is the most common sleep problem in women. Increasing evidence suggests an association between insomnia and cardiovascular disease (CVD). However, information is limited on lifestyle and socio-environmental factors associated with sleep problems in women. In this study, directed by Social Cognitive Theory, we examined the personal, behavioral, socio-environmental, and CVD risk factors associated with sleep characteristics (insomnia and sleep quality) in middle-aged women using a cross-sectional design. The study instruments included the Insomnia Severity Index (ISI), the Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiological Studies Depression Scale (CES-D), and measures of social support and behavioral characteristics. Blood was drawn to assess serum glucose and lipids, and BMI was measured. Data were analyzed using hierarchical multiple regression and analysis of covariance (ANCOVA). Of 423 middle-aged women, 25% experienced insomnia (ISI >= 10) and 41.3% reported poor sleep quality (PSQI > 5). Less education (middle school or lower), more depressive symptoms, more screen time (>= 3 hours/day), and severe stress were associated with greater severity of insomnia and/or poorer sleep quality. Total and LDL cholesterol levels were higher in women with insomnia than in normal sleepers, whereas BMI was higher in those who reported poor sleep quality. Because personal, behavioral, and socio-environmental factors were significantly associated with insomnia and poor sleep quality, multifactorial approaches should be considered in developing sleep interventions and reducing cardiovascular risk. (c) 2017 Wiley Periodicals, Inc. 
Based on emerging evidence, mood disorders can be plausibly conceptualized as networks of causally interacting symptoms, rather than as latent variables of which symptoms are passive indicators. In an innovative approach in nursing research, we used network analysis to estimate the network structure of 20 perinatal depressive (PND) symptoms. Two proof-of-principle analyses are then presented: incorporating stress and reproductive biomarkers into the network, and comparing the network structure of PND symptoms between non-depressed and depressed women. We analyzed data from a cross-sectional sample of 515 Latina women in the second trimester of pregnancy and estimated networks using regularized partial correlation network models. The main analysis yielded five strong symptom-to-symptom associations (e.g., crying-sadness) and five symptoms of potential clinical importance (i.e., high centrality) in the network. In exploring the relationship of PND symptoms to stress and reproductive biomarkers (proof-of-principle analysis 1), a few weak relationships were found. In a comparison of non-depressed and depressed women's networks (proof-of-principle analysis 2), depressed participants had a more connected network of symptoms overall, but the networks did not differ in types of relationships (the network structures). We hope this first report of PND symptoms as a network of interacting symptoms will encourage future network studies in the realm of PND research, including investigations of symptom-to-biomarker mechanisms and interactions related to PND. Future directions and challenges are discussed. (c) 2017 Wiley Periodicals, Inc. African-American males ages 13 through 24 are disproportionately affected by sexually transmitted infections (STIs) and human immunodeficiency virus (HIV), accounting for over half of all HIV infections in this age group in the United States. 
Clear communication between African-American parents and their youth about sexual health is associated with higher rates of sexual abstinence, condom use, and intent to delay initiation of sexual intercourse. However, little is known about African-American fathers' perceptions of what facilitates and inhibits sexual health communication with their preadolescent and adolescent sons. We conducted focus groups with 29 African-American fathers of sons ages 10-15 to explore perceived facilitators and barriers for father-son communication about sexual health. Participants were recruited from barbershops in metropolitan and rural North Carolina communities highly affected by STIs and HIV, and data were analyzed using content analysis. Three factors facilitated father-son communication: (a) fathers' acceptance of their roles and responsibilities; (b) a positive father-son relationship; and (c) fathers' ability to speak directly to their sons about sex. We also identified three barriers: (a) fathers' difficulty in initiating sexual health discussions with their sons; (b) sons' developmental readiness for sexual health information; and (c) fathers' lack of experience in talking with their own fathers about sex. These findings have implications for father-focused prevention interventions aimed at reducing risky sexual behaviors in adolescent African-American males. (c) 2017 Wiley Periodicals, Inc. Home care provides preventive, support, and treatment services to economically vulnerable community populations. In this study, we examined the outcomes of a home care program for pressure ulcers (PrUs) in an economically vulnerable group. The 184 participants were admitted with PrUs and received services from a home care agency in South Korea during a study window of 5 years. The changes in PrU staging over time were analyzed in relation to the agency's home care data and the participants' health data. At enrollment, approximately 60% had a single ulcer; 40% had two or more. 
Most patients' ulcers were at stage 3 or 4, and most patients were bedridden. The maximum odds of reduced ulcer size from one measurement point to the next were estimated at 14.3% for ulcers in stages 1 and 2, 33.4% for those in stage 3, and 25.5% for those in stage 4; more than 10% of ulcers healed completely within a year. PrUs were a serious problem in this community-dwelling, economically vulnerable group, and home care played a critical role in providing health care to this population. (c) 2017 Wiley Periodicals, Inc. Researchers have reported challenges in recruiting US military service members as research participants. We explored their reasons for participating. Eighteen US military service members who had participated in at least one health-related research study within the previous 3 years completed semi-structured individual interviews in person or by telephone, focused on the service members' past decisions regarding research participation. Service members described participation decisions for 34 individual research experiences in 27 separate studies. Service members' reasons for participation in research clustered in three themes: others-, self-, and fit-focused. Each decision included reasons characterized by at least two themes. Reasons from all three themes were apparent in two-thirds of individual participation decisions. Reasons described by at least half of the service members included a desire to make things better for others, to improve an organization, to help researchers, and to improve one's health; understanding how they fit in studies; and convenience of participation. Findings may help researchers, study sponsors, ethicists, military leaders, and military decision-makers better understand service members' reasons for participating in research and improve future recruitment of service members in health research. (c) 2017 Wiley Periodicals, Inc. Although regression relationships commonly are treated as linear, this often is not the case. 
An adaptive approach is described for identifying nonlinear relationships based on power transforms of predictor (or independent) variables and for assessing whether or not relationships are distinctly nonlinear. It is also possible to adaptively model both the means and variances of continuous outcome (or dependent) variables and to adaptively power-transform positive-valued continuous outcomes, along with their predictors. Example analyses are provided of data from parents in a nursing study on emotional-health-related quality of life for childhood brain tumor survivors as a function of the effort to manage the survivors' condition. These analyses demonstrate that relationships, including moderation relationships, can be distinctly nonlinear, that conclusions about means can be affected by accounting for non-constant variances, and that outcome transformation along with predictor transformation can provide distinct improvements and can resolve skewness problems. (c) 2017 Wiley Periodicals, Inc. Standards are intended to foster excellence and equity in student learning by institutionalizing high expectations for all students while allowing educators to have professional discretion in determining how to meet these goals. Recent studies suggest that principals play an essential role in interpreting and communicating the implications of standards for teachers' practice and creating supportive conditions for professional learning. Using a comparative case study of 3 high-poverty elementary schools, the present study extends this research by examining the specific actions principals take to organize the work of their schools in response to new and higher standards. 
It argues that principals who frame the challenge presented by standards as one that requires teachers to learn to work with students and content in new ways are more likely to close the gap between existing practice and the goals of policy than those who frame the challenge as one of executing top-down directives. Taking a student's perspective, this study aims to characterize students' descriptions of teaching higher and lower in mastery goal emphasis, in elementary, middle, traditional, and democratic schools. Data were collected by student surveys and interviews from fifth- through eighth-grade Israeli students. Nineteen interviews, describing 5 science teachers perceived by their students as higher or lower in mastery goal emphasis, were chosen for further analyses, which employed the TARGETS framework. Results provided concrete illustrations of higher and lower mastery-emphasizing teaching in different types of schools. Task, time, and autonomy emerged as salient dimensions differentiating between higher and lower perceived mastery goal structure, as did preparation for tests, recognition equality, and teachers' attentiveness. This randomized controlled trial focused on 59 struggling readers in the third and fourth grades (30 female, 29 male) and examined the efficacy of an intervention aimed at increasing students' multisyllabic word reading (MWR). The study also explored the relative effects of an embedded motivational beliefs (MB) training component. Struggling readers were randomly assigned to 1 of 3 groups: MWR only, MWR with an MB component (MWR + MB), or business-as-usual control. Students were tutored in small groups in 24 sessions, three 40-minute lessons each week. Students in both MWR groups outperformed the control group on measures of word-reading fluency. MWR + MB students outperformed MWR only on sentence-level comprehension and outperformed the control group in ratings of attributions for success in reading. 
Findings are discussed in terms of their relevance to MWR instruction for students with persistent reading difficulties and the potential for enhancing intervention through targeting motivation. Teachers' oral language use may be an important factor in student achievement, particularly for students who struggle with language, learning, and behavior. This study examined features of teacher talk during whole-class instruction in 14 general education (GE) and 14 self-contained special education (SE) elementary classrooms that included students with or at risk for emotional and behavioral disorders. Across settings, 74% of teachers' utterances contained vagueness markers that may hinder comprehension. Within teachers, the quantity, complexity, and clarity of oral language tended to remain stable across lessons, regardless of lesson content. Teacher-rated severity of behavior did not differ by setting, but students in self-contained SE classrooms had significantly lower language and reading skills than their counterparts in GE settings. Analyses of multilevel models revealed no significant differences in form or content of teacher talk between groups of teachers across settings (GE or SE) or grade levels (K-2, 3-4). Building on work examining teachers' perceptions of the student-teacher relationship, this study investigated how young students draw themselves with their teachers. Fourteen kindergarten and first-grade teachers each nominated 2 disruptive and 2 well-behaved students. Students then completed 1 drawing of themselves with their classroom teacher and 1 with a support teacher (e.g., librarian, art teacher) at 2 time points: the end of the school year (Phase 1) and the beginning of the next year (Phase 2). 
In coding for 8 markers of relationship quality (vitality/creativity, pride/happiness, vulnerability, emotional distance, tension/anger, role reversal, bizarreness/dissociation, and global pathology), we found no differences in the way that disruptive and well-behaved students depicted their own relationships with teachers. Gender and phase effects were identified, however, with boys depicting greater relational negativity than girls and all students portraying greater emotional distance at the beginning of the school year. Competency with mathematics requires use of numerals and symbols as well as an understanding and use of mathematics vocabulary (e.g., add, more, triangle). Currently, no measures exist whose primary function is to gauge mathematics-vocabulary understanding. We created a 64-item mathematics-vocabulary measure for first grade and piloted the assessment with 104 first-grade students. We also administered standardized measures of general word knowledge and mathematics fluency to investigate the validity of the mathematics-vocabulary measure. Results indicated wide variability in how first-grade students interpret mathematics-vocabulary terms but strong reliability for the mathematics-vocabulary measure. The purpose of this longitudinal study was to investigate the relationships between mathematics teacher preparation and graduates' analyses of classroom teaching. Fifty-three graduates from an elementary teacher preparation program completed 4 video-based, analysis-of-teaching tasks in the semester before graduation and then in each of the 3 summers following graduation. Participants performed significantly better on the 3 tasks focused on mathematics topics studied in the program than on the task focused on a mathematics topic not studied in the program. 
After checking several alternative hypotheses, we conclude that a likely explanation for the performance differences is the mathematical knowledge for teaching that participants developed as freshmen in the preparation program. In this article we seek to lay bare a couple of potential conceptual and methodological issues that, we believe, are implicitly present in contemporary philosophy of technology (PhilTech). At stake are (1) the sustained pertinence of and need for coping strategies as to 'how to live with technology (in everyday life)' notwithstanding PhilTech's advancement in its non-essentialist analysis of 'technology' as such; (2) the issue of whether 'living with technology' is a technological affair or not (or both); and (3) the tightly related question concerning the status of the methodological bedrock of contemporary PhilTech, the 'empirical turn.' These matters are approached from the perspective of the philosophical notion of the 'art of living,' and our argumentation is developed both as a context for and on the basis of the contributions to the special issue 'The Art of Living with Technology.' In this contribution the author tries to formulate an approach to the art of living with technology based on Heidegger's The Principle of Reason, a work often overlooked by contemporary commentators in the philosophy of technology. This approach couples the concept of releasement to insights hailing from Wolfgang Schirmacher concerning Heidegger's nihilism. Marc Van den Bosche suggests that Heidegger's conceptions of Gestell and Gelassenheit, taken together with his analysis of Nietzschean nihilism (interpreted especially by Wolfgang Schirmacher), depict our era in a way that "supplements" Andrew Feenberg and Don Ihde's work. Weaving these sources together, he sees the possibility of our becoming (quoting Schirmacher) "technicians" that "live, in a released way, within the groundless." 
Here, I raise some questions about whether the author has really fitted all these sources together and argue that his idea of becoming post-modern "technicians" appears to require that we first practice a very un-Heideggerian kind of "renunciation." Several trends in contemporary philosophy have revived the question of the good life. This article addresses the more elaborate notion of an "art of living" in the specific context of the technosphere on the basis of recent works in philosophy of technology. It also brings ideas from Asian philosophy, and from Buddhism in particular, into the discussion. The focus is on the notion of non-confrontation, which could lead to a decisive change in the methods and scope of technology assessment within the humanities. The art of living in the technosphere emerges as an existential virtuosity that pertains to practical wisdom. 'The art of living with ICTs (information and communication technologies)' today not only means finding new ways to cope, interact and create new lifestyles on the basis of the new digital (network) technologies individually, as 'consumer-citizens'. It also means inventing new modes of living, producing and, not least, struggling collectively, as workers and producers. As the so-called digital revolution unfolds in the context of a neoliberal cognitive and consumerist capitalism, its 'innovations' are predominantly employed to modulate and control both production processes and consumer behavior in view of the overall goal of extracting surplus value. Today, the digital networks overwhelmingly destroy social autonomy, instead engendering increasing social heteronomy and proletarianization. Yet it is these very networks themselves, as technical pharmaka in the sense of French 'technophilosopher' Bernard Stiegler, that can be employed like no other to struggle against this tendency. 
This paper briefly explores this possibility by reflecting upon current diagnoses of our 'technological situation' by some exemplary post-operaist Marxists from a Stieglerian, pharmacological perspective. What can the art of living after Foucault contribute to ethics in relation to the mediation of human existence by technology? To develop the relation between technical mediation and ethics, firstly the theme of technical mediation is elaborated in line with Foucault's notion of ethical problematization. Every view of what technology does to us at the same time expresses an ethical concern about technology. The contemporary conception of technical mediation tends towards the acknowledgement of ongoing hybridization, not ultimately good or bad but ambivalent, which means for us the challenge of taking care of ourselves as hybrid beings. Secondly, the work of Foucault provides elements for imagining this care for our hybrid selves, notably his notions of freedom as a practice and of the care of the self. A conclusion about technical mediation and ethics is that whereas the approaches of the delegation of morality to technology by Latour and of mediated morality by Verbeek see the technical mediation of behavior and moral outlook as an answer in ethics, it should rather be considered the problem that ethics is about. This essay shows that a sharp distinction between ethics and aesthetics is unfruitful for thinking about how to live well with technologies, and in particular for understanding and evaluating how we cope with human existential vulnerability, which is crucially mediated by the development and use of technologies such as electronic ICTs. It is argued that vulnerability coping is a matter of ethics and art: it requires developing a kind of art and techne in the sense that it always involves technologies and specific styles of experiencing and dealing with vulnerability, which depend on social and cultural context. 
It is suggested that we try to find better, perhaps less modern styles of coping with our vulnerability, recognize limits to our attempts to "design" our new forms, and explore what kinds of technologies we need for this project. The neo-liberal reform of the university has had a huge impact on higher education and promises still more changes in the future. Many of these changes have had a negative impact on academic careers, values, and the educational experience. Educational technology plays an important role in the defense of neo-liberal reform, less through actual accomplishment than as a rhetorical justification for supposed "progress." This paper outlines the main claims and consequences of this rhetorical strategy and its actual effects on the university to date. The entanglement of ethics and technology makes it necessary for us to understand and reflect upon our own practices and to question technological hypes. The information and communication technology (ICT) literacy required to navigate the twenty-first century has to do with recognizing our own human limitations, developing critical measures, and acknowledging feelings of estrangement and puzzlement as well as sheer wonder at technology. ICT literacy is indeed all about visions of the good life and the art of living in the twenty-first century. The main focus of this paper is to explore and discuss ICT in relation to pupils and teachers and try to understand why and how these technologies are implemented in the school system. This focus not only allows us to better understand how concepts and habits with respect to ICT are shaped in numerous children, but also teaches us to acquire an enhanced sensitivity with regard to ICT in the 'classic' literacy context of the educational system. This article addresses the art of living in a technological culture as the active engagement with technomoral change. It argues that this engagement does not just take the form of overt deliberation. 
It also shows in more modest ways, as reflection-in-action: an experimental process in which new technology is fitted into existing practices. In this process, challenged values are re-articulated in pragmatic solutions to the problem of working with new technology. This art of working with technology is also modest in the sense that it is not oriented to shaping one's own subjectivity in relation to technology. It emanates from human existence as relational and aims at securing good relationships. The argument will be developed in relation to a case study of the ways in which homecare workers engaged with the value of privacy, challenged by tele-monitoring technology that was newly introduced into their work. This essay takes an epistemological perspective on the question of the 'art of living with technology.' Such an approach is needed as our everyday notion and understanding of technology keep being framed in the old categories of instrumentalism and essentialism, notwithstanding philosophy of technology's substantial attempts, in recent times, to bridge the stark dichotomy between those two viewpoints. Here, the persistent dichotomous thinking still characterizing our everyday involvement with technology is traced back to the epistemological distinction between 'concrete' and 'abstract.' Those terms are often contrasted, and a tendency can be found, in the literature as well as in popular discourse, to either conceptually favor one of the two or let them collapse into each other. The current essay makes a plea, in an exploratory manner, and on the basis of insights from, among others, Gregory Bateson and Alfred Korzybski, not to choose either of those options, but to practice navigating ladders of abstraction. Fitness is a central concept in evolutionary theory. Just as it is central to biological evolution, so, it seems, it should be central to cultural evolutionary theory (CET). 
But importing the biological fitness concept to CET is no straightforward task: there are many features unique to cultural evolution that make this difficult. This has led some theorists to argue that there are fundamental problems with cultural fitness that render it hopelessly confused. In this essay, we defend the coherency of cultural fitness against those who call it into doubt. We consider how cue-reading, sensory-manipulation, and signalling games may initially evolve from ritualized decisions and how more complex games may evolve from simpler games by polymerization, template transfer, and modular composition. Modular composition is a process that combines simpler games into more complex games. Template transfer, a process by which a game is appropriated to a context other than the one in which it initially evolved, is one mechanism for modular composition. And polymerization is a particularly salient example of modular composition where simpler games evolve to form more complex chains. We also consider how the evolution of new capacities by modular composition may be more efficient than evolving those capacities from basic decisions. This article develops a view of shape representation both in visual experience and in subpersonal visual processing. The view is that, in both cases, shape is represented in a 'layered' manner: an object is represented as having multiple shape properties, and these properties have varying degrees of abstraction. I argue that this view is supported both by the facts about visual phenomenology and by a large collection of evidence in perceptual psychology. Such evidence is provided by studies of shape discriminability, apparent motion, multiple-object tracking, and structure-from-motion. Recent neuroscientific work has also corroborated this psychophysical evidence. Finally, I draw out implications of the layered view for processes of concept acquisition. 
Biologists and philosophers of biology have argued that learning rules that do not lead organisms to play evolutionarily stable strategies (ESSes) in games will not be stable and thus will not be evolutionarily successful (Harley [1981]; Maynard-Smith [1982]). This claim, however, stands at odds with the fact that learning generalization, a behaviour that cannot lead to ESSes when modelled in games, is observed throughout the animal kingdom (Mednick and Freedman [1960]). In this article, I use learning generalization to illustrate how previous analyses of the evolution of learning have gone wrong. It has been widely argued that the function of learning generalization is to allow for swift learning about novel stimuli. I show that in evolutionary game theoretic models, learning generalization, despite leading to sub-optimal behaviour, can indeed speed learning. I further observe that previous analyses of the evolution of learning ignored the short-term success of learning rules. If one drops this assumption, I argue, it can be shown that learning generalization will be expected to evolve in these models. I also use this analysis to show how ESS methodology can be misleading, and to reject previous justifications of ESS play derived from analyses of learning. Orthodoxy has it that only metaphysically elite properties can be invoked in scientifically elite laws. We argue that this claim does not fit scientific practice. An examination of candidate scientifically elite laws like Newton's F = ma reveals properties invoked that are irreversibly defined and thus metaphysically non-elite by the lights of the surrounding theory: Newtonian acceleration is irreversibly defined as the second derivative of position, and Newtonian resultant force is irreversibly defined as the sum of the component forces. 
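As a sketch, the two definitions just cited can be written out explicitly (the symbols F_res for the resultant force and F_i for the component forces are our notation, not fixed by the abstract):

```latex
a(t) = \frac{d^{2}x(t)}{dt^{2}}, \qquad
F_{\mathrm{res}} = \sum_{i} F_{i}, \qquad
F_{\mathrm{res}} = m\,a(t)
```

On this rendering, both acceleration and resultant force enter the law only via definitions that run in one direction, which is the sense of "irreversibly defined" at issue.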
We think that scientists are happy to invoke metaphysically non-elite properties in scientifically elite laws for reasons of convenience, such as to simplify the equations and to make them more modular. On this basis, we draw a deflationary moral about laws themselves, as being merely convenient summaries. It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal-mechanical explanation. The desirability of what actually occurs is often influenced by what could have been. Preferences based on such value dependencies between actual and counterfactual outcomes generate a class of problems for orthodox decision theory, the best-known perhaps being the so-called Allais paradox. In this article we solve these problems by extending Richard Jeffrey's decision theory to counterfactual prospects, using a multidimensional possible-world semantics for conditionals, and showing that preferences that are sensitive to counterfactual considerations can still be desirability-maximizing. We end the article by investigating the conditions necessary and sufficient for a desirability function to be a standard expected-utility function. It turns out that the additional conditions imply highly implausible epistemic principles. 
This article compares inference to the best explanation with Bayes's rule in a social setting, specifically, in the context of a variant of the Hegselmann-Krause model in which agents not only update their belief states on the basis of evidence they receive directly from the world, but also take into account the belief states of their fellow agents. So far, the update rules mentioned have been studied only in an individualistic setting, and it is known that in such a setting both have their strengths as well as their weaknesses. It is shown here that in a social setting, inference to the best explanation outperforms Bayes's rule according to every desirable criterion. In this article I develop a model of theoretical understanding in science. This is a philosophical theory that specifies the conditions that are both necessary and sufficient for a scientist to satisfy the construction 'S understands theory T'. I first consider how this construction is preferable to others, then build a model of the requisite conditions on the basis of examples from elementary physics. I then show how this model of theoretical understanding can be made philosophically robust and provide a more sophisticated account than we see from models of a similar kind developed by those working in the psychology of physics and artificial intelligence. If the most familiar overlapping (branching universe) interpretation of Everettian quantum mechanics (EQM) is correct, then each of us is constantly splitting into multiple people. This consequence gives rise to the quantum doomsday argument, which threatens to draw crippling epistemic consequences from EQM. However, a diverging (parallel universe) interpretation of EQM undermines the quantum doomsday argument completely. This appears to tell in favour of the diverging interpretation. 
But it is surprising that a metaphysical question that is apparently underdetermined by the physics should be settled by purely epistemological considerations; and I argue that the positive case for divergence based on the quantum doomsday effect is ultimately unsuccessful. I discuss how some influential treatments of Everettian confirmation handle the quantum doomsday puzzle, and suggest that it can most satisfyingly be resolved via a naturalistic approach to the metaphysics of modality. Clinical assessment of the hand is important for diagnosing underlying hand disorders. Using a case study approach, the clinical assessment for three disorders of the hands is presented: trigger finger (stenosing tenosynovitis), carpal tunnel syndrome, and ulnar-sided wrist injury (styloid impingement). We assess the first annular (A1) pulley and finger range of motion for patients with trigger finger. To diagnose carpal tunnel syndrome, assessment for Tinel's sign, Phalen's sign, abductor pollicis brevis muscle bulk, and two-point discrimination, along with a nerve conduction study, are performed. Assessment for ulnar-sided wrist injury includes wrist range of motion, assessment of distal radial ulnar joint stability, provocation tests, grip strength, x-ray, and magnetic resonance imaging. This article begins with a description of the hand and wrist anatomy. For each case study, the clinical history is described, followed by a discussion of the pathophysiology, clinical assessments, and diagnostic tests. BACKGROUND: Childhood obesity is a complex healthcare problem that affects all aspects of a child's health. The American Academy of Pediatrics and the Expert Committee recommend that all children be evaluated for current medical conditions, including the risk for obesity, by identifying elevated body mass index (BMI), physical activity habits, and diet. Childhood obesity is defined as a BMI at the 95th percentile or greater on standardized age-based growth charts. 
Abdominal and visceral fat mass has a negative effect on bone formation during childhood and adolescence. Effective interventions are aimed at prevention and treatment and include collection and assessment of obesity, eating habits, physical activity, and family history. At a local outpatient pediatric orthopaedic practice, few patients had a diagnosis of childhood obesity, and weight management varied by provider. PURPOSE: The purpose of this quality improvement project was to improve identification of obese children and increase referrals to a weight management program. METHODS: Setting: A hospital-affiliated pediatric orthopaedic clinic staffed with 3 orthopaedic surgeons and 2 nurse practitioners. Population: 6- to 18-year-olds with a BMI greater than the 95th percentile (N = 239). Data Collection: Electronic medical record chart review for documented obesity and referral to a weight management program. Intervention: Provider educational in-service reviewing management guidelines and the referral process. RESULTS: The average percentage of documented obesity diagnoses increased from 11% to 53%. The number of referrals to the Heart Healthy weight management program increased by 400%. CONCLUSION: An educational-based intervention in a pediatric orthopaedic clinic was effective in increasing the number of patients with a diagnosis of obesity who were referred to a weight management program. BACKGROUND: Enhanced recovery after surgery (ERAS) programs for hip and knee replacements have had a significant effect on streamlining patient care, with shorter stays, no increase in complications, and improved outcomes including reduced mortality. PURPOSE: To compare outcomes following the introduction of an ERAS program for hip and knee replacements developed at our institution with a historical cohort of patients. METHODS: ERAS protocols were developed at our institution for patients undergoing hip and knee joint replacements. 
Key aspects were changes in preadmission, a new education session, improved management of perioperative anemia, standardized anesthetic guidelines, day-of-surgery mobilization, and improved discharge planning. The results of the first 18 months (528 consecutive patients) were compared with those of a historical cohort of 507 patients from the 18 months prior to their introduction. RESULTS: In the ERAS group, the mean age was 68.3 years for patients who underwent hip replacement and 70.4 years for patients who underwent knee replacement. Thirty-two percent of patients were ASA (American Society of Anesthesiologists) Grades III and IV. The average preoperative Oxford score was 11. The average length of stay (ALOS) fell from 5.6 to 4.3 days for patients who underwent hip replacement and from 5.7 to 4.8 days for patients who underwent knee replacement (p < .001). Ninety-six percent of patients were discharged home. The 30-day readmission rate increased from 3.2% to 5.5% (p = .065). Six-month Oxford knee scores were higher in the ERAS group (39.8 vs. 36.3, p = .03). There was no increase in mortality or early revision rate. CONCLUSIONS: Substantial reductions in ALOS can be gained with the introduction of ERAS protocols, with high patient satisfaction and no increase in complications in a consecutive unselected group of public hospital patients. This requires a multidisciplinary approach and strong clinical input. Lumbar fusion is a surgical procedure performed to eliminate painful motion in a spinal segment by joining, or fusing, two or more vertebrae. Although the surgery has a high rate of producing radiographic fusion, many patients report pain, functional disability, an inability to return to work, and prolonged opioid pain reliever use following the procedure. 
Using the biopsychosocial model of low back pain as a framework, this review of the literature describes the biological, psychological, and social factors that have been associated with these negative outcomes. The findings suggest that at least some of the variability in postoperative outcomes may be due to preoperative patient characteristics, and they provide evidence for the theorized relationship between biopsychosocial factors and low back disability. The review also highlights a gap in the literature regarding biopsychosocial predictors of prolonged opioid use following lumbar fusion. BACKGROUND: Hip hemiarthroplasty and dynamic hip screw (DHS) fixation are common procedures performed in trauma units, but there is little information regarding perioperative pain experience with respect to these treatment modalities. PURPOSE: To evaluate the relationship between pain, analgesia requirements, and type of procedure for hip fracture surgery. METHODS: An analysis was performed on consecutive patients presenting with a hip fracture in 2 hospitals over 2 years. Patients with a diagnosis of dementia were excluded because of the limitations of pain assessment. Postoperative pain scores were taken from standardized patient observation charts. Perioperative opiate consumption was calculated from inpatient drug charts. RESULTS: A total of 357 patients were studied; 205 patients (53%) underwent a cemented hemiarthroplasty and 152 (47%) had fixation with a DHS. Patients who underwent a DHS fixation had more pain than those who had a hemiarthroplasty and required almost double the amount of opiates. CONCLUSION: The reason for the elevated pain scores and higher morphine requirement in the DHS group (DG) remains unclear. It could be related to a highly sensitive periosteal reaction in the DG. 
It is important to recognize the difference in pain experienced between the groups, and analgesia should be tailored toward the individual based upon clinical assessment and knowledge of the surgery performed. A comprehensive understanding of this principle will allow for improved perioperative surgical care and patient experience. BACKGROUND: No study comparing short message service (SMS) texts and telephone counseling for patients undergoing total knee replacement (TKR) has been reported. PURPOSE: The purpose of the study was to provide postdischarge telephone counseling and SMS texts to TKR patients and to analyze the effects of these services on their knee function (KF), activities of daily living (ADL), and life satisfaction (LS). METHODS: This study used a randomized clinical trial design. This study was conducted with 40 patients (counseling group: 21; SMS group: 19). In the telephone counseling group and the SMS group, KF, ADL, and LS were assessed before surgery and 1 and 3 months after TKR. RESULTS: Telephone counseling and SMS texts have the same effects on KF, ADL, and LS of TKR patients. CONCLUSION: Future research is needed to determine optimal frequency and duration of post-TKR SMS to support patients who have undergone TKR. The pedagogical and theoretical questions addressed in this study relate to the extent to which native Japanese readers with little or no knowledge of Chinese characters recognize Chinese characters that are viewed as abbreviations of the kanji they already know. Three graphic similarity functions (i.e., an orthographically acceptable similarity, a physical similarity, and an extended physical similarity function) were formulated to predict the Japanese learners' target kanji production. 
Results showed that the learners' performance was poor, with only approximately 30% correct, and that the extended physical similarity function incorporating character frequency and graphic neighbors more accurately accounted for the target production than the other functions. Some pedagogical implications and issues for further research are briefly discussed. A few studies suggest that gifted children with dyslexia have better literacy skills than averagely intelligent children with dyslexia. This finding aligns with the hypothesis that giftedness-related factors provide compensation for poor reading. The present study investigated whether, as in the native language (NL), the level of foreign language (FL) literacy of gifted students with dyslexia is higher than the literacy level of averagely intelligent students with dyslexia and whether this difference can be accounted for by the difference in their NL literacy level. The sample consisted of 148 Dutch native speaking secondary school students divided into four groups: dyslexia, gifted/dyslexia, typically developing (TD), and gifted. All students were assessed on word reading and orthographic knowledge in Dutch and English when they were in 7th or 8th grade. A subsample (n = 71) was (re)assessed on Dutch, English, French, and German literacy one year later. Results showed that Dutch gifted students with dyslexia have higher NL literacy levels than averagely intelligent students with dyslexia. As in the NL, a stepwise pattern of group differences was found for English word reading and spelling, i.e., dyslexia < gifted/dyslexia < TD < gifted. However, it was not found for French and German literacy performance. These results point towards compensation: the higher English literacy levels of gifted/dyslexic students compared to their averagely intelligent dyslexic peers result from mechanisms that are unique to English as a FL. 
Differences in results between FLs are discussed in terms of variation in orthographic transparency and exposure. The task of writing arguments requires a linguistic and cognitive sophistication that eludes many adults, but students in the US are expected to produce texts that articulate and support a claim (simple written arguments) starting in the fourth grade. Students from language-minority homes likewise must learn to produce such writing, despite their relatively limited experience with the English language, reflected in the availability of smaller mental lexicons and more restricted syntactic constructions. Yet some features of bilingual children's cognition, such as precocious development of theory of mind and strong metalinguistic awareness, might support the crafting of arguments in writing, where the explicit consideration of multiple points of view can serve to strengthen one's case for a claim. In this study we examine the incidence of social perspective-taking acts in the argumentative essays of language-minority and English-only students in Grades 4-6 and find that language-minority students match or surpass the English-only students on two critical measures of perspective taking (perspective acknowledgment and perspective articulation). We also explore possible links between students' use of perspective taking in their argumentative essays and a validated formal measure of the same skill, uncovering different relationships between them in the two language groups. Links to previously attested bilingual advantages and to the development of argumentation are discussed. This study investigated whether German learners of English as a foreign language (EFL) acquire additional recoding strategies that they do not need for recoding in the consistent German orthography. 
Based on the psycholinguistic grain size theory (Ziegler & Goswami, 2005), we expected students with little experience in EFL to use the same small-grain recoding strategy as in German, while more advanced students were expected to switch flexibly between small and large grain size recoding strategies when reading English nonwords. German students in Grades 5, 7, and 9, as well as university students, were presented with an experimental nonword reading paradigm introduced by Goswami, Ziegler, Dalton, and Schneider (2003), which assesses the effects of language (nonwords derived from German vs. English), orthographic neighborhood, item length, and presentation format (blocked vs. mixed) on reading latencies and accuracies. The data were analyzed using hierarchical linear models. The youngest age group did not use larger units to read English nonwords, but mostly applied simple grapheme-phoneme translation, as they would in their first language. University students were able to switch flexibly between large and small grain size recoding strategies. The relation is studied between teachers' pedagogical content knowledge of reading and the quality of their subsequent classroom behaviour in teaching fluent reading. A confirmatory factor analysis model with two latent variables is tested and shows adequate goodness-of-fit indices. Contrary to our expectations, the results of structural equation modelling reveal a small but significant gamma-value of .29, indicating that only 8% of the variance in teachers' classroom behaviour in teaching fluent reading is accounted for by teachers' pedagogical content knowledge of reading. Presumably teacher knowledge is not as stable and conclusive as one might think. More research is needed to determine the factors that restrict teachers in putting their knowledge into classroom practice. 
It is recommended that preservice and in-service teacher training should not be limited to transfer of knowledge, but should also assist teachers in designing and performing effective fluent reading instruction. This study investigated Chinese children's development of sensitivity to positional (orthographic), phonological, and semantic cues of radicals in encoding novel Chinese characters. A newly designed picture-novel character mapping task, along with measures of nonverbal reasoning ability, vocabulary, and Chinese character recognition, was administered to 198 kindergartners, 172 second graders, and 165 fifth graders. Children's strategies in using positional, phonological, and semantic cues of radicals varied across grades. The higher the children's grade level, the more commonly children used semantic and positional cues of radicals. Regression analyses showed that the contribution of semantic radical awareness for explaining Chinese character reading increased as children's grade increased, whereas the contribution of positional regularity awareness decreased. These findings suggest that learning Chinese characters involves a transition from a sound- and position-based approach to a meaning-based approach. Much previous research has conceptualized pauses during writing as indicators of the engagement of higher-level cognitive processes. In the present study, 101 university students composed narrative or argumentative essays while their keystrokes were logged. We investigated the relation between pauses within three time intervals (300-999, 1000-1999, and > 2000 ms), at different text boundaries (i.e., between words, sentences, and paragraphs), genre (i.e., narrative vs. argumentative), and transcription fluency (i.e., typing speed). Moreover, we investigated the relation between pauses and various lexical characteristics of essays (e.g., word frequency, sentence length), controlling for transcription fluency and genre. 
In addition to replicating a number of previously reported pause effects in composition, we also show that pauses are related to various aspects of writing, regardless of transcription fluency and genre. Critically, our results show that the majority of pause effects in written composition are modulated by pause location. For example, increased pause rates at word boundaries predicted word frequency, while pause rates at sentence boundaries predicted sentence length, suggesting different levels of processing at these text boundaries. Lastly, we report some inconsistencies when using various definitions of pauses. We discuss potential mechanisms underlying effects of pauses at different text boundaries on writing. We examined how raters and tasks influence measurement error in writing evaluation and how many raters and tasks are needed to reach desirable reliability levels of .90 and .80 for children in Grades 3 and 4. A total of 211 children (102 boys) were administered three tasks in narrative and expository genres, respectively, and their written compositions were evaluated with widely used evaluation methods for developing writers: holistic scoring, productivity, and curriculum-based writing scores. Results showed that 54% and 52% of the variance in narrative and expository compositions were attributable to true individual differences in writing. Students' scores varied largely by tasks (30.44 and 28.61% of variance), but not by raters. To reach a reliability of .90, multiple tasks and raters were needed, and for a reliability of .80, a single rater and multiple tasks were needed. These findings have important implications for reliably evaluating children's writing skills, given that writing is typically evaluated by a single task and a single rater in classrooms and even in some state accountability systems. Many studies have shown that learning to read in a second language (L2) is similar, in many ways, to learning to read in a first language (L1). 
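The rater-and-task question in the writing-evaluation study above is a standard generalizability-theory calculation: a relative reliability (G) coefficient divides person variance by person variance plus error terms that shrink as tasks and raters are added. A minimal sketch, with illustrative variance components rather than the study's actual estimates:

```python
def g_coefficient(var_person, var_pt, var_pr, var_resid, n_tasks, n_raters):
    """Relative G coefficient for a person x task x rater design."""
    error = var_pt / n_tasks + var_pr / n_raters + var_resid / (n_tasks * n_raters)
    return var_person / (var_person + error)

# Illustrative components: person variance dominates, the person-by-task
# interaction is large, and raters contribute almost nothing (as in the study).
vp, vpt, vpr, ve = 0.54, 0.30, 0.01, 0.15

one_task_one_rater = g_coefficient(vp, vpt, vpr, ve, n_tasks=1, n_raters=1)
three_tasks_one_rater = g_coefficient(vp, vpt, vpr, ve, n_tasks=3, n_raters=1)
```

With components like these, adding tasks raises reliability far more than adding raters, mirroring the finding that a single rater with multiple tasks suffices for a reliability of .80.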
Nevertheless, reading development also relies upon oral language proficiency and is greatly influenced by orthographic consistency. This longitudinal study aimed to analyze the role of linguistic predictors (phonological awareness, letter knowledge, pseudoword repetition, morphosyntactic comprehension, lexical knowledge and rapid naming) in reading outcomes (fluency, accuracy and comprehension) in a group of bilingual children (n = 30) reading Italian as an L2, compared to a group of monolingual children (n = 56). We ran a multi-group structural equation model. Our findings showed that rapid automatized naming was a significant predictor of reading speed in both groups. However, the study revealed different patterns of predictors for reading accuracy, the predictors for monolinguals being letter knowledge, phonological awareness, and lexical knowledge, while pseudoword repetition was a predictor for bilinguals. Morphosyntactic comprehension was the most significant predictor of comprehension skills in bilingual children. Implications for clinical and educational settings are discussed. We examined the role of different cognitive skills in word reading (accuracy and fluency) and spelling accuracy in syllabic Hiragana and morphographic Kanji. Japanese Hiragana and Kanji are strikingly contrastive orthographies: Hiragana has consistent character-sound correspondences with a limited symbol set, whereas Kanji has inconsistent character-sound correspondences with a large symbol set. One hundred sixty-nine Japanese children were assessed at the beginning of grade 1 on reading accuracy and fluency, spelling, phonological awareness, phonological memory, rapid automatized naming (RAN), orthographic knowledge, and morphological awareness, and on reading and spelling at the middle of grade 1. 
The results showed remarkable differences in the cognitive predictors of early reading accuracy and spelling development in Hiragana and Kanji, and somewhat lesser differences in the predictors of fluency development. Phonological awareness was a unique predictor of Hiragana reading accuracy and spelling, but its impact was relatively weak and transient. This finding is in line with those reported in consistent orthographies with contained symbol sets such as Finnish and Greek. In contrast, RAN and morphological awareness were more important predictors of Kanji than of Hiragana, and the patterns of relationships for Kanji were similar to those found in inconsistent orthographies with extensive symbol sets such as Chinese. The findings suggested that Japanese children learning two contrastive orthographic systems develop partially separate cognitive bases rather than a single basis for literacy acquisition. The goal of this study was to examine oral word reading fluency from a developmental perspective in a longitudinal study of students from second grade to sixth grade. The sample consisted of native English-speaking students who took part in a large longitudinal study. Participants were assessed on cognitive and literacy measures such as working memory, phonological awareness, rapid automatized naming, and syntactic awareness (oral cloze). Two main research questions were examined: first, what relationships will be found between the cognitive, literacy, and linguistic measures, and which of them simultaneously predict oral reading fluency in each age group? And second, which cognitive and literacy measures in second grade predict word reading fluency in sixth grade? Results show that cognitive and literacy measures contribute differently to word reading fluency across the different grades; the only strong predictor across all age groups was phonological awareness. 
Finally, taking past and present findings together, a definition of fluency from a developmental perspective is proposed, based on the results of the study, which clearly show that reading fluency, its contributors, and its predictors change according to the reading phase attained in each grade. This article uses the case of the first randomized controlled trial (RCT) evaluating laparoscopic cholecystectomy to investigate the introduction of minimally invasive surgery in the 1990s and explore the meaning of RCTs within the context of the introduction of a new surgical technology. It thus brings together the history of the use of laparoscopic cholecystectomy to remove the gallbladder, and the history of the RCT, shedding light on particular aspects of both. We first situate the RCT in the context of the history of the various treatment options for gallstones, or cholelithiasis, then characterize the specific situation of the rapid, patient-driven spread of laparoscopic cholecystectomy, and then describe how the local context of laparoscopic cholecystectomy as a new technology made it possible and desirable to conduct an RCT, despite numerous obstacles. This article then shows that in order to capture and understand the rationale of an RCT it is worthwhile to explore the various levels and dimensions of its context, demonstrating how even the RCT as an ostensibly universal tool draws its meaning from its contexts and that this meaning goes beyond the simple determination of efficacy and safety, including, perhaps most importantly, the control and management of new technologies. The article examines the establishment and growth between 1793 and 1802 of the West India Regiments, British army corps manned by slaves of African descent and commanded by European officers. 
Focusing on the medical history of British military operations in the West Indies, the article demonstrates that the rationale behind the regiments was medical, but that the impetus for them came from senior military commanders rather than from the medical practitioners whose writings are usually privileged in the historiography. The senior officers who commanded the West Indian expeditions in the French Revolutionary Wars mobilized their own particular brand of medical theory, based explicitly on their experience of the region's epidemiological environment, in support of the policy. This willingness to adopt and adapt medical ideas heavily influenced both military policy regarding the regiments and commanders' relationships with their medical men. This paper focuses on the history of a portable shock-producing electrotherapeutic device known as the medical battery (1870-1920), which provided both direct and alternating current and was thought to cure a wide variety of ailments. The product occupied a unique space at the nexus of medicine, consumerism and quackery: it was simultaneously considered a legitimate device by medical professionals who practiced electrotherapeutics, yet identical versions were sold directly to consumers, often via newspaper advertisements and with cure-all marketing language. Indeed, as I show in this paper, the line between what was considered a medical device and a consumer product was often blurred. Even though medical textbooks and journals never mentioned (much less promoted) the home use of electricity, every reputable electrotherapy instrument manufacturer sold a "family battery" for patients to use on themselves at home. While a handful of physicians spoke out against the use of electricity by the laity (as they felt it undermined the image of electrotherapy as a skilled medical procedure), existing evidence suggests that many physicians were likely recommending the home use of medical electricity to their patients. 
Taken together, this paper shows how the professional ideals of electrotherapeutics were not always aligned with physicians' actual practices. Anatomical nomenclature is medicine's official language. Early in their medical studies, students are expected to memorize not only the bodily geography but also the names for all the structures that, by consensus, constitute the anatomical body. The making and uses of visual maps of the body have received considerable historiographical attention, yet the history of production, communication, and reception of anatomical names (a history as long as that of anatomy itself) has been studied far less. My essay examines the reforms of anatomical naming between the first modern nomenclature, the 1895 Basel Nomina Anatomica (BNA), and the 1955 Nomina Anatomica Parisiensia (NAP, also known as PNA), which is the basis for current anatomical terminology. I focus on the controversial and ultimately failed attempt to reform anatomical nomenclature, known as the Jena Nomina Anatomica (JNA), of 1935. Discussions around nomenclature reveal not only how anatomical names are made and communicated, but also the relationship of anatomy with the clinic; disciplinary controversies within anatomy; national traditions in science; and the interplay between international and scientific disciplinary politics. I show how the current anatomical nomenclature, a successor to the NAP, is an outcome of both political and disciplinary tensions that reached their peak before 1945. Due to the unprecedented growth of unedited videos, finding highlights relevant to a text query in a set of unedited videos has become increasingly important. We refer to this task as semantic highlight retrieval and propose a query-dependent video representation for retrieving a variety of highlights. Our method consists of two parts: 1) "viralets", a mid-level representation bridging the semantic [Fig. 1(a)] and visual [Fig. 
1(c)] spaces and 2) a novel Semantic-MODulation (SMOD) procedure to make viralets query-dependent (referred to as SMOD viralets). Given SMOD viralets, we train a single highlight ranker to predict the highlightness of clips with respect to a variety of queries (two examples in Fig. 1), whereas existing approaches can be applied only in a few predefined domains. Other than semantic highlight retrieval, viralets can also be used to associate relevant terms with each video. We utilize this property and propose a simple term prediction method based on nearest neighbor search. To conduct experiments, we collect a viral video dataset including users' comments, highlights, and/or original videos. Among a testing database with 1189 clips (13% highlights and 87% non-highlights), our highlight ranker achieves 41.2% recall at top-10 retrieved clips. It is significantly higher than the state-of-the-art domain-specific highlight ranker and its extension. Similarly, our method also outperforms all baseline methods on the publicly available video highlight dataset. Finally, our simple term prediction method utilizing viralets outperforms the state-of-the-art matrix factorization method (adapted from Kalayeh et al.). We present a superpixel segmentation algorithm called linear spectral clustering (LSC), which is capable of producing superpixels with both high boundary adherence and visual compactness for natural images with low computational costs. In LSC, a normalized cuts-based formulation of image segmentation is adopted using a distance metric that measures both the color similarity and the space proximity between image pixels. However, rather than directly using the traditional eigen-based algorithm, we approximate the similarity metric through a deliberately designed kernel function such that pixel values can be explicitly mapped to a high-dimensional feature space. 
We then exploit the result that, by appropriately weighting each point in this feature space, the objective functions of weighted K-means and normalized cuts share the same optimum points. Consequently, it is possible to optimize the cost function of the normalized cuts by iteratively applying simple K-means clustering in the proposed feature space. LSC possesses linear computational complexity and high memory efficiency, since it avoids both the decomposition of the affinity matrix and the generation of the large kernel matrix. By utilizing the underlying mathematical equivalence between the two types of seemingly different methods, LSC successfully preserves global image structures through efficient local operations. Experimental results show that LSC performs as well as or even better than the state-of-the-art superpixel segmentation algorithms in terms of several commonly used evaluation metrics in image segmentation. The applicability of LSC is further demonstrated in two related computer vision tasks. In recent years, taking photos and capturing videos with mobile devices have become increasingly popular. Emerging applications based on the depth reconstruction technique have been developed, such as Google lens blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to the unstable image quality and uncontrolled scene conditions in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. Particularly, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. 
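The equivalence LSC relies on (weighted K-means in the kernel feature space optimizing the normalized-cuts objective) reduces, operationally, to iterating weighted cluster updates. A minimal sketch of the weighted K-means step on generic feature vectors; the features and weights here are toy stand-ins for LSC's kernel-mapped pixels:

```python
import numpy as np

def weighted_kmeans(features, weights, k, iters=20, seed=0):
    """Weighted K-means: centers are weight-weighted means of their members."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update each center as the weighted mean of its assigned points.
        for c in range(k):
            members = labels == c
            if members.any():
                centers[c] = np.average(features[members], axis=0,
                                        weights=weights[members])
    return labels, centers

# Toy usage: two well-separated groups of "pixels" in a 2D feature space.
pts = np.vstack([np.zeros((5, 2)), 10.0 + np.zeros((5, 2))])
labels, centers = weighted_kmeans(pts, np.ones(10), k=2)
```

LSC's efficiency comes from running exactly this kind of local iteration instead of decomposing the affinity matrix.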
Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching on a finer scale. The proposed framework has been evaluated with challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction. Recent work in signal processing in general and image processing in particular deals with problems related to sparse representation. Two such problems are of paramount importance: designing a well-suited overcomplete dictionary containing a redundant set of atoms (i.e., basis signals), and finding a sparse representation of a given signal with respect to the chosen dictionary. Dictionary learning techniques, among which we find the popular K-singular value decomposition algorithm, tackle these problems by adapting a dictionary to a set of training data. A common drawback of such techniques is the need for parameter tuning. In order to overcome this limitation, we propose a fully-automated Bayesian method that considers the uncertainty of the estimates and produces a sparse representation of the data without prior information on the number of non-zeros in each representation vector. We follow a Bayesian approach that uses a three-tiered hierarchical prior to enforce sparsity on the representations and develop an efficient variational inference framework that reduces computational complexity. Furthermore, we describe a greedy approach that speeds up the whole process. Finally, we present experimental results that show superior performance on two different applications with real images: denoising and inpainting. Here we study the extreme visual recovery problem, in which over 90% of pixel values in a given image are missing. Existing low rank-based algorithms are only effective for recovering data with at most 90% missing values. Thus, we exploit visual data's smoothness property to help solve this challenging extreme visual recovery problem. 
Based on the discrete cosine transform (DCT), we propose a novel DCT regularizer that involves all pixels and produces smooth estimations in any view. Our theoretical analysis shows that the total variation regularizer, which only achieves local smoothness, is a special case of the proposed DCT regularizer. We also develop a new visual recovery algorithm by minimizing the DCT regularizer and nuclear norm to achieve a more visually pleasing estimation. Experimental results on a benchmark image data set demonstrate that the proposed approach is superior to the state-of-the-art methods in terms of peak signal-to-noise ratio and structural similarity. Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts aggregation to similar features, which preserves as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage information beyond the zero-order statistics. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. 
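The idea of a DCT regularizer that "involves all pixels" can be illustrated by penalizing frequency-weighted DCT energy: a flat image incurs no penalty, while high-frequency content does. The linear frequency weighting below is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np
from scipy.fft import dctn

def dct_smoothness_penalty(img):
    """Penalize high-frequency DCT energy; weight grows with frequency index."""
    coeffs = dctn(img, norm="ortho")
    fy, fx = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]),
                         indexing="ij")
    return float(np.sum((fy + fx) * coeffs ** 2))

flat = np.ones((8, 8))                                         # perfectly smooth
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)   # high frequency
flat_pen = dct_smoothness_penalty(flat)
checker_pen = dct_smoothness_penalty(checker)
```

Minimizing such a penalty jointly with a nuclear norm, as the abstract describes, favors reconstructions that are both low-rank and globally smooth.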
We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features. Finding an effective and efficient representation is very important for image classification. The most common approach is to extract a set of local descriptors, and then aggregate them into a high-dimensional, more semantic feature vector, like unsupervised bag-of-features and weakly supervised part-based models. The latter is usually more discriminative than the former due to the use of information from image labels. In this paper, we propose a weakly supervised strategy that uses multi-instance learning (MIL) to learn discriminative patterns for image representation. Specifically, we extend traditional multi-instance methods to explicitly learn more than one pattern in the positive class, and find the "most positive" instance for each pattern. Furthermore, as the positiveness of an instance is treated as a continuous variable, we can use stochastic gradient descent to maximize the margin between different patterns while respecting MIL constraints. To make the learned patterns more discriminative, local descriptors extracted by deep convolutional neural networks are chosen instead of hand-crafted descriptors. Experimental results are reported on several widely used benchmarks (Action 40, Caltech 101, Scene 15, MIT-indoor, SUN 397), showing that our method achieves remarkable performance. In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. 
We comprehensively investigate various fog-relevant features and propose a novel feature based on the hue, saturation, and value color space, which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features, such as dark-channel, saturation-value, and chroma, which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by the experimental results on both synthetic and real-world foggy images. In this paper, we propose a fast weak classifier that can detect and track eyes in video sequences. The approach relies on a least-squares detector based on the inner product detector (IPD) that can estimate a probability density distribution for a feature's location, which fits naturally into a Bayesian estimation cycle such as a Kalman or particle filter. As a least-squares sliding window detector, it possesses tolerance to small variations in the desired pattern while maintaining good generalization capabilities and computational efficiency. We propose two approaches to integrating the IPD with a particle filter tracker. We use the BioID, FERET, LFPW, and COFW public datasets as well as five manually annotated high-definition video sequences to quantitatively evaluate the algorithms' performance. The video data set contains four subjects, different types of backgrounds, blurring due to fast motion, and occlusions. All code and data are available. Saliency detection for images has been studied for many years, and many methods have been designed for it. In saliency detection, background priors, which are often regarded as pseudo-background, are effective clues to find salient objects in images. 
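A polynomial regression surrogate of the kind described in the defogging abstract can be sketched directly with least squares; the feature values and depth targets below are invented, and cross terms are omitted for brevity:

```python
import numpy as np

# Hypothetical per-patch features: dark-channel, saturation-value difference,
# chroma (all invented values), with synthetic optical-depth targets.
X = np.array([[0.1, 0.8, 0.7],
              [0.4, 0.5, 0.4],
              [0.7, 0.2, 0.2],
              [0.9, 0.1, 0.1]])
depth = np.array([0.2, 0.5, 0.8, 1.0])

def poly_design(X):
    """Degree-2 design matrix: bias, linear, and squared terms."""
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

# Fit the surrogate by least squares and predict on the training patches.
coef, *_ = np.linalg.lstsq(poly_design(X), depth, rcond=None)
pred = poly_design(X) @ coef
```

In the paper's setting, the informative features would first be selected by sensitivity analysis before fitting the surrogate.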
Although the image boundary is commonly used as a background prior, it does not work well for images of complex scenes and videos. In this paper, we explore how to identify the background priors for a video and propose a saliency-based method to detect the visual objects by using the background priors. For a video, we integrate multiple pairs of scale-invariant feature transform flows from long-range frames, and a bidirectional consistency propagation is conducted to obtain the accurate and sufficient temporal background priors, which are combined with spatial background priors to generate spatiotemporal background priors. Next, a novel dual-graph-based structure using spatiotemporal background priors is put forward in the computation of saliency maps, fully taking advantage of appearance and motion information in videos. Experimental results on different challenging data sets show that the proposed method robustly and accurately detects the video objects in both simple and complex scenes and achieves better performance compared with other state-of-the-art video saliency models. Achieving a stable video quality is an important task in video compression. In this paper, we present a multi-layer quantization control method to compress video to a certain target video quality based on the new hierarchical partition structure of H.265/HEVC. We first model the rate-quality characteristics of different video sequences using ν-support vector regression with a Gaussian radial basis function as the kernel function. Then, we propose a Kalman filter-based multi-layer quantization control method for quality-constrained video coding. The advantage of our proposed approach in H.265/HEVC is demonstrated through experimental results. Compared with the other H.265/HEVC rate control algorithms and the fixed-QP scheme, the proposed method achieves a more stable video quality. 
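The Kalman-filter idea behind the quantization control scheme above can be illustrated with a scalar filter tracking a noisy quality measurement; the random-walk state model and noise variances are assumptions for illustration, not values from the paper:

```python
import numpy as np

def kalman_step(x, p, z, q=1e-3, r=1e-1):
    """One predict/update step of a scalar Kalman filter (random-walk state)."""
    p = p + q                 # predict: uncertainty grows by process noise q
    k = p / (p + r)           # Kalman gain, given measurement noise r
    x = x + k * (z - x)       # update estimate with measurement z
    p = (1 - k) * p           # update uncertainty
    return x, p

# Track a noisy constant quality level around 40 dB PSNR.
rng = np.random.default_rng(1)
x, p = 0.0, 1.0
for z in 40.0 + 0.5 * rng.standard_normal(200):
    x, p = kalman_step(x, p, z)
```

In a rate-control loop, the filtered quality estimate would feed back into the per-layer quantization parameter choice.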
A linear synthesis model-based dictionary learning framework has achieved remarkable performance in image classification in the last decade. As a generative feature model, however, it suffers from some intrinsic deficiencies. In this paper, we propose a novel parametric nonlinear analysis cosparse model (NACM) with which a unique feature vector will be much more efficiently extracted. Additionally, we provide a deeper insight, demonstrating that NACM is capable of simultaneously learning the task-adapted feature transformation and regularization to encode our preferences, domain prior knowledge, and task-oriented supervised information into the features. The proposed NACM is devoted to the classification task as a discriminative feature model and yields a novel discriminative nonlinear analysis operator learning framework (DNAOL). The theoretical analysis and experimental performances clearly demonstrate that DNAOL not only achieves better or at least competitive classification accuracies compared with state-of-the-art algorithms but also dramatically reduces the time complexities of both training and testing phases. Over the past decade, video anomaly detection has been explored with remarkable results. However, research on methodologies suitable for online performance is still very limited. In this paper, we present an online framework for video anomaly detection. The key aspect of our framework is a compact set of highly descriptive features, which is extracted from a novel cell structure that helps to define support regions in a coarse-to-fine fashion. Based on the scene's activity, only a limited number of support regions are processed, thus limiting the size of the feature set. Specifically, we use foreground occupancy and optical flow features. The framework uses an inference mechanism that evaluates the compact feature set via Gaussian Mixture Models, Markov Chains, and Bag-of-Words in order to detect abnormal events. 
Our framework also considers the joint response of the models in the local spatio-temporal neighborhood to increase detection accuracy. We test our framework on popular existing data sets and on a new data set comprising a wide variety of realistic videos captured by surveillance cameras. This particular data set includes surveillance videos depicting criminal activities, car accidents, and other dangerous situations. Evaluation results show that our framework outperforms other online methods and attains a very competitive detection performance compared with state-of-the-art non-online methods. The capability to automatically evaluate the quality of long wave infrared (LWIR) and visible light images has the potential to play an important role in determining and controlling the quality of a resulting fused LWIR-visible light image. Extensive work has been conducted on studying the statistics of natural LWIR and visible images. Nonetheless, there has been little work done on analyzing the statistics of fused LWIR and visible images and associated distortions. In this paper, we analyze five multi-resolution-based image fusion methods with regard to several common distortions, including blur, white noise, JPEG compression, and non-uniformity. We study the natural scene statistics of fused images and how they are affected by these kinds of distortions. Furthermore, we conducted a human study on the subjective quality of pristine and degraded fused LWIR-visible images. We used this new database to create an automatic opinion-distortion-unaware fused image quality model and analyzer algorithm. In the human study, 27 subjects evaluated 750 images over five sessions each. We also propose an opinion-aware fused image quality analyzer, whose predictions correlate better with human perceptual evaluations than those of competing state-of-the-art models. 
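The model-based scoring at the core of such anomaly detectors can be illustrated with a single Gaussian fitted to features from normal activity (a deliberately simplified stand-in for the Gaussian Mixture Models named in the anomaly-detection framework above); events far from the training distribution receive high scores:

```python
import numpy as np

def fit_normal_model(feats):
    """Fit mean and (regularized) inverse covariance of normal-activity features."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, cov_inv):
    """Squared Mahalanobis distance: large for events unlike the training data."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Toy features, e.g. foreground occupancy and optical-flow magnitude.
rng = np.random.default_rng(3)
normal_feats = rng.standard_normal((500, 2))
mu, cov_inv = fit_normal_model(normal_feats)
typical = anomaly_score(np.array([0.1, -0.2]), mu, cov_inv)
abnormal = anomaly_score(np.array([8.0, 8.0]), mu, cov_inv)
```

Thresholding such a score per support region, and pooling responses over the local spatio-temporal neighborhood, yields an online abnormal-event decision.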
An implementation of the proposed fused image quality measures can be found at https://github.com/ujemd/NSS-of-LWIR-and-Vissible-Images. Also, the new database can be found at http://bit.ly/2noZlbQ. Person re-identification across disjoint camera views has been widely applied in video surveillance, yet it remains a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations of lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse without differentiating various parts of the persons to identify. It is essential to examine multiple highly discriminative local regions of the person images in detail through multiple glimpses for dealing with the large appearance variance. In this paper, we propose a new soft attention-based model, i.e., the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model is able to learn which parts of images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images are from the same person. 
Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN for person re-identification significantly outperforms well-established baselines and offers new state-of-the-art performance. We propose using stationary Gaussian processes (GPs) to model the statistics of the signal on points in a point cloud, which can be considered samples of a GP at the positions of the points. Furthermore, we propose using Gaussian process transforms (GPTs), which are Karhunen-Loeve transforms of the GP, as the basis of transform coding of the signal. Focusing on colored 3D point clouds, we propose a transform coder that breaks the point cloud into blocks, transforms the blocks using GPTs, and entropy codes the quantized coefficients. The GPT for each block is derived from both the covariance function of the GP and the locations of the points in the block, which are separately encoded. The covariance function of the GP is parameterized, and its parameters are sent as side information. The quantized coefficients are sorted by the eigenvalues of the GPTs, binned, and encoded using an arithmetic coder with bin-dependent Laplacian models, whose parameters are also sent as side information. Results indicate that transform coding of 3D point cloud colors using the proposed GPT and entropy coding achieves superior compression performance on most of our data sets. Branch retinal artery occlusion (BRAO) is an ocular emergency, which could lead to blindness. Quantitative analysis of the BRAO region in the retina is necessary for the assessment of the severity of retinal ischemia. In this paper, a fully automatic framework was proposed to segment BRAO regions based on 3D spectral-domain optical coherence tomography (SD-OCT) images. To the best of our knowledge, this is the first automatic 3D BRAO segmentation framework. 
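The GPT construction in the point-cloud coding abstract above amounts to an eigendecomposition of the GP covariance evaluated at the point positions. A minimal sketch with an assumed squared-exponential covariance and a toy color signal:

```python
import numpy as np

def gpt_basis(points, lengthscale=1.0, variance=1.0):
    """Karhunen-Loeve basis of a stationary GP sampled at the given points,
    using a squared-exponential covariance (an assumed kernel choice)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=2)
    cov = variance * np.exp(-0.5 * d2 / lengthscale ** 2)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]   # leading coefficients carry most energy
    return eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(0)
pts = rng.random((16, 3))                          # a tiny "block" of points
signal = np.sin(pts @ np.array([3.0, 1.0, 2.0]))   # toy per-point color signal
eigvals, basis = gpt_basis(pts)
coeffs = basis.T @ signal   # coefficients to quantize and entropy-code
```

Because the basis depends on the point locations, the coder must transmit the positions and kernel parameters as side information, exactly as the abstract describes.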
First, the input 3D image is automatically classified as BRAO in the acute phase, BRAO in the chronic phase, or normal retina using an AdaBoost classifier that combines local structural, intensity, and textural features with our new feature-distribution analysis strategy. Then, BRAO regions of the acute and chronic phases are segmented separately. A thickness model is built to segment BRAO in the chronic phase, while for BRAO in the acute phase, a two-step strategy is performed: rough initialization followed by refined segmentation. The proposed method was tested on SD-OCT images of 35 patients (12 BRAO acute phase, 11 BRAO chronic phase, and 12 normal eyes) using the leave-one-out strategy. The classification accuracies for BRAO acute phase, BRAO chronic phase, and normal retina were 100%, 90.9%, and 91.7%, respectively. The overall true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 91.1% and 5.5% for the acute phase, and 92.7% and 8.4% for the chronic phase, respectively. Moire artifacts are generally caused by interference between the sensor's sampling grid and high-frequency (nearly) periodic textures, and they heavily degrade image quality. However, it is difficult to effectively remove moire artifacts from textured images, as the structure of moire patterns is, in some sense, similar to that of textures. In this paper, we propose a novel textured image demoireing method based on signal decomposition and guided filtering. Given a textured image with moire artifacts, we first remove moire artifacts in the green (G) channel using the proposed low-rank and sparse matrix decomposition model. This model regularizes the texture layer by a low-rank prior in the spatial domain and the moire layer by sparse representation in the frequency domain. An alternating direction method under the augmented Lagrangian multiplier framework is used to solve the matrix decomposition model. 
Then, since the red (R) and blue (B) channels are more heavily polluted by moire artifacts than the G channel, we propose to remove moire artifacts in the R and B channels via guided filtering, using the texture layer obtained from the G channel as the guide. Experimental results demonstrate that our method outperforms the state-of-the-art methods on both synthetic and real images. Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transforms limit the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of data sets for image stylization, as well as a diverse set of video clips, demonstrate the effectiveness of our deep learning architecture. 
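The guided-filtering step used above for the R and B channels follows the standard guided-filter formulation (a local linear model between guide and output). The sketch below is a minimal NumPy illustration of that generic filter, not the authors' implementation; the naive windowed box filter is written for clarity rather than speed.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter with window radius r (naive O(n*k^2) version for clarity)."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")  # replicate edges for full windows
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def guided_filter(guide, src, r=4, eps=1e-3):
    """Edge-preserving filtering of `src` steered by `guide` (He et al.)."""
    mean_I = box_filter(guide, r)
    mean_p = box_filter(src, r)
    corr_Ip = box_filter(guide * src, r)
    corr_II = box_filter(guide * guide, r)
    var_I = corr_II - mean_I * mean_I       # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p      # local covariance guide/src
    a = cov_Ip / (var_I + eps)              # local linear coefficient
    b = mean_p - a * mean_I
    # average the coefficients, then apply the local linear model
    return box_filter(a, r) * guide + box_filter(b, r)
```

In a demoireing pipeline of the kind described, the cleaned G-channel texture layer would be passed as `guide` and the moire-polluted R or B channel as `src`, so that texture edges are preserved while channel-specific moire is smoothed away.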
Gaussian process regression (GPR) is an effective statistical learning method for modeling non-linear mappings from an observed space to an expected latent space. When applying it to example learning-based super-resolution (SR), two outstanding issues remain. One is that its high computational complexity restricts SR applications when a large data set is available for the learning task. The other is that the commonly used Gaussian likelihood in GPR is incompatible with the true observation model for SR reconstruction. To alleviate these two issues, we propose a GPR-based SR method using dictionary-based sampling (DbS) and a Student-t likelihood. Considering that dictionary atoms effectively span the original training sample space, we adopt a DbS strategy that combines all the neighborhood samples of each atom into a compact representative training subset so as to reduce the computational complexity. Based on statistical tests, we validate that the Student-t likelihood is more suitable for building the observation model for the SR problem. Extensive experimental results show that the proposed method outperforms other competitors and produces more pleasing details in texture regions. Recently, specially crafted unidimensional optimization has been successfully used as a line search to accelerate the overrelaxed and monotone fast iterative shrinkage-threshold algorithm (OMFISTA) for computed tomography. In this paper, we extend the use of fast line search to the monotone fast iterative shrinkage-threshold algorithm (MFISTA) and some of its variants. Line search can accelerate the FISTA family under typical synthesis priors, such as the l1-norm of wavelet coefficients, as well as analysis priors, such as anisotropic total variation. This paper describes the new MFISTA and OMFISTA variants with line search, and shows through numerical results that line search improves their performance for tomographic high-resolution image reconstruction. 
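The FISTA family that these line-search variants accelerate starts from the basic fixed-step iteration for an l1 synthesis prior. The following is a minimal sketch of that baseline (Beck and Teboulle's FISTA), not of the monotone or line-search variants proposed in the paper; the function name and interface are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Basic FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The monotone (MFISTA) and line-search variants discussed in the abstract modify this loop, respectively, by rejecting steps that increase the objective and by replacing the fixed step 1/L with a per-iteration unidimensional search.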
Feature space transformation techniques have been widely studied for dimensionality reduction in vector-based feature spaces. However, these techniques are inapplicable to sequence data because the features in the same sequence are not independent. In this paper, we propose a method called max-min inter-sequence distance analysis (MMSDA) to transform features in sequences into a low-dimensional subspace such that different sequence classes are holistically separated. To utilize the temporal dependencies, MMSDA first aligns features in sequences from the same class to an adapted number of temporal states and then constructs the sequence class separability based on the statistics of these ordered states. To learn the transformation, MMSDA formulates the objective of maximizing the minimal pairwise separability in the latent subspace as a semi-definite programming problem and provides a new tractable and effective solution, with theoretical proofs, via constraint unfolding and pruning, convex relaxation, and within-class scatter compression. Extensive experiments on different tasks have demonstrated the effectiveness of MMSDA. In this paper, we develop a two-way multi-relay scheme for JPEG 2000 image transmission. We adopt a modified time-division broadcast cooperative protocol and derive its power allocation and relay selection under a fairness constraint. The symbol error probability of the optimal system configuration is then derived. After that, a joint source-channel coding (JSCC) problem is formulated to find the optimal number of JPEG 2000 quality layers for the image and the number of channel coding packets for each JPEG 2000 codeblock that minimize the reconstructed image distortion for the two users, subject to a rate constraint. Two fast algorithms, based on dynamic programming and branch and bound, are then developed. 
Simulation demonstrates that the proposed JSCC scheme achieves better performance and lower complexity than other similar transmission systems. Longgu ("dragon bone," Ryu-kotsu, Fossilia Ossis Mastodi, or Os Draconis) is the only fossil crude drug listed in the Japanese Pharmacopoeia. All longgu in the current Japanese market is imported from China, where its resources are being depleted. Therefore, effective countermeasures are urgently needed to prevent resource depletion. One possible solution is the development of a substitute made from the bones of contemporary animals that are closely related to the original animal source of the current longgu. However, no research has been conducted on the original animal source of longgu, except for a report on the longgu specimens present in the Shosoin Repository. Taxonomic examination was performed on the fossil specimens related to longgu that are owned by the Museum of Osaka University, Japan. In total, 20,939 fossil fragments were examined, of which 20,886 were mammalian fossils, and 246 of these fossils were classified into nine families. The longgu specimens from the Japanese market belonged to a relatively smaller variety of taxa than those from the Chinese market. Despite the variety of taxa in longgu, medical doctors using Kampo preparations with longgu have not reported any problems due to the presence of impurities in the original animal source. These results suggest that the effect of longgu is independent of its origin, as long as that origin is closely related to the origin of the current longgu. Thus, despite the considerable effects of fossilization, our results could help in developing an optimal substitute for longgu. Puerarin is one of the major active ingredients in Gegen, a traditional Chinese herb that has been reported to have a wide variety of beneficial pharmacological functions. 
Previous studies have indicated that the damaging effects of hyperglycemia resulting from oxidative stress and glucose fluctuation may be more dangerous than constant high glucose in the development of diabetes-related complications. The present study focuses on the effects of puerarin on oxidative stress and Schwann cell (SC) apoptosis induced by glucose fluctuation in vitro. Primary cultured SCs were exposed to different conditions, and the effect of puerarin on cell viability was determined by MTT assays. Intracellular reactive oxygen species (ROS) generation and mitochondrial transmembrane potential were detected by flow cytometry analysis. Apoptosis was confirmed by the Annexin V-FITC/PI and TUNEL methods. Quantitative real-time reverse transcriptase polymerase chain reaction was performed to analyze the expression levels of bax and bcl-2. Western blotting was performed to analyze the expression levels of several important transcription factors and proteins. The results showed that incubating SCs with intermittent high glucose for 48 h decreased cell viability and increased the number of apoptotic cells, whereas treatment with puerarin protected SCs against glucose fluctuation-induced cell damage. Further study demonstrated that puerarin suppressed the activation of apoptosis-related proteins including PARP and caspase-3, the downregulation of bcl-2, and the redistribution of bax from the cytosol to the mitochondria, all of which were induced by glucose fluctuation. Moreover, puerarin inhibited the elevation of intracellular ROS and the mitochondrial depolarization induced by glucose fluctuation. These results suggest that puerarin may protect SCs against glucose fluctuation-induced cell injury by inhibiting apoptosis as well as oxidative stress. Hericium erinaceus (H. erinaceus) improves the symptoms of menopause. In this study, using ovariectomized mice as a model of menopause, we investigated the anti-obesity effect of this mushroom in menopause. 
Mice fed diets containing H. erinaceus powder showed significant decreases in the amount of fat tissue and in plasma levels of total cholesterol and leptin. To determine the mechanism, groups of mice were fed, respectively, a diet containing H. erinaceus powder, a diet containing an ethanol extract of H. erinaceus, and a diet containing the residue of the extract. As a result, H. erinaceus powder was found to increase fecal lipid levels in excreted matter. Further in vitro investigation showed that the ethanol extract inhibited the activity of lipase, and four lipase-inhibitory compounds were isolated from the extract: hericenone C, hericenone D, hericenone F, and hericenone G. In short, we suggest that H. erinaceus has an anti-obesity effect during menopause because it decreases the ability to absorb lipids. Apium graveolens is a food flavoring which possesses various health-promoting effects. This study investigates the effect of sub-acute administration of A. graveolens on cognition and anti-depression behaviors via antioxidant and related neurotransmitter systems in mice brains. Cognition and depression were assessed by various behavioral models. The antioxidant system of glutathione peroxidase (GPx), % inhibition of superoxide anion (O2-), and lipid peroxidation were studied. In addition, neurochemical parameters including acetylcholinesterase (AChE) and monoamine oxidase type A (MAO-A) were also evaluated. Nine groups of male mice were fed for 30 days with different substances: a control, a vehicle, A. graveolens extract (65-500 mg/kg), and reference drugs (donepezil and fluoxetine). The results indicated that the effect of the intake of A. graveolens extract (125-500 mg/kg) was similar to that of the reference drugs, as it improved both spatial and non-spatial memories. Moreover, there was a decrease in immobility time in both the forced swimming and tail suspension tests. In addition, the A. 
graveolens extract reduced lipid peroxidation in the brain and increased GPx activity and the % inhibition of O2-, whereas the activities of AChE and MAO-A were decreased. Thus, our data have shown that the consumption of A. graveolens extract improved cognitive function and anti-depression activities, as well as modulating the endogenous antioxidant and neurotransmitter systems in the brain, resulting in increased neuronal density. This result indicates an important role for A. graveolens extract in preventing the age-associated decline in cognitive function associated with depression. Iriomoteolides-9a (1) and 11a (2), new 15- and 19-membered macrolides, respectively, have been isolated from the marine dinoflagellate Amphidinium species (strain KCA09052). Compounds 1 and 2 were obtained from the extracts of the algal cells inoculated in the PES and TKF seawater media, respectively. The structures of 1 and 2 were assigned on the basis of detailed NMR analyses. Compounds 1 and 2 exhibited cytotoxic activity against human cervix adenocarcinoma HeLa cells. Three new flavonoid glycosides, soyaflavonosides A (1), B (2), and C (3), together with 23 known ones, were obtained from the 70% EtOH extract of Flos Sophorae (Sophora japonica, Leguminosae). Their structures were elucidated by chemical and spectroscopic methods. Among the known isolates, 14, 18, 20, 22, and 26 were isolated from the Sophora genus for the first time; 12, 19, 24, and 25 were obtained from this species for the first time. Moreover, NMR data for compounds 18 and 26 are reported here for the first time. Meanwhile, compounds 4, 8-13, 15, 16, 19, 21, and 22 showed clear inhibitory effects on triglyceride (TG) accumulation in HepG2 cells. 
Analysis of the structure-activity relationship indicated that all of the quercetin glycosides examined in this study possess significant activity that is not appreciably influenced by the amount of glycosyl present, whereas increasing the amount of glycosyl reduced the activities of isorhamnetin glycosides and orobol. In addition, a high dose (30 µmol/l) of kaempferol was found to inhibit HepG2 cell growth, while a low dose (10 µmol/l) was observed to decrease TG accumulation. Oxyresveratrol is a major active compound in the heartwood of Artocarpus lacucha. It exhibits anti-tyrosinase, antioxidant, anti-inflammatory, antiviral, and neuroprotective properties. There are many A. lacucha commercial products available on the market for skin-whitening and anti-aging effects. To evaluate the quality of raw material from the plant, a monoclonal antibody (MAb) against oxyresveratrol was generated in this study. The immunogen was prepared by the Mannich reaction for the conjugation of oxyresveratrol and cationized bovine serum albumin (cBSA). The oxyresveratrol-cBSA conjugate at a ratio of 1:50 was used for the immunization. The novel MAb (E4) was specific to oxyresveratrol and resveratrol. An indirect competitive enzyme-linked immunosorbent assay (ELISA) using the MAb (E4) was developed for the determination of oxyresveratrol. The linear range for the measurement of oxyresveratrol was 63-500 ng/mL, and the precision (% relative standard deviation) was found to be <10%, with recoveries of 95.93-103.55%. According to the validation analysis, the established ELISA can be applied for the determination of oxyresveratrol in the heartwood of A. lacucha and in samples of the traditional drug Puag-Haad. With reliability and high sensitivity, this assay can provide an alternative approach for the quantitative analysis of oxyresveratrol in A. lacucha samples. 
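The precision (% relative standard deviation) and % recovery figures reported for the ELISA above are standard assay-validation statistics. As an illustrative aside (these helper functions are not part of the study), they are computed as follows:

```python
import numpy as np

def percent_rsd(replicates):
    """Precision as % relative standard deviation of replicate measurements
    (sample standard deviation, ddof=1, divided by the mean)."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()

def percent_recovery(measured, spiked):
    """Accuracy as % recovery of a known spiked concentration."""
    return 100.0 * measured / spiked
```

For example, replicate readings of 100, 102, and 98 ng/mL give an RSD of 2.0%, comfortably within the <10% precision criterion reported; a sample spiked at 100 ng/mL and measured at 96 ng/mL gives 96% recovery, inside the 95.93-103.55% range.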
In the course of our studies on anti-mycobacterial substances from marine organisms, the known dimeric sphingolipid leucettamol A (1) was isolated as an active component, together with the new bromopyrrole alkaloid 5-bromophakelline (2) and twelve known congeners, from the Indonesian marine sponge Agelas sp. The structure of 2 was elucidated based on its spectroscopic data. Compound 1 and its bis-TFA salt showed inhibition zones of 12 and 7 mm against Mycobacterium smegmatis at 50 µg/disk, respectively, while the N,N'-diacetyl derivative (1a) was not active at 50 µg/disk. Therefore, free amino groups are important for anti-mycobacterial activity. This is the first study to show the anti-mycobacterial activity of a bis-functionalized sphingolipid. Compound 13 exhibited weak PTP1B inhibitory activity (29% inhibition at 35 µM). Genistein, a major source of phytoestrogen exposure for humans and animals, has been shown to mediate neuroprotection in Alzheimer's disease and status epilepticus. In the present study, we investigated the effect of genistein on pentylenetetrazole-induced seizures in ovariectomized mice and the possible involvement of estrogenic and serotonergic pathways in the probable effects of genistein. Intraperitoneal (i.p.) administration of genistein (10 mg/kg) 30 min prior to induction of seizures significantly increased the seizure threshold 14 days after ovariectomy surgery. Administration of fulvestrant (1 mg/kg, i.p.), an estrogen receptor antagonist, completely reversed the anticonvulsant effect of genistein (10 mg/kg) in ovariectomized mice. Administration of the serotonin (5-HT3) receptor antagonist tropisetron (10 mg/kg, i.p.) eliminated the anticonvulsant effect of genistein, whereas co-administration of m-chlorophenylbiguanide (a 5-HT3 receptor agonist; 1 mg/kg) and a non-effective dose of genistein (5 mg/kg) increased the seizure threshold. 
To conclude, it seems that estrogenic/serotonergic systems might be involved in the anticonvulsant properties of genistein. Four new galloyl-oxygen-diphenyl (GOD)-type ellagitannins, brambliins A-D (1-4), were isolated from the leaves of Rubus suavissimus. Their structures were elucidated by extensive spectroscopic analyses, and the absolute configurations of 1-4 were determined by chemical and phytochemical evidence. These GOD-type ellagitannins inhibited the formation of dental plaque, which is beneficial for oral hygiene. Two new secolignans, urticin A (1) and urticin B (2), were isolated from the ethanol extract of Urtica fissa rhizomes. Their structures were elucidated on the basis of extensive spectroscopic evidence (UV, IR, HR-ESI-MS, and NMR). Urticin A and urticin B possessed in vitro anti-inflammatory activities, significantly inhibiting the TNF-alpha and NO release induced by LPS in RAW 264.7 cells. A new pyrrolidine derivative, (5S)-hydroxyethyl 2-oxopyrrolidine-5-carboxylate (1), a new flavonol glycoside, tamarixetin 3,7-di-O-alpha-L-rhamnopyranoside (2), and a new triterpene saponin, polyscioside A methyl ester (3), along with six known compounds (4-9), were isolated from the leaves of Polyscias balfouriana. Their chemical structures were elucidated on the basis of extensive spectroscopic analysis. A new ana-quinonoid tetracene metabolite, named sharkquinone (1), and the known SS-228R (2) have been isolated from the ethyl acetate extract of the culture of marine Streptomyces sp. EGY1. The strain was isolated from a sediment sample collected from the Red Sea coast of Egypt. The structure of sharkquinone (1) was elucidated using detailed spectral (HRESI-MS, 1D and 2D NMR) analyses and quantum chemical calculations. This is the first report of the isolation of an ana-quinonoid tetracene derivative from a natural source. 
Compound 1 showed the ability to overcome tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) resistance at a concentration of 10 µM in human gastric adenocarcinoma (AGS) cells. Phytochemical investigation of the stems of Brucea javanica led to the isolation of two new quassinoids, brujavanol C (1) and brujavanol D (2), together with six known compounds (3-8). The chemical structures were elucidated by means of various spectroscopic methods. All the isolated compounds were evaluated for antimalarial activity against Plasmodium falciparum, and compounds 6 and 7 exhibited the most potent activity against the K1 strain, with IC50 values of 1.41 and 1.06 µM, respectively. Two new polyhydroxy polyacetylenes, herpecaudenes A and B (1 and 2), were isolated from the ethanol extract of the fruits of Herpetospermum caudigerum, an important Tibetan medicine. Their structures were elucidated on the basis of extensive spectroscopic methods including UV, IR, HRESIMS, 1H and 13C NMR, HMBC, HSQC, and 1H-1H COSY. Compound 2 showed significant inhibitory effects on NO production in LPS-activated RAW 264.7 macrophages, with an IC50 value of 7.05 ± 1.59 µM. This paper proposes a method for matching qualifications' graduate profile outcomes to job roles and work responsibilities as apprenticeship progresses. In so doing, attainment of qualifications is made possible through the workplace validation of graduate profiles. Occupational identity conferred by other workers or managers, before apprentices infer it themselves, is argued to lend reliability to the validation process. The data supporting the premises introduced and discussed in this article are derived from a longitudinal study of bakers' apprenticeship. In the study, the metaphoric phases of "belonging to a workplace, becoming and being" are used to explain apprentices' progressive occupational roles as allocated through work tasks. 
Occupational responsibilities alter as apprentices proceed through proximal participation into apprenticeship. Apprentices begin as junior apprentices, progressing to nascent bakers, senior apprentices, bakers as conferred by workmates and supervisors, and eventually charge-hands or shift leaders, overseeing other bakery workers. Therefore, apprentices' occupational identity becomes associated with designated job roles, accompanied by each role's attendant tasks and work responsibilities. When apprentices are able to complete assigned duties, other workers confer the occupational descriptors on apprentices before apprentices eventually infer their status within the organisational hierarchy. National systems of vocational education and training around the globe are facing reform driven by quality, international mobility, and equity. Evidence suggests that there are qualitatively distinctive challenges in providing and sustaining workplace learning experiences for international students. However, despite growing conceptual and empirical work, there is little evidence of the experiences of these students undertaking workplace learning opportunities as part of vocational education courses. This paper draws on a four-year study funded by the Australian Research Council that involved 105 in-depth interviews with international students undertaking work-integrated learning placements as part of vocational education courses in Australia. The results indicate that international students can experience different forms of discrimination and deskilling, and that these were legitimised by students in relation to their understanding of themselves as being an 'international student' (with fewer rights). However, the results also demonstrated the ways in which international students exercised their agency towards navigating or even disrupting these circumstances, which often included developing their social and cultural capital. 
This study, therefore, calls for more proactively inclusive induction and support practices that promote reciprocal understandings and navigational capacities for all involved in the provision of work-integrated learning. This, it is argued, would not only expand and enrich the learning opportunities for international students, their tutors, employers, and employees involved in the provision of workplace learning opportunities, but it could also be a catalyst to promote greater mutual appreciation of diversity in the workplace. Taking as its starting point the distinction between the Institution of Apprenticeship, that is, the social partnership arrangements which underpin its organisation, and Apprenticeship as a Social Model of Learning, in other words, the configuration of pedagogic, occupational, and other dimensions which constitute the model, the paper: (i) argues that the emergence of de-centred, distributed, and discontinuous conditions associated with project work presents challenges for extant ideas about apprenticeship as a social model of learning; (ii) explores this claim in relation to Fuller and Unwin's four inter-connected dimensions of apprenticeship as a social model of learning by considering a case study of apprenticeship designed to prepare apprentices to work in the above conditions; (iii) relates issues arising from the case study to research on project work from the fields of Organisational and Cultural Studies; and (iv) based on this evidence, introduces a typology of 'Apprenticeship for Liquid Life'. We investigated how domain-specific knowledge, fluid intelligence, vocational interest, and work-related self-efficacy predicted domain-specific problem-solving performance in the field of office work. 
The participants were 100 German VET (vocational education and training) students nearing the end of either a 3-year apprenticeship program as industrial clerks (n = 63), which usually leads to a position in office work or in lower or middle management, or a similar apprenticeship program to become IT-systems management assistants (n = 37). The participants worked on three computer-based problem scenarios dealing with operative controlling, a domain relevant to both training occupations, and completed further assessments to measure the variables listed above. Theoretical considerations, prior research, and domain analyses suggested that industrial clerks would have greater domain-specific problem-solving competence (H1a) and domain-specific knowledge (H1b) than IT-systems management assistants and that domain-specific knowledge would be the strongest predictor of problem-solving competence (H2: "knowledge-is-power" hypothesis); all hypotheses were confirmed. Hypothesis 3, the "Elshout-Raaheim hypothesis," predicts that fluid intelligence and problem-solving competence are most strongly correlated at intermediate levels of task-related content knowledge; however, the highest correlation was found in the group with low domain-specific knowledge. The findings suggest that intelligence plays a minor role in later stages of competence development, whereas typical problem situations in later stages particularly require prior knowledge. The relationship of intelligence, knowledge, and problem solving, as well as limitations of the study, particularly weaknesses in the measurement of non-cognitive dispositions, are discussed. The aim is to explore challenges related to the integration between product development and production in product introduction and, given these challenges, to analyse the learning potential of boundary crossing in the context of product introduction. The paper draws on evidence from a Swedish manufacturing company. 
The theoretical framework is based on a boundary-crossing perspective, which in turn is framed by a workplace learning perspective. Data were collected through interviews with 19 employees from the product development department and 21 employees from the production plant, as well as 8 focus-group interviews. Within the company, there were many challenges related to product introduction, but the findings also show that these challenges can provide learning opportunities by enabling the boundaries between the product development department and the production plant to be crossed. Several forms of intrapersonal or interpersonal boundary crossing were identified. Individuals acted as brokers, and prototypes, pre-series, DfA analysis, and a cross-functional team served as boundary objects and encounters. Nothing in our study indicates that the boundary crossing identified on the intrapersonal and interpersonal levels created learning potential on the organisational level in the company. The conclusion is that it is necessary to consider the learning potential made available by boundary crossing in order to support learning, and thereby improve the integration between product development and production in product introduction. Seeing and using prototypes and pre-series production as learning opportunities can create better preparedness and provide collective access to the knowledge required for successful product introduction. This essay looks at the ways in which Aristotle signals his confidence in observation claims in his biological works. Widely seen as an astute observer of the natural world, Aristotle in fact makes surprisingly few explicit claims to personal observation, even if circumstantial and other evidence often provides strong hints of his own involvement. At the same time, because of the incredible variety (and often the localization) of biological species, Aristotle also necessarily relies heavily on the testimony of others. 
This essay shows how Aristotle employs careful rhetorical strategies to signal or qualify his certainty both in his own observations and in the reports of others, on a case-by-case basis, for his reader. Over the past twenty-five years, the history of science has expanded into the history of knowledge. Plurality has been the main message. Commonality, by contrast, is the main finding of the present study. It examines the knowledge practices of the full range of participants in cases of public inquiry (trials, tests, inspections) involving human bodies in contexts of criminal law, police, public health, marriage and family, claims to community aid, and regulation of trades. The cases come from the archives of three agencies of inquiry and evaluation (a government, a university faculty, and a guild) in a variety of polities in the Holy Roman Empire between about 1500 and 1650. Participants of widely differing education, occupation, and experience (learned, artisanal, and domestic, as well as specialized versions of these) are found to have shared practices of observation, description, explanation, and argument. This finding opens the prospect of a history of shared empirical rationality, in contrast to the hegemony of difference, dialogue or transfer, and expertise, in how we understand knowing in modern as well as premodern Europe. This essay presents a new explanation for the emergence after 1877 of public and expert fascination with a single observed feature of the planet Mars: its network of canals. Both the nature of these canals and their widespread notoriety emerged, it is argued, from a novel partnership between two practices then in their ascendancy: astrophysics and the global telegraphic distribution of news. New technologies of global media are shown to have become fundamentally embedded within the working practices of remote astrophysical sites, entangling professional spaces of observation with popular forms of journalism. 
These collaborations gave rise to a new type of event astronomy, as exemplified by the close working relationship forged between the enterprising Harvard astronomer William Henry Pickering and the New York Herald. Pickering's telegrams to the Herald, sent from his remote mountain outstation in Arequipa, Peru, are shown to be at the heart of the great Mars boom of August 1892, with significant consequences for emerging and contested accounts of the red planet. By tracing the particular transmission effects typical of this new kind of astronomical work, the essay shows how the material, temporal, and linguistic constraints imposed by telegraphic news distribution shaped and bounded what could be said about, and therefore what could be known about, Mars. This essay examines the Mission paléontologique française of the 1920s, a series of scientific expeditions into the Ordos Desert in Inner Mongolia in which a team of Jesuit scholar-scientists worked with local collaborators to provide material for the Muséum d'Histoire Naturelle in Paris. The case study shows that the global and colonial expansion of Western science in the early twentieth century provided space for traditional scientific institutions, such as universalizing metropolitan collections and clerical scholarly networks, to extend their research projects. The linking of approaches, agendas, and geographic regions was facilitated by the concepts and practices of the deep-time sciences of geology, paleontology, and human prehistory. These were based on the interchange of expertise, common projects of unveiling the development of life, and the alignment of different regions and specimens. Moreover, the expeditions did not just conduct research based around global movement and transmission. They also conceptualized the ancient development of life in terms of movement, migration, and exchange. 
The act of forming research networks that linked Asia and Europe also led scientists to conceive of these regions as bound by deep natural processes. Circulation and transfer became important actors' categories used to understand the origins and history of life. The effects of the Spanish Civil War (1936-1939) on entomology are evaluated quantitatively using publication-related data. The authors tested the hypothesis that all research results are equally affected by a period of severe disruption. This hypothesis is rejected, and they quantified the degree to which different research outputs were affected. The recovery of scientific production was fast; there was no major destruction of infrastructure. Exiles were not an important factor, and half of the entomologists were active both immediately before and just after the war. Important differences are found in the postwar period in relation to the international situation influencing Spain and the new organization of the state. A decrease is detected in publication in foreign journals, and there was less use of foreign languages. There was a growing importance of publications and scientists associated with the public sector. Conversely, there was a clear decline in research outside the public sector, and local learned societies recovered much more slowly than governmental institutions, which explains, for instance, the near-disappearance of publications in Catalan until the late 1950s. The study indicates that an abrupt social alteration will have a relatively minor impact on scientific production as long as there is a base of continuity of human and material resources and continuous government financial support. Scientometric approaches to the history of science, like other works of digital humanities, provide the most productive insights when used in concert with traditional, well-tested methods of the discipline. 
This is illustrated by what might be missed through applying a predominantly scientometric approach to the influence of the Spanish Civil War on the history of Spanish entomology. Existing critiques of both prosopography and quantitative studies of scientific productivity are also drawn on for the insights they might offer for assessing both the methods and the utility of such approaches. This essay is an introduction to an Isis Focus section on the social and scientific relevance of history of science museums: Why Science Museums Matter: History of Science in Museums in the Twenty-First Century. Using the history of Museum Boerhaave, the Dutch National Museum for the History of Science and Medicine, as a guideline, the essay shows that over the course of time addressing a variety of audiences became a major worry for science museums, which also had to bridge a widening gap between popular views on history of science and those developed by professional historians of science. Science museums come in all shapes and sizes. In order to provide an overview of the present-day landscape of science museums, this survey proposes a typology based on their historical origins. It suggests that, despite their distinct beginnings, museums of various types have converged in their institutional identities. This essay explores how concerns relevant to academic historians of science do and do not translate to the museum setting. It takes as a case study a 2014 exhibition on the story of longitude, with which the author was involved. This theme presented opportunities and challenges for sharing nuanced accounts of science, technology, and innovation. Audience expectation, available objects, the requirements of display, and economic constraints were all factors that could impede effective communication of the preferred version of the story, developed in part through an associated research project. 
Careful choices regarding objects and design, together with the use of theatrical and multimedia spaces and digital displays, helped to shift visitor interest away from the well-known version of the story and toward a longer and more peopled account. However, the persistence of heroic and genius narratives meant that this could not always be achieved and that effective engagement must include direct conversation. Stories and artifacts from the history of science are difficult to find in many popular science museums in the United States. This essay makes a case for why such museums should include the history of science in their halls and how they might go about doing so. Using several Boston-area museums as case studies, it explains why the history of science can be so hard to find in contemporary science museums and then offers several examples of instances in which museums have successfully integrated science and history in their halls. Ultimately, the essay suggests, historians of science and popular science museums should cultivate new partnerships; it concludes with a brief sketch of how they might do so. The term museum means different things in different periods, to be sure; nonetheless, over the past 250 years in Portugal the state has failed to create a national, sustainable museum of science. This essay analyzes, in broad terms, the context, relevance, and consequences of this absence for Portuguese scientific museums, collections, and heritage. It also discusses the implications for the history of science and, especially, for the public interpretation of the past. The term museum of science is used in a broad sense, encompassing museums with collections of scientific instruments, medicine, pharmacy, and natural history, including botanic gardens. Museums of technology and industry, however, are outside the scope of this discussion. This essay discusses educational perspectives in science museums. 
It places a particular focus on the potential afforded by recent changes in the understanding of science education. Issues raised by the Nature of Science approach have gained substantial relevance in the educational discussion in recent decades. These changes are sketched and their potential for educational approaches in science museums is outlined. The Whole Science approach and the storytelling approach are discussed in greater detail, especially the way practical experiences are combined with theoretical considerations. Past work on integration methods that preserve a conformal symplectic structure focuses on Hamiltonian systems with weak linear damping. In this work, systems of PDEs that have conformal symplectic structure in time and space are considered, meaning conformal symplecticity is fully generalized for PDEs. Using multiple examples, it is shown that PDEs with this particular structure have interesting applications. What it means to preserve a multi-conformal-symplectic conservation law numerically is explained, and two numerical methods that preserve such properties are presented. Then, the advantages of the methods are briefly explored through applications to linear equations, consideration of momentum and energy dissipation, and backward error analysis. Numerical simulations for two PDEs illustrate the properties of the methods, as well as their advantages over other standard methods. (C) 2017 Elsevier B.V. All rights reserved. We consider two-scale elliptic equations whose coefficients are random. In particular, we study two cases: in the first case, the coefficients are obtained from an ergodic dynamical system acting on a probability space, and in the second case, the coefficients are periodic in the microscale but are random. We suppose that the coefficients also depend on the macroscopic slow variables. 
While the effective coefficient of the ergodic homogenization problem is deterministic, to approximate it, it is necessary to solve cell equations in a large but finite size "truncated" cube and compute an approximated effective coefficient from the solution of this equation. This approximated effective coefficient is, however, realization dependent, and the deterministic effective coefficient of the homogenization problem can be approximated by taking its expectation. In the periodic random setting, the effective coefficient for each realization is obtained from the solutions of cell equations which are posed in the unit cube, but to compute its average by the Monte Carlo method, we need to consider many uncorrelated realizations to accurately approximate the average. Straightforward employment of finite element approximation and the Monte Carlo method to compute this expectation with the same level of finite element resolution and the same number of Monte Carlo samples at every macroscopic point is prohibitively expensive. We develop a hierarchical finite element Monte Carlo algorithm to approximate the effective coefficients at a dense hierarchical network of macroscopic points. The method requires an optimal level of complexity that is essentially equal to that for computing the effective coefficient at one macroscopic point, and achieves essentially the same accuracy. The levels of accuracy for solving cell problems and for the Monte Carlo sampling are chosen according to the level in the hierarchy that the macroscopic points belong to. Solutions and effective coefficients computed at points where the cell problems are solved with higher accuracy and with more Monte Carlo samples are employed as correctors for the effective coefficients at points where the cell problems are solved with lower accuracy and fewer Monte Carlo samples. 
The method combines the hierarchical finite element method for solving cell problems at a dense network of macroscopic points with the optimal complexity developed in Brown et al. (2013), with a hierarchical Monte Carlo sampling algorithm that uses different numbers of samples at different macroscopic points depending on the level in the hierarchy that the macroscopic points belong to. Proof-of-concept numerical examples confirm the theoretical results. (C) 2017 Elsevier B.V. All rights reserved. The main purpose of this paper is to present a new corrected decoupled scheme combined with a spatial finite volume method for chemotaxis models. First, we derive the scheme for a parabolic-elliptic chemotaxis model arising in embryology. We then establish the existence and uniqueness of the numerical solution, and we prove that it converges to a corresponding weak solution for the studied model. In the last section, several numerical tests are presented by applying our approach to a number of chemotaxis systems. The obtained numerical results demonstrate the efficiency of the proposed scheme and its effectiveness in capturing different forms of spatial patterns. (C) 2017 Elsevier B.V. All rights reserved. In this paper, we discuss finite element methods for the incompressible Stokes problem and the nearly incompressible linear elasticity problem. Specifically, we present a finite element pair for the incompressible Stokes problem, which satisfies the discrete inf-sup condition and the discrete Korn's inequality, and moreover, which is element-wise conservative. The pair provides a locking-free method for the nearly incompressible linear elasticity problem without reduced integration. (C) 2017 Elsevier B.V. All rights reserved. In this paper we analyze a conforming finite element method for the numerical simulation of non-isothermal incompressible fluid flows subject to a heat source modeled by a generalized Boussinesq problem with temperature-dependent parameters. 
We consider the standard velocity-pressure formulation for the fluid flow equations, which is coupled with a primal-mixed scheme for the convection-diffusion equation modeling the temperature. In this way, the unknowns of the resulting formulation are given by the velocity, the pressure, the temperature, and the normal derivative of the latter on the boundary. Hence, assuming standard hypotheses on the discrete spaces, we prove existence and stability of solutions of the associated Galerkin scheme, and derive the corresponding Cea's estimate for small and smooth solutions. In particular, any pair of stable Stokes elements, such as Hood-Taylor elements, for the fluid flow variables, continuous piecewise polynomials of degree <= k + 1 for the temperature, and piecewise polynomials of degree <= k for the boundary unknown are feasible choices of finite element subspaces. Finally, we derive optimal a priori error estimates, and provide several numerical results illustrating the performance of the conforming method and confirming the theoretical rates of convergence. (C) 2017 Elsevier B.V. All rights reserved. Fractures have a significant effect on macro-scale flow and thus should be described accurately. Accurate modeling of flow in fractured media is usually done with the discrete fracture model (DFM), as it provides a detailed representation of flow characteristics. However, considering computational efficiency and accuracy, traditional numerical methods are not suitable for DFM. In this study, a multiscale mixed finite element method (MsMFEM) is proposed for detailed modeling of two-phase oil-water flow in fractured reservoirs. In MsMFEM, the velocity and pressure are first obtained on the coarse grid. The interaction between the fractures and the matrix is captured through multiscale basis functions calculated numerically by solving DFM on the local fine grid. 
Through multiscale basis functions, this method can not only reach a high efficiency, as upscaling technology does, but also generate a more accurate and conservative velocity field on the full fine-scale grid. In our approach, an oversampling technique is applied to capture small-scale details more accurately. A triangular fine-scale grid is used, making it possible to consider fractures in arbitrary directions. The validity of MsMFEM is demonstrated by comparing experimental and numerical results. Comparisons of the multiscale solutions with the full fine-scale solutions indicate that the latter can be replaced by the former. The results demonstrate that the multiscale technology is a promising method for multiscale flows in high-resolution fractured media. (C) 2017 Elsevier B.V. All rights reserved. The generalized k-out-of-n:F system (G(k,n:F)) consists of N modules ordered in a line or circle. The ith module is composed of n(i) components in parallel (n(i) >= 1, i = 1, 2, ..., N). The G(k,n:F) fails if and only if there are at least f failed components in total or at least k consecutive failed modules. To evaluate the reliability of G(k,n:F), we introduce the concept of a generalized sequence of multivariate Bernoulli trials (GMVBT) and define the bivariate run statistic based on this sequence. We establish the relation between the probability distribution of the bivariate run statistic and the reliability of G(k,n:F). We demonstrate the evaluation of the reliability of G(k,n:F) and some other related systems through a numerical example. (C) 2017 Elsevier B.V. All rights reserved. In this paper we obtain a general statement concerning pathwise convergence of the full discretization of certain stochastic partial differential equations (SPDEs) with non-globally Lipschitz continuous drift coefficients. We focus on non-diagonal colored noise instead of the usual space-time white noise. 
By applying a spectral Galerkin method for spatial discretization and a numerical scheme in time introduced by Jentzen, Kloeden and Winkel, we obtain the rate of pathwise convergence in the uniform topology. The main assumptions are either uniform bounds on the spectral Galerkin approximation or uniform bounds on the numerical data. Numerical examples illustrate the theoretically predicted convergence rate. (C) 2017 Elsevier B.V. All rights reserved. Numerical methods for solving nonlinear systems of weakly singular Volterra integral equations (VIEs) possessing weakly singular solutions appear almost nonexistent in the literature, except for a few treatments of single first-kind Abel equations. To reduce this gap, an extension is presented of the adaptive Huber method, designed for VIEs with singular kernels such as K(t, tau) = (t - tau)^(-1/2) and K(t, tau) = exp[-lambda(t - tau)] (t - tau)^(-1/2) (where lambda >= 0) and a variety of nonsingular kernels. The method was thus far restricted to bounded solutions having at least two derivatives. Under a number of assumptions specified, the extension applies to solutions U(t) that can be written as sums of singular components c(mu) t^(-1/2) (with unknown coefficients c(mu)) and nonsingular components Ū(mu)(t). In the solution process, the factor t^(-1/2) is handled analytically, whereas c(mu) and Ū(mu)(t) are determined numerically. Computational experiments reveal that the extended method determines singular solutions as accurately as the unextended method determined nonsingular solutions. The method is intended primarily for a class of VIEs encountered in electroanalytical chemistry, but it can also be of interest to other application areas. (C) 2017 Elsevier B.V. All rights reserved. The radiative transfer equation (RTE) arises in a wide variety of applications. In certain situations, the energy dependence is not negligible. In a series of two papers, we study the energy dependent RTE. 
In this first paper of the series, we focus on the well-posedness analysis and energy discretization. We use a mixed formulation so that the analysis covers both cases of non-vanishing absorption and vanishing absorption. We introduce a natural energy discretization scheme and derive an optimal order error estimate for the scheme. Angular discretization, spatial discretization and fully discrete schemes, as well as numerical simulation results, are the topics of the sequel. (C) 2017 Elsevier B.V. All rights reserved. This paper concerns the use of a point-value multiresolution algorithm and its extension to three-dimensional hyperbolic conservation laws. The proposed method is applied to a high-order finite-difference discretization with an explicit time integration. The fluxes are evaluated on the adaptive grid using a fifth-order high-resolution shock-capturing scheme based on a WENO solver, while time is advanced using a third-order Runge-Kutta scheme. The multiresolution prediction operators are presented for one-, two- and three-dimensional problems. To assess the efficiency and the accuracy of the method, a new tolerance-scale diagram is introduced. This diagram makes it possible to choose an appropriate tolerance value in order to maintain optimal multiresolution quality. Numerical examples based on advection and Euler equations are carried out to show that the proposed method yields accurate results. (C) 2017 Elsevier B.V. All rights reserved. The implementation of Mobile Pedestrian Navigation and Augmented Reality in mobile learning contexts enables new forms of interaction when students are taught by means of learning activities in formal settings. This research presents the educational, quantitative, and qualitative evaluation of an Augmented Reality and Mobile Pedestrian Navigation app. 
The software was designed for mobile learning in an educational context, to evaluate its effectiveness when applied as a teaching tool, in comparison to similar tools such as those present in e-learning. A mixed-method analysis was used, with primary school students from Chile as subjects (n = 143). They were split into one control group and one experimental group. The control group worked in an e-learning environment, while the experimental group performed the activity as field work, making use of the app (m-learning). Students were evaluated at pretest and posttest using an objective test to measure their level of learning. In parallel, a satisfaction survey was carried out concerning the use of these technologies, in addition to interviews with several students and teachers of the experimental group. Pretest-posttest results indicate that the experimental group outperformed the control group in their learning levels. The results of the interviews and the satisfaction survey show that these technologies, combined with fieldwork, increase the effectiveness of the teaching-learning processes. Further, they promote the interaction of students with contents for learning, and they improve students' performance in the educational process. The main goal is to provide a methodology for the analysis of an ad-hoc designed app. The app is intended to provide an m-learning process for subjects being taught about cultural heritage. The quantitative and qualitative results obtained show that it can be more effective than using similar technologies in e-learning contexts. (C) 2017 Elsevier Ltd. All rights reserved. Many studies have investigated the level of teacher ICT integration proficiency, but few studies have examined changes in proficiency over time. This study describes the development of a scale for the purpose of evaluating changes over time in teachers' technology integration proficiency among Taiwanese grades 1-9 teachers. 
A follow-up and shortened scale was developed and administered to the same population after three years. A cross-validation method which involved split-half exploratory and confirmatory factor analyses revealed that the six-factor scale structure was acceptable and stable (n = 5,938). An invariance test with structural equation modeling also revealed that the items in each scale were comparable. Thus, the follow-up scale was validated and shown to be invariant with respect to the first scale. The mean comparison of the 23 identical items included in the invariance tests found little or no difference in most of the items. Only one item, in the area of ethics and issues regarding Internet usage and health concerns, increased moderately (d = 0.68). The results suggest that teachers' pedagogical usage of technology may not have increased as sharply as the technology itself, though teachers' concerns about students' health and safety issues regarding Internet usage have been heightened. (C) 2017 Elsevier Ltd. All rights reserved. Using robotics technologies in education is increasingly common and has the potential to impact students' learning. Educational robotics is a valuable tool for developing students' cognitive and social skills, and it has greatly attracted the interest of teachers and researchers alike, from pre-school to university. The purpose of this study is to understand the behavioral patterns of elementary students and teachers in a one-to-one robotics instruction process. The participants were 18 elementary school students and 18 preservice teachers. Quantitative content analysis and lag sequential analysis were used to analyze the student-teacher interactions. According to the findings, the students' assembling bricks, sharing ideas and experiences, and the teachers' providing guidance and asking questions were the most frequent behaviors. 
Regarding behavioral sequences, the teachers' guidance significantly followed the students' expressing and sharing of their ideas, which in turn followed the teachers' questions. The students also significantly tended to play with robots that they themselves designed. Moreover, the teacher-student interactions were discussed in detail in terms of gender differences and the difficulty level of the robotics activities. The results of this study can be taken into consideration in the design of learning environments with robotics activities. (C) 2017 Elsevier Ltd. All rights reserved. The purpose of this study is to explore some of the ways in which gameplay data can be analyzed to yield results that feed back into the learning ecosystem. There is a solid research base showing the positive impact that games can have on learning, and useful methods in educational data mining. However, there is still much to be explored in terms of what the results of gameplay data analysis can tell stakeholders and how those results can be used to improve learning. As one step toward addressing this, researchers in this study collected back-end data from high school students as they played an MMOG called The Radix Endeavor. Data from a specific genetics quest in the game were analyzed by using data mining techniques including the classification tree method. These techniques were used to examine the relationship between tool use and quest completion, how use of certain tools may influence content-related game choices, and the multiple pathways available to players in the game. The study identified that, in this quest, use of the trait examiner tool was most likely to lead to success, though a greater number of trait decoder tool uses could also lead to success, perhaps because in those cases players were solving problems about genetic traits at an earlier point. These results also demonstrate the multiple strategies available to Radix players that provide different pathways to quest completion. 
Given these methods of analysis and quest-specific results, the study applies the findings to suggest ways to validate and refine the game design, and to provide useful feedback to students and teachers. The study suggests ways that analysis of gameplay data can be part of a feedback loop to improve a digital learning experience. (C) 2017 Elsevier Ltd. All rights reserved. In computer-assisted instruction, primary causes of the main problems are students' feelings of loneliness and the lack of a social learning environment. Pedagogical agents, which could enable students to use software, have been developed and integrated into instructional software to remove the obstacles mentioned. The main purpose of the study is to compare the effects of a fixed pedagogical agent and multiple pedagogical agents left to students' choice on the dependent variables. Additionally, it was also examined whether computer-assisted instruction software without a pedagogical agent has any effect on the dependent variables. The study seeks to investigate both learners' agent preferences and the effects of pedagogical agents on learners' academic success, motivation and cognitive load. For the purpose of the study, four groups were formed. The first group used educational software via a fixed agent, the second group used educational software with the option to choose among several agents, whereas the third group used educational software without an agent and the fourth group received the same education in the traditional way. Academic success refers to MS Excel program literacy. The findings revealed no statistically significant difference between fixed and multiple pedagogical agents in terms of the dependent variables. However, the designs with agents were found to have positive effects on learners' motivation, academic success and cognitive load. The results also indicated that pedagogical agents should be used in all computer-assisted instruction software. 
Another finding of the study is that students in younger age groups tend to prefer an agent of their own gender. Furthermore, feedback from participants showed that users of multiple pedagogical agents would like to create their own pedagogical agents. Accordingly, it is suggested that learners should be provided with programs that can be personalized depending on learners' needs and preferences. (C) 2017 Elsevier Ltd. All rights reserved. This study investigates whether cognitive bias in judgment and decision making can be reduced by training, and whether the effects are affected by the nature of the training environment. Theory suggests that biases can be overcome by training in critical reflective thinking. In addition, applied research studies have suggested that game-based training is more effective at reducing bias than conventional forms of training, for example due to the interactive and dynamic nature of video games. However, earlier studies have not always controlled systematically for the nature of the learning environment between conditions (e.g., providing different content and bias examples for instruction and training). In a between-subjects study, we manipulated whether participants received critical-reflective thinking training (yes/no) and in what context they experienced this training (an interactive detective game, or a text script of the game). Positive effects of training were found. However, the mitigating effects on bias depended upon the type of bias and when the effects were measured (near or far transfer). Surprisingly, the game group performed similarly to the text-script group. This suggests that an interactive and dynamic training context (e.g., a game) is not necessarily more effective than non-dynamic contexts (e.g., a text) for bias-mitigation training. (C) 2017 Elsevier Ltd. All rights reserved. Constructing explanations with others helps students learn, yet little is known about how technology can support and augment these benefits. 
This paper describes an experiment that compared the effects of mathematical discourse (i.e., explaining, justifying, and arguing) with peers either face-to-face or using technology (a blog) on fraction learning. We hypothesized that blogs may provide benefits beyond face-to-face collaborations because a record of explanations is accessible for subsequent reflection, which allows students to revisit and revise their explanations. A quasi-experimental design with 134 fifth grade students (ages 9-11) was used to investigate the change in conceptual and procedural knowledge of fractions measured as change from pretest to posttest and delayed posttest. The results indicated that students in the blog condition showed the largest gains in conceptual knowledge from pretest to posttest and at delayed posttest. There was no significant difference between groups on procedural knowledge. The results suggest that the use of blogs may provide unique supports for learning because students are provided opportunities to explain, justify, and argue their thinking, as well as critique the reasoning of others, through an interactive learning environment that affords the opportunity to clarify understandings and misconceptions in ways that may not otherwise exist in a traditional face-to-face learning environment. (C) 2017 Elsevier Ltd. All rights reserved. Azerbaijan has successfully incorporated modern Information Communication Technologies (ICT) in the education system. The major goal is to raise the standard of education. The factors that affect university students' behavioral intention (BI) to use e-learning for educational purposes in Azerbaijan are worthy of study. This is an empirical study of the use of the General Extended Technology Acceptance Model for E-learning (GETAMEL) developed by Abdullah and Ward (2016) in order to determine the factors that affect undergraduate students' BI to use an e-learning system. 
The data were collected from 714 undergraduate and master's students using a convenience sampling technique, and the responses were analyzed using Structural Equation Modeling (SEM). The results show that Subjective norm (SN), Experience (EXP) and Enjoyment (ENJOY) positively and significantly influence students' perceived usefulness (PU) of e-learning, while Computer anxiety (CA) has a negative effect. EXP, ENJOY and Self-efficacy (SE) positively and significantly affect their perceived ease of use (PEOU) of e-learning. It is also seen that SN has a positive and significant impact on BI to use e-learning, while Technological innovation (TI) significantly moderates the relationships between SN and PU, and between PU and BI to use e-learning. This study is the first to determine a negative and significant relationship between CA and PU in the context of students' e-learning. This study is also one of the very few that uses the GETAMEL model for e-learning settings. The results have significant practical implications for educational institutions and decision-makers, in terms of the design of the e-learning system in universities. (C) 2017 Elsevier Ltd. All rights reserved. As the most critical trading mechanism in supply chain management/operation management fields, Dutch auction theories and practices have been regarded as one of the key teaching subjects of many universities. The advancement of ubiquitous computing technologies has not only solved the technological problems of dealing with millions of simultaneous biddings in real practices, but also enabled students to learn elusive and complex knowledge in an interactive environment. However, little attention has been paid to educational discussions and quantitative analyses when applying the ubiquitous learning (u-learning) system in auction classes.
This quasi-experimental study was among the first to develop and evaluate a smart u-learning system that integrated Internet-of-Things (IoT) technologies to simulate real, authentic auction activities, while detecting learning behaviors of students. We also integrated and utilized the pedagogical approach of Teaching by Examples and Learning by Doing (TELD) in the presented system, which further strengthened dynamic interactions and timely teaching instructions for students. The data analysis showed that this innovative system had positive effects on students' learning outcomes. The results also revealed that applying the u-learning system in teaching procedural knowledge, rather than conceptual knowledge, was more resource effective and less time consuming. Moreover, students had high perceptions of learning content when the system was designed with efficient pedagogical assistance, interaction flexibility and user-friendly features. Critical practical implications were also summarized for teachers, system designers, researchers, and policymakers. (C) 2017 Published by Elsevier Ltd. The current paper presents a comparative analysis of forums and wikis as tools for online collaborative learning. The comparison was developed analyzing the data collected during a collaborative experience in an asynchronous e-learning environment. The activities lasted five weeks and consisted of forum discussions and designing a project in a wiki environment. The research method included both quantitative and qualitative analyses. A quantitative comparison of forums and wikis was developed applying the coding scheme based on the following indicators: (1) inferencing, (2) producing, (3) developing, (4) evaluating, (5) summarizing, (6) organizing, and (7) supporting. The qualitative aspects were assessed using an open-ended questionnaire for collecting participants' perspectives on the functionality of the collaborative tools.
Results provided evidence of the different processes during the forum and wiki activities: processes such as inferencing, evaluating, organizing and supporting characterized forum discussions, while wikis induced mainly processes of producing and developing. Different purposes were also evident: forums were useful for discussing and sharing ideas, while wikis were used for developing a common collaborative document. In addition, the perceived time involved in performing the activities was different: forums were easier to access than wikis, while wikis required more time and were more difficult to use than forums. As a general conclusion, it is not possible to state the superiority of one tool over another because each has its own characteristics and could be used for different purposes. Forums and wikis could have complementary functions and should be organized to complement each other for scaffolding students' self-regulated strategies and learning. The findings are discussed in the framework of designing collaborative virtual courses with proper tool selection. (C) 2017 Elsevier Ltd. All rights reserved. With the global diffusion of digital gaming, there is an increasing call to establish to what extent games and their elements could be harnessed for learning and education. Most research in this field has been conducted in more economically advanced and developed regions, and there is a paucity of research in emerging country contexts. It is argued that gamification can be effectively utilised in these contexts to address learner engagement and motivation. The study investigated the extent to which six determined predictors (perceptions about playfulness, curriculum fit, learning opportunities, challenge, self-efficacy and computer anxiety) influence the advocacy to accept a gamified application by South African tourism teachers.
Tourism education was selected for empirical study because of its popularity in developing countries, where economies often depend heavily on the sector. However, it remains a highly under-researched area. Data were obtained from 209 tourism teachers and tested against the research model using a structural equation modelling approach. Findings reveal that the constructs of perceived playfulness and curriculum fit have a positive, direct impact on the construct of behavioural intention. The exogenous constructs of challenge, learning opportunities, self-efficacy and computer anxiety have an indirect effect on behavioural intention via perceived playfulness or curriculum fit. The study may prove useful to educators and practitioners in understanding which determinants may influence the introduction of gamification into formal secondary education. (C) 2017 Elsevier Ltd. All rights reserved. Sparse representation methods based on l(1) and/or l(2) regularization have shown promising performance in different applications. Previous studies show that l(1) regularization based representations have a stronger sparsity property, while l(2) regularization based representations are much simpler and faster to compute. However, when dealing with noisy data, both naive l(1) and l(2) regularization suffer from unsatisfactory robustness. In this paper, we explore how to implement an antinoise sparse representation method for robust face recognition based on a joint version of l(1) and l(2) regularization. The main contributions of this paper are as follows. First, a novel objective function combining both l(1) and l(2) regularization is proposed to implement an antinoise sparse representation. An iterative fitting operation via l(1) regularization is integrated with l(2) norm minimization to obtain an antinoise classification.
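A joint l(1)+l(2) objective of the kind just described can be sketched as an elastic-net-style minimization solved by proximal gradient descent; the dictionary, penalty weights, step size and iteration count below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def soft_threshold(v, thr):
    # element-wise soft-thresholding (the proximal operator of the l1 norm)
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def joint_l1_l2_code(X, y, lam1=0.1, lam2=0.1, n_iter=500):
    """Sparse-code y over dictionary X by minimizing
    0.5*||y - Xw||^2 + lam1*||w||_1 + 0.5*lam2*||w||_2^2
    with ISTA-style proximal gradient steps."""
    n, d = X.shape
    w = np.zeros(d)
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        v = w - step * grad
        # combined prox of lam1*||.||_1 + (lam2/2)*||.||_2^2
        w = soft_threshold(v, step * lam1) / (1.0 + step * lam2)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
true_w = np.zeros(20)
true_w[[2, 7]] = [1.5, -2.0]
y = X @ true_w + 0.01 * rng.standard_normal(50)
w = joint_l1_l2_code(X, y)
print(np.flatnonzero(np.abs(w) > 0.5))  # indices of the strongest recovered coefficients
```

The l(2) term keeps the update stable and cheap, while the l(1) term drives small coefficients toward zero, mirroring the complementary roles the abstract attributes to the two penalties.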
Second, the rationale for how the proposed method produces promising discriminative and antinoise performance for face recognition is analyzed. The l(2) regularization enhances robustness and runs fast, while the l(1) regularization helps cope with noisy data. Third, the classification robustness of the proposed method is demonstrated by extensive experiments on several benchmark facial datasets. The method can be considered an option for expert systems for biometrics and other recognition problems facing unstable and noisy data. (C) 2017 Elsevier Ltd. All rights reserved. Land consolidation is an important tool to prevent land fragmentation and enhance agricultural productivity. Land partitioning is one of the most significant problems within the land consolidation process, as it involves the subdivision of blocks having non-uniform geometric shapes. Land partitioning determines the location of new land parcels and is a complex problem containing many conflicting demands, so conventional programming techniques are not sufficient for this NP-hard optimization problem. Therefore, it is necessary to have an intelligent system with a standard decision-making mechanism capable of processing many criteria simultaneously and evaluating a number of different solutions in a short time. To overcome this problem and accelerate the land partitioning process, we propose automated land partitioning using a genetic algorithm (ALP-GA). Besides each parcel's size, shape and land value, the proposed method evaluates fixed facilities, and the degree and location of cadastral parcels, to generate a land partitioning plan. The proposed method automates the land partitioning process using an intelligent system and was implemented over a real project area. Experiments show that the proposed method is more successful and efficient than the human designer with respect to meeting the objective function.
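A genetic algorithm of the kind ALP-GA builds on can be sketched in one dimension: chromosomes are cut positions along a block, and the fitness below only penalizes deviation from target parcel sizes (a toy stand-in for the size/shape/value and cadastral criteria); all parameters are illustrative assumptions.

```python
import random

def fitness(splits, block_len, targets):
    """Negative total deviation of partitioned segment lengths
    from the target parcel sizes (higher is better)."""
    bounds = [0.0] + sorted(splits) + [block_len]
    lengths = [b - a for a, b in zip(bounds, bounds[1:])]
    return -sum(abs(l - t) for l, t in zip(lengths, targets))

def evolve(block_len, targets, pop_size=40, gens=200):
    n_cuts = len(targets) - 1
    pop = [[random.uniform(0, block_len) for _ in range(n_cuts)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, block_len, targets), reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < 0.3:                            # gaussian mutation
                i = random.randrange(n_cuts)
                child[i] = min(block_len, max(0.0, child[i] + random.gauss(0, 1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, block_len, targets))

random.seed(1)
best = evolve(100.0, [50.0, 30.0, 20.0])
print(sorted(best))  # best cut positions found
```

For a 100-unit block with target parcels of 50, 30 and 20 units, the cuts should settle near 50 and 80; the real ALP-GA optimizes far richer, two-dimensional criteria.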
In addition, the land partitioning process is greatly simplified by the proposed method. (C) 2017 Elsevier Ltd. All rights reserved. Financial time series are notoriously difficult to analyze and predict, given their non-stationary, highly oscillatory nature. In this study, we evaluate the effectiveness of the Ensemble Empirical Mode Decomposition (EEMD), the ensemble version of Empirical Mode Decomposition (EMD), at generating a representation for market indexes that improves trend prediction. Our results suggest that the promising results reported using EEMD on financial time series were obtained by inadvertently adding look-ahead bias to the testing protocol via pre-processing the entire series with EMD, which affects predictive results. In contrast to conclusions found in the literature, our results indicate that the application of EMD and EEMD with the objective of generating a better representation for financial time series is not sufficient to improve the accuracy or cumulative return obtained by the models used in this study. (C) 2017 Elsevier Ltd. All rights reserved. This paper presents an improved version of a recent state-of-the-art texture descriptor called Gaussians of Local Descriptors (GOLD), which is based on a multivariate Gaussian that models the local feature distribution describing the original image. The full-rank covariance matrix, which lies on a Riemannian manifold, is projected onto the tangent Euclidean space and concatenated to the mean vector for representing a given image. In this paper, we test the following features for describing the original image: scale-invariant feature transform (SIFT), histogram of gradients (HOG), and Weber's law descriptor (WLD). To improve the baseline version of GOLD, we describe the covariance matrix using a set of visual features that are fed into a set of Support Vector Machines (SVMs). The SVMs are combined by sum rule.
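Sum-rule fusion, as used above, simply adds the per-classifier class-score matrices and predicts the class with the highest combined score; the scores below are made-up values for illustration.

```python
import numpy as np

def sum_rule(score_matrices):
    """Combine per-classifier class scores by the sum rule:
    add the score matrices and predict the argmax class per sample."""
    fused = sum(score_matrices)
    return fused.argmax(axis=1)

# hypothetical scores from three SVMs, shape (n_samples, n_classes)
s1 = np.array([[0.9, 0.1], [0.4, 0.6]])
s2 = np.array([[0.6, 0.4], [0.3, 0.7]])
s3 = np.array([[0.2, 0.8], [0.1, 0.9]])
print(sum_rule([s1, s2, s3]))  # -> [0 1]
```

In practice the individual score matrices are usually normalized to a common range before summing, so that no single classifier dominates the fusion.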
The scores obtained by an SVM trained using the original GOLD approach and the SVMs trained with visual features are then combined by sum rule. Experiments show that our proposed variant outperforms the original GOLD approach. The superior performance of the proposed system is validated across a large set of datasets. Particularly interesting is the performance obtained on two widely used person re-identification datasets, CAVIAR4REID and IAS, where the proposed GOLD variant is coupled with a state-of-the-art ensemble to obtain improved performance on these two datasets. Moreover, we performed further tests that combine GOLD with non-binary features (local ternary/quinary patterns) and deep transfer learning. The fusion among SVMs trained with deep features and the SVMs trained using the ternary/quinary coding ensemble is demonstrated to obtain a very high performance across datasets. The MATLAB code for the ensemble of classifiers and for the extraction of the features will be publicly available(1) to other researchers for future comparisons. (C) 2017 Elsevier Ltd. All rights reserved. Learning from imbalanced datasets is challenging for standard algorithms, as they are designed to work with balanced class distributions. Although there are different strategies to tackle this problem, methods that address it through the generation of artificial data constitute a more general approach than algorithmic modifications. Specifically, they generate artificial data that can be used by any algorithm, without constraining the options of the user. In this paper, we present a new oversampling method, Self-Organizing Map-based Oversampling (SOMO), which through the application of a Self-Organizing Map produces a two-dimensional representation of the input space, allowing for an effective generation of artificial data points.
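The kind of within-cluster generation that SOMO performs can be sketched as follows; here precomputed cluster labels stand in for the SOM map units, and the minority samples are toy values rather than real data.

```python
import numpy as np

def within_cluster_oversample(X_min, labels, n_new, rng):
    """Generate synthetic minority samples by interpolating random pairs
    that fall in the same cluster (a simplified view of SOMO's
    within-cluster stage; `labels` stand in for SOM map units)."""
    clusters = [np.flatnonzero(labels == c) for c in np.unique(labels)]
    clusters = [idx for idx in clusters if len(idx) >= 2]
    synthetic = []
    for _ in range(n_new):
        idx = clusters[rng.integers(len(clusters))]
        a, b = X_min[rng.choice(idx, 2, replace=False)]
        gap = rng.random()
        synthetic.append(a + gap * (b - a))  # point on the segment between a and b
    return np.array(synthetic)

rng = np.random.default_rng(0)
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0], [11.0, 10.0]])
labels = np.array([0, 0, 1, 1])  # two "map units"
S = within_cluster_oversample(X_min, labels, 5, rng)
print(S.shape)  # -> (5, 2)
```

Restricting interpolation to pairs within the same unit avoids generating points in the empty space between distant minority regions, which is the motivation for clustering before oversampling.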
SOMO comprises three major stages: initially, a Self-Organizing Map produces a two-dimensional representation of the original, usually high-dimensional, space; next, it generates within-cluster synthetic samples; and finally, it generates between-cluster synthetic samples. Additionally, we present empirical results that show the improvement in the performance of algorithms when artificial data generated by SOMO are used, and also show that our method outperforms various oversampling methods. (C) 2017 Elsevier Ltd. All rights reserved. This paper proposes an enhanced Dominance Network (DN) to assess the technical, economic and allocative efficiency of a set of Decision Making Units (DMUs). In a DN, the nodes represent DMUs and the arcs correspond to dominance relationships between them. Two types of dominance relationship are considered: technical and economic. The length of a technical dominance arc between two nodes is a weighted measure of the input and output differences between the two DMUs. The length of an economic dominance arc between two nodes corresponds to the cost, revenue or profit difference between them (depending on whether only unit input prices, unit output prices or both are known). The proposed dominance network is a multiplex network with two relations whose structures are similar: both are layered and their arcs have transitivity and additivity properties. However, since technical dominance implies economic dominance but not the reverse, economic dominance is more common and has a deeper structure. It may also have an underlying potential field, so that the length of the arcs between any two nodes depends on the difference in their potentials and the direction of the arcs depends on the sign of that difference. Allocative inefficiencies can also be gauged on this DN. Complex network measures can be used to characterize and study this type of DN. (C) 2017 Elsevier Ltd. All rights reserved.
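The technical dominance relation at the heart of such a network can be sketched directly: one DMU dominates another if it uses no more of any input and produces no less of any output. The DMU data below are illustrative, not from the paper.

```python
def dominates(a, b):
    """Technical dominance: DMU `a` dominates `b` if it uses no more of
    every input and produces no less of every output, with at least one
    strict difference."""
    le_in = all(x <= y for x, y in zip(a["in"], b["in"]))
    ge_out = all(x >= y for x, y in zip(a["out"], b["out"]))
    strict = a["in"] != b["in"] or a["out"] != b["out"]
    return le_in and ge_out and strict

def dominance_arcs(dmus):
    """Arcs of the dominance network: (i, j) whenever DMU i dominates DMU j."""
    return [(i, j) for i in dmus for j in dmus
            if i != j and dominates(dmus[i], dmus[j])]

dmus = {
    "A": {"in": [2, 3], "out": [10]},
    "B": {"in": [4, 3], "out": [8]},   # dominated by A
    "C": {"in": [1, 5], "out": [9]},   # incomparable with A and B
}
print(dominance_arcs(dmus))  # -> [('A', 'B')]
```

Because the relation is a partial order, many DMU pairs (like A and C here) are simply incomparable, which is why the resulting network is sparse and layered.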
In this study, a maximal covering location problem is investigated, in which we want to maximize the demand of a set of customers covered by a set of p facilities located among a set of potential sites. It is assumed that a set of facilities belonging to other firms exists and that customers freely choose allocation to the facilities within a coverage radius. The problem can be formulated as a bilevel mathematical programming problem, in which the leader locates facilities in order to maximize the demand covered and the follower allocates customers to the most preferred facility among those selected by the leader and the facilities of other firms. We propose a greedy randomized adaptive search procedure (GRASP) heuristic and a hybrid GRASP-Tabu heuristic to find near-optimal solutions. Results of the heuristic approaches are compared to solutions obtained with a single-level reformulation of the problem. Computational experiments demonstrate that the proposed algorithms can find very good quality solutions with a small computational burden. The most important feature of the proposed heuristics is that, despite their simplicity, optimal or near-optimal solutions can be determined very efficiently. (C) 2017 Elsevier Ltd. All rights reserved. Classifiers deployed in the real world operate in a dynamic environment, where the data distribution can change over time. These changes, referred to as concept drift, can cause the predictive performance of the classifier to drop over time, thereby making it obsolete. To be of any real use, these classifiers need to detect drifts and be able to adapt to them over time. Detecting drifts has traditionally been approached as a supervised task, with labeled data constantly being used to validate the learned model. Although effective in detecting drifts, these techniques are impractical, as labeling is a difficult, costly and time-consuming activity.
On the other hand, unsupervised change detection techniques are unreliable, as they produce a large number of false alarms. The inefficacy of the unsupervised techniques stems from the exclusion of the characteristics of the learned classifier from the detection process. In this paper, we propose the Margin Density Drift Detection (MD3) algorithm, which tracks the number of samples in the uncertainty region of a classifier as a metric to detect drift. The MD3 algorithm is a distribution-independent, application-independent, model-independent, unsupervised and incremental algorithm for reliably detecting drifts from data streams. Experimental evaluation on 6 drift-induced datasets and 4 additional datasets from the cybersecurity domain demonstrates that the MD3 approach can reliably detect drifts, with significantly fewer false alarms compared to unsupervised feature-based drift detectors. At the same time, it produces performance comparable to that of a fully labeled drift detector. The reduction in false alarms enables the signaling of drifts only when they are most likely to affect classification performance. As such, the MD3 approach leads to a detection scheme which is credible, label efficient and general in its applicability. (C) 2017 Elsevier Ltd. All rights reserved. This paper presents a complete natural feature based tracking system that supports the creation of augmented reality applications focused on the automotive sector. The proposed pipeline encompasses scene modeling, system calibration and tracking steps. An augmented reality application was built on top of the system for indicating the location of 3D coordinates in a given environment, which can be applied to many different applications in cars, such as a maintenance assistant, an intelligent manual, and many others.
An analysis of the system was performed during the Volkswagen/ISMAR Tracking Challenge 2014, which aimed to evaluate state-of-the-art tracking approaches on the basis of requirements encountered in automotive industrial settings. A similar competition environment was also created by the authors in order to allow further studies. Evaluation results showed that the system allowed users to correctly identify points in tasks that involved tracking a rotating vehicle, tracking data on a complete vehicle and tracking with high accuracy. This evaluation also made it possible to understand the applicability limits of texture-based approaches in the textureless automotive environment, a problem not addressed frequently in the literature. To the best of the authors' knowledge, this is the first work addressing the analysis of a complete tracking system for augmented reality focused on the automotive sector which could be tested and validated in a major benchmark like the Volkswagen/ISMAR Tracking Challenge, providing useful insights on the development of such expert and intelligent systems. (C) 2017 Elsevier Ltd. All rights reserved. Recommendation Systems (RS) are gaining popularity and are widely used for dealing with information on education, e-commerce, travel planning, entertainment, etc. Recommender Systems are used to recommend items to user(s) based on the ratings provided by other users as well as the past preferences of the user(s) under consideration. Given a set of items from a group of users, Group Recommender Systems generate a subset of those items within a given group budget (i.e. the number of items to have in the final recommendation). Recommending to a group of users based on the ordered preferences provided by each user is an open problem. By order, we mean that the user provides a set of items that he would like to see in the generated recommendation, along with the order in which he would like those items to appear.
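One standard aggregation strategy for such group recommendations is least misery, which scores each item by its minimum rating across the group so that no member is left strongly dissatisfied; a minimal sketch with hypothetical 1-5 ratings:

```python
def least_misery(ratings, budget):
    """Score each item by its minimum rating across the group
    (least misery) and recommend the top-`budget` items."""
    scores = {item: min(user[item] for user in ratings) for item in ratings[0]}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:budget]

# hypothetical ratings from three group members
ratings = [
    {"a": 5, "b": 4, "c": 1, "d": 3},
    {"a": 4, "b": 4, "c": 5, "d": 3},
    {"a": 2, "b": 4, "c": 5, "d": 3},
]
print(least_misery(ratings, 2))  # -> ['b', 'd']
```

Note that item "c" has the highest average rating here but is dropped because one member rated it 1, which is exactly the behavior the least misery criterion is designed to produce.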
We design and implement algorithms for computing such group recommendations efficiently. Our system recommends items based on modified versions of two popular recommendation strategies: Aggregated Voting and Least Misery. Although the existing versions of Aggregated Voting (i.e. the Greedy Aggregated Method) and Least Misery perform fairly well in satisfying individuals in a group, they fail to attain significant group satisfaction. Our proposed Hungarian Aggregated Method and Least Misery with Priority improve the overall group satisfaction at the cost of a marginal increase in time complexity. We evaluated the scalability of our algorithms using a real-world dataset. Our experimental results, evaluated using a self-established metric, substantiate that our approach is highly efficient. (C) 2017 Elsevier Ltd. All rights reserved. Current benchmark reports of classification algorithms generally concern common classifiers and their variants but do not include many algorithms that have been introduced in recent years. Moreover, important properties such as the dependency on the number of classes and features and CPU running time are typically not examined. In this paper, we carry out a comparative empirical study on both established classifiers and more recently proposed ones on 71 data sets originating from different domains, publicly available at the UCI and KEEL repositories. The list of 11 algorithms studied includes Extreme Learning Machine (ELM), Sparse Representation based Classification (SRC), and Deep Learning (DL), which have not been thoroughly investigated in existing comparative studies. It is found that Stochastic Gradient Boosting Trees (GBDT) matches or exceeds the prediction performance of Support Vector Machines (SVM) and Random Forests (RF), while being the fastest algorithm in terms of prediction efficiency.
ELM also yields good accuracy results, ranking in the top 5 alongside GBDT, RF, SVM, and C4.5, but this performance varies widely across data sets. Unsurprisingly, top accuracy performers have average or slow training time efficiency. DL is the worst performer in terms of accuracy but the second fastest in prediction efficiency. SRC shows good accuracy performance but is the slowest classifier in both training and testing. (C) 2017 Elsevier Ltd. All rights reserved. Finger-vein verification has drawn increasing attention because it is a highly secure and private biometric in practical applications. However, as the imaging environment is affected by many factors, the captured image contains not only the vein pattern but also noise and irregular shadowing, which can decrease verification accuracy. To address this problem, in this paper we propose a new finger-vein extraction approach which detects valley-like structures using curvatures in Radon space. Firstly, given a pixel, we obtain eight patches centered on it by rotating a window along eight different orientations and project the resulting patches into Radon space using the Radon transform. Secondly, the vein patches create prominent valleys in Radon space, and the vein patterns are enhanced according to the curvature values of the valleys. Finally, the vein network is extracted from the enhanced image by a binarization scheme and matched for personal verification. The experimental results on both contact-based and contactless finger-vein databases illustrate that our approach can significantly improve the accuracy of the finger-vein verification system. (C) 2017 Elsevier Ltd. All rights reserved. Modern organizations execute processes to deliver products and services, whose enactment needs to adhere to laws, regulations and standards. Conformance checking is the problem of pinpointing where deviations are observed.
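At its simplest, checking one observed trace against one model run reduces to finding a cheapest alignment, which a dynamic program akin to edit distance can compute: synchronous moves are free, while log-only and model-only moves incur a cost. This is a toy illustration of the alignment idea, not the planning-based encoding discussed in the literature above.

```python
def alignment_cost(trace, model_run, move_cost=1):
    """Minimal alignment cost between an observed trace and one model run.
    Matching events cost nothing; skipping an event in either sequence
    (a log-only or model-only move) costs `move_cost`."""
    n, m = len(trace), len(model_run)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i * move_cost          # log events left unmatched
    for j in range(m + 1):
        d[0][j] = j * move_cost          # model steps with no matching event
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = d[i-1][j-1] if trace[i-1] == model_run[j-1] else float("inf")
            d[i][j] = min(sync, d[i-1][j] + move_cost, d[i][j-1] + move_cost)
    return d[n][m]

print(alignment_cost(list("acd"), list("abcd")))  # -> 1 (one model-only move on 'b')
```

Real conformance checkers must search over all runs a process model allows rather than a single fixed run, which is what makes the problem hard and motivates encodings into general-purpose planners.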
This paper shows how instances of the conformance checking problem can be represented as planning problems in PDDL (Planning Domain Definition Language), for which planners can find a correct solution in a finite amount of time. If conformance checking problems are converted into planning problems, one can seamlessly update to the most recent versions of the best performing automated planners, with evident advantages in terms of versatility and customization. The paper also reports on results of experiments conducted on two real-life case studies and on eight larger synthetic ones, mainly using the FAST-DOWNWARD planner framework to solve the planning problems due to its performance. Some experiments were also repeated through other planners to concretely showcase the versatility of our approach. The results show that, when process models and event logs are of considerable size, our approach outperforms existing ones, even by several orders of magnitude. Even more remarkably, when process models are extremely large and event log traces very long, the existing approaches are unable to terminate because they run out of memory, while our approach is able to properly complete the alignment task. (C) 2017 Elsevier Ltd. All rights reserved. This work describes a novel methodology to characterize voice diseases by using nonlinear dynamics, considering different complexity measures that are mainly based on the analysis of the time delay embedded space. The feature space is represented with a DHMM, and a further transformation of the DHMM states to a hyperdimensional space is performed. The discrimination between healthy and pathological speech signals is performed using an RBF-SVM which is trained following a K-fold cross-validation strategy. Accuracies of around 99% are obtained for three different voice disorders: dysphonia due to laryngeal pathologies, hypernasality due to cleft lip and palate, and dysarthria due to Parkinson's disease. (C) 2017 Elsevier Ltd.
All rights reserved. This publication presents a computer method allowing river channels to be segmented based on polarimetric SAR images. Solutions have been proposed which are based on a morphological approach using watershed segmentation and combining regions by maximising the average contrast. The image processing methods were developed so that their computational complexity is as low as possible, which is of particular importance in analysing high resolution SAR/polarimetric SAR images, where it has a measurable impact on the total segmentation time. What is more, compared to the existing solutions known from the literature review: (1) in the proposed approach, there is no need to execute further steps necessary to eliminate objects (i.e. background components) located outside the river channel from the image as a result of the segmentation carried out, and (2) there is no need to sample the entire image and carry out a pixel-wise classification to prepare the segmentation process. If the steps listed in items (1)-(2) are performed, they can, unfortunately, extend the segmentation time. The experiments completed on images acquired from the ALOS PALSAR satellite for different regions of the world have shown a high quality of the segmentations carried out and a high computational efficiency compared to state-of-the-art methods. Consequently, the proposed method can be used as a useful tool for monitoring changes in river courses and adopted in expert and intelligent systems used for analysing remote sensing data. (C) 2017 Elsevier Ltd. All rights reserved. A well-performed demand forecast can provide outpatient department (OPD) managers with essential information for staff scheduling and rostering, considering the non-reservation policy of OPDs in China. Based on the results reported by relevant studies, most approaches have focused on forecasting the overall amount of patient flow and ignored the demand for other key resources in the OPD or similar departments.
Moreover, few studies have conducted feature selection before training a forecast model, which is a significant pre-processing operation of data mining, widely applied for knowledge discovery in expert and intelligent systems. This study develops a novel hybrid methodology to forecast patients' demand for different key resources in the OPD by combining a new feature selection method and a deep learning approach. A modified version of the genetic algorithm (MGA) is proposed for feature selection. The key operators of the normal genetic algorithm are redesigned to utilize useful information provided by filter-based feature selection and feature combinations. A feedforward deep neural network is introduced as the forecast model, and the initial parameter set is generated from a stacked autoencoder-based pre-training process to overcome the optimization challenges in constructing deep architectures. In order to evaluate the performance of our methodology, it is applied to an OPD located in Northeast China. The results are compared with those obtained from combinations of other feature selection methods and demand forecasting models. Compared with GA and PCA, MGA improves the quality and efficiency of feature selection, selecting fewer features while achieving higher forecast accuracy. The pre-trained DNN further strengthens the advantage of MGA, compared with MLR, ARIMAX and SANN. The combination of MGA and the pre-trained DNN possesses the strongest predictive power among all involved combinations. Furthermore, the results of the proposed methodology are crucial prerequisites for staff scheduling and resource allocation in the OPD. Elite features obtained by MGA can provide practical insights on potential associations between manifold feature combinations and demand variance. (C) 2017 Elsevier Ltd. All rights reserved.
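The idea of seeding a genetic algorithm's population with filter-based feature information, as MGA does, can be sketched as follows; the filter scores, the fitness function and the "truly useful" feature set are all toy assumptions for illustration, not the study's data or operators.

```python
import random

def ga_feature_select(scores, fitness, gens=60, pop_size=30, seed=0):
    """GA over binary feature masks. Unlike a plain GA, the initial
    population is biased by filter `scores`, so promising features are
    more likely to start switched on (a simplified take on MGA)."""
    random.seed(seed)
    d = len(scores)
    pop = [[1 if random.random() < scores[j] else 0 for j in range(d)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
            j = random.randrange(d)                        # single bit-flip mutation
            child[j] = 1 - child[j]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# toy setup: features 0 and 3 are truly useful, the rest are noise
useful = {0, 3}
filter_corrs = [0.9, 0.2, 0.1, 0.8, 0.2, 0.1]  # assumed filter correlations

def fitness(mask):
    hits = sum(mask[j] for j in useful)
    return hits - 0.1 * sum(mask)              # penalize large subsets

best = ga_feature_select(filter_corrs, fitness)
print([j for j, bit in enumerate(best) if bit])  # selected feature indices
```

In the real methodology the fitness would be a forecast model's validation error rather than this toy score, and the redesigned operators would exploit feature-combination information as well as single-feature filter scores.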
From their origins in the sociological field, memes have recently become of interest in the context of 'viral' transmission of basic information units (memes) in online social networks. However, much work still needs to be done in terms of metrics and practical data processing issues. In this paper we define a theoretical basis and a processing system for extracting and matching memes from free format text. The system facilitates the work of a text analyst in extracting this type of data structure from online text corpora and in performing empirical experiments in a controlled manner. The general aspects related to the solution are the automatic processing of unstructured text without need for preprocessing (such as labelling and tagging), identification of co-occurrences of concepts and corresponding relations, construction of semantic networks and selection of the top memes. The system integrates these processes, which are generally separate in other state-of-the-art systems. The proposed system is important because unstructured online text content is growing at a greater rate than other content (e.g. semi-structured, structured), and integrated and automated systems for knowledge extraction from this content will be increasingly important in the future. To illustrate the method and metrics, we process several real online discussion forums, extracting the principal concepts and relations, building the memes and then identifying the key memes for each document corpus using a sophisticated matching process. The results show that our method can automatically extract coherent key knowledge from free text, which is corroborated by benchmarking with a set of other text analysis approaches, as well as a user study evaluation. (C) 2017 Elsevier Ltd. All rights reserved. Facial beauty analysis has been an emerging subject of multimedia and biometrics.
This paper aims at exploring the essence of facial beauty from the viewpoint of geometric characteristics toward an interactive attractiveness assessment (IAA) application. To this end, a geometric facial beauty analysis method is proposed from the perspective of machine learning. Because beauty labeling is troublesome and subjective, accurately labeled data are scarce, resulting in very few labeled samples. Additionally, facial beauty is related to several typical features such as texture and color, which, however, can easily be altered by make-up. To address these issues, a semi-supervised facial beauty analysis framework is proposed, characterized by feeding geometric features into the intelligent attractiveness assessment system. For the experimental study, we have established a geometric facial beauty (GFB) dataset including Asian male and female faces. Moreover, an existing multi-modal beauty (M2B) database including western and eastern female faces is also tested. Experiments demonstrate the effectiveness of the proposed method. Some new perspectives on the essence of beauty and the topic of facial aesthetics are revealed. The impact of this work lies in that it will attract more researchers in related areas to beauty exploration using intelligent algorithms. Also, its significance lies in that it should well promote the diversity of expert and intelligent systems in addressing such challenging facial aesthetic perception and rating issues. (C) 2017 Elsevier Ltd. All rights reserved. Recently, fingerprint crowdsourcing from pedestrian movement trajectories has been promoted to alleviate the site survey burden for radio map construction in fingerprinting-based indoor localization. Indoor corners, as one of the most common indoor landmarks, play an important role in movement trajectory analysis. This paper studies the problem of indoor corner recognition in crowdsourced movement trajectories. 
In a movement trajectory, smartphone internal sensor measurements experience some signal changes when passing by a corner. However, state-of-the-art solutions based on signal change detection cannot deal well with the fake-corner problem and the pose diversity problem in most practical movement trajectories. In this paper, we study the corner recognition problem from an expert system viewpoint by applying machine learning techniques. In particular, we extract recognition features from both the time and frequency domains and propose a hierarchical corner recognition scheme consisting of three classifiers. The first, a pose classifier, classifies various poses into only two groups according to whether or not a smartphone is kept in a fixed position relative to the user's upper body when collecting sensor measurements. Feature selection is then applied to train two corner classifiers, one for each pose group. Field experiments are conducted to compare our proposed scheme with three state-of-the-art algorithms. In all cases, our scheme outperforms the best of these algorithms in terms of much higher F1-measure and precision for corner recognition. The results also provide insights into the potential of using more advanced techniques from expert systems in indoor localization. (C) 2017 Elsevier Ltd. All rights reserved. Distributing loans using the group lending method is one of the unique features of microfinance, as it utilises peer monitoring and dynamic incentives to lower credit risks in extending collateral-free loans to the poor. However, many microfinance institutions (MFIs) eventually perceive it to be costly and to restrict loan growth, and have therefore resorted to the individual lending method to enhance profitability. On the other hand, the village banking method was developed to boost outreach and to create self-sustaining village microbanks. 
We thus seek to empirically observe the loan method-efficiency relationship and to examine the best loan method regionally, focusing on not-for-profit MFIs, which are widely regarded as the best microfinance providers. Non-oriented Data Envelopment Analysis with a regional meta-frontier approach is used for the efficiency assessment of 628 MFIs from 87 countries in 6 regions, followed by Tobit regression. We also investigated factors affecting efficiency, such as borrowings, total donations, cost per borrower (CPB), portfolio at risk (PAR), interest rates, MFI age, regulation status, and legal format. The results support our argument that performance analysis is best performed on a regional basis separately, as we find different results for different regions. Crown Copyright (C) 2017 Published by Elsevier Ltd. In this paper, a feedback neural network model is proposed to compute the solution of mathematical programs with equilibrium constraints (MPEC). The MPEC problem is reformulated as an equivalent one-level non-smooth optimization problem, and then a sequential dynamic scheme that progressively approximates the non-smooth problem is presented. Besides asymptotic stability, it is proven that the limit equilibrium point of the suggested dynamic model is a solution of the original MPEC problem. Numerical simulation of various types of MPEC problems shows the significance of the results. Moreover, the scheme is applied to compute Stackelberg-Cournot-Nash equilibria. (C) 2017 Elsevier Ltd. All rights reserved. Autonomous vehicles are soon to become ubiquitous in large urban areas, encompassing cities, suburbs and vast highway networks. In turn, this will bring new challenges to the existing traffic management expert systems. Concurrently, urban development is causing growth, thus changing the network structures. 
As such, a new generation of adaptive algorithms is needed: ones that learn in real-time, capture the multivariate nonlinear spatio-temporal dependencies and are easily adaptable to new data (e.g. weather or crowdsourced data) and changes in network structure, without having to retrain and/or redeploy the entire system. We propose learning Topology-Regularized Universal Vector Autoregression (TRU-VAR) and exemplify deployment with state-of-the-art function approximators. Our expert system produces reliable forecasts in large urban areas and is best described as scalable, versatile and accurate. By introducing constraints via a topology-designed adjacency matrix (TDAM), we simultaneously reduce computational complexity and improve accuracy by capturing the non-linear spatio-temporal dependencies between time series. The strength of our method also resides in its redundancy through modularity and its adaptability via the TDAM, which can be altered even while the system is deployed. Large-scale network-wide empirical evaluations on two qualitatively and quantitatively different datasets show that our method scales well and can be trained efficiently with low generalization error. We also provide a broad review of the literature, illustrate the complex dependencies at intersections and discuss the issues of data broadcast by road network sensors. The lowest prediction error was observed for TRU-VAR, which outperforms ARIMA in all cases and the equivalent univariate predictors in almost all cases for both datasets. We conclude that forecasting accuracy is heavily influenced by the TDAM, which should be tailored specifically for each dataset and network type. Further improvements are possible by including additional data in the model, such as readings from different metrics. (C) 2017 Elsevier Ltd. All rights reserved. Managerial decisions should be made by taking into account the priorities and objectives of different stakeholders' groups. 
Their preferences are usually expressed in words and are fuzzy concepts. This article analyses the peculiarities of companies' work and decision-making within a fuzzy market situation. It also presents a fuzzy multi-criteria group decision-making model developed for practical problem solving that takes cost-effective management into account. This case study presents the selection of a rational criteria set for use in the weighted cost-effectiveness analysis of facilities management strategies, in which integrated fuzzy multi-criteria decision-making methods are applied. The main findings are: the model is adapted to real life; the main criteria groups are identified by a three-step Delphi technique; and a rational strategy is determined and integrated in one model by the concept of Minkowski distance together with the fuzzy TOPSIS method, ARAS-F and the fuzzy weighted product method. The proposed model is versatile and can therefore be applied to various problems where experts' knowledge is needed for decision-making. (C) 2017 Elsevier Ltd. All rights reserved. This paper presents the first population-based path relinking algorithm for solving the NP-hard vertex separator problem in graphs. The proposed algorithm employs a dedicated relinking procedure to generate intermediate solutions between an initiating solution and a guiding solution taken from a reference set of elite solutions (population), and uses a fast tabu search procedure to improve selected intermediate solutions. Special care is taken to ensure the diversity of the reference set. Dedicated data structures based on bucket sorting are employed to ensure high computational efficiency. The proposed algorithm is assessed on four sets of 365 benchmark instances with up to 20,000 vertices, and shows highly competitive results compared with the state-of-the-art methods in the literature. 
Specifically, we report improved best solutions (new upper bounds) for 67 instances, which can serve as reference values for the assessment of other algorithms for the problem. (C) 2017 Elsevier Ltd. All rights reserved. To address dynamic intuitionistic normal fuzzy multi-attribute decision-making (MADM) problems with unknown time weights, a MADM method based on dynamic intuitionistic normal fuzzy aggregation (DINFA) operators and the VIKOR method with time sequence preference is presented. In this method, two information aggregation operators are first proposed and proved: the dynamic intuitionistic normal fuzzy weighted arithmetic average (DINFWAA) operator and the dynamic intuitionistic normal fuzzy weighted geometric average (DINFWGA) operator. Meanwhile, we construct a multi-target nonlinear programming model, which fuses time degree theory based on subjective preference with the information entropy principle based on objective preference, to obtain the time weights. On this basis, following the arithmetic of intuitionistic normal fuzzy numbers, intuitionistic normal fuzzy information from different time sequences is aggregated using the proposed DINFA operators to form a dynamic intuitionistic normal fuzzy comprehensive decision-making matrix; the optimal solution closest to the ideal solution is then obtained via the VIKOR method. Finally, the feasibility and significance of the presented method over existing methods are verified via the analysis of numerical examples. (C) 2017 Elsevier Ltd. All rights reserved. Software fault prediction using different techniques has been carried out by various researchers previously. It has been observed that the performance of these techniques varies from dataset to dataset, which makes them inconsistent for fault prediction in an unknown software project. 
On the other hand, the use of ensemble methods for software fault prediction can be very effective, as they take advantage of different techniques for the given dataset to come up with better prediction results than any individual technique. Many works are available on binary-class software fault prediction (faulty or non-faulty prediction) using ensemble methods, but the use of ensemble methods for predicting the number of faults has not been explored so far. The objective of this work is to present a system that uses an ensemble of various learning techniques for predicting the number of faults in given software modules. We present a heterogeneous ensemble method for the prediction of the number of faults and use approaches based on a linear combination rule and a non-linear combination rule for the ensemble. The study is designed and conducted on different software fault datasets accumulated from publicly available data repositories. The results indicate that the presented system predicted the number of faults with higher accuracy, and the results are consistent across all the datasets. We also use prediction at level l (Pred(l)) and a measure of completeness to evaluate the results. Pred(l) gives the number of modules in a dataset for which the average relative error value is less than or equal to a threshold value l. The results of the prediction at level l analysis and the measure of completeness analysis have also confirmed the effectiveness of the presented system for the prediction of the number of faults. Compared to single fault prediction techniques, the ensemble methods produced improved performance for the prediction of the number of software faults. The main impact of this work is to allow better utilization of testing resources, helping in the early and quick identification of most of the faults in the software system. (C) 2017 Elsevier Ltd. All rights reserved. 
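The combination rules mentioned in the abstract can be sketched in a few lines. This is a hedged illustration, not the paper's exact formulation: the helper names, the MAE-based weighting for the linear rule, and the median as a stand-in non-linear rule are all assumptions made for the example.

```python
import statistics

def error_based_weights(val_preds, val_true):
    """Weight each base learner inversely to its mean absolute error (MAE)
    on a validation set; weights are normalized to sum to 1."""
    maes = [sum(abs(p - t) for p, t in zip(vp, val_true)) / len(val_true)
            for vp in val_preds]
    inv = [1.0 / (m + 1e-9) for m in maes]   # small epsilon avoids div-by-zero
    total = sum(inv)
    return [v / total for v in inv]

def linear_combination(preds, weights):
    """Linear rule: weighted sum of the base learners' per-module
    fault-count predictions."""
    return [sum(w * p[i] for w, p in zip(weights, preds))
            for i in range(len(preds[0]))]

def nonlinear_combination(preds):
    """One possible non-linear rule (illustrative only): the per-module
    median of the base learners' predictions."""
    return [statistics.median(col) for col in zip(*preds)]
```

For instance, with two base learners whose validation MAEs are 0 and 2, nearly all of the linear rule's weight goes to the first learner, so the combined fault counts track its predictions.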
Automated Text Simplification (ATS) aims to transform complex texts into simpler variants which are easier for wider audiences to understand and easier to process with natural language processing (NLP) tools. While simplification can be applied at the lexical, syntactic, and discourse levels, all previously proposed ATS systems operated only on the first two levels, thus failing to simplify texts at the discourse level. We present a semantically motivated ATS system, which is the first to operate at the discourse level. By exploiting a state-of-the-art event extraction system, it is the first ATS system able to eliminate large portions of irrelevant information from texts by maintaining only those parts of the original text that belong to factual event mentions. A few handcrafted rules ensure that the output of the system is syntactically simple, by placing each factual event mention in a separate short sentence, while a state-of-the-art unsupervised lexical simplification module, based on word embeddings, replaces complex and infrequent words with their simpler variants. We perform a thorough evaluation, both automatic and manual, showing that our system produces more readable and simpler texts than the state-of-the-art ATS systems. Our newly proposed post-editing evaluation further reveals that our system requires less human effort for correcting grammaticality and meaning preservation on news articles than the state-of-the-art ATS system. (C) 2017 Elsevier Ltd. All rights reserved. To describe the prevalence and incidence density of hospital-acquired unavoidable pressure sores among patients aged >= 65 years admitted to acute medical units. A secondary analysis of longitudinal study data collected in 2012 and 2013 from 12 acute medical units located in 12 Italian hospitals was performed. 
Unavoidable pressure ulcers were defined as those that occurred in haemodynamically unstable patients, patients suffering from cachexia and/or terminally ill patients, and that were acquired after hospital admission. Data at the patient and pressure ulcer levels were collected on a daily basis at the bedside by trained researchers. A total of 1464 patients out of 2080 eligible (70.4%) were included. Among these, 96 patients (6.5%) acquired a pressure ulcer in hospital and, among these, 19 (19.7%) were judged unavoidable. The incidence of unavoidable pressure ulcers was 8.5/100 in-hospital patient days. No statistically significant differences at the patient and pressure ulcer levels emerged between patients who acquired unavoidable and avoidable pressure sores. Although limited, evidence on unavoidable pressure ulcers is increasing. More research in the field is recommended to support clinicians, managers and policymakers with the several implications of unavoidable pressure ulcers at both the patient and system levels. (C) 2017 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved. Aim of the study: To examine biophysical skin properties in the sacral region in spinal cord injury (SCI) patients suffering from a grade 1 pressure ulcer (PU), defined as non-blanchable erythema (SCI/PU), SCI patients in the post-acute phase (SCI/PA) and able-bodied participants (CON). Also, for SCI/PU patients, both the affected skin and healthy skin close to the PU were examined. Study design: An experimental controlled study with a convenience sample. Setting: A Swiss acute care and rehabilitation clinic specializing in SCIs. Materials and methods: We determined hydration, redness, elasticity and perfusion of the unloaded skin in the sacral region of 6 SCI/PU patients (affected and healthy skin), 20 SCI/PA patients and 10 able-bodied controls. These measurements were made by two trained examiners after the patients had been lying in the supine position. 
Results: The affected skin of SCI/PU patients showed elevated redness, median 595.5 arbitrary units (AU) (quartiles 440.4; 631.6), and perfusion, 263.0 AU (104.1; 659.4), both significantly increased compared to the healthy skin of SCI/PA patients and CON (p < 0.001). Similarly, the healthy skin of SCI/PA patients showed elevated redness (p = 0.016) and perfusion (p < 0.001) compared to CON. On the other hand, differences in redness and perfusion between the affected and unaffected skin of SCI/PU patients were not significant. The results for skin hydration and skin elasticity were similar in all groups. Conclusions: Skin perfusion and redness were significantly increased in grade 1 PUs and in the healthy skin of SCI/PA patients compared with CON participants; thus, these measures are important in understanding the pathophysiology of PUs and skin in SCI. (C) 2016 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved. Critical limb ischemia (CLI) with distal leg necrosis in lung transplant recipients (LTR) is associated with a high risk of systemic infection and sepsis. Optimal management of CLI in LTR has not been defined so far. In immunocompetent individuals with leg necrosis, surgical amputation would be indicated and is standard care. We report on the outcome of four conservatively managed LTR with distal leg necrosis due to peripheral arterial disease (PAD) with medial calcification of the distal limb vessels. The time interval from lung transplantation to CLI ranged from four years (n = 1) to more than a decade (n = 3). In all cases a multimodal therapy with heparin, acetylsalicylic acid, iloprost and antibiotic therapy was performed, in addition to a trial of catheter-based revascularization. Surgical amputation of the necrosis was not undertaken owing to fear of wound healing difficulties under long-term immunosuppression and impaired tissue perfusion. Intensive wound care and selective debridement were performed. 
Two patients developed progressive gangrene followed by auto-amputation during follow-up of 43 and 49 months, with continued ambulation, and two patients died of unrelated causes 9 and 12 months after the diagnosis of CLI. In conclusion, we report a conservative treatment strategy for distal leg necrosis in LTR without surgical amputation and recommend this approach based on our experience. (C) 2017 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved. Background: Surgical wounds healing by secondary intention (SWHSI) are often difficult and costly to treat. There is a dearth of clinical and research information regarding SWHSI. The aim of this survey was to estimate the prevalence of SWHSI and to characterise the aetiology, duration and management of these wounds. Methods: Anonymised data were collected from patients with SWHSI receiving treatment in primary, secondary and community settings. Over a two-week period, data were collected on the patients, their SWHSI, and clinical and treatment details. Results: Data were collected from 187 patients with a median age of 58.0 (95% CI = 55 to 61) years. The prevalence of SWHSI was 0.41 (95% CI = 0.35 to 0.47) per 1000 population. More patients with SWHSI were being treated in community (109/187, 58.3%) than in secondary (56/187, 29.9%) care settings. Most patients (164/187, 87.7%) had one SWHSI, and the median duration of wounds was 28.0 (95% CI = 21 to 35) days. The most common surgical specialities associated with SWHSI were colorectal (80/187, 42.8%), plastic (24/187, 12.8%) and vascular (22/187, 11.8%) surgery. Nearly half of SWHSI were planned to heal by secondary intention (90/187, 48.1%), and 77/187 (41.2%) were wounds that had dehisced. Dressings were the most common single treatment for SWHSI, received by 169/181 (93.4%) patients. Eleven (6.1%) patients were receiving negative pressure wound therapy. 
Conclusions: This survey provides a previously unavailable insight into the occurrence, duration, treatment and types of surgery that lead to SWHSI. This information will be of value to patients, health care providers and researchers. (C) 2017 The Authors. Published by Elsevier Ltd on behalf of Tissue Viability Society. Aim: To estimate the direct variable costs of the topical treatment of stage III and IV pressure injuries in hospitalized patients in a public university hospital, and to assess the correlation between these costs and hospitalization time. Materials and methods: Forty patients of both sexes who had been admitted to the Sao Paulo Hospital, Sao Paulo, SP, Brazil, from 2011 to 2012 with pressure injuries in the sacral, ischial or trochanteric region were included. The patients had a total of 57 pressure injuries in the selected regions, and the lesions were monitored daily until patient release, transfer or death. The quantities and types of materials, as well as the amount of professional labor time spent on each procedure and each patient, were recorded. The unit costs of the materials and the hourly costs of the professional labor were obtained from the hospital's purchasing and human resources departments, respectively. Spearman's correlation coefficient and the Mann-Whitney and Kruskal-Wallis tests were used for the statistical analyses. Results: The mean topical treatment costs for stage III and stage IV pressure injuries were significantly different (US$ 854.82 versus US$ 1785.35; p = 0.004). The mean topical treatment cost of stage III and IV pressure injuries per patient was US$ 1426.37. The mean daily topical treatment cost per patient was US$ 40.83. There was a significant correlation between hospitalization time and the total costs of labor and materials (p < 0.05). There was no significant difference in hospitalization time between stage III and stage IV pressure injuries (40.80 days and 45.01 days, respectively; p = 0.834). 
Conclusion: The mean direct variable cost of the topical treatment of stage III and IV pressure injuries per patient in this public university hospital was US$ 1426.37. (C) 2016 Published by Elsevier Ltd on behalf of Tissue Viability Society. Aim: To translate into Brazilian Portuguese and cross-culturally adapt the Cardiff Wound Impact Schedule, a specific measure of health-related quality of life (HRQoL) for patients with chronic wounds. Chronic wounds have a considerable impact on the HRQoL of patients. However, there are few instruments cross-culturally adapted and validated in Brazil to assess HRQoL in patients with wounds. Methods: A descriptive cross-sectional study was conducted following six steps: (1) translation of the original instrument into Brazilian Portuguese by two independent translators; (2) construction of a consensus version based on both translations; (3) two independent back-translations into English of the consensus version; (4) review by an expert committee and construction of the pre-final version; (5) testing of the pre-final version on patients with chronic wounds; and (6) construction of the final version. The psychometric properties of the instrument were tested on 30 patients with chronic wounds of the lower limb; 76.7% were men, 70.0% had traumatic wounds, and 43.3% had had the wound for more than 1 year. Participants were recruited from an outpatient wound care clinic in Sao Paulo, Brazil. Results: The final version approved by the expert committee was well understood by all patients who participated in the study and had satisfactory face validity, content validity, and internal consistency, with Cronbach's alpha coefficients ranging from 0.681 to 0.920. Conclusion: The cross-culturally adapted Brazilian-Portuguese version of the instrument showed satisfactory face and content validity and good internal consistency, and was named the Cardiff Wound Impact Schedule-Federal University of Sao Paulo School of Medicine, or CWIS-UNIFESP/EPM. 
(C) 2016 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved. Background: Understanding the biological processes underlying Pressure Ulcer (PU) is an important strategy for identifying new molecular targets. Bioinformatics has emerged as an important screening tool for a broad range of diseases. Objective: The aim of the current study was to investigate protein-protein interactions in the PU context by bioinformatics. Methods: We performed a search in gene databases, and bioinformatics algorithms were used to generate molecular targets for PU based on in silico investigation. Interaction networks between protein-coding genes were built and compared to skin. Results: The TNFA, MMP9, and IL10 genes have higher disease-related connectivity than their general global connectivity. MAGOH, UBC, and PTCH1 were leader genes related to skin. Ontological analysis demonstrated different associated mechanisms, such as the response to oxidative stress. Conclusion: TNFA, MMP9, and IL10 are possible therapeutic targets for pressure ulcer. The cell post-transcriptional machinery should be further investigated in PU. (C) 2017 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved. Diabetic wound healing is a complicated process. Worldwide, 15% of the 200 million people with diabetes suffer from diabetic foot problems. Mast cells are known to participate in three phases of wound healing: the inflammatory reaction, angiogenesis and extracellular-matrix reabsorption. The inflammatory reaction is mediated by released histamine and arachidonic acid metabolites. Omega-3 fatty acids alter proinflammatory cytokine production during wound healing, which in turn affects the presence of inflammatory cells in the wound area, but how these events specifically influence the presence of mast cells in wound healing is not clearly understood. 
This study was conducted to determine the effect of Omegaven, containing eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), on the pattern of mast cell presence in the diabetic wound area. Diabetic male Wistar rats were euthanized at 1, 3, 5, 7 and 15 days after the excision was made. To estimate the number of mast cells, histological sections were prepared from the wound area and stained with toluidine blue. Wound areas (8400 microscopic fields, 45.69 mm(2)) were examined by stereological methods under a light microscope. We found that, comparing the experimental and control groups, omega-3 fatty acids significantly decreased the wound area on day 7 and also the number of grade three mast cells on days 3 and 5. We also found that wound strength significantly increased in the experimental group at day 15. (C) 2016 Published by Elsevier Ltd on behalf of Tissue Viability Society. It has been reported that carbohydrates confer physicochemical properties to the wound environment that improve tissue repair. We evaluated in vitro and in vivo wound healing during maltodextrin/ascorbic acid treatment. In a fibroblast monolayer scratch assay, we demonstrated that maltodextrin/ascorbic acid stimulated monolayer repair by increasing collagen turnover coordinately with TGF-beta 1 expression (rising TGF-beta 1 and MMP-1 expression, as well as gelatinase activity, while TIMP-1 was diminished), similar to in vivo trends. On the other hand, we observed that venous leg ulcers treated with maltodextrin/ascorbic acid showed a diminished microorganism population and improved wound repair over a 12-week period. When maltodextrin/ascorbic acid treatment was compared with zinc oxide, almost four-fold greater wound closure was evidenced. 
Tissue architecture and granulation were also improved after the carbohydrate treatment, since patients who received maltodextrin/ascorbic acid showed lower type I collagen fiber levels, increased extracellular alkaline phosphatase activity and more blood vessels than those treated with zinc oxide. We hypothesize that maltodextrin/ascorbic acid treatment stimulated tissue repair of chronic wounds by changing the stage of inflammation and modifying collagen turnover directly through the fibroblast response. (C) 2017 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved. Objectives: The aim of this study was to analyse the efficacy and safety of using plasma rich in growth factors (PRGF) as a local treatment for venous ulcers. Methods: In a clinical trial, 102 venous ulcers (58 patients) were randomly assigned to the study group (application of PRGF) or the control group (standard care with saline). For both groups the healed area was calculated before and after the follow-up period (twenty-four weeks). The Kundin method was used to calculate the healed area (Area = Length x Width x 0.785). Pain was measured at the start and end of treatment as a secondary variable for each group, recorded by means of a self-evaluation visual analogue scale. Results: The average percentage healed area in the PRGF group was 67.7 +/- 41.54, compared to 11.17 +/- 24.4 in the control group (P = 0.001). Similarly, in the experimental group a significant reduction in pain occurred on the scale (P = 0.001). No adverse effects were observed in either of the two treatment groups. Conclusions: The study results reveal that application of plasma rich in growth factors is an effective and safe method to speed up healing and reduce pain in venous ulcers. (C) 2016 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved. 
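The Kundin calculation stated in the methods is simple enough to express directly. This sketch applies the formula as given in the abstract; the function names and the percentage-healed helper are my own additions for illustration.

```python
def kundin_area(length, width):
    """Kundin method as stated in the study: Area = Length x Width x 0.785.

    The 0.785 factor is approximately pi/4, i.e. the wound outline is
    approximated as an ellipse inscribed in the length-by-width rectangle.
    """
    return length * width * 0.785

def percent_healed(initial_area, final_area):
    """Percentage of the initial wound area healed over the follow-up period."""
    return 100.0 * (initial_area - final_area) / initial_area
```

For example, a 5 cm x 4 cm ulcer has an estimated area of about 15.7 cm2, and shrinking to half that area corresponds to a 50% healed area.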
Aim of the study: The aim of the study was to evaluate the effect of WaterCell(R) Technology on pressure redistribution and the self-reported comfort and discomfort scores of adults with mobility problems who remain seated for extended periods of time. Methods: Twelve participants were recruited, ranging in gender, age, height, weight, and body mass index. Five were male, seven were female, and five were permanent wheelchair users. Each participant was randomly allocated a chair, whose seat was composed of visco-elastic memory foam, high-elastic reflex foam, and water cells, to trial for a week. Data collected on day one and day seven included: interface pressure measurements taken across the gluteal region (peak and average); physiological observations of respiratory rate, pulse rate, and blood pressure; skin inspection; and comfort and discomfort scores. Results: WaterCell(R) technology was found to offer lower average pressures than those reported to cause potential skin injury. Peak pressure index findings were comparable to those of other studies. No correlation was found between discomfort intensity rating and pressure redistribution. Discomfort intensity rating was low for all participants, and general discomfort ranged from very low to medium. Physiological observations decreased for 50% of participants over the seven days. Conclusion: From our study we found that WaterCell(R) technology offers comparable pressure redistribution for people with a disability who need to sit for prolonged periods of time, and the chairs were found to be comfortable. Crown Copyright (C) 2016 Published by Elsevier Ltd on behalf of Tissue Viability Society. All rights reserved. Background: Pressure Ulcers (PUs) are a severe form of skin and soft tissue lesion, caused by sustained deformation. PU development is complex and depends on different factors. 
Skin structure and function change during prolonged loading on PU predilection sites, and surfaces in direct contact with the skin are likely to have an impact as well. Little is known about the influence of fabrics on skin function under pressure conditions. Objectives: To investigate skin responses to sustained loading in a sitting position and possible differences between two fabrics. Methods: Under controlled conditions, 6 healthy females (median age 65.0 (61.0-67.8) years) followed a standardized immobilization protocol of a sitting position for 45 min on a spacer and on a cotton fabric. Before and after the loading period, skin surface temperature, stratum corneum hydration, transepidermal water loss (TEWL), erythema, skin elasticity and 'relative elastic recovery' were measured at the gluteal areas. Results: A 45 min sitting period caused increases of skin surface temperature and erythema independent of the fabric. Loading on the spacer fabric showed a twofold higher increase of TEWL compared to cotton. Stratum corneum hydration showed slight changes after loading; skin elasticity and 'relative elastic recovery' remained stable. Conclusions: Sitting on a hard surface causes skin barrier changes at the gluteal skin in terms of stratum corneum hydration and TEWL. These changes are influenced by the fabric which is in direct contact with the skin. There seems to be a dynamic interaction between skin and fabric properties, especially in terms of temperature and humidity accumulation and transport. (C) 2016 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved. CoQ(10) is ubiquitously present in eukaryotic cells. It acts as an electron carrier in the electron transport chain of the inner membrane of the mitochondria to facilitate aerobic cellular respiration. A highly stable lipid nanodispersion formulation containing CoQ(10) (BPM31510) is currently in clinical investigation for the treatment of cancer. 
This study was designed to determine whether biophysical interactions between CoQ(10) and lipid, in part, explain the observed stability and cellular accumulation of CoQ(10) in cells and tissues. A lipid monolayer at the air-water interface was used as an experimental membrane model to measure CoQ(10) penetration and solubility. Lipid monolayers with varying proportions of CoQ(10) were laterally compressed to measure CoQ(10) miscibility and lateral organization. Additionally, lipid monolayers with varying lateral packing densities were spread at the air-water interface and CoQ(10) was injected in proximity to measure its rate of penetration. Our results demonstrate that CoQ(10) selectively penetrates into lipid monolayers with a lower lateral packing density, and is excluded by monolayers of higher packing densities. The data also indicate that CoQ(10)-lipid mixing is non-ideal. CoQ(10) presence in lipid monolayers is biphasic, with one phase occupying the interstitial space between the DMPC lipids and the other present as pure CoQ(10) domains. This work provides further insight into the mechanism of action of CoQ(10)-based formulations that can significantly increase intracellular CoQ(10) concentration to exert pleiotropic effects on cellular functions. (C) 2017 BERG, LLC. Published by Elsevier B.V. beta-Barrel membrane proteins (beta MPs) form barrel-shaped pores in the outer membrane of Gram-negative bacteria, mitochondria, and chloroplasts. Because of the robustness of their barrel structures, beta MPs have great potential as nanosensors for single-molecule detection. However, natural beta MPs currently employed have inflexible biophysical properties and are limited in their pore geometry, hindering their applications in sensing molecules of different sizes and properties. Computational engineering holds promise for generating beta MPs with desired properties. 
Here we report a method for engineering novel beta MPs based on the discovery of sequence motifs that predominantly interact with the cell membrane and appear in more than 75% of transmembrane strands. By replacing beta 1-beta 6 strands of the protein OmpF that lack these motifs with beta 1-beta 6 strands of OmpG enriched with these motifs, and computationally verifying the increased stability of its transmembrane section, we engineered a novel beta MP called OmpGF. OmpGF is predicted to form a monomer with a stable transmembrane region. Experimental validations showed that OmpGF could refold in vitro with a predominant beta-sheet structure, as confirmed by circular dichroism. Evidence of OmpGF membrane insertion was provided by intrinsic tryptophan fluorescence spectroscopy, and its pore-forming property was determined by a dye-leakage assay. Furthermore, single-channel conductance measurements confirmed that OmpGF functions as a monomer and exhibits higher conductance than OmpG and OmpF. These results demonstrate that a novel and functional beta MP can be successfully engineered through strand replacement based on sequence motif analysis and stability calculation. (C) 2017 Elsevier B.V. All rights reserved. Using a combination of coarse-grained and atomistic molecular dynamics simulations, we have investigated the membrane binding and folding properties of the membrane lytic peptide of Flock House virus (FHV). FHV is an animal virus and an excellent model system for studying cell entry mechanisms in non-enveloped viruses. FHV undergoes a maturation event in which the 44 C-terminal amino acids are cleaved from the major capsid protein, forming the membrane lytic (gamma) peptides. Under acidic conditions, gamma is released from the capsid interior, allowing the peptides to bind and disrupt membranes. The first 21 N-terminal residues of gamma, termed gamma(1), have been resolved in the FHV capsid structure, and gamma(1) has been the subject of in vitro studies. 
gamma(1) is structurally dynamic: it adopts helical secondary structure inside the capsid and on membranes, but is disordered in solution. In vitro studies have shown that the binding free energies to POPC or POPG membranes are nearly equivalent, but binding to POPC is enthalpically driven, while POPG binding is entropically driven. Through coarse-grained and multiple microsecond all-atom simulations, the membrane binding and folding properties of gamma(1) are investigated against homogeneous and heterogeneous bilayers to elucidate the dependence of the structural properties of gamma(1) on the microenvironment. Our studies provide a rationale for the thermodynamic data and suggest that gamma(1) binds POPG bilayers in a disordered state, but must adopt a helical conformation when binding POPC bilayers. (C) 2017 Elsevier B.V. All rights reserved. The human phospholipid scramblase 1 (SCR) distributes lipids non-selectively between the cellular membrane leaflets. SCR has long been thought to be mostly localized in the cytoplasm (amino acids 1-287) and anchored to the membrane via the insertion of a 19-amino-acid-long transmembrane C-terminal helix (CTH, 288-306), which further extends to the exoplasmic side with a 12-amino-acid-long tail (307-318). Little is known about the structure of this protein, but recent experimental data on two CTH peptides (288-306 and 288-318) show that they insert through phospholipid bilayers and that the presence of cholesterol improves their affinity for lipid vesicles. Yet the sequence of the CTH ((288)KMKAVMIGACFLIDFMFFE(306)) contains an aspartic acid (D301), which is not exactly a prototypical amino acid for single-pass transmembrane helices. In this study, we investigate how the polar aspartate residue is accommodated in lipid bilayers containing POPC with and without cholesterol, using all-atom molecular dynamics simulations. 
We identify two cholesterol-binding sites, (i) A291, F298 and L299 and (ii) L299, F302 and E306, and suggest that cholesterol plays a role in stabilizing the helix in a transmembrane position. We suggest that the presence of the aspartate could be functionally relevant for the scramblase protein activity. (C) 2017 Elsevier B.V. All rights reserved. Upon uptake of Hg and Cd into living systems, possible targets for metal-induced toxicity include the membranes surrounding nervous, cardiovascular and renal cells. To further our understanding of the interactions of Hg and Cd with different lipid structures under physiologically relevant chloride and pH conditions (100 mM NaCl, pH 7.4), we used fluorescence spectroscopy and dynamic light scattering to monitor changes in membrane fluidity, phase transition and liposome size. The metal effects were studied on zwitterionic, cationic and anionic lipids to elucidate electrostatically driven metal-lipid interactions. The effect of Hg-catalyzed cleavage of the vinyl ether bond in plasmalogens on these aforementioned properties was studied, in addition to a thermodynamic characterization of this interaction by isothermal titration calorimetry. The negatively charged Hg-chloride complexes formed under our experimental conditions induce rigidity in membranes containing cationic lipids and plasmalogens, while this effect is heavily reduced with zwitterionic lipids and entirely absent with anionic lipids. The K-D for the interaction of Hg with plasmalogen-containing liposomes was between 4 and 30 mu M. Furthermore, the presence of Cd affected the interaction of Hg with plasmalogen when negatively charged PS was also present. In this case, even the order of the metal addition was important. (C) 2017 Elsevier B.V. All rights reserved. 
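The micromolar K-D range reported above can be put in perspective with the standard 1:1 binding relation, fraction bound = [L] / (K-D + [L]); this is a generic illustration, not the binding model fitted in the study, and the concentrations are hypothetical.

```python
# Minimal sketch of 1:1 binding occupancy for a given dissociation constant.
# Illustrative only: the ITC analysis above may use a different model.

def fraction_bound(ligand_uM: float, kd_uM: float) -> float:
    """Fraction of sites occupied at free ligand concentration [L]."""
    return ligand_uM / (kd_uM + ligand_uM)

# At [Hg] equal to K_D, occupancy is exactly one half:
print(fraction_bound(30.0, 30.0))  # -> 0.5
# A tighter K_D of 4 uM is mostly saturated at the same concentration:
print(round(fraction_bound(30.0, 4.0), 2))  # -> 0.88
```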
The bilayer phase transitions of four diacylphosphatidylethanolamines (PEs) with matched saturated acyl chains (C-n = 12, 14, 16 and 18) and two PEs with matched unsaturated acyl chains containing different kinds of double bonds were observed by differential scanning calorimetry under atmospheric pressure and by light-transmittance measurements under high pressure. The temperature-pressure phase diagrams for these PE bilayer membranes were constructed from the obtained phase-transition data. The saturated PE bilayer membranes underwent two different phase transitions related to the liquid crystalline (L-alpha) phase, the transition from the hydrated crystalline (L-c) phase and the chain-melting (gel (L-beta) to L-alpha) transition, depending on the thermal history. Pressure altered the gel-phase stability of the bilayer membranes of PEs with longer chains at a low pressure. Comparing the thermodynamic quantities of the saturated PE bilayer membranes with those of diacylphosphatidylcholine (PC) bilayer membranes, the PE bilayer membranes showed higher phase-transition temperatures and formed a more stable L-c phase, which originates from the strong interaction between the polar head groups of PE molecules. On the other hand, the unsaturated PE bilayer membranes underwent the transition from the L-alpha phase to the inverted hexagonal (H-II) phase at a high temperature, and this transition showed a small transition enthalpy but high pressure-responsivity. It turned out that the kind of double bond markedly affects both bilayer-bilayer and bilayer-nonbilayer transitions and that the L-alpha/H-II transition is a volume-driven transition for the reconstruction of molecular packing. Further, the phase-transition behavior was explained by chemical potential curves of the bilayer phases. (C) 2017 Elsevier B.V. All rights reserved. 
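The link between a small transition enthalpy and high pressure-responsivity noted above follows from the Clausius-Clapeyron relation for a first-order transition, dT/dp = T * dV / dH. The sketch below uses illustrative order-of-magnitude values, not thermodynamic quantities from the study.

```python
# Hedged sketch: slope of a boundary in a temperature-pressure phase diagram.
# Clausius-Clapeyron: dT/dp = T * dV / dH (K per Pa); illustrative numbers only.

def boundary_slope_K_per_MPa(T_K: float, dV_m3_mol: float, dH_J_mol: float) -> float:
    """Clausius-Clapeyron slope dT/dp, converted from K/Pa to K/MPa."""
    return T_K * dV_m3_mol / dH_J_mol * 1e6

# Chain-melting-like transition: large enthalpy, moderate volume change.
chain_melting = boundary_slope_K_per_MPa(330.0, 3.0e-5, 25000.0)
# L-alpha/H-II-like transition: small enthalpy, hence a steeper slope.
bilayer_nonbilayer = boundary_slope_K_per_MPa(360.0, 1.5e-5, 2500.0)
print(bilayer_nonbilayer > chain_melting)  # -> True
```

With a comparable volume change, dividing by a tenfold smaller enthalpy makes the bilayer-nonbilayer boundary shift far more per unit pressure, consistent with the reported pressure-responsivity.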
G-protein-gated inwardly rectifying potassium (GIRK or Kir3) channels play a major role in the control of the heart rate, and require the membrane phospholipid phosphatidylinositol-bis-phosphate (PI(4,5)P-2) for activation. Recently, we have shown that the activity of the heterotetrameric Kir3.1/Kir3.4 channel that underlies atrial K-ACh currents was enhanced by cholesterol. Similarly, the activities of both the Kir3.4 homomer and its active pore mutant Kir3.4* (Kir3.4_S143T) were also enhanced by cholesterol. Here we employ planar lipid bilayers to investigate the crosstalk between PI(4,5)P-2 and cholesterol, and demonstrate that these two lipids act synergistically to activate Kir3.4* currents. Further studies using the Xenopus oocyte heterologous expression system suggest that PI(4,5)P-2 and cholesterol act via distinct binding sites. Whereas PI(4,5)P-2 binds to the cytosolic domain of the channel, the putative binding region of cholesterol is located at the center of the transmembrane domain, overlapping the central glycine hinge region of the channel. Together, our data suggest that changes in the levels of two key membrane lipids - cholesterol and PI(4,5)P-2 - could act in concert to provide fine-tuning of Kir3 channel function. (C) 2017 Elsevier B.V. All rights reserved. Nonspecific interactions between lipids and fluorophores can alter the outcomes of single-molecule spectroscopy of membrane proteins in live cells, liposomes or lipid nanodiscs, and of cytosolic proteins encapsulated in liposomes or tethered to supported lipid bilayers. To gain insight into these effects, we examined interactions between 9 dyes that are commonly used as labels for single-molecule fluorescence (SMF) and 6 standard lipids, including cationic, zwitterionic and anionic types. The diffusion coefficients of the dyes in the absence and presence of set amounts of lipid vesicles were measured by fluorescence correlation spectroscopy (FCS). 
The partition coefficients and the free energies of partitioning for different fluorophore-lipid pairs were obtained by global fitting of the titration FCS curves. Lipids with different charges, head groups and degrees of chain saturation were investigated, and interactions with dyes are discussed in terms of hydrophobic, electrostatic and steric contributions. Fluorescence imaging of individual fluorophores adsorbed on supported lipid bilayers provides visualization and additional quantification of the strength of dye-lipid interaction in the context of single-molecule measurements. By dissecting fluorophore-lipid interactions, our study provides new insights into setting up single-molecule fluorescence spectroscopy experiments with minimal interference from interactions between fluorescent labels and lipids in the environment. (C) 2017 Elsevier B.V. All rights reserved. Staphylococcus epidermidis is the most frequent cause of biofilm-mediated implant-associated infections. Extracellular polymeric substance (EPS) is a key component of most biofilms; in pathogens it specifically protects the entrenched bacterial cells from antibiotics and the host's immune response, and thereby makes the infection ineradicable. Recently, the prominence of cyclic dipeptides in interfering with biofilms and the associated virulence factors of pathogens has offered an alternative to eliminate difficult-to-treat infections. Therefore, we assessed the effect of a potent antibiofilm cyclic dipeptide, cyclo(L-leucyl-L-prolyl) (CLP), on the EPS modification of S. epidermidis. The non-bactericidal antibiofilm efficacy of CLP against S. epidermidis was affirmed through quantitative (crystal violet and XTT assays) and qualitative (confocal and scanning electron microscopy) analyses. Notably, CLP was potent enough to reduce all the EPS components, viz. polysaccharides, proteins and eDNA, to a significant level. 
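The conversion from a fitted partition coefficient to a free energy of partitioning, as used in the FCS analysis above, is the standard thermodynamic relation dG = -RT ln K. The sketch below uses a hypothetical dimensionless partition coefficient, not a value from the study.

```python
# Minimal sketch: free energy of partitioning from a partition coefficient.
# K_p is a hypothetical dimensionless (e.g. mole-fraction) coefficient.
import math

R = 8.314  # gas constant, J / (mol K)

def partition_free_energy(K_p: float, T: float = 298.15) -> float:
    """Standard free energy of partitioning in kJ/mol: dG = -RT ln(K_p)."""
    return -R * T * math.log(K_p) / 1000.0

print(round(partition_free_energy(1.0e4), 1))  # hypothetical K_p = 10^4 -> -22.8
```

A larger partition coefficient (stronger dye-lipid association) gives a more negative free energy, which is the direction of the hydrophobic and electrostatic contributions discussed above.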
A substantial difference in the atomic composition and functionality of CLP-treated EPS was evident through X-ray photoelectron spectroscopy. Furthermore, CLP dehydrated the S. epidermidis EPS and altered the acetylated sugars as well as the alpha-glycosidic linkage in it. The results of cyclic voltammetry (CV) indicate a decrease of the total negative charge of EPS upon CLP treatment, consistent with the decrease of eDNA. Thus, the antibiofilm efficacy of CLP lies in its potency to alter the intrinsic functional groups and charge of secreted EPS. (C) 2017 Elsevier B.V. All rights reserved. Saponins are a diverse family of naturally occurring plant triterpene or steroid glycosides that have a wide range of biological activities. They have been shown to permeabilize membranes, and in some cases membrane disruption has been hypothesized to involve saponin/cholesterol complexes. We have examined the interaction of steroidal saponin 1688-1 with lipid membranes that contain cholesterol and have a mixture of liquid-ordered (L-o) and liquid-disordered (L-d) phases as a model for lipid rafts in cellular membranes. A combination of atomic force microscopy (AFM) and fluorescence was used to probe the effect of saponin on the bilayer. The results demonstrate that saponin forms defects in the membrane and also leads to the formation of small aggregates on the membrane surface. Although most of the membrane damage occurs in the liquid-disordered phase, fluorescence results demonstrate that saponin localizes in both ordered and disordered membrane phases, with a modest preference for the disordered regions. Similar effects are observed both for direct incorporation of saponin in the lipid mixture used to make vesicles/bilayers and for incubation of saponin with preformed bilayers. The results suggest that the initial sites of interaction are at the interface between the domains and the surrounding disordered phase. 
The preference for saponin localization in the disordered phase may reflect the ease of penetration of saponin into a less ordered membrane, rather than the actual cholesterol concentration in the membrane. Dye-leakage assays indicate that a high concentration of saponin is required for membrane permeabilization, consistent with the supported lipid bilayer experiments. Crown Copyright (C) 2017 Published by Elsevier B.V. All rights reserved. Electric field pulses of nano- and picosecond duration are a novel modality for neurostimulation, activation of Ca2+ signaling, and tissue ablation. However, it is not known how such brief pulses activate voltage-gated ion channels. We studied excitation and electroporation of hippocampal neurons by a 200-ns pulsed electric field (nsPEF), by means of time-lapse imaging of the optical membrane potential (OMP) with FluoVolt dye. Electroporation abruptly shifted the OMP to a more depolarized level, which was reached within <1 ms. The OMP recovery started rapidly (tau = 8-12 ms) but gradually slowed down (to tau > 10 s), so cells remained above the resting OMP level for at least 20-30 s. Activation of voltage-gated sodium channels (VGSC) enhanced the depolarizing effect of electroporation, resulting in an additional tetrodotoxin-sensitive OMP peak 4-5 ms after nsPEF. Omitting Ca2+ in the extracellular solution did not reduce the depolarization, suggesting no contribution of voltage-gated calcium channels (VGCC). In 40% of neurons, nsPEF triggered a single action potential (AP), with a median threshold of 3 kV/cm (range: 1.9-4 kV/cm); no APs could be evoked by stimuli below the electroporation threshold (1.5-1.9 kV/cm). VGSC opening could already be detected 0.5 ms after nsPEF, which is too fast to be mediated by the depolarizing effect of electroporation. 
The overlap of electroporation and AP thresholds does not necessarily reflect a causal relation, but suggests a low potency of nsPEF, as compared to conventional electrostimulation, for VGSC activation and AP induction. (C) 2017 Elsevier B.V. All rights reserved. In this paper a simple prediction method for the bipolar pulse cancellation effect is proposed, based on frequency analysis of the transmembrane potential (TMP) spectra of a single cell and the computed relative global spectral content up to a defined frequency threshold. We present a spectral analysis of pulses applied in experiments, and we extract the induced TMP from a microdosimetric model of the cell. The induced TMP computation is carried out on a hemispherical multi-layered cell model in the time domain. The analysis is presented for a variety of unipolar and bipolar input signals on the nanosecond and microsecond time scales. Our evaluations are in good agreement with experimental results for bipolar pulse cancellation of electropermeabilization-induced Ca2+ influx using 300 ns, 750 kV/m pulses and with other results reported in recent literature. (C) 2017 Elsevier B.V. All rights reserved. The final topology of membrane proteins is thought to be dictated primarily by the encoding sequence. However, according to the Charge Balance Rule, the topogenic signals within nascent membrane proteins are interpreted in agreement with the Positive Inside Rule as influenced by the protein's phospholipid environment. The role of long-range protein-lipid interactions in establishing a final uniform or dual topology is unknown. In order to address this role, we determined the positional dependence of the potency of charged residues as topological signals within Escherichia coli sucrose permease (CscB) in cells in which the zwitterionic phospholipid phosphatidylethanolamine (PE), acting as a topological determinant, was either eliminated or tightly titrated. 
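The "relative spectral content up to a frequency threshold" idea described above can be sketched numerically: take the power spectrum of a pulse and compute the fraction of total power below a cutoff. The pulse shapes, durations and cutoff below are illustrative, not the waveforms or model of the study.

```python
# Hedged sketch of relative low-frequency spectral content for a unipolar
# versus a bipolar pulse.  Illustrative waveforms only.
import numpy as np

def low_freq_fraction(signal: np.ndarray, dt: float, f_cut: float) -> float:
    """Fraction of total spectral power at frequencies <= f_cut (Hz)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), dt)
    return power[freqs <= f_cut].sum() / power.sum()

dt = 1e-9                                             # 1 ns sampling step
t = np.arange(2000) * dt                              # 2 us analysis window
unipolar = ((t >= 0) & (t < 300e-9)).astype(float)    # single 300 ns pulse
bipolar = unipolar - ((t >= 300e-9) & (t < 600e-9))   # + phase then - phase

# The bipolar pulse has no DC component, so its low-frequency content is
# smaller -- qualitatively consistent with the cancellation effect.
print(low_freq_fraction(unipolar, dt, 1e6) > low_freq_fraction(bipolar, dt, 1e6))  # -> True
```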
Although the position of a single or paired oppositely charged amino acid residues within an extramembrane domain (EMD), whether proximal, central or distal to a transmembrane domain (TMD) end, does not appear to be important, the oppositely charged residues exert their topogenic effects separately only in the absence of PE. Thus, the Charge Balance Rule can be executed in a retrograde manner from any cytoplasmic EMD or any residue within an EMD, most likely outside of the translocon. Moreover, CscB is inserted into the membrane in two opposite orientations at different ratios, with the native orientation proportional to the mol % of PE. The results demonstrate how the cooperative contribution of lipid-protein interactions affects the potency of charged residues as topological signals, providing a molecular mechanism for the realization of single, equal or different amounts of oppositely oriented protein within the same membrane. (C) 2017 Elsevier B.V. All rights reserved. Three-dimensional (3D) electrodes are successfully used to overcome the limitations of the low space-time yield and low normalized space velocity obtained in electrochemical processes with two-dimensional electrodes. In this study, we developed a three-dimensional reticulated vitreous carbon-gold (RVC-Au) sponge as a scaffold for enzymatic fuel cells (EFC). The structure of the gold and the real electrode surface area can be controlled by the parameters of metal electrodeposition. In particular, a 3D RVC-Au sponge provides a large accessible surface area for immobilization of enzymes and electron mediators; moreover, effective mass diffusion can also take place through the uniform macro-porous scaffold. To efficiently bind the enzyme to the electrode and enhance electron transfer parameters, the gold surface was modified with ultrasmall gold nanoparticles stabilized with glutathione. 
These quantum-sized nanoparticles exhibit specific electronic properties and also expand the working surface of the electrode. Significantly, at the steady state of power generation, the EFC device with RVC-Au electrodes provided a high volumetric power density of 1.18 +/- 0.14 mW cm(-3) (41.3 +/- 3.8 W cm(-2)) calculated based on the volume of electrode material, with an OCV of 0.741 +/- 0.021 V. These new 3D RVC-Au electrodes show great promise for improving the power generation of EFC devices. In this work, we designed a highly sensitive and selective luminescent detection method for alkaline phosphatase using bovine serum albumin-functionalized gold nanoclusters (BSA-AuNCs) as the nanosensor probe and pyridoxal phosphate (PLP) as the substrate of alkaline phosphatase. We found that PLP can quench the fluorescence of BSA-AuNCs, whereas pyridoxal has little effect on the fluorescence of BSA-AuNCs. The proposed mechanism of fluorescence quenching by PLP was explored on the basis of data obtained from high-resolution transmission electron microscopy (HRTEM), dynamic light scattering (DLS), UV-vis spectrophotometry, fluorescence spectroscopy, fluorescence decay time measurements and circular dichroism (CD) spectroscopy. Alkaline phosphatase catalyzes the hydrolysis of PLP to generate pyridoxal, restoring the fluorescence of BSA-AuNCs. Therefore, a recovery-type approach has been developed for the sensitive detection of alkaline phosphatase in the range of 1.0-200.0 U/L (R-2 = 0.995) with a detection limit of 0.05 U/L. The proposed sensor exhibits excellent selectivity among various enzymes, such as glucose oxidase, lysozyme, trypsin, papain, and pepsin. The present switch-on fluorescence sensing strategy for alkaline phosphatase was successfully applied in human serum plasma with good recoveries (100.60-104.46%), revealing that this nanosensor probe is a promising tool for ALP detection. 
A thionine (TH)-doped mesoporous silica nanosphere (MSN)/polydopamine (PDA) nanocomposite was synthesized to develop a new signal transduction strategy for electrochemical immunoassay. This nanocomposite was synthesized through one-pot loading of TH and in situ formation of a PDA coating on MSN. After antibody labeling, the obtained nanoprobe was used for the signal tracing of a sandwich immunoassay on a magnetic-bead assay platform. Based on the specific capture of the MSN-TH/PDA nanoprobes through sandwich immunoreaction to form a magnetic immunocomplex, and the subsequent treatment of the immunocomplex with a NaOH solution, the PDA film coated on MSN was destroyed to release the TH tags from the nanoprobe. The released TH was then electrochemically measured at a carbon nanotube (CNT)-modified electrode for the signal transduction of the method. Due to the high loading of TH on the nanoprobe for tag release and the effective electrochemical signal enhancement by the electrode modification with CNTs, ultrahigh sensitivity was achieved. Using human IgG as a model analyte, this method showed a wide linear range over four orders of magnitude and a low detection limit of 5.8 pg/mL. Additionally, the method has excellent specificity, satisfactory reproducibility and stability, as well as acceptable reliability. Due to the simple preparation of the nanoprobe and the low cost and convenient operation of the detection strategy, this nonenzymatic immunoassay method possesses great potential for practical applications. An efficient near-infrared fluorescence probe has been developed for the sequential detection of Cu2+, pyrophosphate (P2O74-, PPi), and alkaline phosphatase (ALP), based on the "off-on-off" fluorescence switch of branched polyethyleneimine (PEI)-capped NaGdF4:Yb/Tm upconversion nanoparticles (UCNPs). The fluorescence is quenched via energy transfer from the UCNPs to Cu2+ owing to the coordination of PEI with Cu2+. 
The strong affinity between Cu2+ and PPi leads to the formation of a Cu2+-PPi complex and results in the detachment of Cu2+ from the surface of the UCNPs, so the fluorescence is switched on. ALP-directed hydrolysis of PPi causes the disassembly of the Cu2+-PPi complex and re-conjugation of Cu2+ with PEI, which switches the fluorescence of the UCNPs off again. The system allows sequential analysis of Cu2+, PPi, and ALP by modulating the switching of the UCNP fluorescence, with detection limits of 57.8 nM, 184 nM, and 0.019 U/mL for Cu2+, PPi, and ALP, respectively. By virtue of the MR feature and excellent biocompatibility, the UCNP-based probes are suitable for bioimaging. Taking Cu2+ visualization as a model, the nanoprobes have been successfully applied for intracellular imaging of Cu2+ in living cells. Here, a novel potential-resolved "in-electrode" type electrochemiluminescence (ECL) immunosensor was fabricated based on two different luminophores, Ru-NH2 and AuNPs/g-C3N4, to realize simultaneous detection of dual targets. In this strategy, anti-CA125(1) and anti-SCCA(1) were immobilized on a bare gold electrode as capture probes, which could capture the two corresponding targets, CA125 and SCCA, and the immobilization of the signal tags was allowed via the interaction between antigen and antibody. In this process, (Ru & anti-CA125(2))@GO and anti-SCCA(2)-AuNPs/g-C3N4 could exhibit two strong and stable ECL emissions, at 1.25 V and -1.3 V respectively, which could be used as effective signal tags. Taking advantage of the "in-electrode" type ECL immunosensor, all the electrochemiluminophores near the outer Helmholtz plane are "effective" in participating in the electrochemical reactions and emitting ECL signals. Therefore, the dual targets CA125 and SCCA could be detected within the linear ranges of 0.001-100 U/mL and 0.001-100 ng/mL, with detection limits of 0.4 mU/mL and 0.33 pg/mL, respectively. 
All these results demonstrated that the present potential-resolved "in-electrode" type electrochemiluminescence approach provides a promising analytical method for dual-target analysis, with the advantages of a simple analytical procedure, small sample volume and low cost, making the proposed method promising for clinical detection. A new photoelectrochemical (PEC) immunosensor based on Mn-doped CdS quantum dots (CdS:Mn QDs) on g-C3N4 nanosheets was developed for the sensitive detection of prostate-specific antigen (PSA) in biological fluids. The signal derived from the CdS:Mn QDs-functionalized g-C3N4 nanohybrids, synthesized via a hydrothermal method, and was amplified through DNAzyme concatamers on gold nanoparticles accompanied by enzymatic biocatalytic precipitation. Experimental results from UV-vis absorption spectra and photoluminescence revealed that the CdS:Mn QDs/g-C3N4 nanohybrids exhibited a higher photocurrent than CdS:Mn QDs or g-C3N4 alone. Upon addition of target PSA, a sandwich-type immunoreaction was carried out between capture antibodies and the labeled detection antibodies. Upon introduction of the gold nanoparticles, the labeled initiator strands on the AuNPs triggered a hybridization chain reaction and the formation of DNAzyme concatamers in the presence of hemin. The formed DNAzyme catalyzed 4-chloro-1-naphthol (4-CN) to produce an insoluble/insulating precipitate on the CdS:Mn QDs/g-C3N4 and blocked the light harvesting of the CdS:Mn QDs/g-C3N4, thus resulting in a decreasing photocurrent. Under optimal conditions, the immunosensor exhibited good photocurrent responses for determination of target PSA, and allowed detection of PSA at a concentration as low as 3.8 pg mL(-1). The specificity, reproducibility and precision of this system were acceptable. Significantly, this methodology was further evaluated for analyzing human serum samples, giving results that matched well with the reference PSA enzyme-linked immunosorbent assay (ELISA) method. 
Quantum dots (QDs) have attracted extensive attention in biomedical applications because of their broad excitation spectra and narrow, symmetric emission peaks. Furthermore, near-infrared (NIR) QDs have further advantages including low autofluorescence, good tissue penetration and low phototoxicity. In this work, the electrostatic interaction and fluorescence resonance energy transfer (FRET) between NIR CdTe/CdS QDs and a cationic conjugated polymer (CCP) was studied for the first time. Based on the newly discovered phenomena and the finding that hydrogen peroxide (H2O2) can efficiently quench the fluorescence of NIR CdTe/CdS QDs, a novel NIR ratiometric fluorescent probe for the determination of H2O2 and glucose was developed. Under the optimized conditions, the detection limits of the H2O2 and glucose assays were 0.1 mM and 0.05 mM (S/N = 3), with linear ranges of 0.2-4 mM and 0.1-5 mM, respectively. Because of the NIR spectrum, this ratiometric probe can also be applied for the determination of glucose in whole blood samples directly, providing a valuable platform for glucose sensing in clinical diagnostics and drug screening. In this work, a Love wave biosensing platform is described for detecting the cancer-related biomarker carcinoembryonic antigen (CEA). An ST 90 degrees-X quartz Love wave device with a SiO2 waveguide layer was combined with gold nanoparticles (Au NPs) to amplify the mass-loading effect of the acoustic wave sensor to achieve a limit of detection of 37 pg/mL. The strategy involves modifying the Au NPs with anti-CEA antibody conjugates to form nanoprobes in a sandwich immunoassay. The unamplified detection limit of the Love wave biosensor is 9.4 ng/mL. This 2-3 order-of-magnitude reduction in the limit of detection brings the SAW platform into the range useful for clinical diagnosis. 
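The S/N = 3 convention quoted for the detection limits above is the standard analytical-chemistry rule LOD = 3 * (standard deviation of the blank) / (calibration slope). The sketch below uses hypothetical blank noise and slope values, not numbers from the study.

```python
# Minimal sketch of the S/N = 3 detection-limit convention.
# sd_blank and slope are hypothetical; units of the result follow the
# units used in the calibration (signal per concentration).

def detection_limit(sd_blank: float, slope: float) -> float:
    """Lowest concentration whose signal is 3x the blank noise level."""
    return 3.0 * sd_blank / slope

print(round(detection_limit(0.02, 1.2), 4))  # -> 0.05
```

A noisier blank or a shallower calibration slope both raise the detection limit, which is why signal-amplification schemes like the ones in these abstracts lower it.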
Measurement electronics and microfluidics are easily constructed for acoustic wave biosensors, such as the Love wave device described here, allowing for robust platforms for point-of-care applications for cancer biomarkers in general. Clozapine is one of the most promising medications for managing schizophrenia, but it is under-utilized because of the challenges of maintaining serum levels in a safe therapeutic range (1-3 mu M). Timely measurement of serum clozapine levels has been identified as a barrier to the broader use of clozapine, but such measurement is challenging due to the complexity of serum samples. We demonstrate a robust and reusable electrochemical sensor with a graphene-chitosan composite for rapidly measuring serum levels of clozapine. Our electrochemical measurements in clinical serum from clozapine-treated and clozapine-untreated schizophrenia groups correlate well with centralized laboratory analysis, both for the readily detected uric acid and for clozapine, which is present at 100-fold lower concentration. The benefits of our electrochemical measurement approach for serum clozapine monitoring are: (i) rapid measurement (approximate to 20 min) without serum pretreatment; (ii) appropriate selectivity and sensitivity (limit of detection 0.7 mu M); (iii) reusability of an electrode over several weeks; and (iv) rapid reliability testing to detect common error-causing problems. This simple and rapid electrochemical approach for serum clozapine measurement should provide clinicians with the timely point-of-care information required to adjust dosages and personalize the management of schizophrenia. Neuron-specific enolase (NSE) has clinical significance for the diagnosis, staging, treatment monitoring and prognosis of small cell lung cancer. Thus, there is a growing demand for on-site testing of NSE. Here, a wireless point-of-care testing (POCT) system with electrochemical measurement for NSE detection was developed and verified. 
The wireless POCT system consisted of microfluidic paper-based analytical devices (mu PADs), an electrochemical detector and an Android smartphone. Differential pulse voltammetry (DPV) measurement was adopted by means of the electrochemical detector, which includes a potentiostat and a current-to-voltage converter. The mu PADs were modified with nanocomposites synthesized from amino-functional graphene, thionine and gold nanoparticles (NH2-G/Thi/AuNPs) as immunosensors for NSE detection. Combined with the mu PADs, the performance of the wireless POCT system was evaluated. The peak currents showed a good linear relationship with the logarithm of NSE concentration over the range 1 to 500 ng mL(-1), with a limit of detection (LOD) of 10 pg mL(-1). The detection results were automatically stored in EEPROM memory and could be displayed on the Android smartphone through Bluetooth in real time. The detection results were comparable to those measured by a commercial electrochemical workstation. The wireless POCT system has potential for the on-site testing of other tumor markers. Rapid and reliable diagnosis of methicillin-resistant Staphylococcus aureus (MRSA) is crucial for guiding effective patient treatment and preventing the spread of MRSA infections. Nonetheless, further simplification of MRSA detection procedures, to shorten detection time and reduce labor relative to conventional methods, remains a challenge. Here, we have demonstrated a clustered regularly interspaced palindromic repeats (CRISPR)-mediated DNA-FISH method for the simple, rapid and highly sensitive detection of MRSA; this method uses a CRISPR-associated protein 9/single-guide RNA (dCas9/sgRNA) complex as the targeting material and SYBR Green I (SG I) as a fluorescent probe. The dCas9/sgRNA-SG I based detection approach has advantages over the monoclonal antibodies of conventional immunoassay systems due to its ability to interact with the target gene in a sequence-specific manner. 
The detection limit for MRSA was as low as 10 cfu/ml, which was found to be sufficient to effectively detect MRSA. Unlike conventional gene diagnosis methods, in which PCR must be performed or genes must be isolated and analyzed, the target gene can be detected in cell lysates within 30 min with high sensitivity and without a gene separation step. We showed that the fluorescence signal of the MRSA cell lysate was more than 10-fold higher than that of methicillin-susceptible S. aureus (MSSA). Importantly, the present approach can be applied to any target other than MRSA by simply changing the single-guide RNA (sgRNA) sequence. Because the dCas9/sgRNA-SG I based detection approach has proved to be easy, fast, sensitive and cost-efficient, it can be applied directly at the point of care to detect various pathogens in addition to the MRSA targeted in this study. Genetically modified organisms have entered our food chain, and the detection of these organisms in market products remains a major challenge for scientists. Among the detection/quantification methods developed for these organisms, electrochemical nanobiosensors have attracted the most attention, combining the advantages of nanomaterials, electrochemical methods and biosensors. In this research, a novel and sensitive electrochemical nanobiosensor for the detection/quantification of these organisms has been developed using nanomaterials (exfoliated graphene oxide and gold nano-urchins) to modify the screen-printed carbon electrode, together with a specific DNA probe and hematoxylin as the electrochemical indicator. The application times and concentrations of the components were optimized, and several reliable methods (e.g. field emission scanning electron microscopy, cyclic voltammetry and electrochemical impedance spectroscopy) were used to assess the correct assembly of the nanobiosensor. 
The results showed that the linear range of the sensor was 40.0-1100.0 femtomolar, with a calculated limit of detection of 13.0 femtomolar. Moreover, the biosensor showed good selectivity for the target DNA over non-specific sequences, was cost- and time-effective, and could be used with real samples of DNA extracted from genetically modified organism products. These specifications compare favorably with previously published methods. Thiophenol is a highly toxic compound which is essential in the fields of organic synthesis and drug design. However, the accumulation of thiophenols in the environment may ultimately cause serious human health problems. Therefore, it is critical to develop efficient methods for the visualization of thiophenol species in biological samples. In this work, an innovative two-photon fluorescent turn-on probe, FR-TP, with far-red emission for thiophenols, based on the FR-NH2 fluorophore and a 2,4-dinitrophenylsulfonyl recognition site, was reported. The new probe can be used for thiophenol detection with large far-red fluorescence enhancement (about 155-fold), rapid response (completed within 100 s), excellent sensitivity (detection limit 0.363 mu M), high selectivity, and low cellular auto-fluorescence interference. Importantly, the probe FR-TP can be successfully employed to visualize thiophenols not only in living HeLa cells but also in living liver tissues. In addition, through two-photon tissue imaging, the probe was used to monitor and investigate biological thiophenol poisoning in an animal model of thiophenol inhalation for the first time. Accurate values of tumor markers in blood play an especially important role in the diagnosis of illness. 
Here, based on the combination of three techniques (anticoagulant technology, nanotechnology and biosensing technology), a sensitive label-free immunosensor with an anti-biofouling electrode for the detection of alpha-fetoprotein (AFP) in whole blood was developed using anticoagulating magnetic nanoparticles. The obtained Fe3O4-epsilon-PL-Hep nanoparticles were characterized by Fourier transform infrared (FT-IR) spectroscopy, transmission electron microscopy (TEM), zeta-potential measurements and vibrating sample magnetometry (VSM). Moreover, the blood compatibility of the anticoagulating magnetic nanoparticles was characterized by in vitro coagulation tests, hemolysis assays and whole blood adhesion tests. Combining the anticoagulant property of heparin (Hep) and the good magnetism of Fe3O4, the Fe3O4-epsilon-PL-Hep nanoparticles improved not only the anti-biofouling property of the electrode surface in contact with whole blood, but also the stability and reproducibility of the proposed immunosensor. Thus, the immunosensor modified with the prepared anticoagulating magnetic nanoparticles showed excellent electrochemical properties for the detection of AFP, with a wide concentration range from 0.1 to 100 ng/mL and a low detection limit of 0.072 ng/mL. Furthermore, five blood samples were assayed using the developed immunosensor. The results showed satisfactory accuracy with low relative errors, indicating that our immunoassay is competitive and could potentially be used for the direct analysis of whole blood samples. O-linked N-acetylglucosamine (O-GlcNAc) transferase (OGT) plays a critical role in modulating protein function in many cellular processes and human diseases, such as Alzheimer's disease and type II diabetes, and has emerged as a promising new target. 
Specific inhibitors of OGT could be valuable tools to probe the biological functions of O-GlcNAcylation, but a lack of robust nonradiometric assay strategies to detect glycosylation has impeded efforts to identify such compounds. Here we have developed a novel label-free electrochemical biosensor for the detection of peptide O-GlcNAcylation using a protease-protection strategy and the electrocatalytic oxidation of tyrosine mediated by osmium bipyridine as a signal reporter. There is a large difference in the susceptibility of the glycosylated and unglycosylated peptides to proteolysis, thus providing a sensing mechanism for OGT activity. When O-GlcNAcylation is achieved, the glycosylated peptides cannot be cleaved by proteinase K, resulting in a high current response on the indium tin oxide (ITO) electrode. However, when O-GlcNAcylation is successfully inhibited by a small molecule, the unglycosylated peptides are cleaved easily, leading to a low current signal. The peptide O-GlcNAcylation reaction was performed in the presence of a well-defined small-molecule OGT inhibitor, and the results indicated that the biosensor could be used to screen OGT inhibitors effectively. Our label-free electrochemical method is a promising candidate for protein glycosylation pathway research in screening small-molecule inhibitors of OGT. C-reactive protein (CRP) is a widely accepted biomarker of cardiovascular disease and inflammation. In this study, an RNA aptamer-based electrochemical sandwich-type aptasensor for CRP detection is described, using functionalized silica microspheres as immunoprobes. Silica microspheres (Si MSs), which have good monodispersity and uniform shape, were first synthesized. The silica microspheres functionalized with gold nanoparticles (Au NPs) provided a large surface area for immobilizing signal molecules (zinc ions, Zn2+) and antibodies (Ab). 
RNA aptamers, which specifically recognize CRP, were assembled on the surface of the Au NP-modified electrode via gold-sulfur affinity. In the presence of CRP, a sandwich structure of aptamer-CRP-immunoprobe was formed. Square wave voltammetry (SWV) was employed to record the sensing signal, and a clear reduction peak corresponding to Zn2+ at about -1.16 V (vs. SCE) was obtained. Under optimal conditions, the aptasensor showed a wide linear range (0.005 ng mL(-1) to 125 ng mL(-1)) and a low detection limit (0.0017 ng mL(-1) at a signal-to-noise ratio of 3). Possible interfering substances were also investigated, and the results showed that the aptasensor possessed good selectivity. When the aptasensor was applied to the analysis of real serum samples, satisfactory results were obtained, indicating its potential for real applications. The present work describes an ultrasensitive electrochemical aptamer-based assay for the detection of the human epidermal growth factor receptor 2 protein (HER2) cancer biomarker as a model analyte. Results show that the reduced graphene oxide-chitosan (rGO-Chit) film is a suitable electrode material with favorable properties, including high homogeneity, good stability, large surface area and a high fraction of amine groups as aptamer binding sites. The various steps of aptasensor fabrication were characterized using microscopy, energy-dispersive X-ray spectroscopy (EDAX), Fourier transform infrared (FTIR) spectroscopy and electrochemical techniques. Using methylene blue (MB) as an electrochemical probe and the differential pulse voltammetry (DPV) technique, two linear concentration ranges of 0.5-2 ng ml(-1) and 2-75 ng ml(-1) were obtained, with a high sensitivity of 0.14 mu A ng(-1) ml and a very low detection limit of 0.21 ng ml(-1) (well below the clinical cutoff). The fabricated aptasensor showed excellent selectivity for the detection of HER2 in the complex matrix of human serum samples. 
The sensitive detection of HER2 can be attributed to the multiple signal amplification of MB during its accumulation on the modified electrode surface, via both affinity interaction with aptamer molecules and electrostatic adsorption to the HER2 analyte, as well as the high charge transfer kinetics of the applied rGO-Chit film. The rapid and simple preparation of the proposed aptasensor, as well as its high selectivity, stability and reproducibility, provides a promising protocol for non-invasive diagnosis in various point-of-care applications. The proposed aptasensor showed excellent analytical performance in comparison with current HER2 biosensors. N-6-methyladenosine (m(6)A) is an enigmatic and abundant internal modification in eukaryotic messenger RNA (mRNA), which can affect various aspects of RNA metabolism and mRNA translation. Herein, a novel photoelectrochemical (PEC) immunosensor was constructed for m(6)A detection based on the inhibition by Cu2+ of the photoactivity of the g-C3N4/CdS quantum dot (g-C3N4/CdS) heterojunction, where the g-C3N4/CdS heterojunction was used as the photoactive material, anti-m(6)A antibody as the recognition unit for m(6)A-containing RNA, Phos-tag-biotin as the linking unit and avidin-functionalized CuO as the PEC signal indicator. When CuO was captured on the electrode through the biotin-avidin affinity reaction and then treated with HCl, Cu2+ was released and CuxS was formed through the selective interaction between CdS and Cu2+, leading to an obvious decrease in the photocurrent. Under the optimal detection conditions, the PEC biosensor displayed a linear range of 0.01-10 nM and a low detection limit of 3.53 pM for methylated RNA determination. Furthermore, the developed method could also be used to detect the expression level of m(6)A-methylated RNA in serum samples of a breast cancer patient before and after operative treatment. The proposed assay strategy has great potential for detecting the methylation level of RNA in real samples. 
In this work, we report a durable and sensitive H2O2 biosensor based on boronic acid functionalized metal organic frameworks (denoted as MIL-100(Cr)-B) as an efficient immobilization matrix for horseradish peroxidase (HRP). MIL-100(Cr)-B features a hierarchical porous structure, extremely high surface area, and sufficient recognition sites, which can significantly increase HRP loading and prevent the enzyme from leakage and deactivation. The H2O2 biosensor can be easily fabricated without any complex processing. Meanwhile, the immobilized HRP exhibited enhanced stability and remarkable catalytic activity towards H2O2 reduction. Under optimal conditions, the biosensor showed a fast response time (less than 4 s) to H2O2 in a wide linear range of 0.5-3000 mu M with a low detection limit of 0.1 mu M, as well as good anti-interference ability and long-term storage stability. These excellent performances enable the proposed biosensor to be used for the real-time detection of H2O2 released from living cells with satisfactory results, thus showing potential for application in the study of H2O2-involved dynamic pathological and physiological processes. Novel woven fiber organic electrochemical transistors based on polypyrrole (PPy) nanowires and reduced graphene oxide (rGO) have been prepared. SEM revealed that the introduction of rGO nanosheets could induce the growth of PPy nanowires and increase their number. Moreover, rGO could enhance the electrical performance of the fiber transistors. The hybrid transistors showed a high on/off ratio of 10(2), fast switching speed, and long cycling stability. Glucose sensors based on the fiber organic electrochemical transistors have also been investigated; they exhibited outstanding sensitivity, as high as 0.773 NCR/decade, with a response time as fast as 0.5 s, a linear range of 1 nM to 5 mu M, a low detection concentration, and good repeatability. 
In addition, glucose could be selectively detected in the presence of ascorbic acid and uric acid interferences. The reliability of the proposed glucose sensor was evaluated in real samples of rabbit blood. All the results indicate that the novel fiber transistors pave the way for portable and wearable electronic devices, which have a promising future in healthcare and biological applications. The ability to dissect cell-to-cell variations of microRNA (miRNA) expression with single-cell resolution has become a powerful tool to investigate the regulatory function of miRNAs in biological processes and the pathogenesis of miRNA-related diseases. Herein, we have developed a novel scheme for the digital detection of miRNA in single cells by using the ligation-dependent DNA polymerase colony (polony). First, two simply designed target-specific DNA probes were ligated using the individual miRNA as the template. Then the ligated DNA probe acted as the polony template, which was amplified by a PCR process in a thin polyacrylamide hydrogel. Due to the covalent attachment of a PCR primer to the polyacrylamide matrix and the retarding effect of the hydrogel matrix itself, as the polony reaction proceeded the PCR products diffused radially near each template molecule to form bacterial colony-like spots of DNA molecules. The spots can be counted after staining the polyacrylamide gel with SYBR Green I and imaging with a microarray scanner. Our polony-based method is sensitive enough to detect 60 copies of miRNA molecules. Meanwhile, the new strategy has the capability of distinguishing single-base differences. Due to its high sensitivity and specificity, the proposed method has been successfully applied to profiling miRNA expression in single cells. Sensitive and rapid detection of platelet-derived growth factor BB (PDGF-BB), a cancer-related protein, could help the early diagnosis, treatment, and prognosis of cancers. 
Although some methods have been developed to detect PDGF-BB, few can provide quantitative results using an affordable and portable device suitable for home use or field application. In this work, we report the first use of a portable personal glucose meter (PGM), combining a catalytic and molecular beacon (CAMB) system with a cation exchange reaction (CX reaction), for ultrasensitive PDGF-BB assay. The detection signal is amplified in three ways: a greater aptamer payload on the nanoparticles, the CX reaction releasing thousands of Zn2+ ions, and cycling through the catalytic cleavage by the 8-17 DNAzyme. In this process, the addition of PDGF-BB to the aptasensor initiated the specific recognition between aptamer and protein, resulting in the binding of the ZnS NNC for the subsequent CX reaction to release thousands of Zn2+ ions, which cleaved the substrate DNA in the CAMB system to realize multiple cycles. The cleaved DNA fragment was designed to be invertase-labeled, converting sucrose into glucose that could be detected and quantified by the PGM, accompanied by a color change of the control window from yellow to green. The enhanced PGM signal correlates with the concentration of PDGF-BB in the range of 3.16x10(-16) M to 3.16x10(-12) M, and the detection limit is 0.11 fM. Moreover, the catalytic and cleavage activities of the 8-17 DNAzyme can be achieved in solution; thus, no enzyme immobilization is needed for detection. The triply amplified strategy showed high selectivity, stability, and applicability for detecting the desired protein. An optically transparent patterned indium tin oxide (ITO) three-electrode sensor integrated with a microfluidic channel was designed for the label-free immunosensing of prostate-specific membrane antigen (PSMA), a prostate cancer (PCa) biomarker expressed on prostate tissue and circulating tumor cells but also found in serum. 
The sensor relies on cysteamine-capped gold nanoparticles (N-AuNPs) covalently linked with anti-PSMA antibody (Ab) for target specificity. A polydimethylsiloxane (PDMS) microfluidic channel is used to efficiently and reproducibly introduce sample containing soluble proteins/cells to the sensor. PSMA is detected and quantified by measuring the change in the differential pulse voltammetry signal of a redox probe ([Fe(CN)(6)](3-)/[Fe(CN)(6)](4-)) that is altered upon binding of PSMA to the PSMA-Ab immobilized on N-AuNPs/ITO. Detection of both PSMA-expressing cells and soluble PSMA was tested. The limit of detection (LOD) of the sensor for PSMA-expressing PCa cells is 6 cells/40 mu L (i.e., 150 cells/mL) (n=3) with a linear range of 15-400 cells/40 mu L (i.e., 375-10,000 cells/mL), and for soluble PSMA it is 0.499 ng/40 mu L (i.e., 12.5 ng/mL) (n=3) with a linear range of 0.75-250 ng/40 mu L (i.e., 19-6250 ng/mL), both with an incubation time of 10 min. The results indicate that the sensor has a suitable sensitivity and dynamic range for the routine detection of PCa circulating tumor cells and can be adapted to detect other biomarkers/cancer cells. Supported binary liposome mixtures of the cationic lipid N-[1-(2,3-dioleoyloxy)propyl]-N,N,N-trimethylammonium propane (DOTAP) and the zwitterionic lipid 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE) were tethered on thiol monolayers in the absence and presence of gold nanoparticles to enhance sensor stability and sensitivity for label-free DNA and protein sensing for the first time. Cysteamine hydrochloride (Cyst), 3-mercaptopropionic acid (MPA), 11-mercaptoundecanoic acid (MUDA) and 11-amino-1-undecanethiol (AUT) monolayers were used as tethers on gold surfaces. 
Electrochemical studies in the presence of [Fe(CN)(6)](3-/4-) indicate that the presence of both DOPE and AuNPs decreases the electrostatic interaction between DOTAP and the MPA layer during the formation of DOPE-DOTAP-AuNP (DDA), whereas they enhance the repulsive force on the Cyst and AUT monolayers. In the thiol monolayer supported DDA, gelation of the neutral lipid DOPE by the AuNPs is disfavored, which in turn promotes the stability of the vesicle structure. The interaction of the membrane protein melittin with the DDA indicates the presence of intact vesicles, shown by decreased charge transfer for the MUDA and AUT in the presence of [Fe(CN)(6)](3-/4-). On the contrary, the presence of bilayer and semicircular DDA on the MPA and cysteamine layers was confirmed by the increased redox reaction. Atomic force microscopy (AFM) and transmission electron microscopy (TEM) images support the presence of an array-like semicircular DDA on the MPA and well-separated DDA vesicles of variable sizes on the MUDA. Dynamic light scattering (DLS) and Fourier transform infrared spectroscopy (FTIR) suggest effective coordination between DOPE, DOTAP and AuNP. Label-free DNA hybridization sensing in the presence of the negatively charged [Fe(CN)(6)](3-/4-) indicates a lowest DNA detection limit of 1x10(-14) M with a linear range of 1x10(-13) to 1x10(-9) M. Similarly, streptavidin sensing shows a lowest detection of 1 ng ml(-1) with a linear range of 100 ng to 1 mu g, due to the increased reactive sites and distance. A proof of concept utilizing a microfluidic dielectrophoresis (DEP) chip was conducted to rapidly detect dengue virus (DENV) in vitro based on fluorescence immunosensing. The detection mechanism was that the DEP force was employed to capture the modified beads (mouse anti-flavivirus monoclonal antibody-coated beads) in the microfluidic chip, and the DENV modified with a fluorescent label, as the detection target, could then be captured on the modified beads by immunoreaction. 
The fluorescent signal was then obtained through fluorescence microscopy and quantified with the ImageJ freeware. The platform accelerated the immunoreaction, with an on-chip detection time of 5 min, and demonstrated DENV detection as low as 10(4) PFU/mL. Furthermore, the required volume of DENV sample was dramatically reduced, from the commonly used similar to 50 mu L to similar to 15 mu L, and the chip was reusable ( > 50x). Overall, this platform provides rapid detection (5 min) of the DENV with a low sample volume compared to conventional methods. This proof of concept with regard to a microfluidic dielectrophoresis chip thus shows the potential of immunofluorescence-based assay applications to meet diagnostic needs. Convenient biosensors for simultaneous multi-analyte detection are increasingly required in biological analysis. A novel flower-like silver (FLS)-enhanced fluorescence/visual bimodal platform for the ultrasensitive detection of multiple miRNAs was successfully constructed for the first time based on the principle of multi-channel microfluidic paper-based analytical devices (mu PADs). Fluorophore-functionalized DNA(1) (DNA(1)-N-CDs) was combined with the FLS and hybridized with a quencher-carrying strand (DNA(2)-CeO2) to form the FLS-enhanced fluorescence biosensor. Upon the addition of the target miRNA, the fluorescence intensity of the DNA(1)-N-CDs within the proximity of the FLS was strengthened. The released DNA/CeO2 complex produced a color change after the addition of H2O2, enabling real-time visual detection of miRNA. If necessary, the fluorescence method was then applied for an accurate determination. In this strategy, the growth of FLS in the mu PADs not only reduced the background fluorescence but also provided an enrichment of "hot spots" for the surface-enhanced fluorescence detection of miRNAs. The results also showed the versatility of the FLS in enhancing the sensitivity and selectivity of the miRNA biosensor. 
Remarkably, this biosensor could detect as little as 0.03 fM miRNA210 and 0.06 fM miRNA21. Interestingly, the proposed biosensor could also be recycled over three cycles by replenishing the DNA(2)-CeO2 and replacing the visual detection device. This method opens new opportunities for further studies of miRNA-related bioprocesses and provides a new instrument for the simultaneous detection of multiple low-level biomarkers. Isolating the real roots of a square-free polynomial in a given interval is a fundamental problem in computational algebra. Subdivision-based algorithms are a standard approach to this problem; examples include Sturm's method and various algorithms based on Descartes's rule of signs. For the benchmark problem of isolating all the real roots of a polynomial of degree n and root separation sigma, the size of the subdivision tree of most of these algorithms is bounded by O (log 1/sigma) (assuming sigma < 1). Moreover, it is known that this is optimal for subdivision algorithms that perform uniform subdivision, i.e., in which the width of the interval decreases by a constant factor. Recently Sagraloff (2012) and Sagraloff-Mehlhorn (2016) have developed algorithms for real root isolation that combine subdivision with Newton iteration to reduce the size of the subdivision tree to O (n(log(n log 1/sigma))). We describe a subroutine that reduces the size of the subdivision tree of any subdivision algorithm for real root isolation. The subdivision tree size of our algorithm using predicates based on either Descartes's rule of signs or Sturm sequences is bounded by O (n log n), which is close to the optimal value of O (n). The corresponding bound for the algorithm EVAL, which uses certain interval-arithmetic based predicates, is O (n(2) log n). Our analysis differs in two key aspects from earlier approaches. 
First, we use the general technique of continuous amortization from Burr, Krahmer and Yap (2009), and extend it to handle non-uniform subdivisions; second, we use the geometry of clusters of roots instead of root bounds. The latter aspect enables us to derive a bound on the subdivision tree that is independent of the root separation sigma. The number of Newton iterations is bounded by O (n loglog(1/sigma)). (C) 2016 Elsevier Ltd. All rights reserved. The so-called Berlekamp-Massey-Sakata algorithm computes a Grobner basis of a 0-dimensional ideal of relations satisfied by an input table. It extends the Berlekamp-Massey algorithm to n-dimensional tables, for n > 1. We investigate this problem and design several algorithms for computing such a Grobner basis of an ideal of relations using linear algebra techniques. The first one performs many table queries and is analogous to a change of variables on the ideal of relations. As each query to the table can be expensive, we design a second algorithm that requires fewer queries in general. This FGLM-like algorithm allows us to compute the relations of the table by extracting a full-rank submatrix of a multi-Hankel matrix (a multivariate generalization of Hankel matrices). Under some additional assumptions, we give a third, adaptive, algorithm that further reduces the number of table queries. We then relate the number of queries of this third algorithm to the geometry of the final staircase, and we show that it is essentially linear in the size of the output when the staircase is convex. As a direct application, we decode n-cyclic codes, a generalization in dimension n of Reed-Solomon codes. We show that the multi-Hankel matrices are heavily structured when using the LEX ordering, and that the computations can be sped up using fast algorithms for quasi-Hankel matrices. Finally, we design algorithms for computing the generating series of a linear recursive table. (C) 2016 Elsevier Ltd. All rights reserved. 
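For background on the abstract above: the Berlekamp-Massey-Sakata algorithm generalizes the classical one-dimensional Berlekamp-Massey algorithm, which finds the shortest linear recurrence satisfied by a scalar sequence. The following is a minimal sketch of that classical 1-D case only (our illustration over the rationals, not the n-dimensional Gröbner-basis machinery studied in the paper):

```python
from fractions import Fraction

def berlekamp_massey(seq):
    """Return coefficients [c1, ..., cL] of the shortest recurrence
    s[n] = c1*s[n-1] + ... + cL*s[n-L] satisfied by seq."""
    s = [Fraction(x) for x in seq]
    C, B = [Fraction(1)], [Fraction(1)]  # current / previous connection polynomials
    L, m, b = 0, 1, Fraction(1)          # L = recurrence length, b = last discrepancy
    for n in range(len(s)):
        # discrepancy between the predicted and the actual term
        d = s[n] + sum(C[i] * s[n - i] for i in range(1, L + 1))
        if d == 0:
            m += 1
            continue
        coef = d / b
        T = C[:]
        # C(x) <- C(x) - (d/b) * x^m * B(x)
        C = C + [Fraction(0)] * (len(B) + m - len(C))
        for i, x in enumerate(B):
            C[i + m] -= coef * x
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    # C(x) = 1 - c1*x - ... - cL*x^L, so negate the tail
    return [-c for c in C[1:L + 1]]
```

For example, the Fibonacci sequence 1, 1, 2, 3, 5, 8, ... yields the recurrence coefficients [1, 1], i.e. s[n] = s[n-1] + s[n-2]. The table-query perspective of the paper corresponds to reading successive terms of the sequence.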
The diagonal of a multivariate power series F is the univariate power series Diag F generated by the diagonal terms of F. Diagonals form an important class of power series; they occur frequently in number theory, theoretical physics and enumerative combinatorics. We study algorithmic questions related to diagonals in the case where F is the Taylor expansion of a bivariate rational function. It is classical that in this case Diag F is an algebraic function. We propose an algorithm that computes an annihilating polynomial for Diag F. We give a precise bound on the size of this polynomial and show that, generically, this polynomial is the minimal polynomial and that its size reaches the bound. The algorithm runs in time quasi-linear in this bound, which grows exponentially with the degree of the input rational function. We then address the related problem of enumerating directed lattice walks. The insight given by our study leads to a new method for expanding the generating power series of bridges, excursions and meanders. We show that their first N terms can be computed in quasi-linear complexity in N, without first computing a very large polynomial equation. (C) 2016 Elsevier Ltd. All rights reserved. Polyzetas, indexed by words, satisfy shuffle and quasi-shuffle identities. In this respect, one can explore the multiplicative and algorithmic (locally finite) properties of their generating series. In this paper, we construct pairs of bases in duality, on which polyzetas are expressed in order to compute local coordinates in the infinite-dimensional Lie groups where their non-commutative generating series live. We also propose new algorithms leading to the ideal of polynomial relations, homogeneous in weight, among polyzetas (the graded kernel), and to their explicit representation (as data structures) in terms of irreducible elements. (C) 2016 Elsevier Ltd. All rights reserved. 
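As a concrete instance of the diagonal operation studied in the lattice-walk abstract above (this toy example is ours, not the paper's algorithm): for F = 1/(1-x-y), the bivariate Taylor coefficient of x^i y^j is the binomial C(i+j, i), so Diag F = sum over n of C(2n, n) t^n, which is the algebraic function 1/sqrt(1-4t) and satisfies (1-4t)(Diag F)^2 = 1. The sketch below checks both facts on truncated series:

```python
from math import comb

# Bivariate Taylor coefficients of F = 1/(1-x-y):
# (1-x-y)*F = 1 gives the recurrence a[i][j] = a[i-1][j] + a[i][j-1], a[0][0] = 1.
N = 8
a = [[0] * (2 * N) for _ in range(2 * N)]
a[0][0] = 1
for i in range(2 * N):
    for j in range(2 * N):
        if i or j:
            a[i][j] = (a[i - 1][j] if i else 0) + (a[i][j - 1] if j else 0)

# Diagonal terms: [t^n] Diag F = a[n][n] = C(2n, n)
diag = [a[n][n] for n in range(N)]
assert diag == [comb(2 * n, n) for n in range(N)]

# Diag F = 1/sqrt(1-4t) is algebraic: (1-4t) * (Diag F)^2 = 1.
# Verify the annihilating relation on the truncated series:
g2 = [sum(diag[i] * diag[k - i] for i in range(k + 1)) for k in range(N)]
check = [g2[0]] + [g2[k] - 4 * g2[k - 1] for k in range(1, N)]
assert check == [1] + [0] * (N - 1)
```

The paper's algorithm produces such an annihilating polynomial for Diag F symbolically from the rational function itself; this numeric check merely illustrates what "Diag F is algebraic" means in the simplest case.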
We present here a new approach for computing Groebner bases for bilateral modules over an effective ring. Our method is based on Weispfenning's notion of restricted Groebner bases and the related multiplication. (C) 2016 Elsevier Ltd. All rights reserved.

On various occasions the conjugacy problem in finitely generated amalgamated products and HNN extensions can be decided efficiently for elements which cannot be conjugated into the base groups. Thus, the question arises "how many" such elements there are. This question can be formalized using the notion of strongly generic sets, and lower bounds can be proven by applying the theory of amenable graphs. In this work we examine Schreier graphs of amalgamated products and HNN extensions. For an amalgamated product G = H *_A K with [H : A] >= [K : A] >= 2, the Schreier graph with respect to H or K turns out to be non-amenable if and only if [H : A] >= 3. Moreover, for an HNN extension of the form G = <H, b | bab^(-1) = phi(a), a in A>, we show that the Schreier graph of G with respect to the subgroup H is non-amenable if and only if A != H != phi(A). As an application of these characterizations we show that the conjugacy problem in fundamental groups of finite graphs of groups with finitely generated free abelian vertex groups can be solved in polynomial time on a strongly generic set. Furthermore, the conjugacy problem in groups with more than one end can be solved with a strongly generic algorithm which has essentially the same time complexity as the word problem. These are rather striking results, as the word problem might be easy but the conjugacy problem might even be undecidable. Finally, our results yield a new proof that the set where the conjugacy problem of the Baumslag group G(1,2) is decidable in polynomial time is also strongly generic. (C) 2016 Elsevier Ltd. All rights reserved.
Extending Eulerian polynomials and Faulhaber's formula, we study several combinatorial aspects of harmonic sums and polylogarithms at non-positive multi-indices as well as their structure. Our techniques are based on the combinatorics of noncommutative generating series in the shuffle Hopf algebras, giving a global process to renormalize the divergent polyzetas at non-positive multi-indices. (C) 2016 Published by Elsevier Ltd.

The row (resp. column) rank profile of a matrix describes the staircase shape of its row (resp. column) echelon form. We describe a new matrix invariant, the rank profile matrix, summarizing all information on the row and column rank profiles of all the leading sub-matrices. We show that this normal form exists and is unique over a field, but also over any principal ideal domain and finite chain ring. We then explore the conditions for a Gaussian elimination algorithm to compute all or part of this invariant, through the corresponding PLUQ decomposition. This enlarges the set of known elimination variants that compute row or column rank profiles. As a consequence, a new Crout base case variant significantly improves the practical efficiency of previously known implementations over a finite field. For matrices of very small rank, we also generalize the techniques of Storjohann and Yang to the computation of the rank profile matrix, achieving a (r^omega + mn)^(1+o(1)) time complexity for an m x n matrix of rank r, where omega is the exponent of matrix multiplication. Finally, we give connections to the Bruhat decomposition and several of its variants and generalizations. Consequently, the algorithmic improvements made for the PLUQ factorization, and their implementation, directly apply to these decompositions.
In particular, we show how a PLUQ decomposition revealing the rank profile matrix also reveals both a row and a column echelon form of the input matrix, or of any of its leading sub-matrices, by a simple post-processing made of row and column permutations. (C) 2016 Elsevier Ltd. All rights reserved.

Cyclic structures on convolutional codes are modeled using an Ore extension A[z; sigma] of a finite semisimple algebra A over a finite field F. In this context, the separability of the ring extension F[z] subset of A[z; sigma] implies that every ideal code is a split ideal code. We characterize this separability by means of sigma being a separable automorphism of the F-algebra A. We design an algorithm that decides if such a given automorphism sigma is separable. In addition, it also computes a separability element of F[z] subset of A[z; sigma], which is important because it can be used to find an idempotent generator of each ideal code with sentence ambient A[z; sigma]. (C) 2016 Elsevier Ltd. All rights reserved.

This paper presents two new constructions related to singular solutions of polynomial systems. The first is a new deflation method for an isolated singular root. This construction uses a single linear differential form defined from the Jacobian matrix of the input, and defines the deflated system by applying this differential form to the original system. The advantages of this new deflation are that it does not introduce new variables, and that the increase in the number of equations is linear in each iteration instead of the quadratic increase of previous methods. The second construction gives the coefficients of the so-called inverse system or dual basis, which defines the multiplicity structure at the singular root. We present a system of equations in the original variables plus a relatively small number of new variables that completely deflates the root in one step.
We show that the isolated simple solutions of this new system correspond to roots of the original system with given multiplicity structure up to a given order. Both constructions are "exact" in that they permit one to treat all conjugate roots simultaneously and can be used in certification procedures for singular roots and their multiplicity structure with respect to an exact rational polynomial system. (C) 2016 Published by Elsevier Ltd.

We present two algorithms for computing hypergeometric solutions of second-order linear differential operators with rational function coefficients. Our first algorithm searches for solutions of the form

exp(integral r dx) * 2F1(a1, a2; b1; f)    (1)

where r and f lie in the algebraic closure of Q(x), and a1, a2, b1 in Q. It uses modular reduction and Hensel lifting. Our second algorithm tries to find solutions of the form

exp(integral r dx) * (r0 * 2F1(a1, a2; b1; f) + r1 * 2F1'(a1, a2; b1; f))    (2)

where r0 and r1 lie in the algebraic closure of Q(x), as follows: it tries to transform the input equation to another equation with solutions of type (1), and then uses the first algorithm. (C) 2016 Elsevier Ltd. All rights reserved.

We consider the problem of computing univariate polynomial matrices over a field that represent minimal solution bases for a general interpolation problem, some forms of which are the vector M-Pade approximation problem in Van Barel and Bultheel (1992) and the rational interpolation problem in Beckermann and Labahn (2000). Particular instances of this problem include the bivariate interpolation steps of Guruswami-Sudan hard-decision and Kotter-Vardy soft-decision decoding of Reed-Solomon codes, the multivariate interpolation step of list-decoding of folded Reed-Solomon codes, and Hermite-Pade approximation. In the mentioned references, the problem is solved using iterative algorithms based on recurrence relations.
Here, we discuss a fast, divide-and-conquer version of this recurrence, taking advantage of fast matrix computations over the scalars and over the polynomials. This new algorithm is deterministic, and for computing shifted minimal bases of relations between m vectors of size sigma it uses O~(m^(omega-1) (sigma + |s|)) field operations, where omega is the exponent of matrix multiplication and |s| is the sum of the entries of the input shift s, with min(s) = 0. This complexity bound improves in particular on earlier algorithms in the case of bivariate interpolation for soft decoding, while matching the fastest existing algorithms for simultaneous Hermite-Pade approximation. (C) 2016 Elsevier Ltd. All rights reserved.

We generalise the notion of a Grobner fan to ideals in R[t][x_1, ..., x_n] for certain classes of coefficient rings R and give a constructive proof that the Grobner fan is a rational polyhedral fan. For this we introduce the notion of initially reduced standard bases and show how these can be computed in finite time. We deduce algorithms for computing the Grobner fan, implemented in the computer algebra system SINGULAR. The problem is motivated by the wish to compute tropical varieties over the p-adic numbers. (C) 2016 Elsevier Ltd. All rights reserved.

An algebraic approach to the maximum likelihood estimation problem is to solve a very structured parameterized polynomial system, called the likelihood equations, that has finitely many complex (real or non-real) solutions. The only solutions that are statistically meaningful are the real solutions with positive coordinates. In order to classify the parameters (data) according to the number of real/positive solutions, we study how to efficiently compute the discriminants, called data-discriminants (DDs), of the likelihood equations. We develop a probabilistic algorithm with three different strategies for computing DDs.
Our implemented probabilistic algorithm based on Maple and FGb is more efficient than our previous version (Rodriguez and Tang, 2015) and is also more efficient than the standard elimination for larger benchmarks. By applying RAGlib to a DD we compute, we give the real root classification of the 3 x 3 symmetric matrix model. (C) 2016 Published by Elsevier Ltd.

This paper proposes a new approach for studying dexterous grasping mechanisms via a parallel manipulator analogy. The approach exploits the theories already developed for dexterous robotic hands and parallel manipulators. It also proposes an innovative conceptual design algorithm for dexterous grasping mechanisms with desired "dexterity" characteristics: mobility, connectivity, overconstraint, and redundancy. The provided quick mobility calculation formula is valid for all grasping mechanisms, whereas the other quick mobility calculation formulas are not. The proposed conceptual design algorithm is supported by example syntheses of a 3-dof translational motion dexterous grasping mechanism, a 3-dof (2 translational and 1 rotational) planar motion dexterous grasping mechanism and a 6-dof (3 translational and 3 rotational) spatial motion dexterous grasping mechanism.

In this work, we present a generic approach to optimize the design of a parametrized robot gripper, including both selected gripper mechanism parameters and parameters of the finger geometry. We suggest six gripper quality indices that indicate different aspects of the performance of a gripper, given a CAD model of an object and a task description. These quality indices are then used to learn task-specific finger designs based on dynamic simulation. We demonstrate our gripper optimization on a parallel-finger gripper described by twelve parameters. We furthermore present a parametrization of the grasping task and context, which is essential as an input to the computation of gripper performance.
We exemplify important aspects of the indices by looking at their performance on subsets of the parameter space, discussing the decoupling of parameters, and show optimization results for two use cases in different task contexts. We provide a qualitative evaluation of the obtained results based on existing design guidelines and our engineering experience. In addition, we show that with our method we achieve superior alignment properties compared to a naive approach with a cutout based on the "inverse of an object". Furthermore, we provide an experimental evaluation of our proposed method by verifying the simulated grasp outcomes through a real-world experiment.

A novel 3D compliant manipulator for micromanipulation is introduced based on a pantograph linkage. The proposed manipulator provides decoupled 3-DOF translational motions. The key design feature is the use of parallelograms, which keep the orientation of the end-effector fixed. The proposed manipulator provides advantages over its counterparts in the literature. It has a significantly higher workspace-to-size ratio if its pantograph acts as a magnification device. On the other hand, it has higher resolution if its pantograph acts as a miniaturizing device. This provides great flexibility in the design process to account for the limited variety of micro-actuators and the large variety of micro-scale tasks in terms of workspace and resolution. Thus, the proposed system possesses the characteristics of gearing (speed up or speed down). A suitable choice of flexure hinges and material is made. Position and velocity kinematic analyses are carried out. Analytical expressions are derived for singularity-free workspace boundaries in terms of the physical constraints of the flexure joints. A dexterity analysis is performed to evaluate the design performance. A synthesis methodology for the proposed manipulator is developed.
A finite element analysis is carried out and a prototype is manufactured to validate the conceptual design. Simulation and experimental results have successfully demonstrated the linearity and consistency between input and output displacements, with acceptable parasitic motions. Moreover, the manipulability of the proposed manipulator is found to be configuration independent. Also, the manipulator can have isotropic performance over its workspace for a certain actuator setup.

On-line walking speed control in human-powered exoskeleton systems is a big challenge; translating the human intention to increase or decrease walking speed in maneuverable human exoskeleton systems remains complex. In this paper, we propose a novel sensing technique to control the walking speed of the system according to the pilot's intentions and to minimize the interaction force. We introduce a new sensing technology, "Dual Reaction Force (DRF)" sensors, and explain the methodology of using it to investigate walking speed changing intentions. The force-signal mismatch is successfully applied to control the walking speed of the exoskeleton system according to the pilot's intentions. Typical issues in the implementation of the sensory system are experimentally validated in flat-terrain walking trials. We developed an adaptive trajectory frequency control algorithm to control the walking speed of the HUman-powered Augmentation Lower Exoskeleton (HUALEX) at the wearer's intended speed. Based on the mismatch of the DRF sensors, we propose a new control methodology for walking speed control. Human intention recognition and identification through a sensorized footboard and smart shoe are achieved successfully in this work; the heel contact time (HCT) is the main feedback signal for the control algorithm.
From the experimental walking trials we found that the HCT during flat walking ranges from 0.69 +/- 0.05 s to 0.41 +/- 0.07 s as walking speed varies between 1 m/s and 2.5 m/s. The proposed algorithm uses Adaptive Central Pattern Generators (ACPGs) to control the joint trajectory frequency; different walking speeds are associated with different frequencies of the human body's CPGs. We validated the proposed control algorithm by simulations on a single degree-of-freedom (1-DoF) exoskeleton platform; the simulation results show its efficiency and validate that the proposed control algorithm provides good walking speed control for the HUALEX exoskeleton system.

Model predictive control strategies refer to a set of methods relying on a process model to determine an optimal control signal by minimising a cost function. This paper reports on the application of predictive control strategies to a wheeled mobile robot. As a first step, friction forces originating from the motor gearboxes and wheels were estimated and a feedforward compensation was applied. Step response tests were then carried out to identify a linear model with which to design several simple control strategies, such as the Proportional-Integral-Derivative (PID) controller. The PID response constitutes the reference for assessing the efficiency of two predictive control strategies: the generalised predictive control (GPC) and the linear quadratic model predictive control (LQMPC) algorithms. These control strategies were tested in simulation with Matlab and EasyDyn (a C++ library for multibody system simulations) and in real-life experiments. All three control strategies offer satisfactory reference tracking, but MPC allows a reduction of the energy consumption of up to 70 % as a result of setpoint anticipation. LQMPC is the best in terms of input activity reduction.
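The PID baseline used for comparison in the last abstract is simple enough to sketch. The snippet below is a generic discrete PID update closed around a toy first-order plant; the plant, gains and step size are illustrative assumptions of this sketch, not taken from the paper's robot model.

```python
def pid_step(state, setpoint, measurement, kp, ki, kd, dt):
    # One update of a discrete PID controller.
    # state = (integral, previous_error); returns (control, new_state).
    error = setpoint - measurement
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Toy closed loop: first-order plant x' = -x + u, tracking setpoint 1.0.
x, state, dt = 0.0, (0.0, 0.0), 0.01
for _ in range(2000):                      # simulate 20 seconds
    u, state = pid_step(state, 1.0, x, kp=2.0, ki=1.0, kd=0.0, dt=dt)
    x += dt * (-x + u)                     # forward-Euler plant update
```

The integral term removes the steady-state offset, so x settles at the setpoint. Predictive controllers such as GPC and LQMPC instead optimize the control signal over a prediction horizon, which is what enables the setpoint anticipation credited with the energy savings above.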
New market-based decentralized algorithms are proposed for the task assignment of multiple unmanned aerial vehicles in dynamic environments with a limited communication range. In particular, a cooperative timing mission that cannot be performed by a single vehicle is considered. The baseline algorithms for a connected network are extended to deal with time-varying network topology, including isolated subnetworks due to a limited communication range. The mathematical convergence and scalability analyses show that the proposed algorithms have a polynomial time complexity, and numerical simulation results support the scalability of the proposed algorithms in terms of runtime and communication burden. The performance of the proposed algorithms is demonstrated via Monte Carlo simulations for the scenario of the suppression of enemy air defenses.

In this study, a wheeled mobile robot navigation toolbox for Matlab is presented. The toolbox includes algorithms for 3D map design, static and dynamic path planning, point stabilization, localization, gap detection and collision avoidance. One can use the toolbox as a test platform for developing custom mobile robot navigation algorithms. The toolbox allows users to insert/remove obstacles to/from the robot's workspace, upload/save a customized map and configure simulation parameters such as robot size, virtual sensor position, Kalman filter parameters for localization, speed controller and collision avoidance settings. It is possible to simulate data from a virtual laser imaging detection and ranging (LIDAR) sensor providing a map of the mobile robot's immediate surroundings. Differential-drive forward kinematic equations and an extended Kalman filter (EKF) based localization scheme are used to determine where the robot will be located at each simulation step. The LIDAR data and the navigation process are visualized on the developed virtual reality interface.
During the navigation of the robot, gap detection, dynamic path planning, collision avoidance and point stabilization procedures are implemented. Simulation results prove the efficacy of the algorithms implemented in the toolbox.

During the last decade, scientific research on Unmanned Aerial Vehicles (UAVs) increased spectacularly and led to the design of multiple types of aerial platforms. The major challenge today is the development of autonomously operating aerial agents capable of completing missions independently of human interaction. To this extent, visual sensing techniques have been integrated into the control pipeline of UAVs in order to enhance their navigation and guidance skills. The aim of this article is to present a comprehensive literature review of vision-based applications for UAVs, focusing mainly on current developments and trends. These applications are sorted into different categories according to the research topics among various research groups. More specifically, vision-based position-attitude control, pose estimation and mapping, obstacle detection, and target tracking are the identified components towards autonomous agents. Aerial platforms could reach a greater level of autonomy by integrating all these technologies onboard. Additionally, throughout this article the concept of fusing multiple sensors is highlighted, and an overview of the challenges addressed and future trends in autonomous agent development will also be provided.

We present a new image-based visual servoing (IBVS) approach for control of micro aerial vehicles (MAVs) in indoor environments. Specifically, we show how a MAV can be stabilized and guided using only corridor lines viewed by a front-facing camera and angular velocity measurements. Since the suggested controller does not include explicit attitude feedback, it does not require the use of accelerometers, which are susceptible to vibrations, nor complex attitude estimation algorithms.
The controller also does not require direct velocity measurements, which are difficult to obtain in indoor environments. The paper presents the new method, stability analysis, simulations and experiments.

This paper combines fault-dependent control allocation with three different control schemes to obtain fault tolerance in the longitudinal control of unmanned aerial vehicles. The paper shows that fault-dependent control allocation is able to accommodate actuator faults that would otherwise be critical, and it makes a performance assessment of the different control algorithms: an adaptive backstepping controller; a robust sliding mode controller; and a standard PID controller. The actuator faults considered are the partial to total loss of the elevator, which is a critical component for the safe operation of unmanned aerial vehicles. During nominal operation, only the main actuator, namely the elevator, is active for pitch control. In the event of a partial or total loss of the elevator, fault-dependent control allocation is used to redistribute control to available healthy actuators. Using simulations of a Cessna 182 aircraft model, controller performance and robustness are evaluated by metrics that assess control accuracy and energy use. System uncertainties are investigated over an envelope of pertinent variation, showing that sliding mode and adaptive backstepping provide robustness where PID control falls short. Additionally, a key finding is that fault-dependent control allocation is instrumental when handling actuator faults.

Background: The U.S. Department of Health and Human Services' initiative Healthy People 2020 targets tobacco use, including smoking during pregnancy, as a continuing major health concern in this country. Yet bringing the U.S. Public Health Service's 2008 clinical practice guideline, Treating Tobacco Use and Dependence, into routine prenatal care remains challenging.
Our previous nurse-managed intervention study of rural pregnant women found no significant cessation effect and significant discordance between self-reported smoker status and urinary cotinine levels.

Purpose: The overall purpose of this follow-up study was to increase our understanding of the experiences of pregnant smokers and their providers. No qualitative studies could be found that simultaneously explored the experiences of both groups.

Design and methods: This qualitative descriptive study used focus group methodology. Nine focus groups were held in two counties in upper New York State; six groups consisted of providers and three consisted of pregnant women. Four semistructured questions guided the group discussions, which were audiotaped and transcribed verbatim. Transcripts were read and coded independently by six investigators. Themes were identified using constant comparative analysis and were validated using the consensus process.

Results: The total sample consisted of 66 participants: 45 providers and 21 pregnant women. Most of the providers were white (93%) and female (93%). A majority worked as RNs (71%); the sample included perinatal and neonatal nursery nurses, midwives, and physicians. The pregnant women were exclusively white (reflecting the rural demographic); the average age was 24 years. All the pregnant women had smoked at the beginning of their pregnancies. Four common themes emerged in both the provider and the pregnant women groups: barriers to quitting, mixed messages, approaches and attitudes, and program modalities. These themes corroborate previous findings that cigarette smoking is used for stress relief, especially when pregnancy itself is a stressor, and that pregnant women may feel guilty but don't want to be nagged or preached to.

Conclusions: These results have implications for how smoking cessation programs for pregnant women should be designed.
Health care providers need to be cognizant of their approaches and attitudes when addressing the subject of smoking cessation. Specific educational suggestions include "putting a face" to the issue of tobacco use during pregnancy. More research is needed on how best to implement the 2008 clinical practice guideline in specific populations.

Antipsychotic medications are primarily used to manage various symptoms of psychosis. In recent years, more adults (and teenagers) are taking at least one type of psychotropic medication, the majority of which are prescribed by primary care and family physicians. Because nurses are now caring for people of varying ages, and with varying diagnoses, who are taking these types of medications, they need to develop a working knowledge of the agents available and know when it's appropriate to prescribe them for mental health disorders as well as for disorders unrelated to mental health. This article is the first in a series on commonly used psychotropic medications.

Irritable bowel syndrome (IBS) is a common, chronic gastrointestinal (GI) condition characterized by disturbances in bowel habits and abdominal pain in the absence of known organic pathology. IBS reduces quality of life and is costly to treat. It is diagnosed using the symptom-based Rome criteria for functional GI disorders, which was recently updated and released as Rome IV. Both physiologic and psychological variables play a role in the etiology of IBS and perpetuate symptoms. Although research has shed light on IBS pathophysiology, therapeutic interventions remain symptom driven, employing both pharmacologic and nonpharmacologic approaches. Here, the authors review the epidemiology and pathophysiology of IBS, summarize diagnostic and treatment strategies, and discuss implications for nursing practice.

This series on palliative care is developed in collaboration with the Hospice and Palliative Nurses Association (HPNA; http://hpna.advancingexpertcare.org).
The HPNA aims to guide nurses in preventing and relieving suffering and in giving the best possible care to patients and families, regardless of the stage of disease or the need for other therapies. The HPNA offers education, certification, advocacy, leadership, and research.

This is the fourth and final article in a series to help nurses share their knowledge, skills, and insight through writing for publication. Nurses have something important to contribute, no matter what their nursing role. This series will help nurses develop good writing habits and sharpen their writing skills. It will take nurses step by step through the publication process, highlighting what gets published and why, how to submit articles and work with editors, and common pitfalls to avoid. For the previous articles in this series, see http://bit.ly/2lhnYKJ.

This article analyzes the narrative experiences of Hmong American adolescent males who were labeled at risk or high risk for academic failure or underperformance by their predominantly White school counselors and teachers. Additional data sources included classroom observations at two racially diverse public high schools and semi-structured interviews with two White American female classroom teachers, conducted to ascertain how the at-risk label manifested in everyday practices ranging from classroom management and discipline methods, instructional decisions, interpersonal interactions, and referrals to tracking practices. The findings highlight how the at-risk label, along with a range of other deficit-based expectations, intersected with several problematic assumptions about Asian American masculinities and Hmong American culture, suggesting that, in general, White school personnel were not aware of how their understandings of racial deviance and difference shaped how they assessed, diagnosed, and interacted with these students.
Critically, the at-risk label had direct implications for tracking the youth participants into non-college-preparatory tracks, including pathways toward alternative, remedial, and special education or, in one case, juvenile detention. Implications are offered for practice and theory.

Neo-liberal ideologies have given parents influence over education. This requires teachers to find ways to engage with parents and use resources for dealing with them. Following Bourdieu's notion of field, in which different groups struggle over resources to maintain their social position, we examine the relations between teachers' attitudes toward parents and possession of feminine, social, and cultural capital. The sample comprised 605 teachers who worked in 32 randomly selected schools located in two districts in Israel. Analysis of teachers' answers to a questionnaire revealed that teachers' relations with parents are diverse and include both threat and collaboration. Different capitals underpin these relations.

This paper is concerned with the existence and continuous dependence of mild solutions to stochastic differential equations with non-instantaneous impulses driven by fractional Brownian motions. Our approach is based on a Banach fixed point theorem and a Krasnoselskii-Schaefer type fixed point theorem.

In this paper, we study the dynamical bifurcation and final patterns of a modified Swift-Hohenberg equation (MSHE). We prove that the MSHE bifurcates from the trivial solution to an S^1-attractor as the control parameter alpha passes through a critical number alpha-hat. Using center manifold analysis, we study the bifurcated attractor in detail, showing that it consists of a finite number of singular points and their connecting orbits. We investigate the stability of those points. We also provide some numerical results supporting our analysis.

Backward compact dynamics is deduced for a non-autonomous Benjamin-Bona-Mahony (BBM) equation on an unbounded 3D-channel.
A backward compact attractor is defined by a time-dependent family of backward compact, invariant and pullback attracting sets. The theoretical existence result for such an attractor is derived from the backward flattening property, and this property is proved to be equivalent to backward asymptotic compactness in a uniformly convex Banach space. Finally, it is shown that the BBM equation has a backward compact attractor in a Sobolev space under some suitable assumptions, such as backward translation boundedness and backward small-tail. Both spectral decomposition and a cut-off technique are used to obtain all required backward uniform estimates.

The problem of the existence of complex l^1 solutions of two difference equations with exponential nonlinearity is studied, one of which is nonautonomous. As a consequence, information is obtained regarding the asymptotic stability of their equilibrium points, as well as the corresponding generating function and z-transform of their solutions. The results, which are obtained using a general theorem based on a functional-analytic technique, also provide a rough estimate of the region of attraction of each equilibrium point for the autonomous case. When restricted to real solutions, the results are compared with other recently published results.

A risk-minimizing approach to pricing contingent claims in a general non-Markovian, regime-switching, jump-diffusion model is discussed, where a convex risk measure is used to describe risk. The pricing problem is formulated as a two-person, zero-sum stochastic differential game between the seller of a contingent claim and the market, where the latter may be interpreted as a "fictitious" player. A backward stochastic differential equation (BSDE) approach is applied to discuss the game problem. Attention is given to the entropic risk measure, which is a particular type of convex risk measure.
In this situation, a pricing kernel selected by an equilibrium state of the game problem is related to the one selected by the Esscher transform, which was introduced to the option-pricing world in the seminal work of [38]. In this paper, we study the dynamic behavior of a stochastic reaction-diffusion equation with dynamical boundary condition, where the nonlinear terms f and h satisfy a polynomial growth condition of arbitrary order. Some higher-order integrability of the difference of solutions near the initial time, and the continuous dependence result with respect to initial data in H^1(O) × H^{1/2}(Γ), are established. As a direct application, we immediately obtain the existence of a pullback random attractor A in the spaces L^p(O) × L^p(Γ) and H^1(O) × H^{1/2}(Γ). This paper presents an analysis of the conditions which lead the stochastic predator-prey model with Allee effect on the prey population to extinction. In order to find these conditions, we first prove the existence and uniqueness of the global positive solution of the considered model using the comparison theorem for stochastic differential equations. Then, we establish the conditions under which extinction of the predator and prey populations occurs. We also find the conditions on the parameters of the model under which the solution of the system is globally attractive in mean. Finally, a numerical illustration with a real-life example is carried out to confirm our theoretical results. We study the existence and stability of periodic solutions of a differential equation that models the planar oscillations of a satellite about its center of mass in an elliptic orbit. The proof is based on a suitable version of the Poincaré-Birkhoff theorem and the third-order approximation method. M. Budyko and W. Sellers independently introduced seminal energy balance climate models in 1969, each with the goal of investigating the role played by positive ice albedo feedback in climate dynamics.
In this paper we replace the relaxation to the mean horizontal heat transport mechanism used in the models of Budyko and Sellers with diffusive heat transport. We couple the resulting surface temperature equation with an equation for the movement of the edge of the ice sheet (called the ice line), recently introduced by E. Widiasih. We apply the spectral method to the temperature-ice line system and consider finite approximations. We prove there exists a stable equilibrium solution with a small ice cap, and an unstable equilibrium solution with a large ice cap, for a range of parameter values. If the diffusive transport is too efficient, however, the small ice cap disappears and an ice-free Earth becomes a limiting state. In addition, we analyze a variant of the coupled diffusion equations appropriate as a model for extensive glacial episodes in the Neoproterozoic Era. Although the model equations are no longer smooth due to the existence of a switching boundary, we prove there exists a unique stable equilibrium solution with the ice line in tropical latitudes, a climate event known as a Jormungand or Waterbelt state. As the systems introduced here contain variables with differing time scales, the main tool used in the analysis is geometric singular perturbation theory. This paper is devoted to the chemotaxis system

u_t = Δu - χ∇·(u∇v), x ∈ Ω, t > 0,
τv_t = Δv - v + w, x ∈ Ω, t > 0,
w_t = Δw - ξ∇·(w∇z), x ∈ Ω, t > 0,
τz_t = Δz - z + u, x ∈ Ω, t > 0,

which models the interaction between two species in the presence of two chemicals, where χ, ξ ∈ ℝ and Ω ⊂ ℝ^2 is a bounded domain with smooth boundary. It is shown that under homogeneous Neumann boundary conditions the system possesses a unique global classical solution which is bounded whenever both ∫_Ω u_0 dx and ∫_Ω w_0 dx are appropriately small.
In particular, we extend the recent results obtained by Tao and Winkler (2015, Disc. Cont. Dyn. Syst. B) to the fully parabolic case, i.e., the case τ = 1. Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides an efficient approach for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods of strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject time steps without losing information about the future Brownian path, termed Rejection Sampling with Memory (RSwM). This method uses a stack data structure to perform the rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed-timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs. To explore the impact of media coverage and spatial heterogeneity of the environment on the prevention and control of infectious diseases, a spatio-temporal SIS reaction-diffusion model with a nonlinear contact transmission rate is proposed. The nonlinear contact transmission rate is spatially dependent and is introduced to describe the impact of media coverage on the transmission dynamics of the disease.
The basic reproduction number associated with the disease in the heterogeneous environment is established. Our results show that the degree of mass media attention plays an important role in preventing the spread of infectious diseases. Numerical simulations further confirm our analytical findings. We consider the no-flux initial-boundary value problem for Keller-Segel-type chemotaxis growth systems of the form

u_t = Δu - χ∇·(u∇v) + ρu - μu^2, x ∈ Ω, t > 0,
v_t = Δv - v + u, x ∈ Ω, t > 0,

in a ball Ω ⊂ ℝ^n, n ≥ 3, with parameters χ > 0, ρ ≥ 0 and μ > 0. By means of an argument based on a conditional quasi-energy inequality, it is first shown that if χ = 1 is fixed, then for any given K > 0 and T > 0 one can find radially symmetric initial data, possibly depending on K and T, such that for arbitrary μ ∈ (0, 1) the corresponding local-in-time classical solution (u, v) satisfies u(x, t) > K/μ for some x ∈ Ω and t ∈ (0, T); in fact, this growth phenomenon is identified as being generic in the sense that the set of all initial data having this property is dense in the set of all suitably regular radial initial data in a certain topology. Secondly, turning the focus to possible effects of large chemotactic sensitivities, on the basis of the above it is shown that when ρ ≥ 0 and μ > 0 are fixed, then for all L > 0, T > 0 and χ > μ one can fix radial initial data (u_{0,χ}, v_{0,χ}) which decay in L^∞(Ω) × W^{1,∞}(Ω) as χ → ∞, and which are such that for the respective solution (u_χ, v_χ) there exist x ∈ Ω and t ∈ (0, T) fulfilling u_χ(x, t) > L. In this paper, we investigate the global asymptotic stability of multi-group SIR and SEIR age-structured models.
These models allow the infectiousness and the death rate of susceptible individuals to vary and depend on the susceptibility, with which we can consider the heterogeneity of the population. We establish the global dynamics and demonstrate that the heterogeneity does not alter the dynamical structure of the basic SIR and SEIR models with age-dependent susceptibility. Our results also demonstrate that, for the age-structured multi-group models considered, the graph-theoretic approach can be successfully applied by choosing an appropriate weighted matrix. This work deals with the properties of the traveling wave solutions of a doubly degenerate cross-diffusion model

∂b/∂t = D_b ∇·{n^p b(1 - b)∇b} + n^q b^l,
∂n/∂t = D_n ∇^2 n - n^q b^l,

where p ≥ 0, q > 1, l > 1. This system accounts for degenerate diffusion at the population densities n = b = 0 and b = 1, modeling the growth of a certain bacterial colony with volume filling. The existence of finite traveling wave solutions is proven, which provides partial answers to the spatial patterns of the colony. In order to overcome the difficulty of traditional phase plane analysis in higher dimensions, we use the Schauder fixed point theorem and shooting arguments. New error estimates are established for Pian and Sumihara's (PS) 4-node assumed stress hybrid quadrilateral element [T.H.H. Pian, K. Sumihara, Rational approach for assumed stress finite elements, Int. J. Numer. Methods Engrg., 20 (1984), 1685-1695], which is widely used in engineering computation. Based on a displacement-based formulation equivalent to the PS element, we show that the numerical strain and a postprocessed numerical stress are uniformly convergent with respect to the Lamé constant λ on the meshes produced through the uniform bisection procedure.
Within this analysis framework, we also show that both the numerical strain and stress are uniformly convergent on meshes which are stable for the Q_1-P_0 Stokes element. Two semi-implicit numerical methods are proposed for solving the surface Allen-Cahn equation, which is a general mathematical model describing phase separation on general surfaces. The spatial discretization is based on the surface finite element method, while the temporal discretizations are first- and second-order stabilized semi-implicit schemes that guarantee energy decay. Stability analysis and error estimates are provided for the stabilized semi-implicit schemes. Furthermore, first- and second-order operator splitting methods are presented for comparison with the stabilized semi-implicit schemes. Some numerical experiments, including phase separation and mean curvature flow on surfaces, are performed to illustrate the stability and accuracy of these methods. A mathematical model describing the propagation of fungal diseases in plants is proposed. The model takes into account both chronological age and age since infection. We investigate and fully characterize the large-time behaviour of the solutions. Existence of a unique endemic stationary state is ensured by a threshold condition: R_0 > 1. Then, using Lyapunov arguments, we prove that if R_0 ≤ 1 the disease-free stationary state is globally stable, while when R_0 > 1, the unique endemic stationary state is globally stable with respect to a suitable set of initial data. This paper concerns the existence of affine-periodic solutions for affine-periodic (functional) differential systems, which are a new type of quasi-periodic solution when bounded. Some criteria on the existence of periodic solutions, more general than LaSalle's, are established. Some applications are also given. This work focuses on a class of retarded stochastic differential equations that need not satisfy dissipative conditions.
The principal technique of our investigation is to use the variation-of-constants formula to overcome the difficulties due to the lack of information at the current time. By using the variation-of-constants formula and estimating the diffusion coefficients, we give sufficient conditions for p-th moment exponential stability, almost sure exponential stability, and convergence of solutions from different initial values. Finally, we provide two examples to illustrate the effectiveness of the theoretical results. In this paper, we consider the problem of minimizing the carbon abatement cost of a country. Two models are built within the stochastic optimal control framework based on two types of abatement policies. The corresponding HJB equations are deduced, and the existence and uniqueness of their classical solutions are established by PDE methods. Using model parameters obtained from real data, we carry out numerical simulations via a semi-implicit method. We then discuss the properties of the optimal policies and minimal costs. Our results suggest that a country needs to keep relatively low economic and population growth rates and maintain a stable economy in order to reduce the total carbon abatement cost. In the long run, it is better for a country to seek more efficient carbon abatement techniques and an environmentally friendly way of economic development. An ordinary differential equation model describing the interaction of water and plants in an ecosystem is proposed. Despite its simple appearance, it is shown that the model possesses surprisingly rich dynamics, including multiple stable equilibria, backward bifurcation of positive equilibria, supercritical or subcritical Hopf bifurcations, a bubble loop of limit cycles, homoclinic bifurcation, and Bogdanov-Takens bifurcation. We classify the bifurcation diagrams of the system using the rainfall rate as the bifurcation parameter.
In the transition from global stability of the bare-soil state at low rainfall to global stability of the high-vegetation state at high rainfall rates, oscillatory states or multiple equilibrium states can occur, which can be viewed as a new indicator of catastrophic environmental shift. In this paper, we mainly discuss the existence and asymptotic stability of traveling fronts for nonlocal evolution equations. Under the monostable assumption, we obtain that there exists a constant c* > 0 such that the equation has no traveling front for 0 < c < c* and a traveling front for each c ≥ c*. For c > c*, we further show that the traveling front is globally asymptotically stable and is unique up to translation. When applied to some differential equations or integro-differential equations, our results recover and/or complement a number of existing ones. OBJECTIVE: The purpose of this study was to examine the importance of factors related to nurse retention. BACKGROUND: Retaining nurses within the healthcare system is a challenge for hospital administrators. Understanding factors important to nurse retention is essential. METHODS: Responses of nurses (n = 279) to the Baptist Health Nurse Retention Questionnaire (BHNRQ) at a 391-bed Magnet(R)-redesignated community hospital were analyzed to explore differences in importance scores of bedside nurses. RESULTS: The results demonstrate that each of the 12 items on the BHNRQ was moderately to highly important. A multivariate analysis of variance based on generation, degree, unit, and experience revealed no significant differences on subscale scores (nursing practice, management, and staffing). Themes derived from the comment section of the BHNRQ were consistent with the quantitative findings. CONCLUSION: Clinical and managerial competence, engagement with employees, and presence on the unit are key to retaining a satisfied nursing workforce.
BACKGROUND: The Centers for Medicare and Medicaid Services Innovation Center introduced the Bundled Payments for Care Improvement (BPCI) initiative in 2011 as 1 strategy to encourage healthcare organizations and clinicians to improve healthcare delivery for patients, both when they are in the hospital and after they are discharged. Mercy Health Saint Mary's, a large urban academic medical center, engaged in BPCI primarily with a group of medical diagnosis-related groups (DRGs). OBJECTIVES: In this article, we describe our experience creating a system of response for the diverse people and diagnoses that fall into the medical DRG bundles and specifically identify organizational factors for enabling successful implementation of bundled payments. RESULTS: Our experience suggests that interprofessional collaboration enabled program success. CONCLUSIONS: Although still in its early phases, observations from our program's strategies and tactics may provide potential insights for organizations considering engagement in the BPCI initiative. The most frequent cause of sentinel events is poor communication during the nurse-to-nurse handoff process. Standardized methods of handoff do not fit every patient care setting. The aim of this quality improvement project was to successfully implement a modified bedside handoff model, with some report given outside and some inside the patient's room, in a postpartum unit. A structured educational module and champion nurses were used. The new model was evaluated based on the change in compliance, patient satisfaction, and nursing satisfaction. Two months after implementation, there was an increase in nursing compliance in completing all aspects of the model, as well as an increase in both patient and staff satisfaction with the process. Replicating this project may help other specialty units adhere to safety recommendations for handoff report.
OBJECTIVE: This study reported the evolution of transformational leadership (TL) practices and behaviors across years of age, management experience, and professional nursing practice within a professional nursing leadership organization. BACKGROUND: Recent studies of CNO TL found valuations peak near age 60 years. This study reported on a wider range of management positions, correlating years of RN practice, management experience, and age to TL metrics. METHOD: This study used Kouzes and Posner's Leadership Practices Inventory-Self-Assessment (LPI-S) to survey a nursing leadership organization, the Association of California Nurse Leaders (ACNL). Anonymous responses were analyzed to identify leadership trends in age and years of professional service. RESULTS: On average, LPI-S metrics of leadership skills advance through years of management, RN experience, and age. The TL scores are statistically higher in most LPI-S categories for those with more than 30 years of RN or management experience. Decade-averaged LPI-S TL metrics in the ACNL survey evolve linearly with age before peaking in the decade from age 60 to 69 years. A similar evolution of TL metrics is seen across decades of either management experience or RN experience. Transformational leadership increased with nursing maturity, particularly for the LPI-S categories of "inspire a shared vision," "challenge the process," and "enable others to act." CONCLUSION: In the ACNL population studied, decade-averaged leadership metrics advanced. Leadership evolution with age in the broader RN population peaked in the age bracket of 60 to 69 years. The LPI-S averages declined after age 70 years, coinciding with a shift from full-time work toward retirement and part-time employment.
An ultrasound-guided peripheral intravenous (UGPIV) quality improvement project occurred in an 849-bed tertiary care hospital with a goal to reduce the use of central lines, in particular, peripherally inserted central catheters (PICCs). Since implementation, PICCs have decreased by 46.7% overall, and 59 nurses in-hospital are competent in placing UGPIVs. Placement of UGPIVs by the bedside nurse is a key initiative in decreasing PICC use and, potentially, infections. Nurse leaders lack timely access to trended electronic health record (EHR) data to drive decision making. Robust nurse-sensitive patient outcome data are difficult to locate in EHRs and largely absent across entities. The Colorado Collaborative for Nursing Research is currently testing a federated data system to get nurse leaders the information they need, when they need it. Safety net settings care for a disproportionate share of low-resource patients and often have fewer resources to invest in nursing research. To address this dilemma, an academic-clinical partnership was formed in an effort to increase nursing research capacity at a safety net setting. Penn Presbyterian Medical Center and the Center for Health Outcomes and Policy Research located at the University of Pennsylvania partnered researchers and baccalaureate-prepared nurses in an 18-month research skill development program. This article describes the programmatic design, conceptual framework, resource requirements, and effect on the institutional partners and participants. Si4+ is a smaller ion with a higher charge than Zn2+. When incorporated into the ZnO lattice above a certain amount, the Si4+ ions create defects at the substituted sites. The Zn and O planes are dragged closer to each other by Si substitution, while the lattice expansion along the c-axis is greater than that along the a or b axes. However, the c/a ratio decreases continuously. The mechanical properties deteriorate with substitution as well, in step with the structural changes.
The experimentally observed mechanical hardness of Si4+ substituted ZnO pellets decreases with incorporation of Si, following the theoretical trend. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The effect of grain size on thermal hysteresis is studied in ribbons and thin strips obtained by rapid solidification techniques. Results show a strong increase of the hysteresis width when the grain size is below ~100 μm. This effect is attributed to frictional work spent to accommodate the different martensitic variants in a zone close to grain boundaries, which constitutes an energy barrier proportional to the grain boundary area. A model for describing this effect, based on thermodynamics and fitted experimental data, is proposed. Understanding dislocation behaviors in two-dimensional materials is crucial since they influence not only the structural stability of materials but also their functional properties. By combining in situ and aberration-corrected transmission electron microscopy characterization of Sb2Te3, it was observed that hot deformation introduced a large density of anti-phase boundaries, forcing dislocations to climb. Consequently, the glide of dislocations can break the constraint of localized planar slip in the layered structure, increasing the possibility of dislocation interaction and multiplication. The deformability of the hot-deformed material is strongly increased. The densification and mechanical properties of B4C-TiB2 composites fabricated by reactive pulsed electric current sintering from a mixture of TiC and amorphous B powders were investigated. An excess of B was essential to remove the carbon produced by the reaction. A degassing process at 1900 °C before applying a 50 MPa external pressure greatly improved the densification of the composites.
The B4C-41 vol% TiB2 composite obtained at the optimum condition had a high three-point bending strength of 891 MPa, a Vickers hardness of 28 GPa, and a fracture toughness of 4.4 MPa·m^(1/2). X-ray tomography is employed to observe the effects of intermetallic compound particles on the nucleation and growth of hydrogen micropores at high temperatures in Al-Zn-Mg-Cu aluminum alloys. Hydrogen micropores are heterogeneously nucleated on particles during exposure at 748 K. Growth and coalescence of the hydrogen micropores are observed with increasing exposure time. Interactions between hydrogen micropores and particles have a significant influence on the growth and coalescence of the hydrogen micropores. The growth speed of hydrogen micropores nucleated on small, spherical particles is faster than that on other nucleation sites. Fe65Ni17P11.5C6.5 bulk metallic glasses (BMGs) containing different oxygen contents of 92, 181 and 252 at. ppm were prepared and their deformation behavior under compressive loading was studied. It was found that increasing oxygen content increased strength but significantly decreased plastic deformation capability. X-ray photoelectron spectroscopy (XPS) valence-band spectra confirmed that oxygen doping tends to transfer interatomic bonds from s-like to p-d hybrid bonds, leading to the lower compressive plasticity. Our findings indicate that the room-temperature brittleness of Fe-based BMGs is closely related to the concentration of oxygen impurities and that the preparation process has to be carefully controlled. An internal oxidation zone with (Mn1-x,Fex)O mixed oxide precipitates occurs after annealing an Fe-1.7 at.% Mn steel at 950 °C in an N2 plus 5 vol.% H2 gas mixture with a dew point of 10 °C.
Local thermodynamic equilibrium in the internal oxidation zone is established during annealing of the Mn-alloyed steel. As a result, the composition of the (Mn1-x,Fex)O precipitates depends on the local oxygen activity. The oxygen activity decreases as a function of depth below the steel surface, and consequently the concentration of Fe in the (Mn1-x,Fex)O precipitates decreases. The microstructure in a Mn55.2Ga19.0Cu25.8 alloy that exhibits both martensitic and diffusional phase transformations was examined by electron microscopy, and the potential effects of these transformations on the large coercivity (~2 T) were discussed. The alloy showed a cellular microstructure, composed of a cuboidal matrix enveloped by a boundary phase, superposed on well-defined martensite plates. Since the boundary phase is enriched in the non-magnetic element Cu, it may serve as magnetic isolation for the matrix region, which has a face-centered tetragonal lattice formed by the martensitic transformation, or offer potential pinning sites for magnetic domain walls. The core structures and mobility of ⟨c⟩ dislocations in magnesium were predicted using both density functional theory and molecular dynamics simulations. The pure edge and screw cores are compact at 0 K. With increasing temperature up to 700 K, the edge dislocation dissociates into two ⟨c⟩/2 partials on the basal plane, but remains immobile. The screw dislocation remains compact and continuously cross-slips between the three prismatic planes. At room temperature, the screw dislocation glides only after emitting a vacancy at 420 MPa. The Peierls stress of the dislocation after vacancy emission is 50 MPa.
Controlling and manipulating heterogeneity through spinodal decomposition is one of the few effective ways of making better metallic glass matrix composites. Here, using finite element modeling, we investigate how the geometry, orientation, size, and statistical and spatial distribution of varying free-volume heterogeneities in spinodal decomposition affect the mechanical properties of the composites. Among the plethora of factors, the orientation and statistical distribution of free volumes in the spinodal microstructures are identified as the two key ones critically influencing the mechanical response. A size effect is also discovered and found to be governed by the minimum size for shear-band initiation and propagation. The strength and fracture behavior of a directionally solidified Al2O3/Y3Al5O12 eutectic single crystal were investigated. To identify and classify the toughening mechanisms, the fractography of the single crystal was thoroughly analyzed. It was observed that the interfaces between Al2O3 and Y3Al5O12, which hindered crack propagation, and the nm-sized (111) cleavage planes appearing along the tearing ridges of Y3Al5O12 markedly improved the toughness of the Al2O3/Y3Al5O12 eutectic single crystal. These results could improve understanding of the microstructure design and deformation mechanism of the Al2O3/Y3Al5O12 eutectic crystal. Ceria-doped zirconia has been shown to exhibit enhanced shape memory properties in small-volume structures such as particles and micropillars. Here those properties are translated into macroscopic materials through the fabrication of zirconia foams by ice templating. Directional freezing is used to produce foams with micron-sized pores and struts that, due to their fine scale, can locally take advantage of the shape memory effect.
The foams are subjected to thermal cycling and X-ray diffraction analysis to evaluate the martensitic transformation that underlies the shape memory properties, and are found to survive the transformation through multiple cycles. The influence of hydrogen on the mechanical behavior of the CoCrFeMnNi high-entropy alloy (HEA) was examined through tensile and nanoindentation experiments on specimens hydrogenated via gaseous and electrochemical methods. Results show that the HEA's resistance to gaseous hydrogen embrittlement is better than that of two representative austenitic stainless steels, in spite of the fact that it absorbs a larger amount of hydrogen than the two steels. Reasons for this were discussed in terms of the hydrogen-enhanced localized plasticity mechanism and the critical amount of hydrogen required for it. These were further substantiated by additional experiments on electrochemically charged specimens. In this letter, a new Ti-Fe-Sn-Nb hypoeutectic alloy composed of primary dendritic β-Ti and an ultrafine (β-Ti + TiFe) eutectic was developed, showing superior mechanical properties. The as-cast Ti67Fe27Sn3Nb3 (at.%) alloy exhibited an exceptionally high yield stress (2.18 GPa), comparable to that of bulk metallic glasses. Most importantly, the sample presented significant strain hardening (320 MPa) and enhanced plasticity. Slip deformation and dislocation accumulation in the β-Ti dendrites contribute to the plasticity and the pronounced strain hardening, whereas the high strength stems from the ultrafine (β-Ti + TiFe) eutectic structure as well as solution hardening in the multicomponent system.
The microstructure design strategy of laminated structures was applied to improve the mechanical properties of metal matrix composites (MMCs). The laminated composites were prepared by hot pressing and rolling of alternately stacked Ti foils and SiCp/Al composite foils, and exhibited a superior strength-ductility synergy compared to SiCp/Al bulk MMCs. Coupled X-ray tomography and digital image correlation (DIC) revealed that the enhanced mechanical properties originated from the contribution of the interfaces to local stress/strain transfer behavior, thus delaying crack initiation and propagation within the SiCp/Al layers. The strain-rate sensitivity along load-unload experiments on fine-grained pure Cu and Cu-Ni alloys is presented. For grain sizes in the range of 1-10 μm in pure Cu, Cu-16 at.% Ni and Cu-50 at.% Ni, an elevated rate sensitivity in the first few percent of plastic strain is observed. At this elasto-plastic transition, a substantial hysteresis develops during unloading, corroborating the Bauschinger effect in the fine-grained materials. The strain-rate sensitivity results are analyzed along the load-unload response of the materials. A dislocation-based phenomenological model is developed to describe the experimental observations. Polycrystalline bulks of Si-content-tuned, Ge-doped higher manganese silicides (HMSs) were fabricated to elucidate the effects of Si content on the phase formation behavior and thermoelectric properties. The phase formation and electronic transport characteristics of the HMSs were significantly dependent on Si content. An improved power factor was obtained at higher Si contents because of an enhanced Seebeck coefficient due to an increase in the density-of-states effective mass while maintaining electrical conductivity.
Furthermore, the lattice thermal conductivity was reduced through Si-content tuning, which suppressed the formation of secondary phases. Thus, a maximum ZT of 0.61 at 823 K was obtained in MnSi1.77Ge0.027. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The kinetics of the reverse martensitic transformation in shape memory alloy wires is studied under conditions at which it is restricted neither by the rate of heat transfer nor by mechanical inertia. Two characteristic times for the transformation are identified and estimated. A model provides a universal expression that fits all experimental measurements performed at different temperatures. The kinetic law predicted by the model indicates that interface velocities are governed by viscous resistance and are thus much slower than the shear wave speed, even under very large driving force values. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Two temperature-dependent dielectric anomalies are observed in the series of CoFe2O4 (CFO)/Bi3.15Nd0.85Ti3O12 (BNdT) composite ceramics, arising from a periodic structural fluctuation found by high-resolution transmission electron microscopy (HRTEM) in the BNdT phase. The latter dielectric anomaly upshifts from 580 degrees C to 760 degrees C with increasing CFO content. The higher Curie temperatures are attributed to larger orthorhombicity values, arising from the different thermal expansion coefficients and the coherent interfaces between BNdT and CFO. For appropriate CFO contents, the conductive mechanisms stemming from grain boundaries and plate boundaries merged approximately into one process. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. A characteristic creep behavior, with a transient minimum creep rate region, was observed in a Co-Al-W-Ta-Ti single-crystal superalloy at 900 degrees C and 420 MPa. 
Interrupted creep tests were performed in the various creep regions; investigation of the substructural evolution indicated that the configurations of the stacking faults differed among these regions. The extension of stacking faults and the formation of Lomer-Cottrell locks are hypothesized to be responsible for the accelerating region and the steady-state region, respectively. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The influence of the chemical compositions of FePt films on the surface segregation of Pt was investigated by cross-sectional high-angle annular dark field (HAADF) scanning transmission electron microscopy (STEM) of L1(0)-FePt granular films grown on single-crystalline MgO substrates. Ab initio atomistic simulations revealed that the surface segregation of Pt is detrimental to the magnetocrystalline anisotropy energy of the L1(0)-FePt grains. This detrimental effect is pronounced for grain sizes of less than 15 nm. We found that the Pt surface segregation can be suppressed by a trace addition of Ag to the FePt film. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The processing - phase transformation - deformation response relationship of the Ni50Ti35Hf15 high temperature shape memory alloy was studied. As the material was hot extruded and further underwent wire drawing, the phase transformation temperatures decreased and the strength levels increased, yet the initial brittle response was replaced by a near-perfect superelastic response above the austenite finish temperature. Moreover, the Ni50Ti35Hf15 wires exhibited a remarkably stable cyclic actuation response under 300 MPa, a stress beyond the operating levels of known shape memory actuators, in contrast to the poor cyclic stability of the hot-extruded samples. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. 
Additive manufacturing (AM) or 3D printing is well known for producing arbitrarily shaped parts without any tooling required, offering a promising alternative to the conventional injection molding method for fabricating near-net-shaped magnets. In this viewpoint, we compare two 3D printing technologies, namely binder jetting and material extrusion, to determine their applicability to the fabrication of Nd-Fe-B bonded magnets. Prospects and challenges of these state-of-the-art technologies for large-scale industrial applications will be discussed. Published by Elsevier Ltd on behalf of Acta Materialia Inc. Intelligent application of materials with site-specific properties will undoubtedly allow more efficient components and use of resources. Despite such materials being ubiquitous in nature, human engineering structures typically rely upon monolithic alloys with discrete properties. Additive manufacturing, where material is introduced and bonded to components sequentially, is by its very nature a good match for the manufacture of components with property changes built in. Here, some of the recent progress in additive manufacturing of material with spatially varied properties is reviewed, alongside some of the challenges facing and opportunities arising from the technology. (C) 2016 Acta Materialia Inc. Published by Elsevier Ltd. Many additively manufactured (AM) materials have properties that are inferior to those of their wrought counterparts, which impedes industrial implementation of the technology. Bulk deformation methods, such as rolling, applied in-process during AM can provide significant benefits, including reduced residual stresses and distortion, and grain refinement. The latter is particularly beneficial for titanium alloys, where the typically large prior grains are converted to a fine equiaxed structure, giving isotropic mechanical properties that can be better than those of the wrought material. 
The technique is also beneficial for aluminium alloys, where it enables a dramatic reduction in porosity and improved ductility. (C) 2016 The Authors. Published by Elsevier Ltd on behalf of Acta Materialia Inc. Geometrical conformity, microstructure and properties of additively manufactured (AM) components are affected by the desired geometry and many process variables within given machines. Building structurally sound parts with good mechanical properties by trial and error is time-consuming and expensive. Today's computationally efficient, high-fidelity models can simulate the most important factors that affect the AM products' properties, and upon validation can serve as components of digital twins of 3D printing machines. Here we provide a perspective on the current status and research needs for the main building blocks of a first-generation digital twin of AM from the viewpoints of researchers from several organizations. (C) 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. High-strain-rate deformation in ultrasonic additive manufacturing was analyzed by performing microstructural characterization via electron microscopy. The micro-asperities on the top tape surface, which were formed by contact with the sonotrode surface, underwent cyclic deformation in the shear direction at high strain rates during welding with an additional tape. This caused plastic flow and crushing of the micro-asperities, and a flattened interface was formed between the upper and lower tapes. Further, surface oxide films were fractured and dispersed by ultrasonic vibration, and metallurgical welding was achieved. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Additive manufacturing (AM) of metals is rapidly emerging as an established manufacturing process for metal components. Unlike traditional metals fabrication processes, metals fabricated via AM undergo localized thermal cycles during fabrication. 
As a result, AM presents the opportunity to control the liquid-solid phase transformation, i.e. material texture. However, thermal cycling presents challenges from the standpoint of solid-solid phase transformations. We discuss the opportunities and challenges in metals AM in the context of texture control and associated solid-solid phase transformations in Ti-6Al-4V and Inconel 718. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Based on our experience gained from uncertainty quantification (UQ) of traditional manufacturing, this paper discusses UQ for additive manufacturing with a focus on the prediction of material properties. Applications of UQ methods in traditional manufacturing are briefly summarized first. Based on that, we investigate how state-of-the-art UQ techniques can be applied to the AM process to quantify the uncertainty in material properties due to various sources of uncertainty. The UQ of the ultimate tensile strength of a structure obtained from laser sintering of nanoparticles is used as an example to illustrate the proposed UQ framework. (C) 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Additive manufacturing offers unprecedented opportunities to design complex structures optimized for performance envelopes inaccessible under conventional manufacturing constraints. Additive processes also promote realization of engineered materials with microstructures and properties that are impossible via traditional synthesis techniques. Enthused by these capabilities, optimization design tools have experienced a recent revival. 
The current capabilities of additive processes and optimization tools are summarized briefly, while an emerging opportunity is discussed to achieve a holistic design paradigm whereby computational tools are integrated with stochastic process and material awareness to enable the concurrent optimization of design topologies, material constructs and fabrication processes. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Additive manufacturing technology provides a revolutionary way of producing engineering structures regardless of their geometric complexity. Hence, topology optimization is rapidly becoming a promising tool for designing additively manufactured structures to reduce weight and achieve optimal performance simultaneously. However, a common feature of additively manufactured materials is their anisotropy, arising from the design and manufacturing process. Therefore, this viewpoint paper aims at presenting an overview of the anisotropy of additively manufactured materials and providing some insights into the role of anisotropy in topology optimization of additively manufactured load-bearing structures. (C) 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. This essay challenges the prevailing view of progressive rationality and disenchantment as set out in Max Weber's social theory and reproduced in organizational neo-institutionalism. We observe that rationality and disenchantment cannot exist in the absence of magic, mystery and enchantment. We argue that the contemporary celebration of rationality and disenchantment is a modernist discourse that has marginalized equally compelling instances of re-enchantment. Drawing from the popular press and management research, we identify five themes of re-enchantment in the world: the rise of populism, the return of tribalism, the resurgence of religion, the re-enchantment of science and the return to craft. 
We use these phenomena to elaborate four alternative constructs - authenticity, reflexivity, mimesis and incantation - that counterbalance the over-rationalized and paralyzing concepts in neo-institutionalism - legitimacy, embeddedness, isomorphism and diffusion. (C) 2017 Elsevier Ltd. All rights reserved. This short paper responds to Apelt et al.'s (2017) comments on Ahrne et al.'s (2016) proposal of extending the concept of organization to any decided social orders and thereby putting organization studies at the centre of the social sciences. We highlight some misunderstandings about the aims of this proposal and discuss Apelt et al.'s own proposal of re-focusing organization studies on formal organizations based on Niklas Luhmann's systems theory. (C) 2017 Published by Elsevier Ltd. We explore how strategic initiatives emerge at the business unit level in the context of multi-business firms. Findings show that such initiatives create cross-business synergies in the absence of any direct intervention from the headquarters. Four factors appeared to foster the development of autonomous cross-business collaboration: a sense of urgency at the level of the firm, the existence of a few broad but strong corporate strategic guidelines, the existence of a set of cross-business integration mechanisms, and an organizational culture promoting collaboration. Our findings suggest that, in addition to developing and enforcing top-down cross-business initiatives, headquarters would benefit from acknowledging the importance of business units' local knowledge by creating an organizational environment characterized by the four conditions identified in this paper. (C) 2016 Elsevier Ltd. All rights reserved. Social media is embedded in today's internationalization strategy. Companies extend their reach into foreign countries by posting and tweeting. 
Firms also enhance their mobile capabilities in foreign markets (e.g., knowledge and reputation) through user-generated content in online social networks. Leveraging the capabilities-based theory of the multinational enterprise, this paper builds upon a resource-based, industry/network-based, and institution-based view framework. The study provides a comprehensive conceptual and empirical model to explain the effect of social networks on foreign direct investment. Empirical analysis in a global panel dataset of >4500 multinational enterprises suggests that online social networks' activity stimulates foreign capital expenditure and new affiliates. In addition, the article explores the relevance of customer capabilities along with sectoral and institutional moderating effects. (C) 2016 Elsevier Ltd. All rights reserved. Does advertising lead to higher profits? This question has preoccupied company executives and academic researchers for many decades. Arguments have been put forth in both directions, and evidence is mixed at best. In this article, we re-examine the question from a value creation and value capturing perspective, which allows us to re-interpret and reconcile the different views and empirically validate the resulting hypotheses. Using a database of the top 500 brands of established companies during the 2008-2015 period, we find that advertising spending has no significant impact on profitability, while both brand value and research and development (R&D) spending have a clearly positive effect. In addition, we observe a positive interaction effect between advertising spending and R&D spending and a negative interaction between brand value and R&D spending on profitability. These findings corroborate the view that advertising in and of itself does not improve profitability; rather, its effect is positive only when it acts in support of customer value creation as a result of R&D. (C) 2016 Published by Elsevier Ltd. 
This paper aims to provide a detailed analysis of the relationship between board leadership structures and executive compensation. According to agency theory, the combined position of CEO and Chairperson of the Board (COB) entails greater compensation for the CEO in order to reduce conflicts of interest. In the literature, combined board structure is generally considered to generate additional costs for companies. However, the choice of two separate structures implies the payment of incentive compensation for the COB in addition to that defined for the CEO. This paper investigates the financial cost of duality when compensation packages are set for both leaders. Our results suggest that although combined board structure is associated with higher incentive compensation for the CEO, the overall compensation cost to the company is no higher when the chairperson's compensation is considered. (C) 2017 Elsevier Ltd. All rights reserved. This study examines how behavioral processes among nominating committees, CEOs, and board chairs affect the comprehensiveness of non-executive director selection planning and evaluation. Building on a theory-building multiple-case study, our findings indicate that comprehensiveness is based on three key factors: (1) task-related mutual and collective interactions in nominating committees, (2) board chair leadership in structuring selection processes with high facilitation skills, and (3) the level and timing of information exchange between CEOs and board chairs. Furthermore, we highlight the interconnectedness and temporal embeddedness of these behavioral processes. Our study contributes to a more holistic understanding of non-executive director selections and provides new insights into the complex and interwoven social dynamics among nominating committees, CEOs, and board chairs. (C) 2016 Elsevier Ltd. All rights reserved. 
This study examines how the effect of CEO duality on firm performance is affected by two internal governance forces, namely other executives in the top management team and blockholding outside directors. Results based on a longitudinal dataset from the U.S. computer industry were consistent with my hypotheses. Specifically, I found that the effect of CEO duality was negative when the CEO had dominant power relative to other executives and when the board had a blockholding outside director, but was nonsignificant otherwise. This study enriches our understanding of the effect of CEO duality, helps reinforce the call for the nonduality structure as the default choice, and puts the burden of proof on those who wish to justify otherwise on special grounds. (C) 2016 Elsevier Ltd. All rights reserved. We introduce a recent development in the statistical analysis of relational data that offers rigorous discrimination of a variety of structural and behavioural effects of interest to management research. Exponential random graph models account for the highly interdependent nature of network data, which is problematic for the predominant inferential statistical analyses used in management research. We illustrate the value of the approach with an application focused on executive recruitment by large UK firms, modelling migrations of managers among firms as a network of relationships. We find rigorous statistical support for the influence of industry origin in executive recruitment, particularly in relation to legal and accounting activities. The flexibility and sophisticated relational variables available in the models offer considerable analytical power of value to a wide range of management applications. (C) 2016 Elsevier Ltd. All rights reserved. 
Using an experimental condition that relaxed cognitive constraints in a behavior description interview (BDI), our results uncovered a pattern of low cognitive saturation in the traditional BDI format but significantly higher saturation in the relaxed condition. Well over half of the time, interviewees reported different experiences in the relaxed condition, and those experiences were rated higher by the interviewers and correlated more strongly with job performance. A potential implication is that inhibitive cognitive demands in traditionally administered BDIs result in a number of interviewees reporting relevant experiences that come to mind easily rather than ones that maximally portray their capabilities, thereby shifting BDIs more towards assessment of typical behavior. With reduced cognitive constraints, interviewees had greater opportunity to locate more maximally oriented experiences, with higher-ability individuals benefitting the most. (C) 2016 Elsevier Ltd. All rights reserved. This study examines the mediating role of service recovery judgments between pre-recovery emotions and post-recovery satisfaction, and investigates the role of firm reputation in this mediation context. Using a moderated mediation framework, the authors test the model with data from 366 customers who experienced a banking service failure and complained to a third party. The results show that the distributive, procedural, and interactional justice dimensions mediate the relationship between pre-recovery emotions and satisfaction. Firm reputation moderates the relationship between emotions and satisfaction via distributive and interactional justice, but not via procedural justice. This study provides evidence for the notion that pre-recovery emotion is an antecedent of the service recovery process and that firm reputation plays an essential role in this process. (C) 2016 Elsevier Ltd. All rights reserved. 
While the literature has indeed confirmed a general tendency linking small and medium enterprises (SMEs) to a dynamic of greater job creation, there is little available evidence on what has happened to job quality since the financial crisis. Through a representative sample of 5311 employees in 2008 (the first year of job destruction) and 4925 employees in 2010 (the last year for which data were available), and using a two-stage structural equation model, this article empirically analyses the multidimensional determinants of job quality, by enterprise-size class, in Spain. The research has revealed three main results. First, job quality in Spain improved in all enterprises, regardless of their size, during the early years of the recession. Second, the greatest improvements were found in SMEs. Although job quality was already better in SMEs than in large enterprises in 2008, the differences between them subsequently widened. Third, this accelerated divergence was explained by the following dimensions: working conditions, work intensity, health and safety at work, and work-life balance. These dimensions were much more positive in SMEs. Employment-related public policy should therefore focus more specifically on SMEs. There are two reasons for this. First, despite the recession, SMEs have shown themselves to be key factors in the explanation of job quality. Second, by making changes to their value generation model, they could continue to drive the creation of better quality jobs. (C) 2016 Elsevier Ltd. All rights reserved. The purpose of this paper is to investigate the relationship between the number of minority international joint ventures (MIJVs) formed and the level of internationalization attained in the context of small and medium-sized enterprises (SMEs). We argue that this is an inverted U-shaped relationship that is negatively moderated by a global (versus regional) focus. 
We test our hypotheses on a comprehensive sample of Spanish manufacturing SMEs from 2006 to 2013. Whereas our empirical analysis does not provide enough support for a curvilinear relationship, the results we obtained show a positive linear association between SMEs' number of MIJVs and internationalization and corroborate the negative moderation of a global focus. Thus, this study enhances our understanding of the specific impact of an internationalization strategy based on the formation of MIJVs in the context of SMEs. Moreover, it emphasizes the importance of considering contingencies at the regional frontier to understand the effect of SMEs' foreign expansion strategies. (C) 2016 Elsevier Ltd. All rights reserved. Ethnopharmacological relevance: Jatropha neopauciflora Pax is a species endemic to Mexico, and its latex is used in traditional medicine to treat mouth infections when there are loose teeth and to heal wounds. In this research, we evaluated the antimicrobial activity, wound healing efficacy and chemical characterization of J. neopauciflora latex in a murine model. Materials and methods: The antibacterial activity was determined using Gram-positive and Gram-negative strains, the antifungal activity was determined using yeasts and filamentous fungi, and the wound healing efficacy of the latex was determined using the tensiometric method. The anti-inflammatory activity was evaluated using the plantar oedema model in rats, administering the latex orally and topically. Cytotoxic activity was determined in vitro in two different cell lines. Antioxidant capacity, total phenolics, total flavonoids, reducing carbohydrates and latex proteins were quantified. The latex analysis was performed by High Performance Liquid Chromatography (HPLC). Finally, molecular exclusion chromatography was performed. Results: The latex demonstrated antibacterial activity. The most sensitive strains were Gram-positive bacteria, particularly S. 
aureus (MIC=2 mg/mL), and the latex had bacteriostatic activity. The latex did not show antifungal activity. The latex demonstrated wound-healing efficacy comparable to that of the positive control (Recoveron). The orally administered latex demonstrated the best anti-inflammatory activity and was not toxic to either of the two cell lines. The latex had a high antioxidant capacity (SA(50)=5.4 mu g/mL), directly related to the total phenolic (6.9 mu g GAE/mL) and flavonoid (12.53 mu g QE/mL) concentrations. The carbohydrate concentration was 18.52 mu g/mL, and fructose was the most abundant carbohydrate in the latex (14.63 mu g/mL, 79.03%). Additionally, the latex contained proteins (7.62 mu g/mL) in its chemical constitution. As secondary metabolites, the HPLC analysis indicated the presence of phenols and flavonoids. Conclusions: The J. neopauciflora latex promotes the wound healing process by avoiding microorganism infections, inhibiting inflammation and acting as an antioxidant. Ethnopharmacological relevance: Ginkgo biloba L. (Ginkgoaceae) has been widely used in traditional medicine for a variety of neurological conditions, particularly behavioral and memory impairments. Aim of the study: The present study was designed to explore the effect of a standardized fraction of Ginkgo biloba leaves (GBbf) in a rat model of lithium-pilocarpine-induced spontaneous recurrent seizures and associated behavioral impairments and cognitive deficit. Materials and methods: Rats showing the appearance of spontaneous recurrent seizures following lithium-pilocarpine (LiPc)-induced status epilepticus (SE) were treated with different doses of GBbf or vehicle for the subsequent 4 weeks. The severity of seizures and aggression in rats was scored following treatment with GBbf. Further, open field, forced swim, novel object recognition and Morris water maze tests were conducted. Histopathological, protein level and gene expression studies were performed on the isolated brains. 
Results: Treatment with GBbf reduced the seizure severity score and aggression in epileptic animals. Improved spatial cognitive functions and recognition memory, along with a reduction in anxiety-like behavior, were also observed in the treated animals. Histopathological examination by Nissl staining showed a reduction in neuronal damage in the hippocampal pyramidal layer. The dentate gyrus and Cornu Ammonis 3 regions of the hippocampus showed a reduction in mossy fiber sprouting. GBbf treatment attenuated levels of ribosomal S6 and pS6 proteins, and of hippocampal mTOR, Rps6 and Rps6kb1 mRNAs. Conclusions: The results of the present study indicate that GBbf treatment suppressed the severity and incidence of lithium-pilocarpine-induced spontaneous recurrent seizures, improved cognitive functions, and reduced anxiety-like behavior and aggression. The effect was found to be due to inhibition of the mTOR pathway hyperactivation linked with recurrent seizures. Ethnopharmacological relevance: Doliocarpus dentatus is a medicinal plant widely used in Mato Grosso do Sul State for relieving the swelling and pain caused by the inflammation process and for treating urine retention. Aim of the study: The genotoxic aspects and the anti-inflammatory and antimycobacterial activity of the ethanolic extract obtained from the leaves of D. dentatus (EEDd) were investigated. Materials and methods: The EEDd was evaluated against Mycobacterium tuberculosis, and the compound composition was evaluated and identified by nuclear magnetic resonance (NMR). The mice received oral administration of EEDd (30-300 mg/kg) in carrageenan models of inflammation, and EEDd (10-1000 mg/kg) was assayed by the comet, micronucleus, and phagocytosis tests and by the peripheral leukocyte count. Results: Phenols (204.04 mg/g), flavonoids (89.17 mg/g), and tannins (12.05 mg/g) as well as sitosterol-3-O-beta-D-glucopyranoside, kaempferol 3-O-alpha-L-rhamnopyranoside, betulinic acid and betulin were present in the EEDd. 
The value of the minimal inhibitory concentration (MIC) of EEDd was 62.5 mu g/mL. The EEDd induced a significant decrease in the edema, mechanical hypersensitivity and leukocyte migration induced by carrageenan. The comet and micronucleus tests indicated that the EEDd was not genotoxic. The EEDd also did not change the phagocytic activity or the peripheral leukocyte count. Conclusions: The EEDd does not display genotoxicity, does not alter phagocytosis, and could act as an antimycobacterial and anti-inflammatory agent. This study should contribute to ensuring the safe use of EEDd. Ethnopharmacological relevance: Leaves of Crateva adansonii DC (Capparidaceae), a small bush found in Togo, are widely used in traditional medicine to cure infectious abscesses. Traditional healers of Lome harvest only budding leaves early in the morning, in specific areas, in order to prepare their drugs. Aim of the study: The main goal was to validate the ancestral picking practices and to assess the activity of C. adansonii medicine towards infectious abscesses. Materials and methods: A phytochemical screening of various C. adansonii leaf samples was performed using an original HPTLC-densitometry protocol, and major flavonoids were identified and quantified. C. adansonii samples were collected in different neighborhoods of Lome, at different harvesting times and at different ages. Radical scavenging capacity, using the DPPH assay, was used to quickly screen all extracts. Extracts were tested for anti-Staphylococcus aureus activity and anti-inflammatory effect on human primary keratinocytes infected by S. aureus. IL6, IL8 and TNF alpha expression and production were assessed by RT-PCR and ELISA assays. Results: Using antioxidant activity as the selection criterion, optimal extracts were obtained with budding leaves collected at 5:00 am in the Djidjole neighborhood. This extract showed the strongest anti-inflammatory effect on S. aureus-infected keratinocytes by reducing IL6, IL8 and TNF alpha expression and production. 
None of the extracts inhibited the growth of S. aureus. Conclusions: These results validate the traditional practices and the potential of C. adansonii as an anti-inflammatory drug. Our findings suggest that traditional healers should combine C. adansonii leaves with an antibacterial plant of the Togo Pharmacopeia in order to improve abscess healing. Ethnopharmacological relevance: Paeoniflorin and liquiritin are major constituents in some Chinese herbal formulas, such as Yiru Tiaojing (YRTJ) Granule (a hospital preparation) and Peony-Glycyrrhiza Decoction, used for hyperprolactinemia-associated disorders. Aim of the study: To investigate the effect of paeoniflorin and liquiritin on prolactin secretion. Materials and methods: The effect of YRTJ Granule on metoclopramide-induced hyperprolactinemia was tested in rats. Paeoniflorin and liquiritin in the YRTJ Granule extract were identified and quantified by HPLC. The effects of paeoniflorin and liquiritin on prolactin secretion were examined in prolactinoma cells that were identified morphologically and by Western blot. The concentration of prolactin was determined by ELISA. Gene expression was analyzed by Western blot. Results: YRTJ Granule ameliorated metoclopramide-induced hyperprolactinemia in rats. The contents of paeoniflorin and liquiritin in YRTJ Granule were 7.43 and 2.05 mg/g extract, respectively. Paeoniflorin, liquiritin and bromocriptine (a dopamine D-2 receptor (D2R) agonist) decreased the prolactin concentration in MMQ cells expressing D2R. However, the effect of liquiritin and bromocriptine was abolished in GH3 cells lacking D2R expression. Interestingly, paeoniflorin still decreased the prolactin concentration in GH3 cells in the same manner. Furthermore, paeoniflorin suppressed prolactin protein expression, but was without effect on D2R protein expression, in both MMQ and GH3 cells. 
Conclusions: The present results suggest that paeoniflorin and liquiritin play a role in the YRTJ Granule-elicited improvement of hyperprolactinemia. While the effect of liquiritin is D2R-dependent, paeoniflorin inhibits prolactin secretion in prolactinoma cells in a D2R-independent manner, which may especially benefit hyperprolactinemic patients who are refractory to dopaminergic therapies. Ethnopharmacological relevance: The traditional Chinese medicine Bu-Fei decoction (BFD) has been utilized to treat patients with Qi deficiency for decades, with the advantages of invigorating vital energy, clearing heat-toxin and moistening the lung. According to previous clinical experience and trials, BFD has indeed been found to improve the quality of life of lung cancer patients and prolong survival time. Nevertheless, little is known so far about its potential mechanisms. Regarded as a pivotal cytokine in the tumor microenvironment, transforming growth factor beta (TGF-beta) stands out as a robust regulator of epithelial-mesenchymal transition (EMT), which is closely linked to tumor progression. Aim of the study: The present study was designed to explore whether BFD antagonizes EMT by blocking the TGF-beta 1-induced signaling pathway and thereby helps create a relatively stable microenvironment for confining lung cancer. Materials and methods: Experiments were performed with lung adenocarcinoma A549 cells both in vitro and in vivo. In detail, the effects of TGF-beta 1 alone or in combination with different concentrations of BFD on migration were detected by wound healing and transwell assays, and the effects of BFD on cell viability were determined by the cell counting kit-8 (CCK-8) assay. TGF-beta 1 and EMT-relevant proteins and genes were evaluated by western blotting, confocal microscopy, quantitative real-time polymerase chain reaction (qRT-PCR), immunohistochemistry (IHC) and enzyme-linked immunosorbent assay (ELISA). 
Female BALB/C nude mice were subcutaneously implanted with A549 cells and given BFD by gavage twice daily for 28 days. The tumor volume was monitored every 4 days to draw a growth curve. The tumor weight, expression levels of EMT-related proteins in tumor tissues and the TGF-beta 1 serum level were evaluated, respectively. Results: BFD exerted only minor effects on A549 cell proliferation, in accordance with the in vivo results, which showed that tumor growth and weight were not restrained by BFD administration. However, the data showed that BFD could dose-dependently suppress EMT induced by TGF-beta 1 in vitro via attenuation of the canonical Smad signaling pathway. In the A549 xenograft mouse model, BFD also inhibited protein markers associated with EMT and TGF-beta 1 secretion into serum. Conclusions: Based on these data, BFD probably attenuates TGF-beta 1-mediated EMT in A549 cells by dampening the canonical Smad signaling pathway both in vitro and in vivo, which may help restrain the malignant phenotype induced by TGF-beta 1 in A549 cells to some extent. Ethnopharmacological relevance: In traditional Chinese medicine, Glechoma hederacea is frequently prescribed to patients with cholelithiasis, dropsy, abscess, diabetes, inflammation, and jaundice. Polyphenolic compounds are the main bioactive components of Glechoma hederacea. Aim of the study: This study aimed to investigate the hepatoprotective potential of hot water extract of Glechoma hederacea against cholestatic liver injury in rats. Materials and methods: Cholestatic liver injury was produced by ligating the common bile ducts of Sprague-Dawley rats. Saline and hot water extract of Glechoma hederacea were orally administered by gastric gavage. Liver tissue and blood were collected and evaluated using histological, molecular, and biochemical approaches. 
Results: Using a rat model of cholestasis caused by bile duct ligation (BDL), daily oral administration of Glechoma hederacea hot water extracts showed protective effects against cholestatic liver injury, as evidenced by the improvement of serum biochemicals, ductular reaction, oxidative stress, inflammation, and fibrosis. Glechoma hederacea extracts alleviated BDL-induced transforming growth factor beta-1 (TGF-beta 1), connective tissue growth factor, and collagen expression, and the anti-fibrotic effects were accompanied by reductions in alpha-smooth muscle actin-positive matrix-producing cells and Smad2/3 activity. Glechoma hederacea extracts attenuated BDL-induced inflammatory cell infiltration/accumulation, NF-kappa B and AP-1 activation, and inflammatory cytokine production. Further studies demonstrated an inhibitory effect of Glechoma hederacea extracts on the high mobility group box-1 (HMGB1)/toll-like receptor-4 (TLR4) axis of intracellular signaling pathways. Conclusions: The hepatoprotective, anti-oxidative, anti-inflammatory, and anti-fibrotic effects of Glechoma hederacea extracts seem to be multifactorial. The beneficial effects of daily Glechoma hederacea extract supplementation were associated with anti-oxidative, anti-inflammatory, and anti-fibrotic potential, as well as down-regulation of NF-kappa B, AP-1, and TGF-beta/Smad signaling, probably via interference with the HMGB1/TLR4 axis. Ethnopharmacological relevance: Herbal medicines including Tanshinone IIA (TanIIA) and Astragaloside IV (AsIV) are widely used in Asia as therapeutic agents for cardiovascular diseases, due to their complementary roles and shared properties based on the theory of traditional Chinese medicine and on pharmacological research. However, the underlying pathological mechanisms of their efficacy are still unclear. In addition, the compatibility or incompatibility of these herbal medicines when administered with other herbal remedies or with prescription drugs is unknown. 
Aim of the study: We aimed to investigate the compatibility of TanIIA and AsIV in protecting cardiomyocytes against hypoxia-induced injury. Materials and methods: Cultured cardiomyocytes were stimulated under hypoxic conditions, in the absence or presence of the two herbal compounds, TanIIA and AsIV. Indicators were determined by cytotoxicity assay, quantitative PCR, ELISA, flow cytometry, immunofluorescence staining and western blot. Results: TanIIA alone or the combined herbal compounds inhibited hypoxia-triggered chemokine production (CCL2/5/19, CXCL2), monocyte/macrophage recruitment (indicated by Transwell assay) and cytokine production (TNF-alpha, IL-6). AsIV alone or the combined herbal compounds attenuated hypoxia-induced cell apoptosis, indicated by decreased Annexin V+ cells and a decreased Bax/Bcl-2 ratio, but no significant effect of the herbal compounds on apoptosis was observed following combined hypoxia and TNF-alpha stimulation. Stress granule formation, an anti-apoptotic factor, was further enhanced by AsIV alone or the combined herbal compounds under hypoxia or heat shock stress. Moreover, immunoblotting analysis indicated that stress-responsive mitogen-activated protein kinase (MAPK) pathways, including the phosphorylation of ERK1/2, p38 and JNK, were inhibited, while phosphorylation of Akt in the phosphatidylinositol 3-kinase (PI3K)-Akt pathway for cell survival was restored by the herbal compounds. Overall, the combination of TanIIA and AsIV comprised most of the beneficial properties tested, although the combination did not improve on the maximal effects achieved by either compound alone. Conclusions: Taken together, these data suggest a compatibility of TanIIA and AsIV in protecting cardiomyocytes against hypoxia-induced injury. 
Ethnopharmacological relevance: Coriolus versicolor (CV) is a mushroom traditionally used for strengthening the immune system and nowadays used as an immunomodulatory adjuvant in anticancer therapy. Breast cancer usually metastasizes to the skeleton, interrupting the normal bone remodeling process and causing osteolytic bone lesions. The aims of the present study were to evaluate the herb-drug interaction of CV with metronomic zoledronate (mZOL) in preventing cancer propagation, metastasis and bone destruction. Materials and methods: Mice inoculated in the tibia with human breast cancer cells tagged with luciferase (MDA-MB-231-TXSA) were treated with CV aqueous extract, mZOL, or the combination of both for 4 weeks. Alterations of the luciferase signals in the tibia, liver and lung were quantified using the IVIS imaging system. The skeletal response was evaluated using micro-computed tomography (micro-CT). In vitro experiments were carried out to confirm the in vivo findings. Results: The combination of CV and mZOL diminished tumor growth without increasing the incidence of lung and liver metastasis in the intratibial breast tumor model. The combination therapy also preserved bone integrity. In vitro studies demonstrated that the combined use of CV and mZOL inhibited cancer cell proliferation and osteoclastogenesis. Conclusions: These findings suggest that combination treatment with CV and mZOL attenuated breast tumor propagation and protected against osteolytic bone lesions without significant metastases. This study provides scientific evidence on the beneficial outcome of using CV together with mZOL in the management of breast cancer and metastasis, which may lead to the development of CV as an adjuvant health supplement for the control of breast cancer. Ethnopharmacological relevance: Potentilla erecta (L.) Raeusch is a medicinal plant of the Northern hemisphere belonging to the rose family (Rosaceae). 
It has traditionally been used to treat inflammatory disorders of the skin and mucous membranes as well as chronic diarrhea. Aim of the study: In the present study we analyzed the anti-inflammatory and vasoconstrictive effects of a Potentilla erecta extract (PE) and asked whether PE is similarly effective to mild corticosteroids. We then analyzed whether PE acts in the skin via a mode of action similar to that of corticosteroids. Material and methods: The anti-inflammatory effect of PE was analyzed in irradiated HaCaT keratinocytes by measuring the formation of IL-6 and PGE(2). Additionally, the effect of PE on TNF-alpha-induced NF-kappa B activation was determined. As the anti-inflammatory effect of corticosteroids correlates with their vasoconstrictive properties, we tested whether PE also displays vasoconstriction by performing an occlusive patch test and a collagen contraction assay. Furthermore, the binding of PE to the glucocorticoid receptor was determined with staining and reporter assays. The effect of PE on nitric oxide (NO) content was examined with radical scavenging and endothelial NO synthase (eNOS) reporter assays. Results: In irradiated or TNF-alpha-stimulated HaCaT cells, the formation of IL-6 and PGE(2) and the activation of NF-kappa B were strongly reduced by PE. Furthermore, PE showed a blanching effect comparable to hydrocortisone. However, in contrast to glucocorticoids, PE did not cause nuclear translocation of the glucocorticoid receptor in HaCaT cells. The blanching effect of PE was at least partly attributable to scavenging of NO and inhibition of eNOS. Conclusions: PE displays anti-inflammatory and vasoconstrictive effects and might therefore be beneficial for the topical treatment of inflammatory skin disorders. Ethnopharmacological relevance: The aerial parts of Peganum harmala Linn (APP) are used as a traditional medicinal herb for the treatment of forgetfulness in Uighur medicine in China. 
However, the active ingredients and underlying mechanisms are unclear. Aim of the study: The present study was undertaken to investigate the ameliorating effects of the extract and alkaloid fraction from APP on scopolamine-induced cognitive dysfunction, to elucidate their underlying mechanisms of action, to support the folk use of APP with scientific evidence, and to lay a foundation for further research. Materials and methods: The acetylcholinesterase (AChE) inhibitory activities of the extract (EXT), alkaloid fraction (ALK) and flavonoid fraction (FLA) from APP were evaluated in normal male C57BL/6 mice. The anti-amnesic effects of EXT and ALK from APP were measured in mice with scopolamine-induced memory deficits by the Morris water maze (MWM) task. The levels of biomarkers, enzyme activity and protein expression of the cholinergic system were determined in brain tissues. Results: AChE activity was significantly decreased and the content of the neurotransmitter acetylcholine (ACh) was significantly increased in the cortex and hippocampus of normal mice by treatment with donepezil at a dosage of 8 mg/kg, EXT at dosages of 183, 550 and 1650 mg/kg, and ALK at dosages of 10, 30 and 90 mg/kg (P < 0.05), whereas AChE activity and ACh content were not significantly changed in the cortex and hippocampus after treatment with FLA at dosages of 10, 30 and 90 mg/kg (P > 0.05). In the MWM task, the scopolamine-induced decreases in both the swimming time within the target zone and the number of crossings of the former platform location were significantly reversed by treatment with EXT at dosages of 550 and 1650 mg/kg and ALK at dosages of 30 and 90 mg/kg (P < 0.05). Moreover, the activity and protein expression of AChE were significantly decreased and the content of the neurotransmitter ACh was significantly increased in the cerebral cortex of scopolamine-treated mice by treatment with EXT at dosages of 183, 550 and 1650 mg/kg and ALK at dosages of 10, 30 and 90 mg/kg (P < 0.05), compared with the scopolamine-treated group. 
Conclusions: EXT and ALK from APP exert beneficial effects on learning and memory processes in mice with scopolamine-induced memory impairment. APP is an effective traditional folk medicine, and the ALK fraction proved to be its main effective component for the treatment of forgetfulness. The ALK fraction may be a valuable source of lead compounds for drug discovery and development for the treatment of memory impairment such as that in Alzheimer's disease. Ethnopharmacological relevance: Artemisia argyi is a herbal medicine traditionally used in Asia for the treatment of bronchitis, dermatitis and arthritis. Recent studies revealed the anti-inflammatory effect of the essential oil of this plant. However, the mechanisms underlying this therapeutic potential have not been well elucidated. The present study aimed to verify its anti-inflammatory effect and investigate the probable mechanisms. Materials and methods: The essential oil from Artemisia argyi (AAEO) was initially tested against LPS-induced production of inflammatory mediators and cytokines in RAW264.7 macrophages. Protein and mRNA expression of iNOS and COX-2 were determined by Western blotting and RT-PCR analysis, respectively. The effects on the activation of the MAPK/NF-kappa B/AP-1 and JAK/STATs pathways were also investigated by western blot. Meanwhile, the in vivo anti-inflammatory effect was examined by histologic and immunohistochemical analysis in the TPA-induced mouse ear edema model. Results: The in vitro experiments showed that AAEO dose-dependently suppressed the release of pro-inflammatory mediators (NO, PGE2 and ROS) and cytokines (TNF-alpha, IL-6, IFN-beta and MCP-1) in LPS-induced RAW264.7 macrophages. It down-regulated iNOS and COX-2 protein and mRNA expression but did not affect the activity of these two enzymes. AAEO significantly inhibited the phosphorylation of JAK2 and STAT1/3, but not the activation of the MAPK and NF-kappa B cascades. 
In the animal model, oral administration of AAEO significantly attenuated TPA-induced mouse ear edema and decreased the protein level of COX-2. Conclusion: AAEO suppresses inflammatory responses via down-regulation of JAK/STATs signaling and ROS scavenging, which could contribute, at least in part, to the anti-inflammatory effect of AAEO. Ethnopharmacological relevance: Ginsenoside Rb1, a 20(S)-protopanaxadiol, is a major active ingredient of Panax ginseng C.A. Meyer, which, as the king of Chinese herbs, has been widely used for the treatment of central nervous system diseases. Previous studies have shown that 20(S)-protopanaxadiol possesses a novel antidepressant-like effect in the treatment of depression, whereas ginsenoside Rb1 in depression has rarely been reported. Aim of the study: The present study was designed to investigate the antidepressant-like effect of ginsenoside Rb1 and its relevant mechanisms. Materials and methods: The experiment was divided into two parts. In the first part, we examined the antidepressant-like effect of ginsenoside Rb1 with the open-field test (OFT), tail suspension test (TST), forced swim test (FST), 5-HTP-induced head-twitch and reserpine response in mice; in the second part, we used the chronic unpredicted mild stress (CUMS) model to further explore the antidepressant-like effect of ginsenoside Rb1 with caffeine, fluoxetine and p-chlorophenylalanine (PCPA) in rats. Furthermore, the levels of the monoamine neurotransmitters NE, 5-HT and DA and their metabolites 5-HIAA, DOPAC and HVA were measured with ELISA kits after the CUMS protocol. Results: Our data indicated that 7 days of treatment with ginsenoside Rb1 (4, 8, 10 mg/kg, p.o.) significantly decreased immobility time in the FST and TST in mice, and modulated the responses induced by 5-HTP (200 mg/kg, i.p.) and reserpine (4 mg/kg, i.p.). 
On the basis of the CUMS model, 21 days of treatment with ginsenoside Rb1 not only interacted effectively with caffeine (5 mg/kg, i.p.), fluoxetine (1 mg/kg, i.p.) and PCPA (100 mg/kg, i.p.), but also significantly up-regulated the 5-HT, 5-HIAA, NE and DA levels in the brains of CUMS rats, whereas HVA and DOPAC levels showed no significant change. Moreover, there was no alteration in spontaneous locomotion in any experimental group. Conclusions: These results suggest that ginsenoside Rb1 exhibits a significant antidepressant-like effect in behavioral tests, a chronic animal model and drug-interaction studies, with mechanisms mainly mediated by the central serotonergic, noradrenergic and dopaminergic neurotransmitter systems. Ethnopharmacological relevance: Acacia cochliacantha is a small tree whose foliage is traditionally used in Mexico for the treatment of kidney pain and gastrointestinal illnesses and to kill intestinal parasites. In recent decades, the study of plant extracts has offered possible alternatives for the control of Haemonchus contortus. Considering that this nematode dramatically affects the health and productivity of small ruminants, the aim of this study was to identify the anthelmintic compounds of A. cochliacantha hydro-alcoholic extract (HA-E) through an ovicidal test. Material and methods: An in vitro egg hatch assay was conducted to determine the anthelmintic effects of HA-E (60 g). Liquid-liquid ethyl acetate/water extraction gave two fractions (EtOAc-F, 1.92 g; Aq-F, 58.1 g). The less polar compounds of the ethyl acetate fraction were extracted by addition of dichloromethane, affording a precipitate phase (Mt-F, 1.25 g) and a soluble mixture (DCMt-F, 1.15 g). All fractions were evaluated for ovicidal activity by measuring egg hatching inhibition (EHI; 0.07-25 mg/mL). Ivermectin (0.5 mg/mL) was used as a reference drug (positive control), and distilled water, 2.5% DMSO and 2% methanol were used as negative controls. 
The isolated compounds from the most active fractions were subjected to spectroscopic (H-1 NMR), spectrometric (MS) and UV-HPLC analyses in order to identify the bioactive compounds. Results: The less polar treatments (EtOAc-F, DCMt-F, DCMt-P) showed the highest ovicidal activities (98-100% EHI at 0.62-1.56 mg/mL), and the major compounds found in these fractions were identified as caffeoyl and coumaroyl derivatives, including caffeic acid (1), p-coumaric acid (2), ferulic acid (3), methyl caffeate (4), methyl-p-coumarate (5), methyl ferulate (6) and quercetin. The less active fractions (Aq-F, Mt-F) were constituted principally of glycosylated flavonoids. Conclusion: These results show that caffeoyl and coumaroyl derivatives from Acacia cochliacantha leaves have promising anthelmintic activity against Haemonchus contortus. This legume may offer an alternative source for the control of gastrointestinal nematodes of small ruminants. Ethnopharmacological relevance: The heartwood of Dalbergia odorifera is a Chinese herbal medicine commonly used for the treatment of various ischemic diseases in Chinese medicine practice. Aim of the study: In this study, the therapeutic angiogenic effects of Dalbergia odorifera extract (DOE) were investigated in transgenic zebrafish in vivo and in human umbilical vein endothelial cells (HUVECs) in vitro. Materials and methods: The pro-angiogenic effects of DOE in zebrafish were examined by subintestinal vessel (SIV) sprouting and intersegmental vessel (ISV) injury assays. The pro-angiogenic effects of DOE on HUVECs were examined by MTT, scratch assay, protein chip and western blot. Results: In the in vivo studies, we found that DOE dose-dependently promoted angiogenesis in the zebrafish SIV area. In addition, DOE could also restore injury in the zebrafish ISV area and reverse the reduction in mRNA expression of VEGFRs (kdr, kdrl and flt-1) induced by VEGF receptor kinase inhibitor II (VRI). 
In the in vitro studies, we observed that DOE promoted the proliferation and migration of HUVECs and also restored the injury induced by VRI. Moreover, protein chip and western blot experiments showed that the PI3K/MAPK cell proliferation/migration pathways were activated by DOE. Conclusions: DOE has therapeutic effects on angiogenesis, and its mechanism may involve modulation of VEGFR mRNA expression and activation of the PI3K/MAPK signaling pathway. These results suggest a strong potential for Dalbergia odorifera to be developed as an angiogenesis-promoting therapeutic. Ethnopharmacological relevance: Ethnobotany takes past uses into account to be projected into the present and future. Most current ethnobotanical research, especially in industrialised countries, is focused on obtaining information on plant uses from elderly people. Historical ethnobotany is less cultivated, although published work has demonstrated its interest. Particularly poor, but potentially very relevant, is the attention paid to historical herbaria as a source of data on useful plants. Aims of the study: Bearing this in mind, we studied the herbarium of the Catalan pharmacist and naturalist Francesc Bolos (1773-1844), which contains information on medicinal uses and folk names, with the aim of establishing a catalogue of plants and uses and tracing them through old and contemporary literature. Methodology: The ca. 6000 plant specimens of this herbarium were investigated to identify those including plant uses and names. These taxa have been thoroughly revised. The data have been tabulated; their biogeographic profile, possible endemic or threatened status, or invasive behaviour have been assessed; and the content regarding medicinal uses, as well as folk names, has been studied. The medicinal terms used have been interpreted in terms of present-day medicine. 
The popular names and uses have been compared with those appearing in a number of works published from the 11th to the 20th century in the territories covered by the herbarium, and with all the data collected in the 20th and 21st centuries in an extensive database on Catalan ethnobotany. Results: A total of 385 plant specimens (381 taxa) were found to bear medicinal use and folk name information. We collected data on 1107 reports of plant medicinal properties (in Latin), 32 indications of toxicity, nine reports of food use, and 123, 302 and 318 popular plant names in Catalan, Spanish and French, respectively. The most frequently cited systems are the digestive, the skin and subcutaneous tissue (plus traumatic troubles) and the genitourinary systems. Relatively high degrees of coincidence between plant names and uses in the herbarium and in the literature comparison set were found. Of the taxa contained in this medicinal herbarium, 294 are native to the Iberian Peninsula and 86 are alien. Neither endemic nor threatened taxa have been detected, whereas a considerable portion of the alien taxa show invasive behaviour at present. Conclusions: Our analyses indicate a certain degree of consistency between the medicinal uses of plants recorded in this 18th- and 19th-century herbarium and the records found in the literature and in recent ethnobotanical datasets, attesting to the robustness of pharmaceutical ethnobotanical knowledge in the area considered. The data appearing on the specimen labels are numerous, marking the herbarium as a relevant source of ethnopharmacological information. Special attention should be paid to some original uses recorded on the herbarium's labels for further investigation of plant properties and drug design. Ethnopharmacological relevance: Ocimum gratissimum L. is a herbaceous plant that has been reported in several ethnopharmacological surveys as a plant readily accessible to communities and widely used for the treatment of inflammatory diseases. 
The main goal of this study was to investigate the in vitro and in vivo anti-inflammatory activity and mechanism of action of the ethyl acetate fraction of O. gratissimum leaf (EAFOg) and to chemically characterize this fraction. Materials and methods: EAFOg was obtained from a sequential methanol extract. The safety profile was evaluated on RAW 264.7 cells using the alamarBlue (R) assay. Phenolic contents were determined by spectrophotometry, and metabolites were quantified by high performance liquid chromatography. The anti-inflammatory activity of EAFOg and its effects on leucocyte infiltration and on inflammatory mediators such as NO, IL-1 beta, TNF-alpha, and IL-10 were evaluated in lipopolysaccharide-induced peritonitis in mice and in LPS-stimulated RAW 264.7 macrophages. In addition, the anti-inflammatory activity of EAFOg against arachidonic acid-related enzymes was also investigated. Results: The total phenolic and flavonoid contents of EAFOg were 139.76 +/- 1.07 mg GAE/g and 109.95 +/- 0.05 mg RE/g, respectively. HPLC analysis revealed the presence of rutin, ellagic acid, myricetin and morin. The fraction exhibited no cytotoxic effects on RAW 264.7 cells. EAFOg (10, 50 and 200 mg/kg) significantly reduced (p < 0.05) neutrophils (38.8%, 58.9%, and 66.5%) and monocytes (38.9%, 58.0% and 72.8%) in LPS-induced peritonitis. Also, EAFOg (5, 20 and 100 mu g/mL) produced significant reductions in NO, IL-1 beta, and TNF-alpha in RAW 264.7 cells. However, the IL-10 level was not affected by EAFOg, which preferentially inhibited COX-2 (IC50 = 48.86 +/- 0.02 mu g/mL) over COX-1 and 15-LO (IC50 > 100 mu g/mL). Conclusion: The flavonoid-rich fraction of O. gratissimum leaves demonstrated anti-inflammatory activity via mechanisms that involve inhibition of leucocyte influx, NO, IL-1 beta, and TNF-alpha in vivo and in vitro, thus supporting its therapeutic potential in slowing down inflammatory processes in chronic diseases. Ethnopharmacological relevance: Casearia sylvestris Sw. 
is widely used in popular medicine to treat conditions associated with pain. Aim of the study: The present study investigated the influence of a hydroalcoholic crude extract of Casearia sylvestris (HCE-CS) and the contribution of pro-resolving mediators to mechanical hyperalgesia in a mouse model of chronic post-ischemia pain (CPIP). Methods and results: Male Swiss mice were subjected to ischemia of the right hind paw (3 h), after which reperfusion was allowed. At 10 min, 24 h or 48 h post-ischemia/reperfusion (I/R), different groups of animals were treated with HCE-CS (30 mg/kg, p.o.), selected agonists at the pro-resolving receptor ALX/FPR2 (the natural molecules resolvin D1 and lipoxin A4 or the synthetic compound BML-111; 0.1 mu g/animal) or vehicle (saline, 10 mL/kg, s.c.), in the absence or presence of the antagonist WRW4 (10 mu g, s.c.). Mechanical hyperalgesia (paw withdrawal to von Frey filament) was assessed together with histological and immunostaining analyses. In these settings, pro-resolving mediators reduced mechanical hyperalgesia, and HCE-CS or BML-111 displayed anti-hyperalgesic effects which were markedly attenuated in animals treated with WRW4. ALX/FPR2 expression was raised in skeletal muscle and neutrophils after treatment with HCE-CS or BML-111. Conclusion: These results reveal a significant antihyperalgesic effect of HCE-CS in CPIP, mediated at least in part by the resolution-of-inflammation pathway centred on the ALX/FPR2 axis. Most Relevant Explanation (MRE) is an inference problem in Bayesian networks that finds the most relevant partial instantiation of target variables as an explanation for given evidence. It has been shown in recent literature that it addresses the overspecification problem of existing methods, such as MPE and MAP. In this paper, we propose a novel hierarchical beam search algorithm for solving MRE. 
The main idea is to use a second-level beam to limit the number of successors generated by the same parent, so as to limit the similarity between the solutions in the first-level beam and produce a more diversified population. Three pruning criteria are also introduced to achieve further diversity. Empirical results show that the new algorithm outperforms local search and regular beam search. (C) 2016 Elsevier B.V. All rights reserved. De Finetti's 1949 ordinal probability conjecture sparked enduring interest in intuitively meaningful necessary and sufficient conditions for orderings of finite propositional domains to agree with probability distributions. This paper motivates probabilistic ordering from subjective estimates of credibility contrasts revealed when ordered propositions are not monotonically related (e.g., A or B > C or D, but D > B) and when a portfolio of prospects is accepted as preferable to alternatives despite not dominating them. The estimated contrast primitive offers a gambling-free, psychologically grounded foundation for treating individual instances and multisets of propositions as credally interchangeable with disjunctions and multisets of their constituent atomic propositions. (C) 2016 Elsevier B.V. All rights reserved. The continuous time Bayesian network (CTBN) is a probabilistic graphical model that enables reasoning about complex, interdependent, and continuous time subsystems. The model uses nodes to denote subsystems and arcs to denote conditional dependence. This dependence manifests in how the dynamics of a subsystem change based on the current states of its parents in the network. While the original CTBN definition allows users to specify the dynamics of how the system evolves, users might also want to place value expressions over the dynamics of the model in the form of performance functions. 
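The two-level beam idea described in the MRE abstract above (a first-level beam over candidate solutions, plus a second-level cap on how many successors any single parent may contribute) can be sketched generically. This is a minimal illustrative sketch on a toy search problem, not the authors' implementation; all names and parameters are assumptions.

```python
import heapq

def hierarchical_beam_search(start, successors, score, beam_width,
                             per_parent_width, steps):
    """Beam search with a second-level beam: each parent contributes at most
    `per_parent_width` successors, keeping the first-level beam diverse.
    Illustrative sketch only."""
    beam = [start]
    for _ in range(steps):
        candidates = []
        for parent in beam:
            # Second-level beam: keep only the best few children of this parent.
            children = heapq.nlargest(per_parent_width, successors(parent), key=score)
            candidates.extend(children)
        if not candidates:
            break
        # First-level beam: keep the best candidates overall.
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return max(beam, key=score)

# Toy demo: grow digit tuples to maximize their sum.
best = hierarchical_beam_search(
    start=(),
    successors=lambda s: [s + (d,) for d in range(10)],
    score=sum,
    beam_width=2, per_parent_width=2, steps=3)  # -> (9, 9, 9)
```

Without the per-parent cap, a single strong parent could fill the whole first-level beam with near-identical children; the cap is what enforces the diversity the abstract describes.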
We formalize these performance functions for the CTBN and show how they can be factored in the same way as the network, allowing what we argue is a more intuitive and explicit representation. For cases in which a performance function must involve multiple nodes, we show how to augment the structure of the CTBN to account for the performance interaction while maintaining the factorization of a single performance function for each node. We introduce the notion of optimization for CTBNs, and show how a family of performance functions can be used as the evaluation criteria for a multi-objective optimization procedure. (C) 2016 Published by Elsevier B.V. Weighted model counting (WMC) is a well-known inference task on knowledge bases, and the basis for some of the most efficient techniques for probabilistic inference in graphical models. We introduce algebraic model counting (AMC), a generalization of WMC to a semiring structure that provides a unified view on a range of tasks and existing results. We show that AMC generalizes many well-known tasks in a variety of domains such as probabilistic inference, soft constraints and network and database analysis. Furthermore, we investigate AMC from a knowledge compilation perspective and show that all AMC tasks can be evaluated using sd-DNNF circuits, which are strictly more succinct, and thus more efficient to evaluate, than direct representations of sets of models. We identify further characteristics of AMC instances that allow for evaluation on even more succinct circuits. (C) 2016 Elsevier B.V. All rights reserved. In the domain of decision theoretic planning, the factored framework (Factored Markov Decision Process, FMDP) has produced optimized algorithms using structured representations such as Decision Trees (Structured Value Iteration (SVI), Structured Policy Iteration (SPI)) or Algebraic Decision Diagrams (Stochastic Planning Using Decision Diagrams (SPUDD)). 
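The algebraic model counting task described above generalizes weighted model counting by replacing the sum and product of probabilities with the operations of an arbitrary commutative semiring. A minimal sketch over an explicitly enumerated set of models follows; real AMC solvers instead evaluate compiled circuits such as sd-DNNF, and all names here are illustrative assumptions.

```python
def algebraic_model_count(models, label, plus, times, zero, one):
    """AMC = plus over models of (times over literals of label(literal)),
    where (plus, times, zero, one) form a commutative semiring.
    Illustrative sketch; not how production AMC solvers are implemented."""
    total = zero
    for model in models:
        val = one
        for var, truth in model.items():
            val = times(val, label((var, truth)))
        total = plus(total, val)
    return total

# Models of the formula (a OR b), enumerated explicitly.
MODELS = [{"a": True, "b": True}, {"a": True, "b": False}, {"a": False, "b": True}]

# Counting semiring (N, +, *, 0, 1) with every label 1 recovers model counting.
model_count = algebraic_model_count(MODELS, lambda lit: 1,
                                    lambda x, y: x + y, lambda x, y: x * y, 0, 1)

# Probability semiring with literal weights recovers weighted model counting:
# here P(a or b) = 0.3*0.2 + 0.3*0.8 + 0.7*0.2 = 0.44.
WEIGHTS = {("a", True): 0.3, ("a", False): 0.7, ("b", True): 0.2, ("b", False): 0.8}
prob = algebraic_model_count(MODELS, WEIGHTS.get,
                             lambda x, y: x + y, lambda x, y: x * y, 0.0, 1.0)
```

Swapping in other semirings (max/times for MPE-style queries, min/plus for shortest paths, set union/join for provenance) yields the other tasks the abstract says AMC unifies.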
Since it may be difficult to elaborate the factored models used by these algorithms, the architecture SDYNA, which combines learning and planning algorithms using structured representations, was introduced. However, the state-of-the-art algorithms for incremental learning, for structured decision theoretic planning or for reinforcement learning require the problem to be specified only with binary variables and/or use data structures that can be improved in terms of compactness. In this paper, we propose to use Multi-Valued Decision Diagrams (MDDs) as a more efficient data structure for the SDYNA architecture and describe a planning algorithm and an incremental learning algorithm dedicated to this new structured representation. For both planning and learning algorithms, we experimentally show that they allow significant improvements in time and in the compactness of the computed policy and of the learned model. We then analyze the combination of these two algorithms in an efficient SDYNA instance for simultaneous learning and planning using MDDs. (C) 2016 Elsevier B.V. All rights reserved. Multiple iterated revision requires advanced belief revision techniques that are able to integrate several pieces of new information into epistemic states. A crucial feature of this kind of revision is that the multiple pieces of information should be dealt with separately. Previous works have proposed several independence postulates which should ensure this. In this paper, we argue, first, that these postulates are too strong as they may enforce beliefs without justification, and second, that they are not necessary to ensure the principal aim of multiple revision. Instead, principles of conditional preservation guarantee a suitable handling of sets of sentences under revision.
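The multi-valued decision diagrams proposed above can be sketched minimally (my own toy illustration, not the paper's data structure): each node tests one multi-valued variable and has one outgoing edge per value, and a unique table plus a redundant-node rule keep the diagram reduced and shared.

```python
class MDD:
    def __init__(self):
        self.unique = {}  # (var, children) -> node, enforcing sharing

    def make(self, var, children):
        children = tuple(children)
        if len(set(children)) == 1:   # all edges agree: the test is redundant
            return children[0]
        return self.unique.setdefault((var, children), ("node", var, children))

def build(mdd, variables, domain, f, partial=()):
    """Build the reduced MDD of f bottom-up over the given variable order."""
    if not variables:
        return f(*partial)
    head, rest = variables[0], variables[1:]
    return mdd.make(head, [build(mdd, rest, domain, f, partial + (v,))
                           for v in range(domain)])

def evaluate(node, assignment):
    while isinstance(node, tuple) and node[0] == "node":
        node = node[2][assignment[node[1]]]
    return node

mdd = MDD()
root = build(mdd, ("x", "y"), 3, lambda x, y: (x + y) % 2)
```

Here the sub-diagrams for x = 0 and x = 2 are the same stored object, so the whole ternary function needs only three nodes rather than a full 3x3 tree, which is the kind of compactness gain the abstract reports.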
We formalize such a principle for multiple propositional revision for ranking functions, and we propose some novel postulates for multiple iterated revision that are in line with AGM and the Darwiche & Pearl postulates. We show that just a few fundamental postulates are enough to cover major approaches to (multiple) iterated belief revision, and that independence in the sense of Thielscher, Jin, and Delgrande is optional. As a proof of concept, we present propositional c-revisions of ranking functions. (C) 2016 Elsevier B.V. All rights reserved. This paper aims to shed light on two different mathematical practices in late eighteenth-century China. For this purpose, we analyze two solutions to the "three problems of the eastward motion of fixed stars" that were given separately by Jiang Sheng and Li Rui, who were both Qian-Jia scholars in the same region of eastern China. By comparing their modes of problem-solving, of reasoning and of computation, we suggest that the mathematical practices they employed represent the recreation of two traditions of mathematical commentaries by Qian-Jia scholars. (C) 2017 Elsevier Inc. All rights reserved. A number of scholars have recently maintained that a theorem in an unpublished treatise by Leibniz written in 1675 establishes a rigorous foundation for the infinitesimal calculus. I argue that this is a misinterpretation. (C) 2017 Elsevier Inc. All rights reserved. Beginning in the 1840s, Arthur Cayley (1821-1895) led a vast invariant theory programme in algebra. After learning of results of James Cockle (1819-1895) and Robert Harley (1828-1910), he applied the techniques of invariant theory to the calculation of resolvents of quintic equations. Recently discovered letters reveal the priorities of Cayley and Harley with respect to the quintic; their approach was at variance with that via the theory of groups. As another recently discovered manuscript reveals, Cayley returned to this subject in his final days.
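The revisions of ranking functions mentioned above operate on ordinal conditional functions, which map worlds to plausibility ranks (0 = most plausible). A simplified Spohn-style shift for a single input can be sketched as follows; this is my own single-proposition illustration, not the paper's multiple c-revisions.

```python
def revise(kappa, holds, strength=1):
    """kappa: dict world -> rank (0 = most plausible). Shift ranks so the
    best worlds satisfying `holds` get rank 0 and the rest are penalized
    by `strength`, making the input believed after revision."""
    k_a = min(k for w, k in kappa.items() if holds(w))
    k_not = min(k for w, k in kappa.items() if not holds(w))
    return {w: (k - k_a) if holds(w) else (k - k_not + strength)
            for w, k in kappa.items()}

# Worlds are (p, q) truth-value pairs; initially only (False, True) is believed.
kappa = {(True, True): 1, (True, False): 2,
         (False, True): 0, (False, False): 1}
revised = revise(kappa, lambda w: w[0])        # revise by p
beliefs = {w for w, k in revised.items() if k == 0}
```

After the shift, the rank-0 worlds (the agent's beliefs) all satisfy the new input, while the relative ordering within the p-worlds and within the non-p-worlds is preserved, a simple instance of conditional preservation.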
(C) 2016 Elsevier Inc. All rights reserved. Purpose: To determine the effect of the drawing and writing technique on the anxiety level of children undergoing cancer treatment in hospital. Method: Research was conducted in the haematology-oncology clinic of a university hospital, using a quasi-experimental design (pre- and post-intervention evaluations of a single group). The sample comprised 30 hospitalised children aged 9-16 years. Data were collected with a socio-demographic form, a clinical data form, and the State Anxiety Inventory. The institution gave written approval for the study and parents provided written consent. Drawing, writing and mutual story-telling techniques were used as part of a five-day programme. Children were asked to draw a picture of a hospitalised child and write a story about this drawing. After drawing and writing, mutual storytelling was used to build a more constructive story with positive feelings. The drawing and writing techniques were implemented on the first and third days of the programme and mutual storytelling was implemented on the second and fourth days. Data were reported as percentages and frequencies and the intervention effect was analysed with the Wilcoxon test. Results: The average age of the children was 12.56 +/- 2.67 years and 76.7% were girls. The mean age at diagnosis and mean treatment duration were 11.26 +/- 3.17 years and 16.56 +/- 20.75 months, respectively. Half of the children (50%) had leukaemia and most were receiving chemotherapy (66.7%). In most cases (76.7%) the mother was the primary caregiver. Scores on the State Anxiety Inventory were lower, indicating lower anxiety, after the intervention (36.86 +/- 4.12) than before it (40.46 +/- 4.51) (p < 0.05). Conclusion: The therapeutic intervention reduced children's state anxiety. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: Despite the improvement in radiotherapy (RT) technology, patients with nasopharyngeal carcinoma (NPC) still suffer from numerous distressing symptoms simultaneously during RT.
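The Wilcoxon signed-rank comparison used in the drawing-and-writing study above works on paired pre/post scores. A sketch of the test statistic with made-up data (only the statistic is computed; p-values require an exact table or a normal approximation, and in practice one would use a statistics package):

```python
def avg_ranks(values):
    # Rank 1..n in ascending order; tied values share the average of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j + 2) / 2
        i = j + 1
    return ranks

def wilcoxon_w(before, after):
    # Drop zero differences, rank |differences|, sum ranks by sign of change.
    diffs = [b - a for b, a in zip(before, after) if b != a]
    ranks = avg_ranks([abs(d) for d in diffs])
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)
```

For example, `wilcoxon_w([40, 42, 38, 45], [36, 37, 39, 41])` returns the smaller signed-rank sum; a small W relative to the sample size indicates a consistent shift between the paired measurements.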
The purpose of the study was to investigate the symptom clusters experienced by NPC patients during RT. Methods: First-treated Chinese NPC patients (n = 130) undergoing late-period RT (from week 4 till the end) were recruited for this cross-sectional study. They completed a sociodemographic and clinical data questionnaire, the Chinese version of the M. D. Anderson Symptom Inventory - Head and Neck Module (MDASI-HN-C) and the Chinese version of the Functional Assessment of Cancer Therapy - Head and Neck Scale (FACT-H&N-C). Principal axis factor analysis with oblimin rotation, independent t-test, one-way analysis of variance (ANOVA) and Pearson product-moment correlation were used to analyze the data. Results: Four symptom clusters were identified and labelled general, gastrointestinal, nutrition impact and social interaction impact. Of these four clusters, the nutrition impact symptom cluster was the most severe. Statistically significant positive correlations were found between severity of all 4 symptom clusters and symptom interference, as well as weight loss. Statistically significant negative correlations were detected between the cluster severity and the QOL total score and 3 out of 5 subscale scores. Conclusion: The four clusters identified reveal the symptom patterns experienced by NPC patients during RT. Future intervention studies on managing these symptom clusters are warranted, especially for the nutrition impact symptom cluster. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: To compare the effects of the care bundles including chlorhexidine dressing and advanced dressings on the catheter-related bloodstream infection (CRBSI) rates in pediatric hematology-oncology patients with central venous catheters (CVCs). Method: Twenty-seven pediatric hematology-oncology (PHO) patients were recruited to participate in a prospective, randomized study in Turkey.
The researcher used care bundles with chlorhexidine dressing in the experimental group (n = 14), and care bundles with advanced dressings in the control group (n = 13). Results: According to the study results, 28.6% of the patients in the experimental group had CRBSI, while this rate was 38.5% in the control group patients. The CRBSI rate was 3.9 per 1000 inpatient catheter days in the experimental group and 4.4 in the control group. There was no exit-site infection in the experimental group. However, the control group had a rate of 1.7 per 1000 inpatient catheter days. Conclusions: Even though there was no difference between the two groups in which the researcher implemented care bundles with chlorhexidine dressing and advanced dressings in terms of CRBSI development, there was a reduction in the CRBSI rates thanks to the care bundle approach. It is possible to control the CRBSI rates using care bundles in pediatric hematology-oncology patients. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: The cancer experience may cultivate positive psychological changes that can help reduce distress across the life course of adult survivors of childhood and adolescent cancer. The aim of this study is to examine the positive impact of cancer in adult survivors utilizing posttraumatic growth as a guiding framework. Method: Participants were identified and recruited through the Utah Cancer Registry. Eligible cases were diagnosed with cancer at age <= 20 years from 1973 to 2009, were born in Utah, and were aged >= 18 at the time of the study. Semi-structured phone interviews (N = 53) were analyzed using deductive analysis. Results: The primary five themes that emerged were similar to Tedeschi and Calhoun's (1996) themes for measuring positive effects, and were used to frame our results.
The primary themes along with uniquely identified sub-themes are the following: personal strength (psychological confidence, emotional maturity), improved relationships with others (family intimacy, empathy for others), new possibilities (having a passion to work with cancer), appreciation for life (reprioritization), and spiritual development (strengthened spiritual beliefs, participating in religious rituals and activities). Conclusions: For survivors, cancer was life altering and for many the experience continues. Understanding survivors' complex cancer experience can help improve psychosocial oncology care. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: This study was conducted with infants diagnosed with bilateral retinoblastoma (RB) and their mothers. It explored characteristics of the mother-infant interaction, the infants' developmental characteristics and related risk factors. Method: Cross-sectional statistical analysis was performed with 18 dyads of one-year-old infants with bilateral RB and their mothers. Results: Using the Japanese Nursing Child Assessment Teaching Scale (JNCATS), results showed that infants with RB had significantly lower scores compared to normative Japanese scores on all of the infants' subscales and "Child's contingency" (p < 0.01). Five infants with visual impairment at high risk of developmental problems had a pass rate of 0% on six JNCATS items. There were positive correlations between developmental quotients (DQ) and the JNCATS score of "Responsiveness to caregiver" (rho = 0.50, p < 0.05) and DQ and "Child's contingency" (rho = 0.47, p < 0.05). Conclusions: Infants with visual impairment were characterized by high likelihood of developmental delays and problematic behaviors; they tended not to turn their face or eyes toward their mothers, smile in response to their mothers talking to them or changing their body language or facial expressions, or react in a contingent manner in their interactions.
These infant behaviors noted by their mothers shared similarities with developmental characteristics of children with visual impairments. These findings indicated a need to provide support promoting mother-infant interactions consistent with the developmental characteristics of RB infants with visual impairment. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: The aim of the study was to report the psychometric properties of the modified 'Breast Cancer Screening Beliefs Questionnaire' (BCSBQ) among women living in China. Methods: A convenience sample of 494 women was recruited from community centres and out-patient clinics in Foshan city. Cronbach's alpha was used to assess internal consistency reliability. Criterion validity was examined by testing three pre-specified hypotheses and confirmatory factor analysis was conducted to study the factor structure. Results: The results indicated that the modified BCSBQ has satisfactory validity and internal consistency. Cronbach's alpha of the three subscales ranged between 0.77 and 0.84. As hypothesized, the frequencies of breast self-examination and clinical breast examination were significantly associated with the subscales' scores. Confirmatory factor analysis showed an adequate fit for the hypothesized three-factor structure with our data set. Conclusions: The modified BCSBQ is a culturally appropriate, valid and reliable instrument for assessing the beliefs, knowledge and attitudes to breast cancer and breast cancer screening practices among women living in China. It can be used for providing health care professionals with insights into the development of breast cancer screening promotion programs. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: Radical cystectomy with the creation of a urostomy is the first-line treatment for advanced bladder cancer. Enhancing or maintaining an individual's condition, skills and physical wellbeing before surgery has been defined as prehabilitation.
Whether preoperative stoma-education is an effective element in prehabilitation is yet to be documented. In a prospective randomized controlled trial (RCT) design, the aim was to investigate the efficacy of a standardised preoperative stoma-education program on an individual's ability to independently change a stoma appliance. Methods: A parent RCT study investigated the efficacy of a multidisciplinary rehabilitation program on length of stay following cystectomy. A total of 107 patients were included in the intention-to-treat population. Preoperatively, the intervention group was instructed in a standardized stoma-education program consisting of areas recognized as necessary for changing a stoma appliance. The Urostomy Education Scale was used to measure stoma self-care at days 35, 120 and 365 postoperatively. Efficacy was expressed as a positive difference in UES score between treatment groups. Results: A significant difference in mean score was found in the intervention group compared to standard care: 2.7 (95% CI: 0.9; 4.5), 4.3 (95% CI: 2.1; 6.5) and 5.1 (95% CI: 2.3; 7.8) at days 35, 120 and 365 postoperatively. Conclusions: For the first time, a study with an RCT design has reported a positive efficacy of a short-term preoperative stoma intervention. Preoperative stoma-education is an effective intervention and adds to the evidence base of prehabilitation. Further RCT studies powered with self-efficacy as the primary outcome are warranted. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: To measure Cancer Related Fatigue (CRF), and explore fatigue self-care strategies used to ameliorate CRF amongst patients undergoing chemotherapy for primary cancer. Methods: A consecutive sample of patients (n = 362) undergoing chemotherapy with a primary diagnosis of breast cancer, colorectal cancer, Hodgkin's lymphoma or non-Hodgkin's lymphoma were recruited. A mixed methods design was utilised.
The study questionnaires included: the Piper Fatigue Scale-Revised and a researcher-developed fatigue Self-Care Survey. Results: The mean total fatigue score was 4.9 (SD = 2.2); the highest mean subscale score occurred in the affective meaning dimension (M = 5.4, SD = 2.9). The mean number of strategies used at least "occasionally" was 14.8 (SD = 3.42, range = 5-24). The most frequently used self-care strategies were: "Receiving support from family and friends" (66.6%); "having a healthy diet" (57.1%); "taking part in hobbies or distraction activities" (42.9%); "spending time chatting with friends" (37.3%); "adjusting mood and being more positive" (36.3%) and "resting and taking it easy" (33.8%). The self-care strategies of socializing (OR = 0.66, 95% CI = 0.47-0.93, p = 0.016) and exercise (OR = 0.73, 95% CI = 0.57-0.93, p = 0.012) were associated with decreased odds of developing CRF. Four categories emerged from the analysis of qualitative data: rest and relaxation, physical activity, psychological well-being, and supportive care. Conclusions: CRF is a debilitating, complex phenomenon; therefore, multiple CRF strategies should be used for the optimum management of CRF, including exercise and socializing. Health care professionals have an important role in promoting the use of evidence based fatigue management strategies. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: Extravasation, or leakage of vesicant drugs into subcutaneous tissues, causes serious complications such as induration and necrosis in chemotherapy-treated patients. As macroscopic observation may overlook symptoms during infusion, we focused on skin temperature changes at puncture sites and studied thermographic patterns related to induration or necrosis caused by extravasation. Methods: Outpatients undergoing chemotherapy using peripheral intravenous catheters were enrolled in this prospective observational study.
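The odds ratios in the fatigue study above come from multiple logistic regression, which adjusts for covariates. As a simpler illustration with made-up numbers, an unadjusted odds ratio and its Wald 95% confidence interval can be computed directly from a 2x2 table:

```python
import math

def odds_ratio(a, b, c, d):
    """2x2 table: a, b = cases/non-cases among the exposed;
    c, d = cases/non-cases among the unexposed.
    Returns the odds ratio and its Wald 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical: 10/100 who socialize and 20/100 who do not develop fatigue.
or_, lo, hi = odds_ratio(10, 90, 20, 80)
```

An OR below 1 with a confidence interval excluding 1, as for socializing and exercise in the abstract, indicates reduced odds of the outcome in the exposed group.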
We filmed and classified infrared thermography movies of puncture sites during infusion; ultrasonography was also utilized at puncture sites to observe the subcutaneous condition. Multiple logistic regression analysis was performed to examine the association of thermographic patterns with induration or necrosis observed on the next chemotherapy day. Differences in patient characteristics, puncture sites, and infusions were analyzed by the Mann-Whitney U test and Fisher's exact test according to thermographic patterns. Results: Among 74 observations in 62 patients, eight patients developed induration. Among six thermographic patterns, a fan-shaped lower temperature area gradually spreading from the puncture site (fan at puncture site) was significantly associated with induration. Ultrasonography revealed that catheters of patients with fan at puncture site remained in the vein at the end of infusion, indicating that the infusion probably leaked from the puncture site. Patients with fan at puncture site had no significant differences in characteristics and infusion conditions compared with those with the other five thermographic patterns. Conclusion: We determined that fan at puncture site was related to induration caused by extravasation. Continuous thermographic observation may enable us to predict adverse events of chemotherapy. (C) 2017 Published by Elsevier Ltd. Purpose: To explore the post-treatment experiences and preferences for follow-up support of lymphoma survivors. Methods: Two focus groups were conducted with 17 participants to explore informational, psychological, emotional, social, practical and physical needs, 6-30 months post-treatment for lymphoma. Perceptions regarding a potential model of survivorship care were also elicited. Results: Thematic content analysis revealed five key themes: Information; Loss and uncertainty; Family, support and post-treatment experience; Transition, connectivity and normalcy; and Person-centred post-treatment care.
Participants described a sense of loss as they transitioned away from regular interaction with the hospital at the end of treatment, but also talked about the need to find a "new normal". Establishing post-treatment support structures that can provide individualised information, support, reassurance and referrals to community and peer support was identified as a helpful way to navigate the transition from patient to post-treatment survivor. Conclusions: Participants in our study articulated a need for a flexible approach to survivorship care, providing opportunities for individuals to access different types of support at different times post-treatment. Specialist post-treatment nurse care coordinators working across acute and community settings may offer one effective model of post-treatment support for survivors of haematological malignancies. Crown Copyright (C) 2017 Published by Elsevier Ltd. All rights reserved. Purpose: The aim of this study was to identify the concerns of postmenopausal breast cancer patients in Ireland and inform the development of a survivorship care plan. Method: A qualitative participatory approach was used. Focus group interviews (n = 6) with 51 women were undertaken. Following analysis of the focus group discussions, two nominal group technique (NGT) consensus workshops involving representatives (n = 17) from each of the six focus groups were held. Results: Ten key issues were highlighted by women in the focus groups and these were prioritised at the consensus workshops.
The most important issues in survivorship care planning prioritised by the women were as follows: meeting the same healthcare professional at each review visit; a contact number for a named person to call with any concerns between review visits; a physical examination and blood tests at each review visit, with an explanation from a healthcare professional outlining whether follow-up scans are needed and, if not, why not; information on signs and symptoms of recurrence; advice on diet, exercise, healthy lifestyle, coping and pacing yourself; and information on and management of long- and short-term side effects of therapy. Conclusion: Survivorship care planning for breast cancer is underdeveloped in Ireland. There is a lack of consensus regarding its provision and a lack of a structured approach to its implementation. This study demonstrates the value of involving postmenopausal breast cancer patients in identifying their needs; continuity of care was their top priority, and participants emphasised the need to adopt a survivorship care plan. (C) 2017 The Authors. Published by Elsevier Ltd. Purpose: To describe the process by which Chinese women accept living with breast cancer. Methods: Individual interviews were conducted with 18 Chinese women who completed breast cancer treatment. Data were collected from September 2014 to January 2015 at a large tertiary teaching hospital in Beijing, China. In this grounded theory study, data were analyzed using constant comparative and coding analysis methods. Results: In order to explain the process of accepting having breast cancer among women in China through the grounded theory study, a model that includes 5 axial categories was developed. Cognitive reconstruction emerged as the core category. The extent to which the women with breast cancer accepted having the disease was found to increase as their treatment stage progressed over time.
The accepting process included five stages: non-acceptance, passive acceptance, willingness to accept, behavioral acceptance, and transcendence of acceptance. Conclusions: Our grounded theory study develops a model describing the process by which women accept having breast cancer. The model provides some intervention opportunities at every point of the process. (C) 2017 Published by Elsevier Ltd. Purpose: Community-based cancer organizations provide telephone-based information and support services to assist people diagnosed with cancer and their family/friends. We investigated the demographic characteristics and psychosocial support needs of family/friends who contacted Australian Cancer Council 13 11 20 information and support helplines. Methods: Data collected on 42,892 family/friends who contacted a 13 11 20 service across Australia from January 2010 to December 2012 were analyzed. Chi-square analysis was used to examine associations between caller groups and reasons for calling, and logistic regression to examine age and gender interaction effects. Results: The majority of calls received were from women (81%) of middle- (40%) and high-socioeconomic backgrounds (41%), aged 40-59 years (46%); 52% phoned for information on cancer diagnosis (including early detection, risk factors), 22% on treatment/disease management, and 26% phoned seeking psychological/emotional support. Information on a diagnosis was significantly more often the reason older males called, compared to female callers of any age. Overall, 32% found out about the service through Cancer Council resources or events, 20% from the media, 18% from the internet; 11% from health professionals. Conclusions: Family/friends of persons diagnosed with cancer have specific information and support needs. This study identifies groups of family/friends to whom the promotion of this service could be targeted.
Within Australia and internationally, clinicians and oncology nurses as well as allied health professionals can play an important role in increasing access to cancer telephone support services to ensure the needs of the family and friends of people affected by cancer are being met. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: The aims were to describe symptoms and health-related quality of life (HRQoL) in Greenlandic patients with advanced cancer and to assess the applicability and internal consistency of the Greenlandic version of the EORTC-QLQ-C30 core version 3.0. Methods: A Greenlandic version of the EORTC QLQ-C30 v.3.0 was developed. The translation process included independent forward translation, reconciliation and independent back translation by native Greenlandic-speaking translators who were fluent in English. After pilot testing, a population-based cross-sectional study of patients with advanced cancer receiving palliative treatment was conducted. Internal consistency was examined by calculating Cronbach's alpha coefficients for five function scales and three symptom scales. Results: Of the 58 patients who participated in the study, 47% had reduced social functioning, 36% had reduced physical and role functioning and 19% had reduced emotional and cognitive functioning. Furthermore, 48% reported fatigue, and 33% reported financial problems. The Greenlandic version of the EORTC QLQ-C30 had good applicability in the assessment of symptoms and quality of life. Acceptable Cronbach's alpha coefficients (above 0.70) were observed for the physical, role and social functioning scales, the fatigue scale and the global health status scale. Conclusions: Patients undergoing palliative treatment for advanced cancer in Greenland reported high levels of social and financial problems and reduced physical functioning. This indicates a potential for improving palliative care service and increasing the focus on symptom management.
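Cronbach's alpha, used above to judge scale reliability against the 0.70 threshold, is computed from item variances and the variance of the total score. A small sketch with made-up data:

```python
def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in each list.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k, n = len(items), len(items[0])
    def var(xs):                      # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly consistent items give an alpha of 1; the more the items covary relative to their individual variances, the closer alpha gets to 1, which is why values above 0.70 are read as acceptable internal consistency.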
The Greenlandic version of the EORTC-QLQ-C30 represents an applicable and reliable tool to describe symptoms and health-related quality of life among Greenlandic patients with advanced cancer. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: Chronic sorrow is a multidimensional concept experienced by mothers of children suffering from chronic conditions such as cancer. Little is known about the concept of chronic sorrow and related issues/experiences among mothers of children with cancer living in Iran. This study aimed to explore the concept of chronic sorrow based on the lived experiences of a group of Iranian mothers of children with cancer. Methods: In this hermeneutic phenomenological study, 8 mothers of children with cancer participated in semi-structured, in-depth interviews about their experiences of chronic sorrow. Interviews continued until data saturation was reached. All interviews were recorded, transcribed, analyzed, and interpreted using the seven steps of the Diekelmann et al.'s phenomenological approach. Results: The three main themes that emerged from mothers' experiences of chronic sorrow related to their child's cancer were "climbing up shaky rocks," "religious fear and hope," and "continuous role changing." Each of these themes consisted of several subthemes. Besides the possibility of growth and coping with a child's chronic condition, which has been seen in other studies of chronic sorrow, religious issues were more profound than what has been reported in Western studies. Also, the ambiguous prognosis and uncertain course of the cancer in children made the experience of chronic sorrow distinctive. Conclusion: The results of this study show that the experiences of mothers of children with cancer in Iran are not specific to them, but are better comprehended in their traditional socio-cultural context. (C) 2017 Elsevier Ltd. All rights reserved.
Purpose: Patients with rectal cancer have difficulty adjusting to their permanent colostomy after surgery, and support is required to help them resume normal life. However, few studies have explored the experience and factors that affect a patient's decision-making and maladjustment prior to colostomy surgery. The aim of this study was to explore the experience of rectal cancer patients who have to undergo colostomy surgery. Method: A descriptive, qualitative design was used. We studied a purposive sample of 18 patients who had received a diagnosis of primary rectal cancer and were expecting permanent colostomy surgery. The thematic analysis approach was used to analyze the data collected using semi-structured, open-ended questions. Results: The overriding theme that emerged was 'stoma dilemma: a hard decision-making process'. From this main theme, three themes were derived: the resistance stage, the hesitation stage, and the acquiescence stage. Conclusion: It is hard for preoperative rectal cancer patients to choose between stoma surgery and a sphincter-saving operation. From the initial stage of definitive diagnosis to the final consent to stoma surgery, most patients experience the resistance and hesitation stages before reaching the acquiescence stage. Arriving at a decision is a process that nurses can facilitate by eliminating unnecessary misunderstanding surrounding colostomy surgery and by fully respecting patients' right to choose at the various stages. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: This study explored the role of several psychological factors in professional quality of life in nurses. Specifically, we tried to clarify the relationships between several dimensions of empathy, self-compassion, and psychological inflexibility, and positive (compassion satisfaction) and negative (burnout and compassion fatigue) domains of professional quality of life.
Methods: Using a cross-sectional design, a convenience sample of 221 oncology nurses recruited from several public hospitals filled out a battery of self-report measures. Results: Results suggested that nurses who benefit more from their work of helping and assisting others (compassion satisfaction) seem to have more empathic feelings and sensitivity towards others in distress and make an effort to see things from others' perspective. Also, they are less disturbed by negative feelings associated with seeing others' suffering and are more self-compassionate. Nurses more prone to experience the negative consequences associated with care-providing (burnout and compassion fatigue) are more self-judgmental and have more psychological inflexibility. In addition, they experience more personal feelings of distress when seeing others in suffering and fewer feelings of empathy and sensitivity to others' suffering. Psychological factors explained 26% of compassion satisfaction, 29% of burnout and 18% of compassion fatigue. Conclusion: We discuss the results in terms of the importance of taking into account the role of these psychological factors in oncology nurses' professional quality of life, and of designing nursing education training and interventions aimed at targeting such factors. (C) 2017 Elsevier Ltd. All rights reserved. Purpose: One of the unanswered questions in symptom cluster research is whether the number and types of symptom clusters vary based on the dimension of the symptom experience used to create the clusters. Given that patients with breast cancer receiving chemotherapy (CTX) report between 10 and 32 concurrent symptoms, and that studies of symptom clusters in these patients are limited, the purpose of this study of breast cancer patients undergoing CTX (n = 515) was to identify whether the number and types of symptom clusters differed based on whether symptom occurrence rates or symptom severity ratings were used to create the clusters.
Methods: A modified version of the Memorial Symptom Assessment Scale was used to assess for the occurrence and severity of 38 symptoms, one week after the administration of CTX. Exploratory factor analysis was used to extract the symptom clusters. Results: Both the number and types of symptom clusters were similar using symptom occurrence rates or symptom severity ratings. Five symptom clusters were identified using symptom occurrence rates (i.e., psychological, hormonal, nutritional, gastrointestinal, epithelial). Six symptom clusters (i.e., psychological, hormonal, nutritional, gastrointestinal, epithelial, chemotherapy neuropathy) were identified using symptom severity ratings. Across the two dimensions, the specific symptoms within each of the symptom clusters were similar. Conclusions: Identification of symptom clusters in patients with breast cancer may be useful in guiding symptom management interventions. Future studies are warranted to determine if symptom clusters remain stable over a cycle of CTX in patients with breast cancer. (C) 2017 Elsevier Ltd. All rights reserved. Gonadotropin-releasing hormone (GnRH) is a key neuropeptide regulating reproduction in humans and other vertebrates. Recently, GnRH-like cDNAs and peptides were reported in marine mollusks, implying that GnRH-mediated reproduction is an ancient neuroendocrine system that arose prior to the divergence of protostomes and deuterostomes. Here, we evaluated the reproductive control system mediated by GnRH in the Pacific abalone Haliotis discus hannai. We cloned a prepro-GnRH cDNA (Hdh-GnRH) from the pleural-pedal ganglion (PPG) in H. discus hannai, and analyzed its spatiotemporal gene expression pattern. The open reading frame of Hdh-GnRH encodes a protein of 101 amino acids, consisting of a signal peptide, a GnRH dodecapeptide, a cleavage site, and a GnRH-associated peptide. This structure and sequence are highly similar to GnRH-like peptides reported for mollusks and other invertebrates. 
Quantitative polymerase chain reaction demonstrated that Hdh-GnRH mRNA was more strongly expressed in the ganglions (PPG and cerebral ganglion [CG]) than in other tissues (gonads, gills, intestine, hemocytes, muscle, and mantle) in both sexes. In females, the expression levels of Hdh-GnRH mRNA in the PPG and branchial ganglion (BG) were significantly higher at the ripe and partial spent stages than at the early and late active stages. In males, Hdh-GnRH mRNA levels in the BG showed a significant increase in the partial spent stage. Unexpectedly, Hdh-GnRH levels in the CG were not significantly different among the examined stages in both sexes. These results suggest that Hdh-GnRH mRNA expression profiles in the BG and possibly the PPG are tightly correlated with abalone reproductive activities. Stable carbon isotope ratios (delta C-13) in breath show promise as an indicator of immediate metabolic fuel utilization in animals because tissue lipids have a lower delta C-13 value than carbohydrates and proteins. Metabolic fuel consumption is often estimated using the respiratory exchange ratio (RER), which has lipid and carbohydrate boundaries, but does not differentiate between protein and mixed fuel catabolism at intermediate values. Because lipids have relatively low delta C-13 values, measurements of stable carbon isotopes in breath may help distinguish between catabolism of protein and mixed fuel that includes lipid. We measured breath delta C-13 and RER concurrently in arctic ground squirrels (Urocitellus parryii) during steady-state torpor at ambient temperatures from -2 to -26 degrees C. As predicted, we found a correlation between RER and breath delta C-13 values; however, the range of RER in this study did not reach intermediate levels that would allow further resolution of metabolic substrate use with the addition of breath delta C-13 measurements. These data suggest that breath delta C-13 values are 1.1 parts per thousand lower than lipid tissue during pure lipid metabolism. 
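The fuel-partitioning arithmetic implied by the RER boundaries just described can be sketched as a simple linear interpolation. The boundary values (about 0.71 for pure lipid, 1.00 for pure carbohydrate) and the two-fuel linear mixing assumption are generic textbook conventions, not values taken from this study:

```python
def nonlipid_fraction(rer, lipid_rer=0.71, carb_rer=1.00):
    """Estimate the fraction of energy from nonlipid fuels by linear
    interpolation between the pure-lipid and pure-carbohydrate RER bounds.
    The result is clamped to [0, 1] for RER values outside the bounds."""
    frac = (rer - lipid_rer) / (carb_rer - lipid_rer)
    return min(max(frac, 0.0), 1.0)
```

For example, an RER of about 0.82 under these assumptions corresponds to roughly the one-third nonlipid energy share discussed for torpid squirrels.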
From RER, we determined that arctic ground squirrels rely on nonlipid fuel sources for a significant portion of energy during torpor (up to 37%). The shift toward nonlipid fuel sources may be influenced by adiposity of the animals in addition to thermal challenge. The alligator gar Atractosteus spatula is a primitive fish species, occupying a wide range of temperature and salinity habitats. Long-distance movements are limited, leading to genetic differentiation between inland and coastal populations. It is unknown whether physiological capacity differs between geographically separated populations, particularly for traits important to osmoregulation in saline environments. Alligator gar from inland and coastal populations were reared in a similar environment and exposed to temperature (10, 30 degrees C) and salinity (0, 20 ppt) extremes to determine whether iono- and osmoregulatory ability differed between populations. There were few differences in osmoregulatory ability between populations, with similar gill, blood and gastrointestinal tract osmoregulatory parameters. Blood plasma osmolality, ion concentrations, intestinal pH and bicarbonate base concentrations, intestinal fluid osmolality, ion concentrations and gill Na+/K+-ATPase (NKA) activity were similar between populations. Notably, gar from both populations did not osmoregulate well at low temperature and high salinity, with elevated plasma osmolality and ion concentrations, low gill NKA activity, and little evidence of gastrointestinal tract contribution to ionic and base regulation based on a lack of intestinal fluid and low base content. Therefore, the hypothesis that coastal gar would have improved osmotic regulatory ability in saline environments as compared to inland alligator gar was not supported, suggesting physiological capacity may be retained in primitive species, possibly due to its importance to their persistence through time. 
Adaptive capacities, governing the ability of animals to cope with an environmental stressor, have been demonstrated to be strongly dependent upon genetic factors. Two isogenic lines of rainbow trout, previously described for their sensitivity and resilience to an acute confinement challenge, were used in the present study to investigate whether adaptive capacities remain consistent when fish are exposed to a different type of challenge. For this purpose, the effects of a 4-hour hypercapnia (CO2 increase) challenge at concentrations relevant to aquaculture conditions are described for the two isogenic lines. Oxygen consumption, cortisol release, group dispersion and group swimming activity were measured before, during and after the challenge. Sensitivity and resilience for each measure were extracted from temporal responses and analyzed using multivariate statistics. The two fish lines displayed significant differences in their cortisol response, reflecting differences in the sensitivity of the stress axis to the stressor. On the contrary, both lines showed, for the other measures, similar temporal patterns across the study. Notable within-line variability in the stress response was observed, despite the identical genomes of the fish. The results are discussed in the context of animal robustness. The activation of the hypothalamic-pituitary-adrenal (HPA) axis is one of the most important physiological processes in coping with any deviation in an organism's homeostasis. This activation and the secretion of glucocorticoids, such as corticosterone, allow organisms to cope with perturbations and return to optimal physiological functioning as quickly as possible. In this study, we examined HPA axis activation in common gartersnakes (Thamnophis sirtalis) as a response to a natural toxin, tetrodotoxin (TTX). This neurotoxin is found in high levels in the Rough-skinned Newt (Taricha granulosa), which is a prey item for these snakes. 
To consume this toxic prey, these snakes have evolved variable resistance. We hypothesized that more resistant individuals would show a lower HPA axis response than less resistant individuals, as measured by corticosterone (CORT) and bactericidal ability, which is a functional downstream measurement of CORT's activity. We determined the "resistance level" for tetrodotoxin of each individual snake by determining the dose which reduced race speed by 50%. Individuals were injected with an increasing amount of tetrodotoxin (10, 25, and 50 MAMUs) to determine this value. Thirty minutes after every injection, we gathered blood samples from each snake. Our results show that, while there were no significant differences among individual CORT levels in a dose-dependent manner, female snakes did have a larger stress response when compared to both males and juveniles. Different life histories could explain why females were able to mount a higher HPA axis response. However, TTX had no downstream effects on bactericidal ability, although juveniles had consistently lower values than adults. Our research shows a possible dichotomy between how each sex manages tetrodotoxin and gives way to a more comprehensive analysis of tetrodotoxin in an ecological context. Sodium channel blockers are commonly injected local anesthetics but are also routinely used for general immersion anesthesia in fish and amphibians. Here we report the effects of subcutaneous injection of lidocaine (5 or 50 mg kg(-1)) in the hind limb of bullfrogs (Lithobates catesbeianus) on reflexes, gular respiration and heart rate (handled group, n = 10) or blood pressure and heart rate via an arterial catheter (catheterized group, n = 6). 5 mg kg(-1) lidocaine did not cause loss of reflexes or change in heart rate in the handled group, but was associated with a reduction in gular respiratory rate (from 99 +/- 7 to 81 +/- 17 breaths min(-1)). 
50 mg kg(-1) lidocaine caused a further reduction in respiratory rate to 59 +/- 15 breaths min(-1), and led to a progressive loss of righting reflex (10/10 loss by 40 min), palpebral reflex (9/10 loss at 70 min), and contralateral toe pinch withdrawal (9/10 loss at 70 min). Reflexes were regained over 4 h. Systemic sedative effects were not coupled to local anti-nociception, as a forceps pinch test at the site of injection provoked movement at the height of the systemic effect (tested at 81 +/- 4 min). Amphibians are routinely subject to general anesthesia via exposure to sodium channel blockers such as MS222 or benzocaine; however, caution should be exercised when using local injectable lidocaine in amphibians, as it appears to dose-dependently cause sedation, without necessarily preventing local nociception for the duration of systemic effects. While severe hypoxia can be lethal and is usually avoided by mobile aquatic organisms, moderate hypoxic conditions are likely more prevalent and may affect organisms, such as fishes, in a variety of systems. However, fishes have the potential to adjust physiologically and behaviorally and thus reduce the negative effects of hypoxia. Quantifying such physiological responses may shed light on the ability of fishes to tolerate reduced oxygen concentrations. This study assessed how two different hatchery populations of yellow perch Perca flavescens, a fish that is likely to encounter moderate hypoxic conditions in a variety of systems, responded to moderate hypoxic exposure through three experiments: 1) a behavioral foraging experiment, 2) an acute exposure experiment, and 3) a chronic exposure experiment. No marked behavioral or physiological adjustments were observed in response to hypoxia (e.g., hemoglobin, feeding rate, movement frequency, and gene expression did not change to a significant degree), possibly indicating a high tolerance level in this species. 
This may allow yellow perch to utilize areas of moderate hypoxia to continue foraging while avoiding predators that may be more sensitive to moderately low oxygen. In birds, alpha-MSH is anorexigenic, but its effects on adipose tissue are unknown. Four-day-old chicks were intraperitoneally injected with 0 (vehicle), 5, 10, or 50 mu g of alpha-MSH, and subcutaneous and abdominal adipose tissue was collected at 60 min for RNA isolation (n = 10). Plasma was collected post-euthanasia at 60 and 180 min for measuring non-esterified fatty acids (NEFA) and alpha-MSH (n = 10). Relative to the vehicle, food intake was reduced in the 50 mu g-treated group. Plasma NEFAs were greater in 10 mu g- than vehicle-treated chicks at 3 h. Plasma alpha-MSH was 3.06 +/- 0.57 ng/ml. In subcutaneous tissue, melanocortin receptor 5 (MC5R) mRNA was increased in 10 mu g, MC2R and CCAAT-enhancer-binding protein beta (C/EBP beta) mRNAs increased in 50 mu g, peroxisome proliferator-activated receptor gamma and C/EBP alpha decreased in 5, 10, and 50 mu g, and Ki67 mRNA decreased in 50 mu g alpha-MSH-injected chicks, compared to vehicle-injected chicks. In abdominal tissue, adipose triglyceride lipase mRNA was greater in 10 mu g alpha-MSH- than vehicle-treated chicks. Cells isolated from abdominal fat that were treated with 10 and 100 nM alpha-MSH for 4 h expressed more MC5R and perilipin-1 than control cells (n = 6). Cells that received 100 nM alpha-MSH expressed more fatty acid binding protein 4 and comparative gene identification-58 mRNA than control cells. Glycerol-3-phosphate dehydrogenase (G3PDH) activity was greater in cells at 9 days post-differentiation that were treated with 1 and 100 nM alpha-MSH for 4 h than in control cells (n = 3). Results suggest that alpha-MSH increases lipolysis and reduces adipogenesis in adipose tissue. Some insect taxa from polar or temperate habitats have shown cross-tolerance for multiple stressors, but tropical insect taxa have received less attention. 
Accordingly, we considered adult flies of a tropical drosophilid, Zaprionus indianus, for testing direct as well as cross-tolerance effects of rapid heat hardening (HH), desiccation acclimation (DA) and starvation acclimation (SA) after rearing under warmer and drier season-specific simulated conditions. We observed significant direct acclimation effects of HH, DA and SA, and four cases of cross-tolerance effects, but no cross-tolerance between desiccation and starvation. Cross-tolerance of heat hardening on desiccation showed a 20% higher effect than its reciprocal. There was a greater reduction of water loss in heat-hardened flies (due to an increase in the amount of cuticular lipids) as compared with desiccation-acclimated flies. However, the cross-tolerance effect of SA on heat knockdown was two times higher than its reciprocal. Heat-hardened and desiccation-acclimated adult flies showed a substantial increase in the levels of trehalose and proline, while body lipids increased due to heat hardening or starvation acclimation. However, the maximum increase in energy metabolites was stressor-specific, i.e., trehalose due to DA, proline due to HH, and total body lipids due to SA. Rapid changes in energy metabolites due to heat hardening seem compensatory for possible depletion of trehalose and proline due to desiccation stress, and of body lipids due to starvation stress. Thus, the observed cross-tolerance effects in Z. indianus represent physiological changes to cope with multiple stressors related to warmer and drier subtropical habitats. Sexual selection has been widely explored from numerous perspectives, including behavior, ecology, and to a lesser extent, energetics. Hormones, and specifically androgens such as testosterone, are known to trigger sexual behaviors. Their effects are therefore of interest during the breeding period. 
Our work investigates the effect of testosterone on the relationship between cellular bioenergetics and contractile properties of two skeletal muscles involved in sexual selection in tree frogs. Calling and locomotor abilities are considered evidence of good condition in Hyla males, and thus serve as proxies for male quality and attractiveness. Therefore, how these behaviors are powered efficiently remains of both physiological and behavioral interest. Most previous research, however, has focused primarily on biomechanics, contractile properties or mitochondrial enzyme activities. Some studies have tried to establish a relationship between those parameters, but to our knowledge, there is no study examining muscle fiber bioenergetics in Hyla arborea. Using chronic testosterone supplementation and through an integrative study combining fiber bioenergetics and contractile properties, we compared sexually dimorphic trunk muscles directly linked to chronic sound production to a hindlimb muscle (i.e. gastrocnemius) that is particularly adapted for explosive movement. As expected, trunk muscle bioenergetics were more affected by testosterone than those of the gastrocnemius muscle. Our study also underlines contrasting energetic capacities between muscles, in line with the contractile properties of these two different muscle phenotypes. The discrepancy in both substrate utilization and contractile properties is consistent with the specific role of each muscle, and our results provide another integrative example of a muscle force-endurance trade-off. Background: The two control points of cholesterol synthesis, 3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMGCR) and squalene monooxygenase (SQLE), are known targets of the transcription factor sterol-regulatory element binding protein-2 (SREBP-2). 
Yet the location of the sterol-regulatory elements (SREs) and cofactor binding sites, nuclear factor-Y (NF-Y) and specificity protein 1 (Sp1), have not been satisfactorily mapped in the human SQLE promoter, or at all in the human HMGCR promoter. Methods: We used luciferase reporter assays to screen the sterol-responsiveness of a library of predicted SRE, Sp1 and NF-Y site mutants and hence identify bona fide binding sites. We confirmed SREs via an electrophoretic mobility shift assay (EMSA) and ChIP-PCR. Results: We identified two SREs in close proximity in both the human HMGCR and SQLE promoters, as well as one NF-Y site in HMGCR and two in SQLE. In addition, we found that HMGCR expression is highly activated only when SREBP-2 levels are very high, in contrast to the low density lipoprotein receptor (LDLR), a result reflected in mouse models used in other studies. Conclusions: Both HMGCR and SQLE promoters have two SREs that may act as a homing region to attract a single SREBP-2 homodimer, with HMGCR being activated only when there is absolute need for cholesterol synthesis. This ensures preferential uptake of exogenous cholesterol via LDLR, thereby conserving energy. General Significance: We provide the first comprehensive investigation of SREs and NF-Y sites in the human HMGCR and SQLE promoters, increasing our fundamental understanding of the transcriptional regulation of cholesterol synthesis. The ATP-binding cassette transporter A7 (ABCA7), which is highly expressed in the brain, is associated with the pathogenesis of Alzheimer's disease (AD). However, the physiological function of ABCA7 and its transport substrates remain unclear. Immunohistochemical analyses of human brain sections from AD and non-AD subjects revealed that ABCA7 is expressed in neurons and microglia in the cerebral cortex. The transport substrates and acceptors were identified in BHK/ABCA7 cells and compared with those of ABCA1. 
Like ABCA1, ABCA7 exported choline phospholipids in the presence of apoA-I and apoE; however, unlike ABCA1, cholesterol efflux was marginal. Lipid efflux by ABCA7 was saturated by 5 mu g/ml apoA-I and was not dependent on apoE isoforms, whereas efflux by ABCA1 was dependent on apoA-I up to 20 mu g/ml and on apoE isoforms. Liquid chromatography tandem mass spectrometry analyses revealed that the two proteins had different preferences for phospholipid export: ABCA7 preferred phosphatidylcholine (PC) = lysoPC > sphingomyelin (SM) = phosphatidylethanolamine (PE), whereas ABCA1 preferred PC > > SM > PE = lysoPC. The major difference in the pattern of lipid peaks between ABCA7 and ABCA1 was the high lysoPC/PC ratio of ABCA7. These results suggest that lysoPC is one of the major transport substrates for ABCA7 and that lysoPC export may be a physiologically important function of ABCA7 in the brain. Mammalian lipoxygenases (LOX) have been implicated in cell differentiation and in the pathogenesis of inflammatory, hyperproliferative and neurological diseases. Although the reaction specificity of mammalian LOX with n-6 fatty acids (linoleic acid, arachidonic acid) has been explored in detail, little information is currently available on the product patterns formed from n-3 polyenoic fatty acids, which are of particular nutritional importance and serve as substrates for the biosynthesis of pro-resolving inflammatory mediators such as resolvins and maresins. Here we expressed the ALOX15 orthologs of eight different mammalian species as well as human ALOX12 and ALOX15B as recombinant His-tag fusion proteins and characterized their reaction specificity with the most abundantly occurring polyunsaturated fatty acids (PUFAs), including 5,8,11,14,17-eicosapentaenoic acid (EPA) and 4,7,10,13,16,19-docosahexaenoic acid (DHA). 
We found that the LOX isoforms tested accept these fatty acids as suitable substrates and oxygenate them with variable positional specificity to the corresponding n-6 and n-9 hydroperoxy derivatives. Surprisingly, human ALOX15 as well as the corresponding orthologs of chimpanzee and orangutan, which oxygenate arachidonic acid mainly to 15S-H(p)ETE, exhibit a pronounced dual reaction specificity with DHA, forming similar amounts of 14- and 17-H(p)DHA. Moreover, ALOX15 orthologs prefer DHA and EPA over AA when equimolar concentrations of n-3 and n-6 PUFAs were supplied simultaneously. Taken together, these data indicate that the reaction specificity of mammalian LOX isoforms is variable and strongly depends on the chemistry of the fatty acid substrates. Most mammalian ALOX15 orthologs exhibit dual positional specificity with highly unsaturated n-3 polyunsaturated fatty acids. A polymorphism of TM6SF2 is associated with hepatic lipid accumulation and reduction of triacylglycerol (TAG) secretion, but the function of the encoded protein has remained enigmatic. We studied the effect of stable TM6SF2 knock-down on the lipid content and composition, mitochondrial fatty acid oxidation and organelle structure of HuH7 hepatoma cells. Knock-down of TM6SF2 resulted in intracellular accumulation of TAGs, cholesterol esters, phosphatidylcholine (PC) and phosphatidylethanolamine. In all of these lipid classes, polyunsaturated lipid species were significantly reduced while saturated and monounsaturated species increased their proportions. The PCs encountered relative and absolute arachidonic acid (AA, 20:4n-6) depletion, and AA was also reduced in the total cellular fatty acid pool. Synthesis and turnover of the hepatocellular glycerolipids were enhanced. The TM6SF2 knock-down cells secreted lipoprotein-like particles with a smaller diameter than in the controls, and more lysosome/endosome structures appeared in the knock-down cells. 
The mitochondrial capacity for palmitate oxidation was significantly reduced. These observations provide novel clues to TM6SF2 function and raise altered membrane lipid composition and dynamics among the mechanism(s) by which the protein deficiency disturbs hepatic TAG secretion. Drug addiction is a complex disorder, evoking significant changes in the proteome of the central nervous system. To check whether there are also changes in lipidomic profiles, we used the desorption electrospray-MS technique for imaging of brain slices of rats exposed to morphine, cocaine and amphetamine. Our investigations showed altered regulation of the levels of selected lipids in central nervous system structures under the influence of the applied drugs. The results of our investigations can show changes in the brain treated with drugs of abuse in a new light, indicating a role for lipids in addiction development. Lipid droplet (LD) accumulation in hepatocytes is a typical characteristic of steatosis. Hepatitis C virus (HCV) infection, one of the risk factors related to steatosis, induces LD accumulation in cultured cells. However, the mechanisms by which HCV induces LD formation are not fully revealed. Previously we identified cytosolic phospholipase A2 gamma (PLA2G4C) as a host factor upregulated by HCV infection and involved in HCV replication. Here we further revealed that PLA2G4C plays an important role in LD biogenesis and refined the functional analysis of PLA2G4C in LD biogenesis and HCV assembly. LD formation upon fatty acid and HCV stimulation was impaired in PLA2G4C knockdown cells and could not be restored by complementation with PLA2G4A. PLA2G4C was tightly associated with the membrane via the domain around amino acid residues 260-292, normally in the ER but relocating into LDs upon oleate stimulation. Mutant PLA2G4C without enzymatic activity was not able to restore LD formation in PLA2G4C knockdown cells. 
Thus, both the membrane attachment and the enzymatic activity of PLA2G4C were required for its function in LD formation. The participation of PLA2G4C in LD formation is correlated with its involvement in HCV assembly. Finally, PLA2G4C overexpression itself led to LD formation in hepatic cells and enhanced LD accumulation in the liver of high-fat diet (HFD)-fed mice, suggesting its potential role in fatty liver disease. The genome of the fungal plant pathogen Fusarium graminearum harbors six catalases, one of which has the sequence characteristics of a fatty acid peroxide-metabolizing catalase. We cloned and expressed this hemoprotein (designated as Fg-cat) along with its immediate neighbor, a 13S-lipoxygenase (cf. Brodhun et al., PloS One, e64919, 2013) that we considered might supply a fatty acid hydroperoxide substrate. Indeed, Fg-cat reacts abruptly with the 13S-hydroperoxide of linoleic acid (13S-HPODE) with an initial rate of 700-1300 s(-1). By comparison there was no reaction with 9R- or 9S-HPODEs and extremely weak reaction with 13R-HPODE (0.5% of the rate with 13S-HPODE). Although we considered Fg-cat as a candidate for the allene oxide synthase of the jasmonate pathway in fungi, the main product formed from 13S-HPODE was identified by UV, MS, and NMR as 9-oxo-10E-12,13-cis-epoxy-octadecenoic acid (with no traces of AOS activity). The corresponding analog is formed from the 13S-hydroperoxide of alpha-linolenic acid along with novel diepoxyketones and two C13 aldehyde derivatives, the reaction mechanisms of which are proposed. In a peroxidase assay monitoring the oxidation of ABTS, Fg-cat exhibited robust activity (kcat 550 s(-1)) using the 13S-hydroperoxy-C18 fatty acids as the oxidizing co-substrate. There was no detectable peroxidase activity using the corresponding 9S-hydroperoxides, nor with t-butyl hydroperoxide, and very weak activity with H2O2 or cumene hydroperoxide at micromolar concentrations of Fg-cat. 
Fg-cat and the associated lipoxygenase gene are present together in the fungal genera Fusarium, Metarhizium and Fonsecaea and appear to constitute a partnership for oxidations in fungal metabolism or defense. In the yeast Saccharomyces cerevisiae, the mitochondrial phosphatidylserine decarboxylase 1 (Psd1p) produces the largest amount of cellular phosphatidylethanolamine (PE). Psd1p is synthesized as a larger precursor on cytosolic ribosomes and then imported into mitochondria in a three-step processing event leading to the formation of an alpha-subunit and a beta-subunit. The alpha-subunit harbors a highly conserved motif, which was proposed to be involved in phosphatidylserine (PS) binding. Here, we present a molecular analysis of this consensus motif for the function of Psd1p by using Psd1p variants bearing either deletions or point mutations in this region. Our data show that mutations in this motif affect the processing and stability of Psd1p, and consequently the enzyme's activity. Thus, we conclude that this consensus motif is essential for the structural integrity and processing of Psd1p. Enterprise software giants use armies of salespeople to hawk imperfect products and endless updates. The two Australian billionaires behind Atlassian have built smarter tools that sell themselves. Just ask Tesla, Snapchat, NASA and the entire Ivy League. Despite making a series of smash-hit games for its home Asian market, NCSoft had struggled to translate to America until its billionaire founder put his MIT-educated wife at the controls. Peter Gassner rebooted Veeva Systems, and his gambit has created the ultimate pharmaceutical tool. By listening to its core audience, teachers, ClassDojo's educational software has reached 90% of U.S. schools. Now the real work begins: how to get someone to pay for it. Starting with $15,000, Jorn Lyseggen built a version of Google Alerts before Google Alerts. 
And despite the big-foot competition, Meltwater has found the talent to forge a $300 million company. It took a trip to China for the founders of Green Creative to spot an LED-bulb niche in the U.S. Tim Estes' insights into language and meaning are helping detect terrorists, human traffickers and market manipulators, and making him rich. Trump's incendiary tweets against Mexico are music to the ears of BlackRock's intrepid emerging-market portfolio manager Gerardo Rodriguez. It's no secret: Having more women in leadership roles is good for business. It improves financial results, enhances innovation and eases talent shortfalls. Former news anchor Jamie Kern Lima began testing makeup a decade ago to help cover her red, blotchy skin. That experimentation led to the creation of IT Cosmetics, which L'Oreal snapped up in 2016 for $1.2 billion. Now she's plotting to turn IT into the largest beauty brand in the world. When watch collector Pascal Raffy bought the venerable Swiss brand Bovet, he reinvented the company, and himself. Multi-objective optimization entails minimizing or maximizing multiple objective functions subject to a set of constraints. Many real-world applications can be formulated as multi-objective optimization problems (MOPs), which often involve multiple conflicting objectives to be optimized simultaneously. Recently, a number of multi-objective evolutionary algorithms (MOEAs) were suggested for these MOPs, as they do not require problem-specific information. They find a set of non-dominated solutions in a single run. The evolutionary process on which they are based typically relies on a single genetic operator. Here, we suggest an algorithm which uses a basket of search operators. This is because it is never easy to choose the most suitable operator for a given problem. 
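The non-dominated ranking that such MOEAs rely on can be sketched in a few lines. The following is a generic Pareto-front partition for minimization objectives, offered only as an illustration of the concept, not the implementation used in the study:

```python
def dominates(p, q):
    """p dominates q (minimization): p is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts:
    front 0 holds the non-dominated solutions, front 1 those dominated
    only by front 0, and so on."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

For instance, for the bi-objective points (1, 5), (2, 3), (4, 1), (3, 4), (5, 5), the first front contains the first three points, since none of them is dominated by any other.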
The novel hybrid non-dominated sorting genetic algorithm (HNSGA) introduced here is tested on the ZDT (Zitzler-Deb-Thiele) and CEC'09 (2009 IEEE Congress on Evolutionary Computation) benchmark problems specifically formulated for MOEAs. Numerical results show that the proposed algorithm is competitive with state-of-the-art MOEAs. (C) 2017 Elsevier B.V. All rights reserved. Magnetic levitation systems have become very important in many applications. Due to their instability and high nonlinearity, such systems pose a challenge to many researchers attempting to design high-performance and robust tracking control. This paper proposes an improved adaptive fuzzy backstepping control for systems with an uncertain input nonlinear function (uncertain parameters and structure), and applies it to a magnetic levitation system, which is a typical representative of such systems. An adaptive fuzzy system is used to approximate unknown, partially known or uncertain input nonlinear functions of a magnetic levitation system. An adaptation law is obtained based on Lyapunov analysis in order to guarantee closed-loop stability and good tracking performance. The initial adaptive and control parameters have been initialized with the Symbiotic Organism Search optimization algorithm, due to the strong nonlinearity and instability of the magnetic levitation system. The theoretical background of the proposed control method is verified with a simulation study and implementation on a laboratory experimental application. (C) 2017 Elsevier B.V. All rights reserved. Maize is one of the three main cereal crops of the world. Accurately knowing its tassel flowering status can help to analyze the growth status and adjust farming operations accordingly. At the current stage, acquiring the tassel flowering status mainly depends on human observation, which is costly and subjective, especially for large-scale quantitative analysis under in-field conditions. 
To alleviate this, we propose an automatic maize tassel flowering status (i.e., non-flowering, partially-flowering and fully-flowering) recognition method based on computer vision technology in this paper. In particular, this task is formulated as a fine-grained image categorization problem. More specifically, the scale-invariant feature transform (SIFT) is first extracted as the low-level visual descriptor to characterize the maize flower. The Fisher vector (FV) is then applied to perform feature encoding on SIFT to generate a more discriminative flowering status representation. To further improve performance, a novel metric learning method termed large-margin dimensionality reduction (LMDR) is proposed. To verify the effectiveness of the proposed method, a flowering status dataset consisting of 3000 images is built. The experimental results demonstrate that our approach goes beyond the state-of-the-art by large margins (at least 8.3%). The dataset and source code are made available online. (C) 2017 Elsevier B.V. All rights reserved. The purpose of this study is to enable consistency control along with expert consistency prioritization for direct fuzzy inputs as basic events (BEs) assigned to the fault tree analysis (FTA) method. In the recent literature, fuzzy fault tree analysis (FFTA) applications include no consistency check for expert judgments. In this study, fuzzy analytic hierarchy process (FAHP), a multi-criteria decision making method, is applied to check consistency via the centric consistency index (CCI). Expert consistency prioritization is also implemented for FFTA by using the extent analysis method of trapezoidal FAHP. An analytic comparison with and without consistency control is presented. Numerical results for the collapse of an offshore platform are presented to illustrate the applicability of the approach. (C) 2017 Elsevier B.V. All rights reserved.
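Trapezoidal fuzzy judgments of the kind used in such FAHP extent analysis are commonly compared after defuzzification; one standard choice is the centroid of a unit-height trapezoid (a, b, c, d). This is a generic sketch, not necessarily the exact ranking rule used in the study above:

```python
def trapezoid_centroid(a, b, c, d):
    """Centroid (x-coordinate) of a unit-height trapezoidal fuzzy number
    with support [a, d] and core [b, c], where a <= b <= c <= d."""
    if (d + c) == (a + b):  # degenerate case: crisp number a = b = c = d
        return (a + d) / 2.0
    return (d*d + c*c + d*c - a*a - b*b - a*b) / (3.0 * (d + c - a - b))

x = trapezoid_centroid(1.0, 2.0, 3.0, 4.0)  # symmetric trapezoid -> 2.5
```

For a triangular number such as (0, 1, 1, 3) the formula reduces to the familiar triangle centroid (0 + 1 + 3) / 3.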
The modeling of power capacitors is of great importance for the allocation of such devices in power distribution systems. Many models based on electrical circuits are found in the literature, but their proper parameterization is difficult when these capacitors are fed by harmonic voltages. In this context, the objective of this paper is to propose a methodology to determine, by means of a Particle Swarm Optimization (PSO) algorithm, the parameters of three different capacitor models supplied by voltages with harmonic distortions. In this methodology, the PSO algorithm determines the parameters of each model by comparing a reference current obtained in laboratory tests with the calculated current flowing through its terminals, considering four different voltage waveforms and two types of metallized polypropylene power capacitors (all-film and impregnated). This approach is intended to serve as a preliminary tool to support capacitor allocation studies in power distribution systems. The results show that the PSO is able to efficiently and effectively estimate the unknown parameters of such models. The replicability of results provided by the PSO and the robustness of the proposed methodology are demonstrated by means of statistical analyses. The Mean Squared Error (MSE) between the reference and the calculated currents for ten consecutive executions of the proposed methodology, considering each waveform, model and capacitor, varied between 3.049 and 0.032 A^2, with variation coefficients smaller than 1%; to validate the choice of the PSO algorithm, a comparison of MSE values provided by the PSO and four nonlinear programming methods is also shown. (C) 2017 Elsevier B.V. All rights reserved. This paper studies parallel machine scheduling problems in consideration of real-world uncertainty quantified based on fuzzy numbers.
Although this study is not the first to study the subject problem, it advances this area of research in two ways: (1) rather than arbitrarily picking a method, it chooses the most appropriate fuzzy number ranking method based on an in-depth investigation of the effect of the spread of fuzziness on the performance of fuzzy ranking methods; (2) it develops the first hybrid ant colony optimization for fuzzy parallel machine scheduling. Randomly generated datasets are used to test the performance of the fuzzy ranking methods as well as the proposed algorithm, i.e. hybrid ant colony optimization. The proposed hybrid ant colony optimization outperforms a recently published hybrid particle swarm optimization and two simulated annealing based algorithms modified from our previous work. (C) 2017 Elsevier B.V. All rights reserved. Network theory offers an efficient mathematical framework for modelling natural phenomena. However, such studies focus mainly on the topological characteristics of networks, while the actual reasons behind the networks' formation remain overlooked. This paper proposes a new approach to complex network analysis. By searching for the optimal functional definition of the network's edge set, it allows an examination of the influences of the physical properties of the nodes on the network's structure and behaviour (i.e. changes of the network's structure when the physical properties of nodes change). A two-level evolutionary algorithm is proposed for this purpose, whereby the search for a suitable function form is performed at the first level, while the second level is used for optimal function fitting. In this way, not only are the features with the largest influences identified, but the intensities of their influences are also estimated.
Synthetic networks are examined in order to show the superiority of the proposed approach over traditional machine learning algorithms, while the applicability of the proposed method is demonstrated on a real-world study of the behaviour of biological cells. (C) 2017 Elsevier B.V. All rights reserved. Data often consist of redundant and irrelevant features. Such features can mislead learning algorithms and cause overfitting. Without a feature selection method, it is difficult for existing models to accurately capture the patterns in data. The aim of feature selection is to choose a small number of relevant or significant features to enhance classification performance. Existing feature selection methods suffer from problems such as becoming stuck in local optima and being computationally expensive. To solve these problems, an efficient global search technique is needed. The Black Hole Algorithm (BHA) is an efficient and recent global search technique, inspired by the behavior of black holes, which has been applied to several optimization problems. However, the potential of BHA for feature selection has not yet been investigated. This paper proposes a binary version of the Black Hole Algorithm, called BBHA, for solving the feature selection problem in biological data. The BBHA is an extension of the existing BHA through appropriate binarization. Moreover, the performances of six well-known decision tree classifiers (Random Forest (RF), Bagging, C5.0, C4.5, Boosted C5.0, and CART) are compared in this study in order to employ the best one as the evaluator of the proposed algorithm.
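A common way to obtain such a binary variant from a continuous metaheuristic is to pass each real-valued position component through a sigmoid transfer function and sample a 0/1 bit; a generic sketch of that binarization step (the paper's exact rule is not given in this excerpt):

```python
import math
import random

def binarize(position, rng):
    """Map a real-valued position vector to a 0/1 feature mask using a
    sigmoid transfer function: bit i is 1 with probability sigmoid(x_i)."""
    return [1 if rng.random() < 1.0 / (1.0 + math.exp(-x)) else 0
            for x in position]

rng = random.Random(42)
mask = binarize([6.0, -6.0, 0.0, 6.0], rng)  # large |x| makes bits nearly deterministic
```

Each 1 in the mask selects a feature; a wrapper then scores the mask with the chosen classifier.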
The performance of the proposed algorithm is tested on eight publicly available biological datasets and is compared with Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Simulated Annealing (SA), and Correlation based Feature Selection (CFS) in terms of accuracy, sensitivity, specificity, Matthews' Correlation Coefficient (MCC), and Area Under the receiver operating characteristic (ROC) Curve (AUC). In order to verify the applicability and generality of the BBHA, it was integrated with a Naive Bayes (NB) classifier and applied to further datasets in the text and image domains. The experimental results confirm that the performance of RF is better than the other decision tree algorithms and that the proposed BBHA wrapper based feature selection method is superior to BPSO, GA, SA, and CFS in terms of all criteria. BBHA gives significantly better performance than BPSO and GA in terms of CPU time, the number of parameters for configuring the model, and the number of chosen optimized features. Also, BBHA has competitive or better performance than the other methods in the literature. (C) 2017 Elsevier B.V. All rights reserved. The efficient use of herbicides has long engaged the minds of many researchers. The traditional method, in which the entire farm is uniformly sprayed, entails two major drawbacks: 1) threats to human health due to excessive use of herbicides; and 2) environmental pollution. Therefore, in this study, a video processing system was proposed to detect Agria potato plants as well as Chenopodium album L., Secale cereale L., and Polygonum aviculare L. The algorithm of this system performs the spraying operation depending on the type and number of weeds. Videos were taken on 4 ha of Agria potato fields in Kermanshah, Iran (longitude: 7.03 degrees E; latitude: 4.22 degrees N). The vision system of the proposed machine has two main subsystems. The first subsystem is responsible for making videos of the potato farms under controlled conditions.
The second subsystem classifies potato plants along with the weeds Chenopodium album L., Secale cereale L., and Polygonum aviculare L. using the hybrid artificial neural network-imperialist competitive algorithm (ANN-ICA) classifier. After the recording and pre-processing of the objects, 25 features were extracted from each object. In this study, the decision tree algorithm was employed to select the most distinctive features. The eight selected features were area, length, width, perimeter, elongation, compactness, length to perimeter ratio, and image ratio. In order to classify the potato plants and different weeds based on these 8 features, the data were divided into two groups: training and validation data with 2163 samples (15% for validation), and testing data with 1015 samples. To compare the performance of the hybrid ANN-ICA classifier, the meta-heuristic classifier of learning vector quantization (LVQ) and the classic classifier of K-nearest neighbor (K-NN) were used. Sensitivity, specificity, and accuracy were considered in analyzing the performance via the confusion matrix. Except for the potato class, these values were above 91% for the hybrid ANN-ICA method. For the LVQ method, these values were zero for the Secale cereale L. and Polygonum aviculare L. classes, while they were less than 88% for the potato class. Finally, the results showed the superiority of the hybrid ANN-ICA method over the LVQ and K-NN methods, further corroborating the ability of the present method to detect potato plants and the three types of weeds in real time. (C) 2017 Elsevier B.V. All rights reserved. To measure the efficiency of clean energy markets, a multi-scale complexity analysis approach is proposed.
Due to the coexisting characteristics of clean energy markets, the "divide and conquer" strategy is introduced to provide a more comprehensive complexity analysis framework for both overall dynamics and hidden features (at different time scales), and to identify the leading factors contributing to the complexity. In the proposed approach, ensemble empirical mode decomposition (EEMD), a competitive multi-scale analysis tool, is first implemented to capture meaningful features hidden in the original market system. Second, fuzzy entropy, an effective complexity measurement, is employed to analyze both the whole system and its inner features. In the empirical analysis, the nuclear energy and hydropower markets in China and the US are investigated, and some interesting results are obtained. For overall dynamics, the US clean energy markets exhibit a significantly higher complexity level than China's markets, implying market maturity and efficiency of US clean energy relative to China's. For inner features, similar features (in terms of similar time scales) in different markets present similar complexity levels. For different inner features, there are some distinct differences in clean energy markets between the US and China. China's markets are mainly driven by upward long-term trends with low-level complexity, while short-term fluctuations with high-level complexity are the leading features of the US markets. All these results demonstrate that the proposed EEMD-based multi-scale fuzzy entropy approach provides a new analysis tool for understanding the complexity of clean energy markets. (C) 2017 Elsevier B.V. All rights reserved. The Dempster-Shafer (D-S) theory of evidence is introduced to improve fuzzy inference in complex stochastic environments. The Dempster-Shafer based fuzzy set (DFS) is first proposed, together with its union and intersection operations, to capture the principal stochastic uncertainties.
Fuzzy inference is then modified based on the extended Dempster rule of combination. This new approach is able to capture stochastic disturbances acting on the fuzzy membership function, and provides more effective inference under strong stochastic uncertainty. Finally, numerical simulation and experimental prediction of wind speed are conducted to show the potential of the proposed method in stochastic modeling. (C) 2017 Elsevier B.V. All rights reserved. A collaborative two-echelon logistics joint distribution network can be organized through a negotiation process among the logistics service providers or participants in the logistics system, which can effectively reduce crisscross transportation and improve the efficiency of the urban freight transportation system. This study establishes a linear optimization model to minimize the total cost of the two-echelon logistics joint distribution network. An improved ant colony optimization algorithm integrated with a genetic algorithm is presented to serve customer clustering units and solve the model formulation by assigning logistics facilities. A two-dimensional colony encoding method is adopted to generate the initial ant colonies. The improved ant colony optimization combines the merits of the ant colony optimization algorithm and the genetic algorithm, with both global and local search capabilities. Finally, an improved Shapley value model based on cooperative game theory and a cooperative mechanism strategy are presented to obtain the optimal profit allocation scheme and sequential coalitions, respectively, in the two-echelon logistics joint distribution network. An empirical study in Guiyang City, China, reveals that the improved ant colony optimization algorithm is superior to the other three methods in terms of total cost. The improved Shapley value model and a monotonic path selection strategy are applied to calculate the best sequential coalition selection strategy.
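For context, the classical Shapley value that such profit-allocation models build on averages each player's marginal contribution over all join orders. A brute-force sketch with a hypothetical three-company coalition game (the improved model in the study adds refinements not shown here):

```python
from itertools import permutations

def shapley(players, worth):
    """Exact Shapley values: average marginal contribution of each player
    over all join orders. `worth` maps a frozenset coalition to its value."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += worth(coalition | {p}) - worth(coalition)
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in phi.items()}

# hypothetical game: any coalition of two or more companies saves 60 units
alloc = shapley(["A", "B", "C"], lambda s: 60.0 if len(s) >= 2 else 0.0)
# symmetric players -> each is allocated 20.0
```

The factorial enumeration is exact but only practical for small coalitions; larger games are usually sampled.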
The proposed cooperation and profit allocation approaches provide an effective paradigm for logistics companies to share benefits, achieve win-win situations through horizontal cooperation, and improve negotiation power for logistics network optimization. (C) 2017 Elsevier B.V. All rights reserved. This paper treats the control of a multilevel rectifier by applying a soft computing technique, namely fuzzy logic. The latter provides an inexpensive solution for controlling poorly known complex systems by imitating human reasoning. In our case, the proposed intelligent controller aims at eliminating line current harmonics, guaranteeing an improved power factor, reducing the output voltage ripple, and ensuring balance between the two voltages across the two capacitors of the Vienna rectifier. In order to evaluate the performance of the proposed control for the Vienna rectifier, a comparative study in real time via the dSPACE 1104 card has been carried out between the traditional approach based on conventional controllers and the new method using an intelligent controller. The experimental results confirm that the modified control approach guarantees good quality of the source currents, in phase with the grid voltages, and operation of the studied system with a power factor very close to unity. Furthermore, the single fuzzy controller used ensures regulation of the DC bus voltage, minimization of the ripple in the two partial voltages, and reduction of the voltage-balance error for the Vienna rectifier. (C) 2017 Elsevier B.V. All rights reserved. Network filtering is a challenging area in high-speed computer networks, mostly because many filtering rules are required and only a limited time is available for matching these rules.
Therefore, network filters accelerated by field-programmable gate arrays (FPGAs) are becoming common, where fast lookup of filtering rules is achieved by the use of hash tables. It is desirable to be able to fill these tables efficiently, i.e. to achieve a high table-load factor in order to reduce the offline time of the network filter due to rehashing and/or table replacement. A parallel reconfigurable hash function tuned by an evolutionary algorithm (EA) is proposed in this paper for Internet Protocol (IP) address filtering in FPGAs. The EA fine-tunes the reconfigurable hash function for a given set of IP addresses. The experiments demonstrate that the proposed hash function provides high-speed lookup and achieves a higher table-load factor in comparison with conventional solutions. (C) 2017 Elsevier B.V. All rights reserved. In order to obtain better generalization abilities and mitigate the impact of the best and worst individuals during the optimization process, this paper proposes the Bee and Frog Co-Evolution Algorithm (BFCEA), which combines the Mnemonic Shuffled Frog Leaping Algorithm With Cooperation and Mutation (MSFLACM) with an improved Artificial Bee Colony (ABC). A comparative experimental study of different iterative updating strategies was conducted for BFCEA, including the strategy of integrating with ABC, regeneration of the worst frog, and its leaping step. The key technique applies ABC evolution to the first 10 and last 10 frogs in BFCEA; that is, after a certain number G of MSFLACM iterations, a synchronous renewal strategy is applied to these winners and losers so as to avoid becoming trapped in local optima at a later stage. The ABC evolution process is invoked between the completion of the inner iteration of all memes and the outer shuffling of all frogs; the crossover operation is removed from MSFLACM because it has little effect on run time and convergence in this novel algorithm.
Besides, in ABC, the scout bee is generated by Cauchy mutation instead of at random. The performance of the proposed approach is examined on 16 well-known numerical benchmark functions, and the obtained results are compared with the basic Shuffled Frog Leaping Algorithm (SFLA), ABC, and four other variants. The experimental results and a related application in cloud resource scheduling show that the proposed algorithm is effective and outperforms the other variants in terms of solution quality and convergence, and that the improved variants can obtain a lower degree of load imbalance and a relatively stable resource scheduling strategy in complicated cloud computing environments. (C) 2017 Elsevier B.V. All rights reserved. The integrated piecewise linear representation and weighted support vector machine (PLR-WSVM) has shown success in the prediction of stock trading signals. However, PLR-WSVM has drawbacks, particularly in a real-world setting. For example, the profitability of PLR-WSVM is unstable; it is not reasonable to specify the same threshold value for all stocks in PLR; and critical errors in trading signals may significantly reduce the profit. In this paper, we make a set of improvements to PLR-WSVM. First, most absolute technical indicators in the input variables are substituted with relative indicators, since the relative indicators are generally more helpful in predicting trading signals. Second, a four-class prediction problem is converted into a two-class problem in which one class is a turning point (TP) and the other is an ordinary point, and prior domain knowledge is exploited to identify either buying or selling signals from TPs. Third, a delay-one-day strategy (DODS) is proposed to further correct the predicted trading signals. DODS reduces the critical errors occurring in PLR-WSVM. Finally, a procedure for selecting the threshold in PLR is provided. The threshold is automatically selected from a given percentage of TPs in a training set.
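A simplified stand-in for this kind of threshold-based turning-point labeling: mark a local extremum as a TP only when the swing from the previous TP exceeds a fractional threshold (illustrative logic and data, not the authors' exact PLR segmentation):

```python
def turning_points(prices, threshold=0.03):
    """Indices of local extrema whose move from the previous turning point
    exceeds `threshold` as a fraction of the previous turning-point price."""
    tps = []
    last_tp_price = prices[0]
    for i in range(1, len(prices) - 1):
        is_max = prices[i] > prices[i - 1] and prices[i] > prices[i + 1]
        is_min = prices[i] < prices[i - 1] and prices[i] < prices[i + 1]
        if (is_max or is_min) and \
                abs(prices[i] - last_tp_price) / last_tp_price >= threshold:
            tps.append(i)
            last_tp_price = prices[i]
    return tps

tps = turning_points([100, 101, 105, 103, 99, 98, 102, 104, 101])
# indices 2 (peak 105), 5 (trough 98) and 7 (peak 104) qualify
```

Raising the threshold keeps fewer, larger swings, which is why a percentage-of-TPs target translates directly into a threshold choice.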
The percentage of TPs is easier for investors to understand than the threshold. We conduct an experimental study over 20 stocks, and the results confirm the expected performance of the improved PLR-WSVM. More importantly, the improved PLR-WSVM provides steady profits on average over the stocks of interest, with acceptable retracements. (c) 2017 Elsevier B.V. All rights reserved. The aim of the present study is to select a set of higher order spectral features for an emotion/stress recognition system. Fifty higher order spectral features, 28 based on the bispectrum and 22 on the bicoherence, were extracted from the speech signal and its glottal waveform. These features were combined with the Inter-Speech 2010 features to further improve the recognition rates. Feature subset selection (FSS) was carried out in this work with the objective of maximizing the subject-independent emotion recognition rate with minimal features. The FSS contains two stages: multi-cluster feature selection was adopted in Stage 1 to reduce the feature space and identify a relevant feature subset from the Inter-Speech 2010 features. In Stage 2, biogeography based optimization (BBO), particle swarm optimization (PSO), and the proposed BBO_PSO hybrid optimization were performed to further reduce the dimension of the feature space and identify the most relevant feature subset, which has higher discrimination ability to distinguish different emotional states. The proposed method was tested on three databases: the Berlin emotional speech database (BES), the Surrey audio-visual expressed emotion database (SAVEE), and Speech under simulated and actual stress (SUSAS), simulated domain. The proposed feature set was evaluated with subject independent (SI), subject dependent (SD), gender dependent male (GD-male), gender dependent female (GD-female), text independent pairwise speech (TIDPS), and text independent multi-style speech (TIDMSS) experiments by using SVM and ELM classifiers.
From the results obtained, it is evident that the proposed method attained accuracies of 93.25% (SI), 100% (SD), 93.75% (GD-male), and 97.58% (GD-female) for BES; 62.38% (SI) and 76.19% (SD) for SAVEE; and 90.09% (TIDMSS), 97.04% (TIDPS - Angry vs. Neutral), 98.89% (TIDPS - Lombard vs. Neutral), and 99.07% (TIDPS - Loud vs. Neutral) for SUSAS. (c) 2017 Elsevier B.V. All rights reserved. An information system is an important model in the field of artificial intelligence. A distributed fc-decision information system is an information system with distributed fuzzy data. This paper investigates a distributed fc-decision information system, proposes a multi-granulation decision-theoretic method for this system, and gives an application in medical diagnosis. The fuzzy T-cos-equivalence relation induced by an fc-decision information system by means of the Gaussian kernel method is first obtained, where the Gaussian kernel is used to compute the similarity between objects in this system. Then, the approximately equal relation (ae-relation, for short) on fuzzy sets is defined, and based on this ae-relation, the ae-granular structure induced by the fuzzy T-cos-equivalence relation is given. Next, a multi-granulation decision-theoretic rough set method for distributed fc-decision information systems is proposed by means of the inclusion degree of sets from the viewpoint of multi-granulation. Finally, an example of medical diagnosis is employed to illustrate the effectiveness of the proposed method. These results will be helpful for dealing with distributed fuzzy data and significant for establishing a framework of multi-granulation decision-theoretic rough sets in distributed fc-decision information systems. (c) 2017 Elsevier B.V. All rights reserved. Computing techniques including fuzzy logic have been successfully applied to wireless body area networks (WBANs). However, most of the existing research relies on manual design of the fuzzy logic controller (FLC).
To address this issue, in this paper we propose an evolutionary approach to automate the design of FLCs for cross-layer medium access control in WBANs. With the goal of improving network reliability while keeping the communication delay at a low level, we have particularly studied the usefulness of three coding schemes with different levels of flexibility during the evolutionary design process. The influence of fitness functions that measure the effectiveness of each possible FLC design has also been examined carefully in order to achieve a good balance between reliability and performance. Moreover, we have utilised surrogate models to improve the efficiency of the design process. In consideration of practical usefulness, we have further identified two main design targets. The first target is to design effective FLCs for a specific network configuration. The second target focuses on designing FLCs that function across a wide range of network settings. In order to examine the usefulness of our design approach, we have utilised two widely used evolutionary algorithms, i.e. particle swarm optimisation (PSO) and differential evolution (DE). The FLC designed by our approach is also shown to outperform some related algorithms as well as the IEEE 802.15.4 standard. (c) 2017 Elsevier B.V. All rights reserved. The search for binary sequences with a high figure of merit, known as the low autocorrelation binary sequence (labs) problem, represents a formidable computational challenge. To mitigate the computational constraints of the problem, we consider solvers that accept odd values of sequence length L and return solutions for skew-symmetric binary sequences only - with the consequence that not all best solutions under this constraint will be optimal for each L.
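The figure of merit in the labs problem is F = L^2 / (2E), where E is the sum of squared aperiodic autocorrelations of the ±1 sequence; a direct sketch, checked against the Barker sequence of length 13:

```python
def merit_factor(s):
    """Merit factor F = L^2 / (2E) of a +/-1 sequence, where
    E = sum over k = 1..L-1 of C_k^2 and C_k is the aperiodic
    autocorrelation C_k = sum_i s_i * s_{i+k}."""
    L = len(s)
    E = sum(sum(s[i] * s[i + k] for i in range(L - k)) ** 2
            for k in range(1, L))
    return L * L / (2.0 * E)

# Barker sequence of length 13: every off-peak correlation has magnitude <= 1
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
F = merit_factor(barker13)  # 169/12 = 14.083...
```

Solvers for the labs problem maximize F, typically via incremental updates of the C_k values rather than this quadratic recomputation.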
In order to improve both the search for the best merit factor and the asymptotic runtime performance, we instrumented three stochastic solvers: the first two are state-of-the-art solvers that rely on variants of memetic and tabu search (lssMAts and lssRRts); the third solver (lssOrel) organizes the search as a sequence of independent contiguous self-avoiding walk segments. By adapting a rigorous statistical methodology to performance testing of all three combinatorial solvers, experiments show that the solver with the best asymptotic average-case performance, lssOrel_8 = 0.000032 * 1.1504^L, has the best chance of finding solutions that improve, as L increases, the figures of merit reported to date. The same methodology can be applied to engineering new labs solvers that may return merit factors even closer to the conjectured asymptotic value of 12.3248. (c) 2017 Elsevier B.V. All rights reserved. In this paper, a novel differential evolution (DE) algorithm is proposed to improve the search efficiency of DE by employing the information of individuals to adaptively set the parameters of DE and update the population. First, a combined mutation strategy is developed by using two mixed mutation strategies with a prescribed probability. Second, the fitness values of original and guiding individuals are used to guide the parameter setting. Finally, a diversity-based selection strategy is designed by combining the greedy selection strategy with a new weighted fitness value defined from the fitness values and positions of target and trial individuals. The proposed algorithm is compared with eight existing algorithms on the CEC 2005 and 2014 contest test instances, and is applied to the Spread Spectrum Radar Polyphase Code Design problem. Experimental results show that the proposed algorithm is very competitive. (C) 2017 Published by Elsevier B.V. In the design of a financial bankruptcy prediction model, financial ratio selection and classifier design play major roles.
Methodologies based on expert opinion, statistical theory, and computational intelligence techniques have been widely applied. In this study, a hybrid structure integrating statistical theory and a computational intelligence technique was developed using a genetic algorithm (GA) with statistical measurements and fuzzy logic based fitness functions for key ratio selection. A fuzzy clustering algorithm was used for the classifier design. In the experiments, two financial ratio sets, one extracted from the suggestions of other studies and the other obtained by using the GA toolbox in the SAS statistical software package, were applied to examine the proposed ratio selection schemes. For classifier design, the developed fuzzy classifier was compared with the well-known BPNN classifier frequently used in other studies. In addition, a comparison between the developed hybrid structure and other widely applied structures is given. Experimental results based on one to four years of financial data prior to the occurrence of bankruptcy were used to evaluate the performance of the proposed prediction model. (C) 2017 Elsevier B.V. All rights reserved. In this study, we develop and test a local rainfall (precipitation) prediction system based on artificial neural networks (ANNs). Our system can automatically obtain meteorological data used for rainfall prediction from the Internet. Meteorological data from equipment installed at a local point are also shared among users of our system. The final goal of the study was the practical use of "big data" on the Internet, as well as the sharing of data among users, for accurate rainfall prediction. We predicted local rainfall in regions of Japan using data from the Japan Meteorological Agency (JMA).
As the neural network (NN) models for the system, we used a multi-layer perceptron (MLP) with a hybrid algorithm composed of back-propagation (BP) and random optimization (RO) methods, and a radial basis function network (RBFN) with a least squares method (LSM), and compared the prediction performance of the two models. Precipitation (total amount of rainfall above 0.5 mm between 12:00 and 24:00 JST (Japan Standard Time)) at Matsuyama, Sapporo, and Naha in 2012 was predicted by the NNs using meteorological data for each city from 2011. The volume of precipitation was also predicted (total amount above 1.0 mm between 17:00 and 24:00 JST) at 16 points in Japan and compared with predictions by the JMA in order to verify the universality of the proposed system. The experimental results showed that precipitation in Japan can be predicted by the proposed method, and that the prediction performance of the MLP model was superior to that of the RBFN model for the rainfall prediction problem. However, the results were not better than those generated by the JMA. Finally, heavy rainfall (above 10 mm/h) on summer (Jun.-Sep.) afternoons (12:00-24:00 JST) in Tokyo in 2011 and 2012 was predicted using data for Tokyo between 2000 and 2010. The results showed that the volume of precipitation could be accurately predicted and that the catch rate of heavy rainfall was high. This suggests that the proposed system can predict unexpected local heavy rainfalls known as "guerrilla rainstorms." (C) 2017 Elsevier B.V. All rights reserved. The field of Search-Based Software Engineering (SBSE) has widely utilized Multi-Objective Evolutionary Algorithms (MOEAs) to solve complex software engineering problems. However, the use of such algorithms can be a hard task for the software engineer, mainly due to the significant range of parameter and algorithm choices. To help in this task, the use of hyper-heuristics is recommended.
Hyper-heuristics can select or generate low-level heuristics while optimization algorithms are executed, and thus can be applied generically. Despite their benefits, we find only a few works using hyper-heuristics in the SBSE field. Considering this fact, we describe HITO, a Hyper-heuristic for the Integration and Test Order Problem, which adaptively selects search operators while MOEAs are executed, using one of two selection methods: Choice Function and Multi-Armed Bandit. The experimental results show that HITO can outperform the traditional MOEAs NSGA-II and MOEA/DD. HITO is also a generic algorithm, since the user does not need to select crossover and mutation operators, nor adjust their parameters. (C) 2017 Elsevier B.V. All rights reserved. In this paper, in order to search for the global optimum solution with a very fast convergence speed across the whole search space, we propose a partitioned and cooperative quantum-behaved particle swarm optimization (SCQPSO) algorithm. Auxiliary swarms and a partitioned search space are introduced to increase the population diversity. Cooperative theory is introduced into the QPSO algorithm to change the updating mode of the particles, in order to guarantee that the algorithm balances effectiveness and simplicity. First, we explain how this method leads to enhanced population diversity and an improved algorithm over previous strategies, and support this with comparative experiments using five benchmark test functions and five shifted complex functions. We then demonstrate a practical application of the proposed algorithm, by showing how it can be used to optimize the parameters of Otsu image segmentation for processing medical images. The results show that the proposed SCQPSO algorithm outperforms the other improved QPSO variants in terms of the quality of the solution, and performs better for image segmentation than the QPSO, sunCQPSO, and CCQPSO algorithms.
(C) 2017 Elsevier B.V. All rights reserved.

Multi-class classification problems can be addressed by using a decomposition strategy. One of the most popular decomposition techniques is the One-vs-One (OVO) strategy, which consists of dividing a multi-class classification problem into easier-to-solve binary sub-problems, one for each pair of classes. To account for the presence of classes with different costs, in this paper we examine the behavior of an ensemble of Cost-Sensitive Back-Propagation Neural Networks (CSBPNN) with OVO binarization techniques for multi-class problems. To implement this, the original multi-class cost-sensitive problem is decomposed into as many sub-problems as there are pairs of classes, and each sub-problem is learnt in an independent manner using CSBPNN. Then a combination method is used to aggregate the binary cost-sensitive classifiers. To verify the synergy of the binarization technique and CSBPNN for multi-class cost-sensitive problems, we carry out a thorough experimental study. Specifically, we first develop the study to check the effectiveness of the OVO strategy for multi-class cost-sensitive learning problems. Then, we develop a comparison of several well-known aggregation strategies in our scenario. Finally, we explore whether further improvement can be achieved through the management of non-competent classifiers. The experimental study is performed with three types of cost matrices, and proper statistical analysis is employed to extract the meaningful findings. (C) 2017 Elsevier B.V. All rights reserved.

Designing suitable behavioral rules of agents so as to generate realistic behaviors is a fundamental and challenging task in many forms of computational modeling. This paper proposes a novel methodology to automatically generate a descriptive model, in the form of behavioral rules, from video data of human crowds.
In the proposed methodology, the problem of modeling crowd behaviors is formulated as a symbolic regression problem, and self-learning gene expression programming is utilized to solve the problem and automatically obtain behavioral rules that match the data. To evaluate its effectiveness, we apply the proposed method to generate a model from a video dataset recorded in Switzerland and then test the generality of the model by validating it against video data from the United States. The results demonstrate that, based on the observed movement of people in one scenario, the proposed methodology can automatically construct a general model capable of describing the crowd dynamics of another scenario in a different context (e.g., Switzerland vs. U.S.), as long as the crowd behavior patterns are similar. (C) 2017 Elsevier B.V. All rights reserved.

As a new service-oriented smart manufacturing paradigm, cloud manufacturing (CMfg) aims at the full sharing and circulation of manufacturing capabilities towards socialization, in which composite CMfg service optimal selection (CCSOS) involves selecting appropriate services to be combined into a composite complex service to fulfill a customer need or a business requirement. Such composition is one of the most difficult combinatorial optimization problems, with NP-hard complexity. For such an NP-hard CCSOS problem, this study proposes a new approach, called the multi-population parallel self-adaptive differential artificial bee colony (MPsaDABC) algorithm. The proposed algorithm adopts multiple parallel subpopulations, each of which evolves according to different mutation strategies borrowed from differential evolution (DE) to generate perturbed food sources for the foraging bees, and the control parameters of each mutation strategy are adapted independently. Moreover, the size of each subpopulation is dynamically adjusted based on information derived from the search process.
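DE mutation strategies of the kind borrowed by MPsaDABC can be sketched as follows. This shows only two classic strategies (DE/rand/1 and DE/best/1), not the full self-adaptive multi-population scheme:

```python
import random

def de_rand_1(pop, i, F=0.5):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 != i."""
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]

def de_best_1(pop, i, best, F=0.5):
    """DE/best/1 mutation: v = x_best + F * (x_r1 - x_r2)."""
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    return [best[d] + F * (pop[r1][d] - pop[r2][d])
            for d in range(len(best))]
```

In a self-adaptive scheme, the scale factor F and the choice of strategy would be adapted per subpopulation rather than fixed as here.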
Experiments on CCSOS problems of different scales were conducted to validate the effectiveness of the proposed algorithm, and the experimental results show that the proposed algorithm has superior performance over other hybrid and single-population algorithms, especially for complex CCSOS problems. (C) 2017 Elsevier B.V. All rights reserved.

In this paper, we propose a novel life-long learning framework, which constantly evolves with changing data distributions, learning new knowledge while retaining some old knowledge. In many practical systems, data from the past is still useful but no longer available. Therefore, a question arises of how to update the model based on both the new data and the current model. To address this issue, our framework is based on an ensemble method with multiple sub-classifiers, independent of the base classifier type. When new data is processed, new sub-classifiers are generated accordingly. The classifiers are then dynamically combined using a decision tree, together with a newly proposed pruning method to prevent overfitting and eliminate outdated models. Guarantees are provided for the combination method. Experiments indicate that the framework achieves good performance when the data changes with time, and has better accuracy than existing transfer learning, incremental learning, and stream data mining methods. (C) 2017 Elsevier B.V. All rights reserved.

In this research, we propose an intelligent decision support system for acute lymphoblastic leukaemia (ALL) diagnosis using microscopic images. Two Bare-bones Particle Swarm Optimization (BBPSO) algorithms are proposed to identify the most significant discriminative characteristics of healthy and blast cells to enable efficient ALL classification. The first BBPSO variant incorporates accelerated chaotic search mechanisms of food chasing and enemy avoidance to diversify the search and mitigate the premature convergence of the original BBPSO algorithm.
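The canonical bare-bones PSO sampling step that both variants build on can be sketched as follows; the chaotic food-chasing and enemy-avoidance mechanisms described above are not shown:

```python
import random

def bbpso_step(pbests, gbest):
    """Canonical bare-bones PSO update: each coordinate is sampled from a
    Gaussian whose mean is the midpoint of the personal and global bests and
    whose standard deviation is the distance between them (no velocity term)."""
    new_positions = []
    for pb in pbests:
        pos = []
        for d in range(len(gbest)):
            mu = 0.5 * (pb[d] + gbest[d])
            sigma = abs(pb[d] - gbest[d])
            pos.append(random.gauss(mu, sigma))
        new_positions.append(pos)
    return new_positions
```

When a particle's personal best coincides with the global best, the sampling variance collapses to zero, which is the premature-convergence issue the chaotic mechanisms are designed to mitigate.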
The second BBPSO variant embeds both of the abovementioned new search mechanisms in a subswarm-based search. Evaluated on the ALL-IDB2 database, the two proposed algorithms achieve superior geometric mean performances of 94.94% and 96.25%, respectively, and significantly outperform other metaheuristic search and related methods for ALL classification. (C) 2017 The Author(s). Published by Elsevier B.V.

In this work, a new stochastic computing technique is developed to study the nonlinear dynamics of Troesch's problem by designing mathematical models of Morlet Wavelet Artificial Neural Networks (MW-ANNs) optimized with a Genetic Algorithm (GA) integrated with Sequential Quadratic Programming (SQP). The differential equation model for the MW-ANNs is designed for Troesch's system by incorporating a windowing kernel based on Morlet wavelets as an activation function, and these networks are used to define a fitness function for Troesch's system in the mean-squared sense. The unknown adjustable parameters of the MW-ANNs are trained initially by an effective global search using GAs, hybridized with SQP for rapid local refinement of the results. The proposed scheme is evaluated on Troesch's problem for small and large values of the critical parameter in the system. Comparison of the proposed results with standard reference solutions of the Adams method shows good agreement. Validation of the accuracy and convergence of the proposed scheme is made using statistical analysis based on a sufficiently large number of independent runs, in terms of the performance measures of mean absolute deviation and root mean squared error. (C) 2017 Elsevier B.V. All rights reserved.

In the globalized world, the fierce competition ruling almost all business sectors hits airline companies especially hard due to the global nature of the industry.
Thus, maintaining business operations in a cost-effective manner is of paramount importance in this destructive business environment. While pursuing this goal, a special focus on Maintenance, Repair and Overhaul (MRO) operations is indispensable, considering that a significant proportion of expenses in the industry originate from them. Since MRO departments are responsible for the maintenance, repair, and overhaul of aircraft and engines under very strict standards, a smooth, clean-cut supply chain not only provides a solid technical ground for daily operations but also helps to gain a comparative advantage in cost leadership which, in turn, results in high customer satisfaction. Obviously, a key component of this chain is designing a robust and reliable Supplier Performance Evaluation (SPE) methodology in which numerous tangible and intangible evaluation criteria come into play, reflecting the multifaceted character of the decision problem. This work proposes a three-phase hybrid approach to address the SPE problem in the aviation industry, where suppliers are obliged to follow tight tolerances regarding quality and delivery times. Starting with the selection of the most significant evaluation criteria, in the second stage an Interval Type-2 Fuzzy (IT2F) Analytic Hierarchy Process (AHP) is employed to determine their relative importance. These weights are then given as input to an IT2F Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method. We used the IT2F extensions because some of the evaluation criteria are subjective and qualitative in nature. To prove the merit of the proposed methodology, an application to the SPE problem at Turkish Technic Inc., a subsidiary of Turkish Airlines, is presented. (C) 2017 Elsevier B.V. All rights reserved.
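The closeness-coefficient computation at the core of TOPSIS can be sketched as follows. This is the crisp version, shown for orientation only; the approach above operates on interval type-2 fuzzy numbers instead:

```python
import math

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS closeness scores.

    matrix[i][j]: rating of alternative i on criterion j;
    weights[j]: criterion weight; benefit[j]: True if larger is better.
    """
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal solutions per criterion.
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_plus = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_minus = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        # Relative closeness to the ideal solution.
        scores.append(d_minus / (d_plus + d_minus))
    return scores
```

Suppliers are then ranked by descending closeness score; in the IT2F extension, ratings, weights, and distances are all replaced by their fuzzy counterparts.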
In this paper, a novel soft set model called the Z-soft fuzzy rough set is presented by combining three uncertainty models: soft sets, rough sets and fuzzy sets; it is an important generalization of Z-soft rough fuzzy sets. As a novel Z-soft fuzzy rough set, its applications to the corresponding decision making problems are established. It is noteworthy that the underlying concepts keep the features of classical Pawlak rough sets. Moreover, this novel approach involves fewer calculations when the theory is applied to algebraic structures. In particular, an approach to decision making problems with respect to Z-soft fuzzy rough sets is proposed, and the validity of the decision making methods is verified by a given example. At the same time, an overview of techniques based on several types of soft set models is provided. Finally, a numerical experimentation algorithm is developed, in which comparisons among three types of hybrid soft set models are analyzed. (C) 2017 Elsevier B.V. All rights reserved.

Developing a precise dynamic model is a critical step in the design and analysis of an overhead crane system. To achieve this objective, we present a novel radial basis function neural network (RBF-NN) modeling method. One challenge for the RBF-NN modeling method is how to determine the RBF-NN parameters reasonably. Although the gradient method is widely used to optimize the parameters, it may converge slowly and may not reach the optimum. Therefore, we propose a cuckoo search algorithm with a membrane communication mechanism (mCS) to optimize the RBF-NN parameters. In mCS, the membrane communication mechanism is employed to maintain population diversity, and a chaotic local search strategy is adopted to improve the search accuracy. The performance of mCS is confirmed on several benchmark functions, and analyses of the effect of the communication set size are carried out.
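For orientation, the standard cuckoo search skeleton that mCS extends (by adding membrane communication and chaotic local search on top) can be sketched as follows; the Lévy-flight scale alpha and the abandonment fraction pa are assumed values:

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, nests, iters=100, pa=0.25, alpha=0.01):
    """Minimize f with standard cuckoo search over a list of candidate nests."""
    for _ in range(iters):
        # Generate a cuckoo via a Levy flight from a random nest.
        i = random.randrange(len(nests))
        trial = [x + alpha * levy_step() for x in nests[i]]
        # Replace a randomly chosen nest if the trial is better.
        j = random.randrange(len(nests))
        if f(trial) < f(nests[j]):
            nests[j] = trial
        # Abandon a fraction pa of the worst nests by random re-initialization.
        nests.sort(key=f)
        for k in range(len(nests) - int(pa * len(nests)), len(nests)):
            nests[k] = [x + random.uniform(-1, 1) for x in nests[k]]
    return min(nests, key=f)
```

The membrane communication of mCS partitions the nests into compartments that exchange solutions periodically, which this flat-population skeleton omits.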
Then mCS is applied to optimize the RBF-NN models for modeling the overhead crane system. The experimental results demonstrate the efficiency and effectiveness of mCS through comparison with the standard cuckoo search algorithm (CS) and the gradient method. (C) 2017 Elsevier B.V. All rights reserved.

Thresholding is a simple and effective technique commonly used for image segmentation. The computational time of multi-level thresholding increases significantly with the number of levels because of exhaustive searching, leading to exponential growth in computational complexity. Hence, in this paper, features of quantum computing are exploited to introduce four different quantum-inspired meta-heuristic techniques that accelerate the execution of multi-level thresholding: a Quantum Inspired Genetic Algorithm, Quantum Inspired Simulated Annealing, Quantum Inspired Differential Evolution and Quantum Inspired Particle Swarm Optimization. The effectiveness of the proposed techniques is demonstrated in comparison with the backtracking search optimization algorithm, the composite DE method, the classical genetic algorithm, classical simulated annealing, classical differential evolution and classical particle swarm optimization on ten real-life true colour images. The experimental results are presented in terms of the optimal threshold values for each primary colour component, the fitness value and the computational time (in seconds) at different levels. Thereafter, the quality of thresholding is judged in terms of the peak signal-to-noise ratio for each technique. Moreover, a statistical test, the Friedman test, and a median-based estimation among all techniques are conducted separately to judge the preeminence of one technique among them.
Finally, the performance of each technique is visually judged from convergence plots for all test images, which confirm that the proposed quantum-inspired particle swarm optimization technique outperforms the other techniques. (C) 2016 Elsevier B.V. All rights reserved.

The mobile industry promotes new products every year. It is reported that consumers in high-income countries typically replace their mobile phones at intervals of between 12 and 18 months. We consider how to transfer data from one mobile device to another and how to share data between different mobile apps. We build a cloud service to combine contacts from different mobile devices and synchronize contacts across devices. Algorithms and data structures are designed to meet the needs of data combination and synchronization. The results show that our method provides more efficient results than other solutions. (C) 2016 Published by Elsevier B.V.

This paper is the first of two papers entitled "Weighted Superposition Attraction (WSA)", which is based on two basic mechanisms, "superposition" and "attracted movement of agents", that are observable in many systems. Dividing the work into two papers became necessary because of the comprehensive content of each part; a single paper would have required a far more compact presentation and would not have been as effective as we desired. In many natural phenomena it is possible to compute the superposition, or weighted superposition, of active fields such as light sources, electric fields, sound sources, heat sources, etc.; the same may also be possible for social systems. An agent (particle, human, electron, etc.) may be supposed to move towards a superposition if it is attractive to it. As the system's status changes, the superposition also changes, so it needs to be recomputed.
This is the main idea behind the WSA algorithm, which attempts to realize this superposition principle, in combination with the attracted movement of agents, as a search procedure for solving optimization problems in an effective manner. In this part, the performance of the proposed WSA algorithm is tested on well-known unconstrained continuous optimization functions through a set of computational studies. The comparison with some other search algorithms is performed in terms of solution quality and computational time. The experimental results clearly indicate the effectiveness of the WSA algorithm. (C) 2015 Elsevier B.V. All rights reserved.

In recent decades, linear quadratic optimal control problems have seen great advances in both theoretical and practical respects. For a linear quadratic optimal control problem, it is well known that the optimal feedback control is characterized by the solution of a Riccati differential equation, which cannot be solved exactly in many cases, and sometimes the optimal feedback control is a complex time-dependent function. In this paper, we introduce a parametric optimal control problem for an uncertain linear quadratic model and propose an approximation method to solve it that simplifies the expression of the optimal control. A theorem is given to ensure the solvability of the optimal parameter. Besides, the analytical expressions of the optimal control and optimal value are derived by using the proposed approximation method. Finally, an inventory-promotion problem is treated to illustrate the efficiency of the results and the practicability of the model. (C) 2016 Elsevier B.V. All rights reserved.

Uncertain coalitional games deal with situations in which the transferable payoffs are uncertain variables. The uncertain core has been proposed as the solution of an uncertain coalitional game.
This paper goes further by presenting two definitions of the uncertain Shapley value: the expected Shapley value and the alpha-optimistic Shapley value. Meanwhile, some characterizations of the uncertain Shapley value are investigated. Finally, as an application, the uncertain Shapley value is used to solve a profit allocation problem of a supply chain alliance. (C) 2016 Elsevier B.V. All rights reserved.

Based on uncertainty theory, we investigate the relations among efficiency concepts of multiobjective programming (MOP) with uncertain vectors. We first propose the uncertain MOP model and study its convexity. Then, we define different efficiency concepts, such as expected-value efficiency and expected-value proper efficiency, and establish their relations under the assumed conditions, which are illustrated through two numerical examples. Finally, in the uncertain environment, we apply the theoretical results to a redundancy allocation problem with two objectives in repairable parallel-series systems, and discuss how to obtain different types of efficient solutions according to the decision-maker's preferences. (C) 2016 Elsevier B.V. All rights reserved.

Engineering components and systems are often subject to multiple dependent competing failure processes (MDCFPs). MDCFPs have been well studied in the literature, and various models have been developed to predict the reliability of MDCFPs. In practice, however, due to limited resources, it is often hard to estimate the precise values of the parameters in an MDCFP model. Hence, the predicted reliability is affected by epistemic uncertainty. The probability box (P-box) is applied in this paper to describe the effect of epistemic uncertainty on MDCFP models. A dimension-reduced sequential quadratic programming (DRSQP) method is developed for the construction of the P-box. A comparison to the conventional construction method shows that the DRSQP method reduces the computational costs required for P-box construction.
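The idea of a P-box, bounding a quantity of interest when a model parameter is only known to lie in an interval, can be illustrated with a deliberately simple single-parameter example (a toy exponential reliability model, not the MDCFP model above):

```python
import math

def reliability_pbox(t, lam_lo, lam_hi):
    """Bounds on R(t) = exp(-lambda * t) when the failure rate lambda is only
    known to lie in [lam_lo, lam_hi].

    Because R(t) is monotonically decreasing in lambda, the interval endpoints
    give the lower and upper envelopes of the P-box at time t.
    """
    return math.exp(-lam_hi * t), math.exp(-lam_lo * t)
```

Evaluating these envelopes over a grid of t values traces out the full P-box; for non-monotonic, multi-parameter models such as MDCFPs, finding the envelopes becomes an optimization problem, which is what the DRSQP method addresses.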
Since epistemic uncertainty reflects the unsureness in the predicted reliability, a decision maker might want to reduce it by investing resources to estimate the value of each model parameter more accurately. A two-stage optimization framework is developed to allocate the resources among the parameters and ensure that epistemic uncertainty is reduced in the most efficient way. Finally, the developed methods are applied to a real case study, a spool valve, to demonstrate their validity. (C) 2016 Elsevier B.V. All rights reserved.

Uncertainty theory has shown great advantages in solving many nondeterministic problems, one of which is the degree-constrained minimum spanning tree (DCMST) problem in uncertain networks. Based on different criteria for ranking uncertain variables, three types of DCMST models are proposed here: the uncertain expected-value DCMST model, the uncertain alpha-DCMST model and the uncertain most-chance DCMST model. In this paper, we give their uncertainty distributions and fully characterize the uncertain expected-value DCMST and uncertain alpha-DCMST in uncertain networks. We also discover an equivalence relation between the uncertain alpha-DCMST of an uncertain network and the DCMST of the corresponding deterministic network. Finally, a related genetic algorithm is proposed to solve the three models, and some numerical examples are provided to illustrate its effectiveness. (C) 2016 Elsevier B.V. All rights reserved.

Reliability allocation is one of the most critical tasks in system reliability design, the result of which directly influences a product's quality and the robustness of the system. Traditionally, reliability allocation has used the feasibility-of-objectives (FOO) method to perform system reliability allocation at the beginning of the product design stage.
However, the FOO method has several shortcomings; for example, it requires that the values of the reliability allocation factors be single linguistic variables, and it does not consider the ordered weight of the reliability allocation factors. To address this issue, this paper integrates hesitant fuzzy linguistic term sets and minimal-variance OWGA weights to effect flexible allocation of system reliability. To verify the efficacy of the proposed approach, an example of reliability allocation in a transceiver system is provided to illustrate its application, and the results of this method are compared with those of the FOO and fuzzy allocation methods. We conclude that the proposed approach is more accurate and performs flexible allocation of system reliability. (C) 2016 Elsevier B.V. All rights reserved.

Absolute deviation is a commonly used risk measure, which has attracted much attention in portfolio optimization. The existing mean-absolute deviation models are devoted to either stochastic portfolio optimization or fuzzy portfolio optimization. However, practical investment decision problems often involve a mixture of randomness and fuzziness, such as stochastic returns with fuzzy information. Thus it is necessary to model the portfolio selection problem in such a hybrid uncertain environment. In this paper, we employ random fuzzy variables to describe the stochastic return on an individual security with ambiguous information. We first define the absolute deviation of a random fuzzy variable and then employ it as a risk measure to formulate mean-absolute deviation portfolio optimization models. To find the optimal portfolio, we design a random fuzzy simulation and a simulation-based genetic algorithm to solve the proposed models. Finally, a numerical example on synthetic data is presented to illustrate the validity of the method. (C) 2016 Elsevier B.V. All rights reserved.
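The crisp mean-absolute deviation risk measure that these models generalize can be sketched over historical return scenarios as follows; the models above replace such scenarios with random fuzzy variables:

```python
def mean_absolute_deviation(returns, weights):
    """Mean absolute deviation of a portfolio over historical scenarios.

    returns[t][i]: return of asset i in scenario t;
    weights[i]: portfolio weight of asset i.
    """
    # Portfolio return in each scenario.
    port = [sum(w * r for w, r in zip(weights, row)) for row in returns]
    mean = sum(port) / len(port)
    # Average absolute distance from the mean return.
    return sum(abs(p - mean) for p in port) / len(port)
```

A mean-absolute deviation model then minimizes this quantity subject to a target mean return and budget constraints; in the random fuzzy setting, the scenario average is replaced by simulation over the random fuzzy return variables.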
In this paper we tackle a variant of the job shop scheduling problem with uncertain task durations modelled as fuzzy numbers. Our goal is to simultaneously minimise the schedule's fuzzy makespan and maximise its robustness. To this end, we consider two measures of solution robustness: a predictive one, computed prior to the schedule's execution, and an empirical one, measured at execution time. To optimise both the expected makespan and the predictive robustness of the fuzzy schedule, we propose a multiobjective evolutionary algorithm combined with a novel dominance-based tabu search method. The resulting hybrid algorithm is then evaluated on existing benchmark instances, showing its good behaviour and the synergy between its components. The experimental results also serve to analyse the goodness of the predictive robustness measure, in terms of its correlation with simulations of the empirical measure. (C) 2016 Elsevier B.V. All rights reserved.

We consider a contract-design problem for two competing heterogeneous suppliers working with a common retailer. The retailer's type, low-volume or high-volume, is unknown to the suppliers. One supplier has a high variable cost and a low fixed cost, whereas the other has a low variable cost and a high fixed cost, and their variable costs are uncertain. They sell the same products to the retailer, and each supplier offers the retailer a menu of contracts. The retailer chooses the contract that maximizes her alternative profit based on her confidence level instead of her expected profit. In this setting, we find that the retailer's optimal order quantity is determined by the inverse distribution of the external demand and the confidence level. Furthermore, higher confidence levels correlate with lower order quantities. We also show that the equilibrium contract menus depend on the magnitudes of the confidence level and the high fixed cost.
Importantly, if the confidence level of the supply chain tends to 0 or 1, the supplier with the low fixed cost possesses a competitive advantage over the other supplier. In some cases, the supplier with the low fixed cost may choose not to serve the high-volume retailer to avoid excessive information rent. (C) 2016 Elsevier B.V. All rights reserved.

The use of potentially hazardous materials has aroused wide concern around the world. In order to reduce the hazard of these materials, some non-profit organizations (NPOs) devote themselves to the substitution of potentially hazardous materials. This paper studies a potentially hazardous material substitution problem in a bi-level decision-making model with an NPO and two competitive firms. The NPO, the leader, aims to prompt the two firms to instantly substitute a kind of potentially hazardous material. This purpose is achieved by raising the material's market sensitivity, which is determined by an effort exerted by the NPO subject to an uncertain shock. In this uncertain environment, the two firms, the followers, play a static Nash game with strategies of substituting instantly or substituting with delay. We analyze the effects of the uncertain shock's volatility on the decisions of the NPO and the firms. Our results demonstrate that when the marginal cost of instant substitution is very small, the uncertain shock's volatility has no impact on the participants' decisions: the NPO exerts no effort and the firms substitute instantly. Numerical simulations show that when the marginal cost of instant substitution is large, the NPO exerts relatively great effort, and the impact of the volatility on the NPO's effort is contingent on the marginal cost of instant substitution and the difference between the firms' initial market shares. (C) 2016 Elsevier B.V. All rights reserved.
The optimization of spare parts inventory for equipment systems is becoming a dominant support strategy, especially in the defense industry. Considerable research has been devoted to achieving optimal support performance of the supply system. However, the lack of statistical data brings limitations to these optimization models, which are based on probability theory. In this paper, personal belief degrees are adopted to compensate for the data deficiency, and uncertainty theory is employed to characterize the uncertainty arising from subjective personal cognition. A base-depot support system supplying repairable spare parts for an equipment system is considered in the presence of uncertainty. Under constraints such as costs and supply availability, a minimal expected backorder model and a minimal backorder-rate model are presented based on uncertain measure. A genetic algorithm is adopted to search for the optimal solution. Finally, a numerical example is employed to illustrate the feasibility of the optimization models. (C) 2016 Elsevier B.V. All rights reserved.

Effective project selection and staff assignment strategies directly impact organizational profitability. Based on the critical value optimization criterion, this paper discusses how uncertainty and interaction impact the project portfolio return and staff allocation. Since the exact possibility distributions of uncertain parameters in practical project portfolio problems are often unavailable, we adopt variable parametric possibility distributions to characterize uncertain model parameters. Furthermore, this paper develops a novel parametric credibilistic optimization method for the project portfolio selection problem.
According to the structural characteristics of variable parametric possibility distributions, we derive equivalent analytical expressions of the credibility constraints, and turn the original credibilistic project portfolio model into equivalent nonlinear mixed-integer programming models. To show the advantages of the proposed parametric credibilistic optimization method, some numerical experiments are conducted with various values of the distribution parameters. The computational results support our arguments by comparison with the optimization method under fixed possibility distributions. (C) 2016 Elsevier B.V. All rights reserved.

The optimal coordination strategy is achieved when the dynamic supply chain profit is maximized. The profit of a dynamic supply chain is determined by the corresponding effort level of each node enterprise. This paper presents a cooperative stochastic differential game model to study the optimal coordination strategy of a dynamic supply chain under uncertain conditions, and studies how to coordinate the effort levels of node enterprises to maximize supply chain profit. For this purpose, a cooperative stochastic differential game model of the enterprises' effort levels is constructed, and the optimal solution of the model is obtained by converting it to the solution of an equivalent stochastic partial differential equation. As a result, the optimal strategy of a local node enterprise is achieved. Considering the gap between the independent decisions of each node enterprise and the optimal decision of the supply chain system, coordination mechanisms for the dynamic supply chain, including a temporary static allocation mechanism and a compensation mechanism, are given to narrow the gap and achieve the optimal coordination strategy under uncertain conditions. (C) 2016 Elsevier B.V. All rights reserved.
Collaborative logistics networks (CLNs) are considered to be an effective organizational form for business cooperation that provides high stability and low cost. One common key issue regarding CLN resource combination is the network design optimization problem under discrete uncertainty (DU-CLNDOP). Operational environment changes and information uncertainty in network designs, due to partner selection, resource constraints and network robustness, must be effectively controlled from a system perspective. Therefore, a general two-stage quantitative framework that enables decision makers to select the optimal network design scheme for CLNs under uncertainty is proposed in this paper. Phase 1 calculates the simulation result of each hypothetical scenario of CLN resource combination using an expected value model with robust constraints. Phase 2 selects the optimal network design scheme for DU-CLNDOP using the orthogonal experiment design method. The validity of the model and method is verified via an illustrative example. (c) 2016 Published by Elsevier B.V.

Least squares support vector regression (LSSVR) is an effective and competitive approach for crude oil price prediction, but its performance suffers from parameter sensitivity and long tuning times. This paper treats the user-defined parameters as uncertain (or random) factors to construct an LSSVR ensemble learning paradigm, in four major steps. First, probability distributions of the user-defined parameters in LSSVR are designed using the grid method for lower upper bound estimation (LUBE). Second, random sets of parameters are generated according to the designed probability distributions to formulate diverse individual LSSVR members. Third, each individual member produces its own prediction. Finally, all individual results are combined into the final output via ensemble weighted averaging, with probabilities measuring the corresponding weights.
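The four steps can be sketched generically as follows. A toy nearest-neighbour learner stands in for the LSSVR member, and equal weights replace the probability-based weights; both are assumptions for illustration only:

```python
import random

def ensemble_predict(train_x, train_y, x, n_members=10):
    """Generic sketch of the four-step ensemble paradigm with a toy base learner.

    A real implementation would draw LSSVR hyper-parameters from the designed
    probability distributions; here a k-nearest-neighbour mean with a randomly
    drawn k plays the role of one ensemble member.
    """
    # Steps 1-2: draw a random parameter set for each ensemble member.
    ks = [random.randint(1, len(train_x)) for _ in range(n_members)]
    preds, weights = [], []
    for k in ks:
        # Step 3: each member makes its own prediction.
        order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
        preds.append(sum(train_y[i] for i in order[:k]) / k)
        # Equal weights here; the paper weights members by parameter probability.
        weights.append(1.0 / n_members)
    # Step 4: combine via ensemble weighted averaging.
    return sum(w * p for w, p in zip(preds, weights))
```

The diversity of the ensemble comes entirely from the randomly drawn parameters, which is what makes the combined prediction less sensitive to any single parameter choice.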
The computational experiment using the crude oil spot price of West Texas Intermediate (WTI) verifies the effectiveness of the proposed LSSVR ensemble learning paradigm with uncertain parameters compared with some existing LSSVR variants (using other popular parameter selection algorithms), in terms of prediction accuracy and time saving. (C) 2016 Elsevier B.V. All rights reserved.

Dynamic time-linkage optimization problems (DTPs) are a special class of dynamic optimization problems (DOPs) with the feature of time-linkage. Time-linkage means that decisions taken now can influence the problem states in the future. Although DTPs are common in practice, they have received little attention from the field of evolutionary optimization. To date, the prediction method is the major approach to solving DTPs in this field. However, existing studies have not yet addressed the situation where the prediction is unreliable in the complete Black-Box Optimization (BBO) case. In this paper, the prediction approach EA + predictor, proposed by Bosman, is improved to handle this situation. A stochastic-ranking selection scheme based on prediction accuracy is designed to improve EA + predictor under unreliable prediction, where the prediction accuracy is based on the rank of the individuals rather than their fitness. Experimental results show that, compared with the original prediction approach, the performance of the improved algorithm is competitive. (C) 2016 Elsevier B.V. All rights reserved.

Capability based planning (CBP) is a strategy-focused planning framework that enables organizations to systematically develop the capacity to achieve their business objectives in highly uncertain, dynamic and competitive environments.
Capability programming is an integral part of CBP which requires selecting a portfolio of capability projects for execution, referred to as a capability program, such that the overall strategic risk facing the planning organization across a number of projected future operating scenarios is minimized while maintaining the most economical choice. It is a challenging optimization problem that requires handling a number of dynamic constraints and objectives that vary throughout the entire planning horizon. An optimizing simulation approach is presented in this paper that combines an evolutionary multi-objective optimization algorithm with a reinforcement learning technique to generate capability programs which optimize strategic risks and program costs across multiple planning scenarios as well as over a rolling planning horizon. The role of the optimization algorithm in this approach is to search for the non-dominated capability programs at each decision point by minimizing the strategic risks associated with individual capability projects across a number of planning scenarios as well as the total cost of the program. The reinforcement learning algorithm, on the other hand, searches horizontally within the set of non-dominated programs to minimize capability risks and costs over the entire planning horizon. The methodology is evaluated on a test problem generated based on the data distributions in an Australian Defence Capability Plan and the performance is compared with two myopic heuristic methods. (c) 2016 Elsevier B.V. All rights reserved. The problem of node seeding for optimizing influence diffusion in a social network arises in many fields and has thus drawn much attention. In real life, for a variety of reasons, the decision maker needs to make a sequence of decisions about how to select the seeded nodes. In this paper, we study the problem of sequentially seeding nodes in a social network such that the complete influence time is minimized. 
We formulate a Markov decision process to describe the problem and embed a modified greedy search method into an online algorithm to solve the Markov decision process. Numerical experiments are performed to show the effectiveness of the proposed online algorithm. (c) 2016 Elsevier B.V. All rights reserved. Medical error is a serious issue in hospitals in Jordan. This study explored Jordanian nurses' perceptions of the culture of safety in their hospitals. The Hospital Survey of Patient Safety Culture translated into Arabic was administered to a convenience sample of 391 nurses from 7 hospitals in Jordan. The positive responses to the 12 dimensions of safety culture ranged from 20.0% to 74.6%. These are lower than the benchmarks of the Agency for Healthcare Research and Quality. Jordanian nurses perceive their hospitals as places that need more effort to improve the safety culture. We conducted a feasibility study to test an intervention to reduce medication omissions without documentation using nurse-initiated recall cards and medication chart checking at handover. No significant difference in the omission rate per 100 medications was found, although after adjusting for hospital and patient age, a significant effect occurred in the intervention group (n = 262 patients) compared with the control group (n = 272). This intervention may reduce medication omissions without documentation, and requires further study in larger samples. The aim of this project was to describe hospital nurses' work activity through observations, nurses' perceptions of time spent on tasks, and electronic health record time stamps. Nurses' attitudes toward technology and patients' perceptions and satisfaction with nurses' time at the bedside were also examined. Activities most frequently observed included documenting in and reviewing the electronic health record. 
Nurses' perceptions of time differed significantly from observations, and most patients rated their satisfaction with nursing time as excellent or good. Interdisciplinary rounds provide a valuable venue for delivering patient-centered care but are difficult to implement due to time constraints and coordination challenges. In this article, we describe a unique model for fostering a culture of bedside interdisciplinary rounds through adjustment of the morning medication administration time, auditing physician communication with nurses, and displaying physician performance in public areas. Implementation of this model led to measurable improvements in physician-to-nurse communication on rounds, teamwork climate, and provider job satisfaction. Approximately a quarter of medication errors in the hospital occur at the administration phase, which is solely under the purview of the bedside nurse. The purpose of this study was to assess bedside nurses' perceived skills and attitudes about updated safety concepts and examine their impact on medication administration errors and adherence to safe medication administration practices. Findings support the premise that medication administration errors result from an interplay among system-, unit-, and nurse-level factors. This quality improvement project evaluates the effectiveness of implementing an evidence-based alcohol withdrawal protocol in an acute care setting. Patient outcomes, length of stay, and nurses' knowledge and satisfaction with care are compared pre- and postimplementation. Implementation resulted in a significant reduction of restraint use, transfers to critical care, 1:1 observation, and length of stay, whereas no reduction was seen in rapid response calls. Nurses' knowledge increased after alcohol withdrawal protocol education, and satisfaction with patient care improved. The Johns Hopkins Fall Risk Assessment Tool (JHFRAT) is relatively new in Korea, and it has not been fully evaluated. 
This study revealed that the JHFRAT had good predictive validity throughout the hospitalization period. However, 2 items (fall history and elimination patterns) on the tool were not determinants of falls in this population. Interestingly, the nurses indicated those 2 items were the most difficult items to assess and needed further training to develop the assessment skills. Nurses strive to reduce risk and ensure patient safety from falls in health care systems. Patients and their families are able to take a more active role in reducing falls. The focus of this article is on the use of bundled fall prevention interventions highlighted by a patient/family engagement educational video. The implementation of this quality improvement intervention across 2 different patient populations was successful in achieving unit benchmarks. This study analyzed risk factors for medication/near-miss errors in the neonatal intensive care unit by using Grey Relational Analysis based on self-incident reports from staff nurses. The ASSESS-ERR Medication System Worksheet was used. A total of 156 medication/near-miss errors were found across 5 stages of the medication use process. The order prescribing stage had the most errors. The highest systemic risk factors were critical drug information missing; environmental, staffing, and workflow problems; and lack of staff education. Hyperglycemia occurs in more than 30% of hospitalized patients. The condition has been associated with higher mortality and poor outcomes. Systems to effectively treat dysglycemia have been put into place, although many focus on critical care areas. The purpose of this article is to provide an overview of the challenges for glycemic control in non-critical care areas. Standardized order sets, critical pathways, professional education, and collaborative systems can support improved control. The burden of diabetes is greater for minorities and medically underserved populations in the United States. 
An evidence-based provider-delivered diabetes self-management education intervention was implemented in a federally qualified health center for medically underserved adult patients with type 2 diabetes. The findings provide support for the efficacy of the intervention in improving self-management behaviors and glycemic control among underserved patients with diabetes, while not substantially changing provider visit time or workload. This specific case of Chulalongkorn University (CU), Thailand, is useful to readers who are interested in the comparative aspects of the experiences of research universities in the South East Asian context. This paper aims to provide a description of the environments, changes, and university stakeholders' perceptions in terms of governance arrangements when CU envisioned itself as a comprehensive public university geared towards becoming a research-oriented university, in line with national and international changes in the higher education landscape. Institutional university governance is examined through three dimensions: (1) context-underpinning factors; (2) incentive arrangements and funding; and (3) monitoring and oversight mechanisms. The study adopted a qualitative approach, which was based on three methods of data collection: document analysis, interviews, and observations. There were 33 interviews conducted in the study. The 33 research participants could be categorized into 5 main groups: (1) 6 senior officials from governmental agencies and independent organizations; (2) 2 junior officials working for the Office of the Higher Education Commission; (3) 16 top executives of different faculties and the central administration from CU; (4) 8 academics from different faculties of CU; and (5) 1 graduate student. Plagiarism is a concept that is difficult to define. 
Although most higher education institutions have policies aimed at minimising and addressing student plagiarism, little research has examined the ways in which plagiarism is discursively constructed in university policy documents, or the connections and disconnections between institutional and student understandings of plagiarism in higher education. This article reports on a study that explored students' understandings of plagiarism in relation to institutional plagiarism discourses at a New Zealand university. The qualitative study involved interviews with 21 undergraduate students, and analysis of University plagiarism policy documents. The University policy documents revealed moral and regulatory discourses. In the interviews, students predominantly drew on ethico-legal discourses, which reflected the discourses in the policy documents. However, the students also drew on (un)fairness discourses, confusion discourses, and, to a lesser extent, learning discourses. Notably, learning discourses were absent in the University policy. Our findings revealed tensions between the ways plagiarism was framed in institutional policy documents, and students' understandings of plagiarism and academic writing. We suggest that, in order to support students' acquisition of academic writing skills, plagiarism should be framed in relation to 'learning to write', rather than as a moral issue. The paper takes an interest in consumer behavior in international higher education (HE). It takes qualitative narratives of international student experience as a point of departure for a discussion of the degree to which students conceive of their experience in consumer terms when they evaluate their stays abroad. Intentionally, the group of informants consists of culturally diverse subjects (Danish and Chinese students). 
While the size of the sample does not allow for any wide-ranging conclusions on the connection between cultural background and adoption of consumer identity, it enables the researchers to evaluate whether cultural background seems to pertain to the propensity of students to think and act in a consumer-oriented manner in their experience of the different material and academic standards they were faced with in their study abroad environment. Based on an interest in the role of the student in the era of academic capitalism, the study investigates whether the fact that universities increasingly operate under market and market-like conditions influences students' way of conceiving of their study abroad experience. To what extent do students perceive themselves as consumers investing in services and products? There are dissonances between educators' aspirations for assessment design and actual assessment implementation in higher education. Understanding how assessment is designed 'on the ground' can assist in resolving this tension. Thirty-three Australian university educators from a mix of disciplines and institutions were interviewed. A thematic analysis of the transcripts indicated that assessment design begins as a response to an impetus for change. The design process itself was shaped by environmental influences, which are the circumstances surrounding the assessment design, and professional influences, which are those factors that the educators themselves bring to the process. A range of activities or tasks were undertaken, including those which were essential to all assessment design, those more selective activities which educators chose to optimise the assessment process in particular ways and meta-design processes which educators used to dynamically respond to environmental influences. 
The qualitative description indicates the complex social nature of interwoven personal and environmental influences on assessment design and the value of explicit and strategic ways of thinking within the constraints and affordances of a local environment. This suggests that focussing on relational forms of professional development that develop strategic approaches to assessment may be beneficial. The role of disciplinary approaches may be significant and remains an area for future research. This article reports on the role and value of social reflexivity in collaborative research in contexts of extreme inequality. Social reflexivity mediates the enablements and constraints generated by the internal and external contextual conditions impinging on the research collaboration. It fosters the ability of participants in a collaborative project to align their interests and collectively extend their agency towards a common purpose. It influences the productivity and quality of learning outcomes of the research collaboration. The article is written by fourteen members of a larger research team, which comprised 18 individuals working within the academic development environment in eight South African universities. The overarching research project investigated the participation of academics in professional development activities, and how contextual, i.e. structural and cultural, and agential conditions, influence this participation. For this sub-study on the experience of the collaboration by fourteen of the researchers, we wrote reflective pieces on our own experience of participating in the project towards the end of the third year of its duration. We discuss the structural and cultural conditions external to and internal to the project, and how the social reflexivity of the participants mediated these conditions. 
We conclude with the observation that policy injunctions and support from funding agencies for collaborative research, as well as support from participants' home institutions are necessary for the flourishing of collaborative research, but that the commitment by individual participants to participate, learn and share, is also necessary. China's higher education system has been marked by dramatic growth since 1999. In response to calls for quality assurance, substantial efforts have been made to improve collegiate environments and enhance student learning. However, only limited empirical research has been conducted to investigate the effects of the college environment on student gains in the Chinese context. Drawing on data from 1121 students at a prestigious four-year university, this study investigated how college environmental factors (i.e., course challenge, faculty guidance, academic climate, and interpersonal relationships) and student involvement affected students' intellectual development. The results of the structural equation modeling indicated that academic involvement mediated the relations between college environmental factors and intellectual development. Among the four environmental factors studied, faculty guidance was the strongest predictor of intellectual development. The results highlight the pivotal role of teachers in student involvement and development. Practical implications for the design of college environments conducive to student learning are discussed. The present study assessed the associations among higher-order thinking skills (reflective thinking, critical thinking) and self-monitoring that contribute to academic achievement among university students. The sample consisted of 196 Iranian university students (mean age = 22.05, SD = 3.06; 112 females; 75 males) who were administered three questionnaires. To gauge reflective thinking, the "Reflective Thinking Questionnaire" designed by Kember et al. 
(Assess Eval High Educ 25(4):380-395, 2000) was utilized. It includes 16 items measuring four types of reflective thinking (habitual action, understanding, reflection, and critical reflection). To assess critical thinking, the "Watson-Glaser Critical Thinking Appraisal"(2002) was utilized. It comprises 80 items and consists of 5 subtests (inference, recognizing unstated assumptions, deduction, interpretation, and evaluation). Self-monitoring was measured via 8 items of the self-regulation trait questionnaire designed by O'Neil and Herl (Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA, 1998). The results demonstrated that critical thinking and all components of reflective thinking positively and significantly predicted achievement with habitual action having the lowest impact and reflection exhibiting the highest influence. Self-monitoring indirectly exerted a positive influence on achievement via understanding and reflection. It was also found that among the four subscales of reflective thinking, reflection and critical reflection predicted critical thinking positively and significantly. Self-monitoring had a positive and significant impact on critical thinking. It also significantly and positively influenced understanding as well as reflection. Universities have a long history of collecting student feedback using surveys and other mechanisms. The last decade has witnessed a significant shift in how student feedback is systematically collected, analysed, reported, and used by governments and institutions. This shift is due to a number of factors, including changes in government policy related to quality assurance, and the increased use of the results by various stakeholders such as governments, institutions, and potential students and employers. The collection, analysis, and reporting of results are systematically carried out in many institutions worldwide. 
However, how to use student feedback to effectively improve the student learning experience remains an issue to be addressed. This paper will contribute to this debate by comparing how Australian and Scottish universities use student feedback results to inform improvements. Based on thematic analysis of external quality audit reports of all Australian and Scottish universities, this paper suggests that universities have systematic processes to collect student feedback using a range of mechanisms, but limited work is done to use the data to inform improvements. This paper argues the need for universities to genuinely listen to student voice by facilitating partnership between students and institutions to act on their feedback as part of quality assurance. In higher education, the just amount of tuition fees is often a topic of heated debate among different groups such as students, university teachers, administrative staff, and policymakers. We investigated whether unpleasant situations that students often experience at university due to social crowding can affect students' views on the justified amount of tuition fees at universities. We report two experiments on whether conditions that lead to experienced crowding in higher education can affect how students cognitively deal with a given topic. Experiment 1 (N = 80) showed that the mere cognitive activation of crowdedness in text stories about situations related to student activities influenced prospective students' estimates of what are justified university tuition fees. In Experiment 2 (N = 72), student participants wrote an essay on tuition fees in a small versus large room in groups of three versus six persons. Here, results showed that students placed with relatively many others in a small room estimated higher tuition fees to be justified than participants in all other experimental conditions. We discuss the implications of the present findings for the configuration of classes in higher education. 
Chilean higher education has expanded greatly in recent decades, primarily through drawing on the private contributions of students and families, and an increased number and variety of institutions. In the context of attempts to address criticism that the sector is not free, public or high-quality enough, this article examines the association between education and its moral and ethical dimensions, and their separate yet complementary consideration alongside economic development, through the two centuries of the Chilean state's existence. Since the beginning of the current decade, discontent with the framing and performance of higher education as a whole has grown. The overview traces this process not as a fresh crisis, but as part of a social question pondered repeatedly in the past and supported with varying success through educational and political initiatives. This historical (and historiographic) approach illuminates the limits of conceiving of higher education as either an economic good or a human right, and an overlooked need to support its benefits through policy. Challenges in understanding the function of expanded access, and its prospective contribution to growing debates around ethics and inequality, arise not simply from an interpenetration with economic thinking, but also from a lack of sufficient appreciation of Chile's fundamental and singular character. While a class gap remains in obtaining a degree despite an expansion of higher education, a variety of second chances have become available. How class matters in receiving parental assistance for seeking a second chance is of increasing importance to understanding educational inequality in an altered context of higher education, but it is under-researched. This article seeks to fill this gap by referring to a qualitative study of 85 community-college students in Hong Kong for illustration. 
First interviews with respondents recruited from a community college were conducted between 2006 and 2009, in which they discussed how they desired a second chance option by studying for an associate degree at community college. Encouragement and emotional support to seek this new, costly, and risky second chance was provided by parents of most respondents. Yet, as a deficit approach would predict, middle-class respondents received more relevant information and academic advice, along with financial support, from their parents than working-class respondents. Even so, it remains an open question whether the middle class are indeed better able than the working class to transfer to university via an associate degree in Hong Kong. This illustration suggests that the availability of a second chance does not immediately imply that the middle and the working classes are equally capable of taking advantage of it to rectify their previous educational failure. The article concludes by discussing the implications of this study for educational inequality. As the quality of university education garners increasingly more interest in both the public and in the literature, and as quality assurance (QA) processes are developed and implemented within universities around the world, it is important to carefully consider what is meant by the term quality. This study attempts to add to the literature empirical data from interviews conducted with senior administrators within Canada's province of Ontario. A quality assurance framework was developed by the Ontario Council of Academic Vice-Presidents in response to international trends in QA and implemented by all 21 Ontario universities in 2011. This phenomenographic study explored the conceptions of quality held by senior university administrators and their strategies for implementing QA processes. Results revealed a range of QA approaches that are employed within Ontario's universities. 
Rather than the two categories of retrospective QA and prospective QA that Biggs (High Educ 41:221-238, 2001) postulated, results indicate a more complex spectrum that involves three main approaches to QA: an approach aimed at defending quality, an approach aimed at demonstrating quality, and an approach aimed at enhancing quality. These approaches are considered in relation to Biggs's (High Educ 41:221-238, 2001) ideas about quality enhancement and a revision to his model is proposed. The fat mass and obesity-associated protein is a potential target for anti-obesity medicines. In the present work, the interaction between our synthesized 3-substituted 2-aminochromones and fat mass and obesity-associated protein was investigated using fluorescence spectroscopy, ultraviolet-visible spectroscopy and a molecular modeling approach. Fluorescence spectroscopy showed that the fluorescence of fat mass and obesity-associated protein can be quenched by these compounds via a static quenching mechanism. In addition, the thermodynamic parameters obtained from the fluorescence data showed that the hydrophobic force played a major role in stabilizing the complexes. Ultraviolet-visible spectroscopy, synchronous fluorescence spectroscopy and three-dimensional fluorescence spectroscopy indicated that these compounds can induce some conformational changes of fat mass and obesity-associated protein. A series of N'-arylidene-2-(5-aryl-1H-1,2,4-triazol-3-ylthio)acetohydrazides was synthesized. Condensation of aromatic aldehydes with 2-(5-aryl-1H-1,2,4-triazol-3-ylthio)acetohydrazide gave the corresponding N'-arylidene-2-(5-aryl-1H-1,2,4-triazol-3-ylthio)acetohydrazides. Spectral and elemental analyses were used for structural elucidation of the novel 1,2,4-triazole Schiff bases. The newly synthesized compounds were screened for their antidepressant activity by using the tail suspension test in mice. Compound 4l showed significant activity among the series. 
A series of new N'-arylidene-2-(5-aryl-1H-1,2,4-triazol-3-ylthio)acetohydrazides has been synthesized and characterized. The results revealed that compound 4l, with bromo substitution, exhibited the most promising antidepressant activity in the series. New amide derivatives of (+)-dehydroabietylamine, a tricyclic abietane diterpene amine, were prepared using Zhongping's protocol. The (+)-N-(2-phenylacetyl)-dehydroabietylamine derivative (11) demonstrated noticeable growth attenuation of hepatocellular carcinoma cell lines including PLC/PRF/5, SNU475, Hep3B-TR, and Huh7, with IC50 values of 7.4 μM, 9.8 μM, 11.7 μM, and 11.8 μM, respectively. The breast cancer cell line MCF7 was the most sensitive to amide 11, with the lowest IC50 value (4.8 μM). Low cell confluence and an increase in G2/M phase were recorded after 48 and 72 h of treatment of the PLC/PRF/5 cell line with amide 11. Finally, amide 11 owes its comparatively favorable therapeutic role to the addition of the N-phenacyl group at C-18, and was demonstrated to be a potential candidate for future cancer intervention and research. In this paper, the inhibitory effects of 4-chlorocinnamaldehyde on mushroom tyrosinase were investigated. The results showed that 4-chlorocinnamaldehyde had a significant inhibitory effect on both the monophenolase and diphenolase activities of tyrosinase. 4-Chlorocinnamaldehyde markedly prolonged the lag phase of monophenolase and decreased the steady-state rate of both monophenolase and diphenolase. The IC50 values were found to be 0.07 and 0.3 mM for monophenolase and diphenolase, respectively. The kinetic studies showed that 4-chlorocinnamaldehyde inhibits mushroom tyrosinase through reversible competitive inhibition. 
The inhibition constant (K_I) was measured to be 0.17 mM. The experimental research demonstrated that 4-chlorocinnamaldehyde is an effective tyrosinase inhibitor, and the results may offer a theoretical basis for designing and synthesizing safer and more efficient tyrosinase inhibitors in the future. Dengue virus is the causative agent of dengue fever and the most prevalent mosquito-borne viral pathogen. To date, treatments for dengue fever rely mainly on intensive supportive therapies. This study aimed to investigate the in vitro antiviral activities of selected nucleoside analogs. These drugs were selected based on their good interactions with the dengue virus replicating enzyme in a computer modeling study. The cytotoxicity of the nucleoside analogs in a green monkey kidney cell line (VERO) was verified using the MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay. Among the drugs tested, adefovir dipivoxil exhibited toxicity to cells, with only 20-40% cell viability at drug concentrations above 100 μM. Therefore, adefovir dipivoxil was omitted from the plaque reduction assay. The nucleoside analogs were further evaluated in a plaque reduction assay, with ribavirin used as a positive control. VERO cells were infected with dengue virus-2 at a multiplicity of infection of 0.4 and then treated with different concentrations of nucleoside analogs. Virus plaques were observed and counted after 7 days of incubation. The results showed that valganciclovir and stavudine inhibited dengue virus-2 replication by 6-21%. These nucleoside analogs may, therefore, have the potential to be used in treating dengue infections. A novel series of dihydro-hydroxyl-phenethylsulfanyl-omega-cyclohexyl/phenyl-oxopyrimidine derivatives has been synthesized, and their in vitro anti-hepatitis C virus activities have been evaluated using Huh 7.5.1 cells. 
Some of the compounds showed moderate anti-hepatitis C virus activities, with EC50 values ranging from 7.53 to 0.13 μM. Among all the compounds, 6-(cyclohexylmethyl)-5-ethyl-2-((2-hydroxy-2-phenylethyl)thio)-pyrimidin-4(3H)-one (3a) had the most promising potential in inhibiting hepatitis C virus, with an EC50 value of 0.13 μM and an SI value of 121. It was noticed that some of these compounds are active against both hepatitis C virus and human immunodeficiency virus. In addition to the experimental evaluation, structure-activity relationships and a molecular modeling analysis of these new congeners are also discussed. Nicotinic acid has been reported as a potential inhibitor of the carbonic anhydrase III enzyme. Carbonic anhydrase III (CAIII) is an emerging pharmacological target for the management of dyslipidemia and cancer progression. The activity of 6-substituted nicotinic acid analogs against carbonic anhydrase III was studied using size-exclusion chromatography. The appearance of a concentration-dependent vacancy peak was indicative of binding with CAIII. Chromatographic and docking studies revealed that the carboxylic acid of the ligand is essential for binding via coordinate bond formation with the Zn2+ ion in the enzyme active site. Moreover, the presence of a hydrophobic group containing a hydrogen bond acceptor at position 6 of the pyridine improves activity, e.g., 6-(hexyloxy)pyridine-3-carboxylic acid (Ki = 41.6 μM). Utilizing the weak esterase activity of CAIII, the inhibitory mode of the 6-substituted nicotinic acids was confirmed. We report herein the synthesis and biological activities (cytotoxicity, leishmanicidal, and trypanocidal) of six quinoline-chalcone and five quinoline-chromone hybrids. The synthesized compounds were evaluated against amastigote forms of Leishmania (V) panamensis, the most prevalent Leishmania species in Colombia, and Trypanosoma cruzi, the species most pathogenic to humans. 
Cytotoxicity was evaluated against human U-937 macrophages. Compounds 8-12, 20, 23 and 24 showed activity against Leishmania (V) panamensis, while compounds 9, 10, 12, 20 and 23 had activity against Trypanosoma cruzi, with EC50 values lower than 18 mu g mL(-1). Compound 20 was the most active against both Leishmania (V) panamensis and Trypanosoma cruzi, with EC50 values of 6.11 +/- 0.26 mu g mL(-1) (16.91 mu M) and 4.09 +/- 0.24 mu g mL(-1) (11.32 mu M), respectively. All hybrid compounds showed better activity than the anti-leishmanial drug meglumine antimoniate. Compounds 20 and 23 showed higher activity than benznidazole, the current anti-trypanosomal drug. Although these compounds showed toxicity toward mammalian U-937 cells, they still have the potential to be considered as candidates for antileishmanial or trypanocidal drug development. A set of new 2-alkyl-substituted 4,7-dimethyl-3,4,4a,5,8,8a-hexahydro-2H-chromene-4,8-diols was obtained by reactions of the monoterpenoid (1R,2R,6S)-3-methyl-6-(prop-1-en-2-yl)cyclohex-3-ene-1,2-diol with aliphatic aldehydes in the presence of montmorillonite clay K10. The synthesized compounds were evaluated for their in vivo analgesic activity using the acetic acid-induced writhing test and the hot-plate test. Five compounds showed significant analgesic activity in the acetic acid-induced writhing test; two of them also demonstrated analgesic activity in the hot-plate test. These compounds seem to be the most promising for further development. The scaffold of 4-quinolylhydrazone was attached to dihydroartemisinin by applying combination and bioisosterism principles to give a dihydroartemisinin derivative called L-A03. A previous study demonstrated that L-A03 inhibited the cysteine protease falcipain-2 of Plasmodium falciparum. Our preliminary assay showed that this compound exhibited significant antitumor activity in several cancer cell lines. In addition, its cytotoxicity against human peripheral blood mononuclear cells was low.
These findings suggest that L-A03 could serve as a potent antitumor agent. This study indicated that L-A03 induced both apoptosis and autophagy in human breast cancer MCF-7 cells, with autophagy occurring prior to the onset of apoptotic cell death. In the presence of chloroquine, an autophagy inhibitor, L-A03-induced apoptosis was attenuated, indicating that autophagy was indispensable for apoptosis induction. Nitric oxide generation blocked apoptotic cell death but did not affect autophagy, suggesting that autophagy may take place upstream of nitric oxide generation. Moreover, autophagy decreased the generation of nitric oxide. This study provides new insight into the mechanism of the antitumor effect of L-A03. On the basis of our earlier work, forty-one 5-N-substituted-2N-(substituted benzenesulphonyl)-L(+)glutamines were synthesized and screened for cancer cell inhibitory activity. The most active compound showed 91% tumor cell inhibition, and three other compounds showed more than 80% inhibition. Two-dimensional quantitative structure-activity relationship modeling and three-dimensional quantitative structure-activity relationship k-nearest neighbor molecular field analysis studies were performed to gain insight into the structural requirements for further improved anticancer activity. Considering that these compounds are competitive inhibitors of glutaminase, a molecular docking study followed by molecular dynamics simulation analysis was performed. This work may help to develop new anticancer agents. A series of triaryl-substituted hydrazones as acyclic structural prototypes was synthesized and screened for anti-proliferative activity against breast (Michigan cancer foundation-7 and MD Anderson metastatic breast-231) and uterine cancer (Ishikawa) cell lines.
Two compounds were found to be the most active: 5e showed the maximum inhibition of both the functional estrogen receptor-containing Michigan cancer foundation-7 cells (IC50: 7.8 mu M) and Ishikawa cells (IC50: 7.3 mu M), whereas compound 5i was selectively most active against the ER-negative MD Anderson metastatic breast-231 cells (IC50: 4.7 mu M). The inhibitory effect of 5e in breast cancer and uterine cancer cells was due to an ER antagonistic action, which was also supported by molecular docking studies. A series of 2-((1-substituted-1H-1,2,3-triazol-4-yl)-1-naphthaldehydes was prepared by the propargylation of 2-hydroxynaphthaldehyde followed by copper(I)-catalyzed azide-alkyne cycloaddition with various organic azides. The 2-((1-substituted-1H-1,2,3-triazol-4-yl)-1-naphthaldehyde analogues were transformed to the corresponding oxime derivatives upon grinding with hydroxylamine hydrochloride under solvent-free conditions. All the synthesized compounds were characterized by various analytical and spectral techniques and screened in vitro for antimicrobial activity. The activity data revealed that most of the compounds exhibited good to significant activity. Compounds 4c and 5c exhibited very good, broad-spectrum activity towards all the tested bacterial strains. Further, to understand the binding interactions, 4c and 5c were docked into the active site of E. coli topoisomerase II DNA gyrase. A series of novel 3-(2-(5-(2-chloroquinolin-3-yl)-3-substituted phenyl-4,5-dihydro-1H-pyrazol-1-yl)thiazol-4-yl)-6-H/halo-2H-chromen-2-ones (9a-9y) was prepared as antimicrobial agents by the condensation of 3-(2-bromoacetyl)-6-H/halo-2H-chromen-2-ones (4a-4e) and 5-(2-chloroquinolin-3-yl)-3-substituted phenyl-4,5-dihydro-1H-pyrazole-1-carbothioamides (8a-8e) in ethanol. The structures of these compounds were confirmed on the basis of their infrared, H-1-nuclear magnetic resonance, C-13-nuclear magnetic resonance, mass, and elemental analysis data.
The antimicrobial activity of these compounds was determined by the serial plate dilution method. The compounds bearing a fluoro-substituted coumarin ring together with a fluoro-substituted phenyl ring, 9q, 9r, and 9s, produced more potent antimicrobial activity than their corresponding H/chloro/iodo/bromo-substituted analogs, with statistically significant results (p < 0.05). Compounds 9q and 9r also produced higher antifungal activity than the standard drug ketoconazole against Penicillium citrinum. However, these compounds required higher concentrations than the standard drugs ofloxacin and ketoconazole to produce these effects. Structural modification of these compounds may enhance their potency as antimicrobial agents, but this requires further study. 4-Sulfanilamido-substituted 1,2,3-triazoles conjugated with monosaccharides (8-17), including d-glucose, d-galactose, d-mannose, and d-fructose, were synthesized in good yields from azidosugars and propargyl sulfanilamides using the copper-catalyzed 1,3-dipolar cycloaddition (CuAAC) reaction. The structures of the new compounds were elucidated by liquid chromatography-mass spectrometry, infrared, and one- and two-dimensional nuclear magnetic resonance techniques. All of the new compounds were tested in vitro against Staphylococcus aureus, Bacillus subtilis, Escherichia coli, Salmonella typhimurium, Klebsiella pneumoniae, Enterococcus faecalis, Pseudomonas aeruginosa, and Candida albicans for their antibacterial and antifungal activities. Experimental results showed antimicrobial activity with minimum inhibitory concentration values ranging from 0.078 to 5.0 mg/mL against the test microorganisms. This study describes the synthesis, pharmacological evaluation, including acetylcholinesterase (AChE)/butyrylcholinesterase (BChE) inhibition, amyloid beta (A beta) antiaggregation, and neuroprotective effects, as well as molecular modeling of novel 2-(4-substituted phenyl)-1H-benzimidazole derivatives.
These derivatives were synthesized by cyclization of o-phenylenediamines with sodium hydroxy(4-substituted phenyl)methanesulfonate salts. In vitro studies indicated that most of the target compounds showed remarkable inhibitory activity against BChE (IC50: 13.60-95.44 mu M). Among them, 3d and 3g-i also exhibited high selectivity (SI >= 35.7) for BChE, with IC50 values of 39.56, 13.60, 14.45, and 15.15 mu M, respectively. According to the molecular modeling studies, it may be assumed that the compounds are able to reach the catalytic site of BChE but not that of AChE. The compounds showing BChE inhibitory effects were subsequently examined for their A beta-antiaggregating and neuroprotective activities. Among them, 3d inhibited A beta(1-40) aggregation and demonstrated significant neuroprotection against H2O2-induced and A beta(1-40)-induced cell death. Collectively, compound 3d showed the best multifunctional activity (BChE IC50 = 39.56 mu M, SI > 126; inhibition of A beta self-mediated aggregation of 67.78% at 100 mu M; cell viability of 98% against H2O2-induced cytotoxicity and 127% against A beta(1-40)-induced cytotoxicity). All these results suggest that 2-(4-(4-methylpiperidin-1-yl)phenyl)-1H-benzo[d]imidazole (compound 3d) could be a promising multi-target lead candidate against Alzheimer's disease. Portulaca pilosa L. and Portulaca oleracea L. were comparatively studied for total polyphenol and flavonoid content, antioxidant activity, individual polyphenols, short-chain organic acids, and saccharides in extracts of plants collected from the Bucharest ''delta'', using spectrometry and capillary electrophoresis. The polysaccharide fractions were assessed for cytotoxicity on normal and tumor cell lines.
The results highlighted that Portulaca pilosa could be considered more valuable than Portulaca oleracea because of the higher content of important flavonoids (quercetin 101.70 +/- 2.68 mu g g(-1) dry weight (DW) plant material, rutin 96.24 +/- 0.74 mu g g(-1) DW plant material) and some phenolic acids (chlorogenic acid 161.33 +/- 0.67 mu g g(-1) DW plant material, p-coumaric acid 61.40 +/- 5.50 mu g g(-1) DW plant material) in its ethanolic extracts, the lower content of oxalic acid, considered an anti-nutrient, in its aqueous extracts, and the higher content of saccharides, especially rhamnose and xylose, with a consequently stronger cytostatic effect of the polysaccharide fraction. The current study presents for the first time the polyphenol, short-chain organic acid, and saccharide content of Portulaca pilosa and proves that this species has noticeable antioxidant activity, low toxicity toward normal cells, and high toxicity toward tumor cells, and could be considered important for the health and food industries. In this study, Mannich bases 2-8, 2-(3-or-5-aminomethyl-4-hydroxy-3-or-5-methoxybenzylidene)indan-1-ones, were designed and synthesized starting from 2-(4-hydroxy-3-methoxybenzylidene)indan-1-one, 1. The synthesized compounds were tested against several tumor cell lines and non-tumor cells to evaluate their cytotoxicities and to test whether the sequential cytotoxicity hypothesis holds for the studied compounds. The data obtained from the cytotoxicity tests indicated that the sequential cytotoxicity hypothesis held for compounds 3 and 4, since they had higher potency selectivity expression values. The lead compound of the present study is compound 4, 2-(3-dipropylaminomethyl-4-hydroxy-5-methoxy-benzylidene)-indan-1-one, since it has the highest potency selectivity expression value among the compounds studied. This molecule can serve as a lead for further studies and designs.
An interesting hybrid molecular framework comprising benzylidenethiazolidin-4-one, chalcone, and fibrate moieties was designed and synthesized (BRF1-12) in order to develop safe and efficacious compounds for the treatment of dyslipidemia and related complications such as atherosclerosis. The synthesized derivatives were characterized by Fourier transform infrared, mass, and nuclear magnetic resonance spectral studies and evaluated for their antihyperlipidemic potential using in vivo and in silico methods. All the synthesized compounds exhibited promising antidyslipidemic activity comparable to, and sometimes better than, that of the standard drug fenofibrate at the tested dose of 30 mg/kg body weight. The most active compounds of the series, BRF4 and BRF6, demonstrated a significant antidyslipidemic profile by lowering low density lipoprotein cholesterol, very low density lipoprotein cholesterol, and triglycerides and increasing the level of high density lipoprotein cholesterol, thereby decreasing the atherogenic index. Overall, BRF4 and BRF6 were found to be more potent than fenofibrate in lipid-lowering activity and in reducing the atherogenic index. Structure-activity relationship studies conclusively established that the presence of an N-acetic acid methyl ester at the 3-position of the thiazolidin-4-one nucleus and a C-3 fibric acid moiety on the benzene nucleus were instrumental for enhanced biological activity. The binding mode of the benzylidenethiazolidin-4-one fibrate class of compounds, showing crucial hydrogen bonds and pi-pi stacking interactions with the key amino acid residues Phe118, His440, and Tyr464 at the active site of the PPAR alpha receptor, was assessed by molecular docking studies. Anti-androgens can be used in the treatment of benign prostatic hyperplasia, acne, hirsutism, and androgenic alopecia.
In the search for anti-androgenic activity through a steroid 5-alpha reductase (S5 alpha R) inhibition mechanism, 12 natural analogs of plant origin, i.e., curcumin (1), demethoxycurcumin (2), and bisdemethoxycurcumin (3) isolated from Curcuma longa Linn., compounds 18, 20, 21, 22, 24, and 25 isolated from Curcuma comosa Roxb., and amide analogs 29-31 obtained from Bougainvillea spectabilis Willd., together with 21 synthesized analogs, were evaluated for S5 alpha R inhibitory activity using a liquid chromatography-mass spectrometry assay. The results showed that compounds 1, 2, 4, 5, 6, 7, and 9 possessed S5 alpha R inhibitory activity, and compounds 1, 4, and 5 were the most potent (IC50 of 13.4 +/- 0.4, 15.3 +/- 3.1, and 8.9 +/- 0.9 mu M, respectively). This suggests that the unsaturated enone moiety in the chain linking the two aromatic rings of the curcumin analogs was imperative for activity. Moreover, the m-methoxyl and p-hydroxyl substitutions in the aromatic region of the 1,6-heptadiene-3,5-dione linker were necessary. The cytotoxic effect on an androgen-dependent cell type, human dermal papilla cells, was investigated to obtain a safety profile. We found that the 1,6-heptadiene-3,5-dione linker was important for safety. This work demonstrated that the anti-androgenic activity of curcumin analogs proceeds through an S5 alpha R inhibition mechanism, and this information might guide the design of new curcumin analogs with improved potency and safety. A new series of 3,4,5-trimethoxyphenyl-bearing pyrazole (4a-g) and pyrazolo[3,4-d]pyridazine (5a-g) scaffolds was synthesized in good yield. The newly synthesized compounds were characterized on the basis of elemental and spectroscopic analyses. Their inhibitory activity against pro-inflammatory inducible nitric oxide synthase and cyclooxygenase-2 protein expression in lipopolysaccharide-stimulated murine RAW 264.7 macrophages was assessed and showed various potencies.
All pyrazolo[3,4-d]pyridazine compounds (5a-g) strongly down-regulated lipopolysaccharide-induced inducible nitric oxide synthase expression, to the range of 20.3 +/- 0.6-51.3 +/- 3.5%, relative to the bioactive pyrazole derivatives 4b, 4c, 4e, and 4g. With the exception of the inactive compounds 4c and 4d, all other synthesized compounds reduced cyclooxygenase-2 expression below 100% in the lipopolysaccharide-stimulated cells, with expression declining maximally to 42.8 +/- 1.4% for one of the pyrazolo[3,4-d]pyridazine compounds (5d). Moreover, the neuroprotective activity of the less cytotoxic compounds 4b, 4e-g, and 5a-g was evaluated against 6-hydroxydopamine (6-OHDA)-induced neuroblastoma SH-SY5Y cell death, and these compounds exhibited significant (p < 0.05) cell protection. The pyrazolo[3,4-d]pyridazine compound 5e exhibited more than 100% relative neuroprotection (110.7 +/- 4.3%), with the additional advantage of the highest cell viability index (107.2 +/- 2.9%). Hepatocellular carcinoma is a major example of inflammation-associated cancer. Cucurbitacins are natural triterpenoids known for their potent anticancer and anti-inflammatory activities. Recent studies showed that cucurbitacins protect HepG2 cells against carbon tetrachloride-induced toxicity; however, the mechanism is unknown. A molecular docking study coupled with in vitro biological assays was conducted to test the hepatoprotective effect of cucurbitacins through the inhibition of potential inflammatory factors. The effect of cucurbitacins on the activation of the NF-kB pathway was analyzed using a cell-based NF-kB immunoassay. Enzyme-linked immunosorbent assays revealed the potential of Cuc D and dihydrocucurbitacin D to prevent the production of tumor necrosis factor-alpha and interleukin-6 from HSC-T6 cells.
Thus, Cuc D and dihydrocucurbitacin D could have hepatoprotective effects on activated rat HSC-T6 cells owing to inhibition of the production of tumor necrosis factor-alpha and interleukin-6 through the NF-kB pathway. In silico molecular modeling data revealed potential cucurbitacin analogs with higher binding affinity to the hydrophobic pocket of NF-kB and IKK beta compared to the standard IKK inhibitor PS-1145. A new class of bis- and tris-heterocycles, pyrazolyl indoles and thiazolyl pyrazolyl indoles, was prepared from the Michael acceptors (E)-3-(1H-indol-3-yl)-1-arylprop-2-en-1-ones by an ultrasound irradiation technique and tested for antioxidant activity. The thiazolyl pyrazolyl indoles and pyrazolyl indoles showed greater radical scavenging activity than the pyrazolinyl indoles. Amongst all the tested compounds, 3-(1-(4''-(p-chlorophenyl)thiazol-2''-yl)-3'-p-tolyl-1H-pyrazol-5'-yl)-1H-indole (7b) and 3-(1-(4''-(p-chlorophenyl)thiazol-2''-yl)-3'-(p-methoxyphenyl)-1H-pyrazol-5'-yl)-1H-indole (7c) displayed promising antioxidant activity compared with the standard, ascorbic acid. The compounds bearing electron-donating groups (CH3, OCH3) on the phenyl ring exhibited greater antioxidant activity than those with electron-withdrawing groups (Cl, Br, NO2). In this paper, we present an experimental set-up and procedure to accurately measure the bearing characteristics of any single Degree of Freedom (DoF) straight-line flexure mechanism. Bearing characteristics include stiffness in the bearing and motion directions, and error motions in the bearing directions. In particular, we present this characterization for the traditional paired double parallelogram (DP-DP) flexure and its recently reported improved variation, the clamped paired double parallelogram (C-DP-DP) flexure. Of particular interest is the bearing direction stiffness and its variation with motion direction displacement.
While the bearing stiffness of both mechanisms has been extensively predicted via analysis, and its consequences have been observed in experiments, its direct measurement poses several challenges and is not found in the literature. This paper presents an experimental set-up that is reconfigurable to accommodate both of the above flexures, comprises a novel virtual pulley concept, and employs carefully selected ground mounting and sensor locations, among other features that enable the desired measurements. The experimental results agree well with analytical predictions and generate insight into the importance of ground mounting, the finite compliance of mechanism features that are generally assumed to be rigid, and manufacturing tolerances. (C) 2017 Elsevier Inc. All rights reserved. Traditional methods of measuring squareness for ultra-precision motion stages have many limitations, especially errors caused by the inaccuracy of standard specimens. On the basis of error separation, this paper presents a novel method to measure squareness with an optical square brick. The angles between the guideways and the four faces of the brick section are measured and, based on the fact that the interior angles of a quadrilateral sum to 2 pi, the squareness is obtained. A squareness measurement experiment was performed on a profilometer with a modified optical square brick. Experimental results show that the squareness accuracy between the X and Y axes is not influenced by the accuracy of the brick, and the measurement repeatability reaches 0.22 arcsec. Finally, a verification experiment for the proposed method was carried out with a highly accurate standard specimen, and the difference between the two methods is 1.06 arcsec. According to the error results and a simulation analysis of the measurement system, the measurement error based on error separation is 0.06 arcsec.
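The angle-closure idea behind the error separation can be illustrated with a minimal sketch. Assuming each of the brick's four corners is used in turn to compare the two guideways, every reading equals a right angle plus the true axis squareness error plus that corner's deviation; because the brick's interior angles sum to 2 pi, the corner deviations sum to zero and averaging cancels them. The numbers are simulated, and the paper's actual procedure and instrumentation are more involved:

```python
import math

ARCSEC = math.pi / (180 * 3600)  # one arcsecond in radians

def separate_squareness(readings_rad):
    """Recover the guideway squareness error from four angle readings,
    one per corner of the optical square brick.

    Model: reading_i = pi/2 + squareness_error + corner_deviation_i,
    where the corner deviations sum to zero (interior angles of a
    quadrilateral sum to 2*pi), so averaging cancels the brick errors.
    """
    return sum(r - math.pi / 2 for r in readings_rad) / len(readings_rad)

# Simulated example: 0.25 arcsec true squareness error; brick corner
# deviations of +2, -1, +0.5, -1.5 arcsec (summing to zero).
true_sq = 0.25 * ARCSEC
deviations = [2 * ARCSEC, -1 * ARCSEC, 0.5 * ARCSEC, -1.5 * ARCSEC]
readings = [math.pi / 2 + true_sq + d for d in deviations]
print(round(separate_squareness(readings) / ARCSEC, 2))  # 0.25
```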
The proposed method achieves highly accurate squareness measurement with auxiliary components of normal accuracy, and can be applied to measurements in the sub-arcsecond squareness accuracy class. (C) 2017 Elsevier Inc. All rights reserved. In automotive manufacturing, the repair polishing process for an automotive body is still performed manually by skilled polishing workers. This is because skilled workers can appropriately control the polishing motion and force according to the workpiece conditions, based on their experience. However, the number of skilled workers has been decreasing, and the skill development of younger workers has not progressed satisfactorily. To overcome these problems, in previous research we developed a serial-parallel mechanism polishing machine that effectively reproduced the polishing motion and force of skilled workers. This replication system, however, had limited use because the acquired polishing techniques could not adapt to various workpiece conditions, such as shape and size. The present study aimed to extend the polishing method to curved surfaces, in other words, to adapt the replication system to changes in the workpiece shape. In past polishing methods for curved surfaces, the workpiece shape was acquired using CAD data or external sensors, which often increased process time and cost. The method newly proposed in this study requires neither CAD data nor external sensors, and effectively achieves simultaneous posture and force control on an unknown curved surface. The experimental results showed that the skilled polishing techniques were successfully replicated on an unknown curved surface and that the surface roughness was greatly improved by integrating the newly proposed method into the skilled polishing replication system. (C) 2017 Elsevier Inc. All rights reserved.
Electrical discharge machining with a foil electrode serves as an alternative method for SiC slicing. This technology uses a highly tensioned thin foil as the tool electrode. The main advantages over wire EDM are that the foil thickness can be made smaller than the wire diameter, vibrations can be avoided by applying high tension, and higher current can be supplied since there is less risk of tool breakage. However, due to the large side surface area of the foil electrode, there is a high occurrence probability of side surface discharges and a high concentration of debris, which affects kerf width accuracy and machining stability. With the aim of overcoming both problems, this study proposes two foil electrode designs: a foil electrode in which holes are machined, and insulation of the side surface areas by a resin coating layer 5 mu m thick. The influence of both foil electrodes was tested with three different slicing strategies: no strategy, applying a jump motion of the tool electrode, and applying a reciprocating motion. From machining experiments and comparative studies of the discharge delay time, it was found that with both foil tools the occurrence probability of side surface discharges can be reduced. In addition, the chip-pocketing effect of the holes enhances the flushing conditions, resulting in a higher cutting speed. (C) 2017 Elsevier Inc. All rights reserved. Micro-texture on the tool face is a state-of-the-art technique to improve cutting performance. In this paper, five types of micro-texture were fabricated on the flank face to improve the cooling performance under high pressure jet coolant assistance. By using micro-textures consisting of pin fins, plate fins, and pits fabricated 0.3 mm away from the cutting edge, heat transfer from the tool face to the coolant was enhanced.
The tool wear, adhesion, and chip formation were compared between the micro-textured and non-patterned tools in longitudinal turning of the nickel-based superalloy Inconel 718. As a result, the micro-textured tools consistently exhibited reduced flank and crater wear compared with the non-patterned tool, and the rate of tool wear was influenced by the array and height of the fins. Energy dispersive spectroscopy analysis of the worn flank faces and the electromotive forces obtained from the tool-work thermocouple supported the better cooling performance of the micro-textured tools. In addition, coolant deposition on the flank face evidenced that heat transfer could be promoted by micro-texture near the border of the contact area between the flank wear land and the machined surface. Finally, the changes of flow patterns with pit depth are analyzed for the pit-type tools by computational fluid dynamics. This investigation clearly showed the function of micro-textures in increasing the turbulent kinetic energy and cooling the textured tool face. (C) 2017 Elsevier Inc. All rights reserved. Recently, surface texturing has received much attention as a method of enhancing the tribological properties of a cutting tool surface. However, effective texture patterns and dimensions for a tool surface are still difficult to determine, and suitable textures can be obtained only by trial and error. To overcome this problem, we develop new cutting tools with dimple-shaped textures having different dimensions and arrays, generated on the tool rake face. In addition, we evaluate their crater wear resistance and cutting forces in steel cutting. Furthermore, under various cutting conditions, the performance of the cutting tools with dimple-shaped textures is compared with that of tools with groove-shaped textures in order to establish a guideline for designing appropriate surface textures on cutting tool surfaces.
A series of cutting experiments demonstrates that the dimple textures significantly improve the crater wear resistance and the tribological behavior on the tool rake face, and that they exhibit superior performance compared with groove textures, especially in a severely lubricated environment. (C) 2017 Elsevier Inc. All rights reserved. Surface finish is a critical requirement for many applications in industry and research. Freeform surfaces are widely used in the medical, aerospace, and automobile sectors. The magnetic field assisted finishing process can be used very efficiently to finish freeform surfaces. In this process, magnetorheological (MR) fluid is used as the polishing medium, and a permanent magnet is used to control its rheological properties to generate the finishing force during polishing. To provide a sufficient magnetic field in the finishing zone, it is necessary to design an optimum polishing tool. In the present study, a polishing tool is specially designed using a finite element based software package (Ansys Maxwell) that solves the Maxwell equations. First, the dimensions of the permanent magnet are determined for the optimum tool geometry. After that, the dimensions and configuration of the magnet fixture are optimized. A special metal called mu-metal, a nickel-iron based alloy, is selected for the magnet fixture because of its magnetic-field shielding property. Mu-metal directs the magnetic flux lines so that, in the finishing zone, the magnetic flux is concentrated on the workpiece surface as required for finishing. The mu-metal magnet fixture also shields the magnetic field from the outside environment so that the MR fluid and any surrounding magnetic materials do not stick to the polishing tool. Experiments are carried out to validate the Maxwell simulation results by comparing the magnetic flux distribution on the workpiece surface, which shows good agreement.
In addition, finishing of flat titanium workpieces is carried out, and it is found that the novel polishing tool has the capability to finish the workpieces to the nanometer range. (C) 2017 Elsevier Inc. All rights reserved. A simple numerical model is proposed for predicting the penetration depth in metal laser drilling. A simplified 2D axisymmetric model for transient metal laser drilling is introduced. The strong form of the Symmetric Smoothed Particle Hydrodynamics (SSPH) method is used to harness its significant reduction in computational time. The 2D axisymmetric domain is discretized, and the SSPH formulation is then used to obtain the shape functions. A collocation method is used to discretize the governing and boundary condition equations to construct the global stiffness matrix. The laser beam is assumed to be a continuous wave with a Gaussian distribution. MATLAB code is written for the numerical simulation, and the results are compared with published work. Good agreement is shown, and the proposed numerical model is thus found to be a computationally efficient and accurate standalone platform for predicting the penetration depth in the metal laser drilling process. (C) 2017 Elsevier Inc. All rights reserved. The measurement of large components using portable measuring equipment is important to many industries, including shipbuilding and aerospace. Portable measuring instruments - such as laser trackers, laser radar, indoor GPS, and other systems - are used to obtain measurement data for process control, assembly alignment, or geometric conformance decisions. Traditional uncertainty estimations often focus on the measuring instrument and its performance as a primary contributor to the overall uncertainty for specific measurands.
The research reported here focuses on the uncertainty contributors that are due to extrinsic effects, such as part deformation under gravitational loads and thermal distortion of the workpiece, where the uncertainty contribution from the instrument is considered insignificant in comparison. Published by Elsevier Inc. The chemical mechanical polishing (CMP) process planarizes and smooths the uneven layers left after the material deposition process in the semiconductor industry. In this process, pad conditioning using a diamond disk is essential to attain a high material removal rate (MRR) and to ensure the stability of the process. Pad conditioning is performed to provide uniform surface roughness and to open up the glazed surface of the polishing pad. However, the uneven pad wear resulting from pad conditioning leads to changes in the uniformity of the MRR and the productivity of the device. In this study, we investigate the pad wear profile after swing-arm conditioning of the pad, based on measurements performed using a pad measurement system (PMS). Conditioning experiments are conducted with seven profiles of the conditioner's duration time (PCDT). In all cases, "W"-shaped pad profiles are generated through swing-arm conditioning. It is observed that a concave-shaped PCDT results in the lowest maximum pad wear rate. The average depth of pad wear (h(avg)) is mainly related to the MRR, while the maximum depth of pad wear (h(max)) and the horizontal distance (e) from the wafer center to the position where the maximum pad wear occurs affect the within-wafer non-uniformity (WIWNU). A concave-shaped PCDT results in a longer polishing pad life by minimizing the variation in pad wear. This paper can provide technical assistance in selecting the conditioning recipe and improving the lifetime of the polishing pad in the CMP process. (C) 2017 Elsevier Inc. All rights reserved.
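The pad-wear quantities discussed above (h(avg), h(max), and the distance e to the deepest wear) can be read directly off a measured wear profile. The profile below is hypothetical, and positions are taken along the scan rather than from the wafer center as in the study:

```python
def pad_wear_metrics(positions_mm, depths_um):
    """Summarize a pad-wear profile from a pad measurement system (PMS)
    scan: average depth h_avg, maximum depth h_max, and the scan
    position e at which the maximum wear occurs."""
    h_avg = sum(depths_um) / len(depths_um)
    h_max = max(depths_um)
    e = positions_mm[depths_um.index(h_max)]
    return h_avg, h_max, e

# Hypothetical "W"-shaped wear profile: two deep-wear troughs away
# from the pad edges (units are illustrative only).
positions = [0, 50, 100, 150, 200, 250, 300]      # mm along the scan
depths = [5.0, 12.0, 20.0, 9.0, 18.0, 11.0, 4.0]  # micrometres of wear
h_avg, h_max, e = pad_wear_metrics(positions, depths)
print(round(h_avg, 2), h_max, e)  # 11.29 20.0 100
```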
In this paper, a simplified thrust force model of ironless permanent magnet linear synchronous motors (IPMLSMs), including end effects, is presented. To establish the thrust force model, a magnetic flux linkage calculation algorithm is applied. The thrust force model reduces all IPMLSM parameters to identifiable constants and demonstrates that the thrust force of the IPMLSM fluctuates periodically with the winding phase position. In addition, both static and dynamic identification experiments are performed to identify the model effectively. Ultra-low-velocity simulations of IPMLSMs using the identified thrust force model show that the end effect causes velocity fluctuations, which corresponds with experimental results on an ultra-precision air-bearing linear motion stage. Therefore, a feedback compensation controller based on the identified thrust force model is designed and implemented to reduce ultra-low-velocity fluctuations. Finally, comparative experimental results indicate that the compensation controller is effective and achieves better performance in terms of ultra-low-velocity errors than traditional disturbance observer controllers. Thus, the thrust force model including end effects and the identification algorithms are shown to be valid and practical. (C) 2017 Elsevier Inc. All rights reserved. An optical microcavity, which stores light at a certain spot, is an essential component for realizing all-optical signal processing. Single-crystal calcium fluoride (CaF2) theoretically offers a high Q-factor, a desirable optical property. The CaF2 microcavity can only be manufactured by ultra-precision cylindrical turning (UPCT). The authors have studied UPCT of CaF2 and shown the influence of crystal anisotropy and tool geometry on surface roughness and subsurface damage. That study indicated that a smaller nose radius of the cutting tool led to shallower subsurface damage.
Thus, it is inferred that a nose radius smaller than the previously used one (0.05 mm) can further reduce subsurface damage. Nevertheless, the mechanism that causes the difference in subsurface damage due to crystal anisotropy is not sufficiently clear, and the influence of subsurface damage on microcavity performance is still unclear. In this study, UPCT of CaF2 was conducted using a tool with a nose radius of 0.01 mm. The subsurface damage was investigated by transmission electron microscope (TEM) observation from the viewpoint of changes in the crystal lattice arrangement. In our previous study, fast Fourier transform (FFT) analysis was used to confirm changes in the crystal structure. In this study, FFT analysis was also used to quantitatively evaluate the depth of subsurface damage. In addition, inverse fast Fourier transform (IFFT) analysis was used to reveal changes in the crystal lattice arrangement clearly, which enables discussion of the influence of slip systems. Finally, optical microcavities are manufactured without any cracks, and the influence of subsurface damage on microcavity performance is experimentally evaluated using a wavelength-tunable laser and a power meter. (C) 2017 Elsevier Inc. All rights reserved. Amorphous nickel phosphorus (Ni-P) alloy is a suitable mold material for fabricating micropatterns on optical elements to enhance their performance. Ultra-precision cutting is preferred for machining the mold material to achieve high precision over a large workpiece. However, burrs and chippings always form and are detrimental, especially when fabricating micropatterns. The formation mechanisms of burrs and chippings have not yet been revealed precisely for the cutting of amorphous alloys, because their cutting behavior is more complex and less discussed in the existing research than that of crystalline metals.
In the present study, the burr formation process of amorphous Ni-P is characterized, and a three-dimensional cutting model based on the energy method is proposed to predict and minimize burrs and chippings. Microgrooving experiments were conducted with different undeformed chip geometries using three types of cutting tools to observe the burr formation processes. Large burrs and chippings were formed when cutting with a tapered square tool and a tilted triangle tool. These large burrs and chippings were found to be induced by large slippages that are unique to amorphous alloys. It was revealed that burrs and chippings appear when the angle between the chip flow direction and the groove edge is less than a critical value. The energy method was used to predict the chip flow directions, and the calculated results agree with the experimental ones, which shows that the energy method is valid for designing an appropriate undeformed chip geometry to reduce burrs and chippings in ultra-precision grooving. (C) 2017 Elsevier Inc. All rights reserved. Advanced nanofinishing is an important process in manufacturing technologies due to its direct influence on optical quality, bearing performance, corrosion resistance, biomedical compatibility, and micro-fluidic attributes. The chemo-mechanical magnetorheological finishing (CMMRF) process, one of the advanced nanofinishing processes, was developed by combining essential aspects of the chemo-mechanical polishing (CMP) process and the magnetorheological finishing (MRF) process for surface finishing of engineering materials. The CMMRF process was experimentally analyzed on silicon and copper alloy to generate surface roughness on the order of a few angstroms and a few nanometers, respectively. However, the process requires theoretical exploration for better understanding, process optimization, and result prediction. Hence, an attempt has been made at a theoretical study of the CMMRF process to analyze the effects of MR fluid under various process parameters.
To simplify the work, the present theoretical study is split into two sub-activities: (1) FEA-CFD simulation to analyze the magnetism, polishing pad formation, and polishing pressure during the CMMRF process, with the simulation results used to design experiments on aluminium alloy; and (2) a mathematical model developed to predict material removal as well as surface roughness during the CMMRF process. The model is validated by comparing finite element simulation results with experiments on aluminium alloy. The theoretical results show good agreement with the experimental data, as discussed in this paper. (C) 2017 Elsevier Inc. All rights reserved. This paper presents the theoretical modeling and numerical simulation of probe-tip-based nanochannel scratching. Depending on the scratching depth, the probe tip is modeled as a spherical-capped conical tip or a spherical-capped regular three-sided pyramid tip to calculate the normal force needed for nanochannel scratching. To further investigate the impact of scratching speed, scratching depth, and scratching direction on the scratching process, the scratching simulation is implemented in the LS-DYNA software, and a mesh-less method called smoothed particle hydrodynamics (SPH) is used to construct the sample. The theoretical and simulated analyses show that increasing the scratching speed, the scratching depth, or the face angle results in an increase in the normal force. At the same scratching depth, the normal forces of the spherical-capped regular three-sided pyramid tip model differ in different scratching directions, in agreement with the theoretical calculations in the d(3) and d(4) directions. Moreover, the errors between the theoretical and simulated normal forces increase as the face angle increases. (C) 2017 Elsevier Inc. All rights reserved.
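The geometry-dependent normal force described in the scratching abstract above can be sketched for the spherical-capped conical tip: below the depth where the cap meets the cone, the projected contact radius follows the sphere; above it, the cone. The force law F = H * pi * a^2 is a generic indentation-hardness assumption for illustration, not the paper's exact formulation, and all numeric parameters are invented:

```python
import math

def normal_force(depth, R, beta_deg, hardness):
    """Normal force to scratch at a given depth with a spherical-capped
    conical tip (illustrative indentation-hardness model).

    depth    : scratching depth (m)
    R        : tip cap radius (m)
    beta_deg : cone half-angle measured from the tip axis (degrees)
    hardness : sample hardness (Pa)
    """
    beta = math.radians(beta_deg)
    d_c = R * (1.0 - math.sin(beta))  # depth at which the cap meets the cone
    if depth <= d_c:
        # Spherical-cap regime: circle of contact on the sphere.
        a = math.sqrt(max(2.0 * R * depth - depth ** 2, 0.0))
    else:
        # Conical regime: contact radius grows linearly with extra depth.
        a = R * math.cos(beta) + (depth - d_c) * math.tan(beta)
    return hardness * math.pi * a ** 2

# Force grows monotonically with depth, as the simulations report.
f1 = normal_force(20e-9, 100e-9, 60.0, 12e9)
f2 = normal_force(80e-9, 100e-9, 60.0, 12e9)
```

The same construction with three planar facets instead of a cone would give the pyramid-tip variant and its direction dependence.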
Elastic averaging has worked well throughout history to help create precision machines. With the advent of rapid fabrication processes such as additive manufacturing, abrasive waterjet machining, and laser cutting, parts made by these processes can be designed with special elastic features that "average out" the uncertainty in dimensional tolerances and manufacturing errors as a collective system at very low cost. This paper presents the principle of elastic averaging and explores simple flexure designs for creating elastic averaging features within parts for precision alignment and assembly applications. A first-order analytical model and a quick estimation approach are introduced to predict the alignment errors and the repeatability of these parts. Experimental results show that a part with four elastic averaging features was capable of precision assembly with a mating part even after large errors were deliberately introduced into the mating features. Results also show that the part can achieve sub-micron repeatability over more than 20 removal-and-reassembly trials. Lastly, analytical simulations show that the repeatability of the part can be further improved by increasing the number of elastic contacts. All these results suggest that assemblies of rapidly fabricated parts with elastic averaging features can be as precise as those made on conventional machining centers. (C) 2017 Elsevier Inc. All rights reserved. Molecular dynamics simulations of partially overlapped nano-cutting of monocrystalline germanium with different feeds are carried out to investigate the surface topography, cutting force, and subsurface deformation. The results indicate that, ignoring tool wear and machining vibration, the side-flow material piling up on the edges of the tool marks is a decisive factor in the surface topography under the machining parameters of this study.
In partially overlapped nano-cutting, the lateral force is affected more by the nominal depth of cut than by the pitch feed within a certain range of feed values. Atomic-scale views of the subsurface deformation show that the deformed layer after machining is much thinner than that after a single cut with the same nominal depth of cut. Laser micro-Raman spectroscopy and cross-sectional transmission electron microscopy are used to detect the subsurface deformation of monocrystalline germanium after eccentric turning. A crystalline structure with defects, rather than amorphous germanium, is observed in many areas of the machined surface. Both the molecular dynamics simulations and the experimental results indicate that machined surfaces with little or even no amorphous damage can be achieved by partially overlapped nano-cutting. (C) 2017 Elsevier Inc. All rights reserved. This paper combined experimentally measured grinding wheel topography data, taken around the entire circumference of the grinding wheel, with a kinematic simulation of the grinding process. Several new methods were developed to create the resulting high-fidelity and computationally efficient simulation. First, a novel peak-removal technique was developed and applied to remove erroneous peaks in the raw wheel topography data. Next, a method was found to determine only the active cutting points on the wheel model by considering the kinematics of the grinding process. This new approach reduced the simulation time from over twelve hours to about four seconds without losing any information about the cutting edge-workpiece interaction. The resulting predicted workpiece surface was then experimentally validated by carrying out a grinding experiment using the same grinding wheel used to develop the grinding wheel computer model and then measuring the resulting workpiece surface profile.
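A peak-removal step of the kind described in the grinding-simulation abstract above could be sketched as a simple statistical outlier filter; the paper's own technique is more refined, and the threshold, neighbourhood width, and height data below are illustrative assumptions:

```python
def remove_peaks(heights, n_sigma=2.0):
    """Replace spurious peaks in raw wheel topography data.

    Any sample more than n_sigma standard deviations above the mean is
    replaced by the mean of its local neighbourhood. This is a generic
    outlier filter for illustration only, not the paper's method.
    """
    n = len(heights)
    mean = sum(heights) / n
    sigma = (sum((h - mean) ** 2 for h in heights) / n) ** 0.5
    cleaned = list(heights)
    for i, h in enumerate(heights):
        if h > mean + n_sigma * sigma:
            lo, hi = max(0, i - 2), min(n, i + 3)
            neighbours = [heights[j] for j in range(lo, hi) if j != i]
            cleaned[i] = sum(neighbours) / len(neighbours)
    return cleaned

# A toy height trace with one erroneous spike at index 3.
raw = [1.0, 1.2, 0.9, 25.0, 1.1, 0.8, 1.0, 1.3]
clean = remove_peaks(raw, n_sigma=2.0)
```

After cleaning, only the plausible cutting-edge heights remain for the kinematic active-point selection.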
Good agreement between the simulated and experimental workpiece profiles was observed. Finally, the validated simulator was used to develop a kinematically exact method for calculating the maximum uncut chip thickness, and the simulation results were investigated for different depths of cut, wheel speeds, and workpiece feeds. (C) 2017 Elsevier Inc. All rights reserved. As a flexible forming technology, Incremental Sheet Forming (ISF) is a promising alternative to traditional sheet forming processes in small-batch or customised production, but its industrial application suffers from low part accuracy. The ISF toolpath directly influences the geometric accuracy of the formed part, since the part is formed by a simple tool following the toolpath. Based on the basic structure of a simple Model Predictive Control (MPC) algorithm designed for Single Point Incremental Forming (SPIF) in our previous work, Lu et al. (2015) [1], which dealt only with toolpath correction in the vertical direction, an enhanced MPC algorithm has been developed specifically for Two Point Incremental Forming (TPIF) with a partial die in this work. The enhanced control algorithm is able to correct the toolpath in both the vertical and horizontal directions. In the newly added horizontal control module, dense profile points in evenly distributed radial directions of the horizontal section were used to estimate the horizontal error distribution along the horizontal sectional profile during the forming process. The toolpath correction was performed by properly adjusting the toolpath in the two directions based on the optimised toolpath parameters at each step. A case study forming a non-axisymmetric shape was conducted to experimentally validate the developed toolpath correction strategy.
Experimental results indicate that the two-directional toolpath correction approach improves part accuracy in TPIF compared with the typical TPIF process without toolpath correction. (C) 2017 Elsevier Inc. All rights reserved. ISO 14405-1 defines the global sizes, such as the least-squares diameter, the minimum circumscribed diameter, and the maximum inscribed diameter. These diameters can be measured using a cylindrical coordinate measuring method such as the circular-section measuring method used for cylindricity error. The determination method for the least-squares diameter was first derived based on the cylindrical measuring system, and optimization models for the minimum circumscribed diameter and the maximum inscribed diameter were built, respectively. The corresponding objective functions were unified as "minimax" expressions. For the four axis parameters of the cylinder with the minimum circumscribed diameter or the maximum inscribed diameter, the search ranges for their optimal solutions were defined numerically. Thereafter, the genetic, steepest descent, and BFGS-0.618 algorithms were introduced, and optimization evaluation algorithms for the two kinds of diameters mentioned above were given. Based on many cylinder profiles obtained by the circular-section measuring method on a cylinder global-size measuring instrument developed by Zhongyuan University of Technology (Zhengzhou, China), the accuracy, efficiency, and suitability of the three optimization algorithms were investigated through the evaluation of a large number of minimum circumscribed and maximum inscribed diameters. The measurement uncertainty of the global sizes for the cylindrical specimen was analyzed; the measurement uncertainties of the sizes in the radial and z directions are +/- 0.95 μm and +/- 0.5 μm, respectively.
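The "minimax" objective used for the global sizes above can be illustrated in a simplified two-dimensional form: for a single cross-section, the minimum circumscribed diameter minimizes the maximum radial distance over candidate centres. The shrinking grid search below is a stand-in for the genetic, steepest descent, and BFGS-0.618 algorithms the paper compares, and the data points are invented:

```python
import math

def min_circumscribed_diameter(points, iters=60):
    """Minimax fit of a 2D cross-section: find the centre minimising the
    maximum radial distance (minimum circumscribed circle). A crude
    shrinking coordinate search, for illustration only.
    """
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    step = max(abs(p[0]) + abs(p[1]) for p in points) or 1.0

    def rmax(x, y):
        return max(math.hypot(px - x, py - y) for px, py in points)

    best = rmax(cx, cy)
    for _ in range(iters):
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            r = rmax(cx + dx, cy + dy)
            if r < best:
                best, cx, cy, moved = r, cx + dx, cy + dy, True
        if not moved:
            step *= 0.5
    return 2.0 * best

# Four points on a unit circle centred at (3, 4): diameter should be 2.
pts = [(4, 4), (2, 4), (3, 5), (3, 3)]
d = min_circumscribed_diameter(pts)
```

The cylinder case adds the four axis parameters to the search space, which is where the paper's comparison of optimization algorithms becomes relevant.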
The total measurement uncertainties of the global sizes of the cylindrical specimens with specifications of φ10 x 120 mm and φ100 x 300 mm are +/- 3.8 μm and +/- 5.7 μm, respectively. The investigation results showed that, for the evaluation of the global sizes, none of the three algorithms above is clearly superior to the other two when both evaluation accuracy and efficiency are considered, and the difference between their evaluation results does not exceed +/- 0.5 μm. On the other hand, many points between the maximum and minimum values do not affect the evaluation results in the optimization process. To improve the evaluation efficiency, de-selecting those points according to a characteristic parameter was also studied based on statistical methods and experiments; the coefficient t should be less than 0.3 to ensure the evaluation accuracy. This research may be useful for developing the next generation of measurement instruments for the global sizes and points the way forward for digital manufacturing. (C) 2017 Elsevier Inc. All rights reserved. Elliptical vibration cutting with single-crystalline diamond tools is applied to mirror-surface machining of high-alloy steels, such as cold-work die steels and high-speed tool steels, with a hardness of more than 60 HRC. Although practical mirror-surface machining of hardened die steels such as Stavax (modified AISI 420) with a hardness of 53 HRC has been realized with elliptical vibration cutting, the lives of single-crystalline diamond tools are not sufficiently long in the machining of some high-alloy steels, which may be caused by the large amounts of alloying elements. In order to clarify the influence of the alloying elements on diamond tool damage, elliptical vibration cutting experiments are conducted on six kinds of high-alloy steels and on four kinds of pure metals corresponding to the alloying elements. Mechanical properties of the alloy steels, i.e.
the difference in hardness between carbides and matrices and the number of small carbides, are measured, and their influence on micro-chipping is investigated. The chemical states of the alloying elements in the high-alloy steels are analyzed using X-ray diffraction (XRD) and an electron probe micro-analyzer (EPMA), and their influence on tool wear is discussed. Based on this investigation, mirror-surface machining of DC53, which has a high hardness of 62.2 HRC and the best machinability among the tested high-alloy steels, is demonstrated, and a mirror surface with a roughness of Rt 0.05 μm is obtained successfully. (C) 2017 Elsevier Inc. All rights reserved. During the ECM process, the metal workpiece is dissolved and turns into sludge, which contaminates the electrolyte. To realize precise, cost-effective ECM, an electrolyte treatment system that enables reuse of the electrolyte and keeps the electrolyte quality constant is essential. In particular, in the ECM of alloys containing a certain level of chromium, the chromium is very likely to dissolve into the toxic carcinogen Cr(VI). Therefore, an electrolyte filtration system is required to remove not only the sludge but also residual toxic ions in the electrolyte, for health and environmental reasons. In this study, activated carbon and scrap iron, which are low-cost and easily available materials, were newly utilized to reduce and remove toxic Cr(VI) ions. Experiments clarified that the use of activated carbon has no influence on the machining ability of the NaNO3 aqueous solution serving as the electrolyte. By adjusting the pH of the electrolyte to acidic, activated carbon can remove Cr(VI) from the NaNO3 aqueous solution electrolyte down to a concentration of less than 0.1 mg/L. On the other hand, scrap iron generated from metal cutting processes can be used to reduce Cr(VI) to non-toxic Cr(III).
By mixing HNO3 into the electrolyte solution, the efficiency of Cr(VI) reduction by scrap iron improves significantly. (C) 2017 Elsevier Inc. All rights reserved. In this research work, microchannels were fabricated by multi-pass CO2 laser processing on poly-methyl methacrylate (PMMA) substrates. CO2 laser engraving machines are cost-effective and less time-consuming compared to other tools and methods of fabricating microchannels on PMMA. However, the basic problem of the low surface finish of the microchannel walls still restricts products fabricated this way from many potential applications. In this work, experimental and theoretical investigations of multi-pass CO2 laser processing on PMMA were conducted. A number of experiments were performed to establish the relationships between laser power and scanning speed and microchannel parameters such as width, depth, heat-affected zone, surface roughness, and surface profile. Experiments were conducted at four different power settings with a constant scanning speed of 50 mm/s and up to seven passes in each setting. Changes in the thermo-physical properties of PMMA were observed for the as-received PMMA and for PMMA residing in the heat-affected zone (HAZ), corresponding to the first pass and subsequent passes, respectively. The effect of different numbers of passes on microchannel width, depth, HAZ, and surface roughness was explored for the different power settings. Microchannel profiles resulting from different numbers of passes were compared. Energy-dispersive X-ray analysis was performed to determine the elemental composition after each pass. Several advantages of multi-pass processing over single-pass processing were recorded, including a higher aspect ratio, a smaller heat-affected zone, smoother microchannel walls, and reduced tapering of the microchannels. A simple analytical model based on an energy balance was developed for predicting microchannel profiles on PMMA substrates in multi-pass processing and was validated with experimental results.
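An energy-balance channel-profile model of the general kind mentioned above can be sketched as follows. It assumes all absorbed Gaussian-beam energy goes into vaporizing PMMA and that passes add linearly; both assumptions, and all material constants in the example, are illustrative and not the paper's exact model:

```python
import math

def channel_depth_profile(x, power, speed, waist, rho, h_vap, n_passes=1):
    """Energy-balance estimate of a laser-machined channel cross-section.

    x       : lateral distance from the channel centre (m)
    power   : laser power (W); speed : scan speed (m/s); waist : beam waist (m)
    rho     : material density (kg/m^3); h_vap : vaporisation enthalpy (J/kg)
    Depth per pass at the centre ~ sqrt(2/pi) * P / (rho * h_vap * v * w),
    scaled laterally by the Gaussian beam profile.
    """
    d0 = math.sqrt(2.0 / math.pi) * power / (rho * h_vap * speed * waist)
    return n_passes * d0 * math.exp(-2.0 * x ** 2 / waist ** 2)

# Illustrative PMMA-like constants: rho = 1180 kg/m^3, h_vap = 1.5 MJ/kg,
# 20 W beam, 50 mm/s scan speed, 150 um waist, three passes.
depth_center = channel_depth_profile(0.0, 20.0, 0.05, 150e-6,
                                     1180.0, 1.5e6, n_passes=3)
depth_edge = channel_depth_profile(150e-6, 20.0, 0.05, 150e-6,
                                   1180.0, 1.5e6, n_passes=3)
```

In practice the per-pass depth changes once the beam works inside the HAZ material, which is why the study tracks the thermo-physical property changes between the first and subsequent passes.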
Multi-pass processing was found to be a time- and cost-effective method for producing smooth microchannels on PMMA. (C) 2017 Elsevier Inc. All rights reserved. This study introduces a new technique of depositing photocatalytic TiO2 thin layers on a nanodiamond (ND) abrasive to polish a single-crystal 6H-SiC wafer with sol-gel tools. A controllable and simple method is utilized to prepare ND/TiO2 composite abrasives by isothermal hydrolysis, without the conventional post-heating treatment for crystallization. The composites are characterized by X-ray diffraction and transmission electron microscopy. The TiO2 layer covers the ND surface completely and continuously, and anatase TiO2 can be synthesized directly in this manner. An electrochemical experiment is performed to oxidize NaNO2 utilizing an ND/TiO2 electrode. The results indicate that ND/TiO2 with a hydrolysis reaction time of 18 h exhibits the best oxidative activity. The generation of hydroxyl radicals (•OH) from the ND/TiO2 composite increases upon exposure to UV irradiation. A comparative polishing experiment revealed that an ultra-smooth surface and a higher MRR were achieved when applying the ND/TiO2 composite abrasives, superior to pristine diamond abrasives. This is because the SiC is oxidized by •OH, forming a relatively soft SiO2 layer. (C) 2017 Elsevier Inc. All rights reserved. The objective of the research presented in this paper is the selection of a suitable probe radius correction method for coordinate measurements of turbine blades. The investigation is based on a theoretical analysis of geometric data and on computer simulation of measurements and data processing. In the paper, two methods for computing the coordinates of corrected measured points are verified. These so-called local methods of probe radius correction are based on Bezier curves.
They are dedicated above all to coordinate measurements of free-form surfaces characterized by large curvature, e.g., those surrounding the leading and trailing edges of a turbine blade. Numerical simulations are performed for several models of turbine blade cross-sections with diverse radii of curvature. Both manufacturing deviations and the coordinate measurement uncertainty of each examined turbine blade profile are considered. Numerical investigations based on the developed analytical models show the advantage of the algorithm that uses second-degree Bezier curves for probe radius correction. Moreover, the paper explains the implementation of the developed probe radius correction algorithms in standard CMM software, which broadens their usability. (C) 2017 Elsevier Inc. All rights reserved. This paper presents auto-tracking single point diamond cutting, which can perform precision cutting on non-planar brittle material substrates without prior knowledge of their surface forms, by utilizing a force-controlled fast tool servo (FTS). Unlike traditional force-feedback-controlled machining based on a cantilever mechanism, such as an atomic force microscope (AFM), which suffers from low rigidity and a limited machining area, the force-controlled FTS utilizes a highly rigid piezoelectric force sensor integrated with the tool holder of the FTS system to provide sufficient stiffness and robustness for force-controlled cutting of brittle materials. The system can also be integrated with machine tools to address the difficulties in cutting large-area non-planar brittle materials, which requires not only high machining efficiency but also high stiffness. An experimental setup is developed by integrating the force-controlled FTS into a four-axis ultra-precision diamond turning machine.
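The core of a local, second-degree Bezier probe radius correction like the one favoured above is to fit a quadratic Bezier through neighbouring measured ball-centre points and offset each point by the probe radius along the local curve normal. The sketch below illustrates only that geometric step; the control points and the chosen offset side are illustrative assumptions:

```python
import math

def bezier2(p0, p1, p2, t):
    """Point on a quadratic (second-degree) Bezier curve at parameter t."""
    u = 1.0 - t
    return (u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0],
            u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1])

def corrected_point(p0, p1, p2, t, probe_r):
    """Offset a point on the Bezier fit by the probe radius along the
    local normal (illustrative sketch of local probe radius correction).
    """
    u = 1.0 - t
    # The derivative of the quadratic Bezier gives the local tangent.
    tx = 2 * u * (p1[0] - p0[0]) + 2 * t * (p2[0] - p1[0])
    ty = 2 * u * (p1[1] - p0[1]) + 2 * t * (p2[1] - p1[1])
    norm = math.hypot(tx, ty)
    nx, ny = -ty / norm, tx / norm  # unit normal, left of the tangent
    px, py = bezier2(p0, p1, p2, t)
    return (px + probe_r * nx, py + probe_r * ny)

# Fit through three measured ball-centre points, then shift by r = 0.5.
p = corrected_point((0, 0), (1, 1), (2, 0), 0.5, 0.5)
```

Because the fit is local, high-curvature regions such as leading and trailing edges get a normal direction that tracks the actual surface rather than a chordal approximation.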
To verify the feasibility and effectiveness of the proposed cutting strategy and system, auto-tracking diamond cutting of micro-grooves is conducted on an inclined silicon substrate and a convex BK7 glass lens, realizing constant depths of cut under controlled thrust forces. (C) 2017 Elsevier Inc. All rights reserved. Porous silicon is receiving increasing interest from a wide range of scientific and technological fields due to its excellent material properties. In this study, we attempted ultraprecision surface flattening of porous silicon by diamond turning and investigated the fundamental material removal mechanism. Scanning electron microscopy and laser Raman spectroscopy of the machined surface showed that the mechanisms of material deformation and phase transformation around the pores were greatly different from those of bulk single-crystal silicon. The cutting mechanism was strongly dependent on the cutting direction with respect to the pore edge orientation. Crack propagation was dominant near specific pore edges due to the release of the hydrostatic pressure that is essential for ductile machining. Wax was used as an infiltrant to coat the workpiece before machining, and it was found that the wax not only prevented chips from entering the pores but also helped suppress brittle fractures around the pores. The machined surface showed nanometric flatness with open pores, demonstrating the possibility of fabricating high-precision porous silicon components by diamond turning. (C) 2017 Elsevier Inc. All rights reserved. This study investigated the influence of a rotary axis of a 5-axis machine tool on the tool-workpiece compliance. The evaluation focused on the influence of the rotation angle and clamping condition of the B axis on the compliance variation. A method was established to calculate the tool-workpiece compliance in an arbitrary direction from compliances measured using orthogonal triaxial excitations.
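One plausible reading of the arbitrary-direction calculation above is a quadratic-form projection: assemble the triaxial measurements into a 3x3 compliance matrix G and evaluate c(u) = u^T G u for a unit direction u. Both this reading and the matrix values below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def directional_compliance(G, u):
    """Compliance in an arbitrary unit direction u from a 3x3 compliance
    matrix G built from orthogonal triaxial excitation measurements:
    c(u) = u^T G u (illustrative projection).
    """
    v = [sum(G[i][j] * u[j] for j in range(3)) for i in range(3)]
    return sum(u[i] * v[i] for i in range(3))

# Symmetric compliance matrix (um/N, invented values), stiffer along z.
G = [[0.80, 0.05, 0.00],
     [0.05, 0.60, 0.02],
     [0.00, 0.02, 0.30]]

c_x = directional_compliance(G, [1.0, 0.0, 0.0])   # pure x direction
s = math.sqrt(0.5)
c_xy = directional_compliance(G, [s, s, 0.0])      # 45 deg in the xy plane
```

Sweeping u over a hemisphere of directions and plotting c(u) is what produces a compliance map like the one described next.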
Then, the tool-workpiece compliance of a 5-axis machine tool was evaluated and displayed as a color map. The compliance map showed that the magnitude of the compliance varied by up to 60% with changes in the B axis rotation angle and its clamping condition. A drastic change in the negative real part of the compliance was also detected in the compliance map. The results of an experimental modal analysis are used to discuss the cause of the compliance variation. The bending mode of the B axis is important because the change in the bending direction due to B axis rotation has a great influence on the direction dependency of the compliance magnitude and the stability limit. A cutting experiment was conducted to verify the correspondence between the evaluated compliance and the vibration amplitude in a real cutting process. The compliance variation in the compliance map corresponded to the amplitude variation of the vibration in an end milling process. The compliance map revealed that the vibration synchronized with the cutter passing cycle was decreased by 80% by unclamping the B axis. (C) 2017 Elsevier Inc. All rights reserved. This article focuses on the finite element modeling of burr formation in high-speed micromilling of Ti6Al4V. Studies show that the burr produced on the up-milling side at the exit of the micromilling tool is the largest among the burrs at all locations. Therefore, the side exit burr on the up-milling side has been modeled through finite element modeling. The Johnson-Cook material constitutive model has been implemented in the formulation of burr formation. Experimental work was performed to validate the developed model; the burr height and width obtained from the simulation were validated experimentally with a maximum error of 15%. The literature review indicated that cutting speed is a key factor influencing burr formation.
Therefore, the model was further extended to study the effect of cutting speed on burr size. A maximum tool rotation speed of 200,000 rpm was considered with a tool diameter of 500 μm. The simulation predicts that the burr size is reduced by 96% (in both height and width) when the cutting tool speed is increased from 10,000 to 200,000 rpm. Therefore, it is concluded that cutting speed is the major factor for reducing burr size in micromilling of Ti6Al4V. This study shows that a high-speed micromachining center can help produce micro parts with few or no burrs. It is expected that further extension of the burr formation model can minimize the burr size to zero or near-zero. (C) 2017 Elsevier Inc. All rights reserved. We have investigated the cutting forces, tool wear, and surface finish obtained in high-speed diamond turning and milling of OFHC copper, brass CuZn39Pb3, aluminum AlMg5, and electroless nickel. In face turning experiments with constant material removal rate, the cutting forces were recorded as a function of cutting speed between v_c = 150 m/min and 4500 m/min, revealing a transition to adiabatic shearing that is supported by FEM simulations of the cutting process. Fly-cutting experiments carried out at low (v_c = 380 m/min) and high cutting speed (v_c = 3800 m/min) showed that the rate of abrasive wear of the cutting edge is significantly higher at ordinary cutting speed than at high cutting speed, in contrast to experience in conventional machining. Furthermore, it was found that the rate of chemically induced tool wear in diamond milling of steel decreases with decreasing tool engagement time per revolution. High-speed diamond machining may also yield improved surface roughness, which was confirmed by comparing the step heights at grain boundaries obtained in diamond milling of OFHC copper and brass CuZn39Pb3 at low (v_c = 100 m/min) and high cutting speed (v_c = 2000 m/min).
Thus, high-speed diamond machining offers several advantages, not to mention a major reduction in machining time. (C) 2017 Elsevier Inc. All rights reserved. Built-up edge (BUE) is generally known to cause surface finish problems in the micro milling process. Loose particles from the BUE may be deposited on the machined surface, increasing the surface roughness. On the other hand, a stable BUE may protect the tool from the rapid tool wear that hinders the productivity of the micro milling process. Despite its common presence in practice, the influence of BUE on the process outputs of micro milling has not been studied in detail. This paper experimentally investigates the relationship between BUE formation and process outputs in micro milling of the titanium alloy Ti6Al4V. The micro end mills used in this study were fabricated with a single straight edge using wire electrical discharge machining. An initial experimental effort was conducted to study the relationship between micro cutting tool geometry, surface roughness, and micro milling process forces, and hence the conditions for forming a stable BUE on the tool tip were identified. The influence of micro milling process conditions on BUE size, and their combined effect on forces, surface roughness, and burr formation, is investigated. A long-term micro milling experiment was performed to observe the protective effect of BUE on tool life. The results show that tailored micro cutting tools with a stable BUE can be designed to machine titanium alloys with long tool life and acceptable surface quality. (C) 2017 Elsevier Inc. All rights reserved. Systematic errors of kinematic touch-trigger probes for CNC machine tools may exceed the errors of the machine tool itself. As a result, the machining accuracy depends strongly on the probe's accuracy. Numerical correction of the probes' systematic errors can be applied; however, it requires the CNC machine tool controller to execute the calculations.
To avoid this troublesome requirement, a new error compensation method is proposed. In this approach, the probe's pre-travel in a given direction is modified by adjusting the measurement speed in that direction. Because all measurement speeds can be calculated offline, the controller does not have to perform any calculations. The proposed method was tested on sample kinematic probes, and the error reduction was at least 10-fold. (C) 2017 Elsevier Inc. All rights reserved. This paper presents a contactless system for automatic gauge block calibration based on a combination of laser interferometry and low-coherence interferometry. In the presented system, the contactless measurement of the absolute gauge block length is performed as a single-step operation without any change in the optical setup during the measurement. The optical setup is combined with a compact gauge block changer with a capacity of 126 gauge blocks, which makes the resulting system fully automatic. The paper also presents in detail the set of optimization steps carried out to transform the original experimental setup into an automatic system that meets secondary length metrology requirements. To prove measurement traceability, we conducted a set of gauge block length measurements comparing data from the optimized system with the established reference systems TESA NPL A.G.I. 300 and TESA-UPC operated in the Czech Metrology Institute laboratory. (C) 2017 Elsevier Inc. All rights reserved. Abrasive slurry jet micro-machining (ASJM) was used to machine channels in glass, PMMA, zirconium tin titanate, and aluminum nitride. The channel roughness was measured as a function of the ASJM process parameters: particle size, dose, impact velocity, and impact angle. The steady-state roughness of the channels was reached relatively quickly at typical ASJM abrasive flow rates.
The roughness of channels with depth-to-width aspect ratios up to about 0.25 could be reduced by approximately 35% relative to the roughest channel by decreasing the particle impact velocity and angle. However, machining at such conditions reduced the specific erosion rate by 64% on average. It was therefore quicker to post-blast reference channels (225 nm average root-mean-square (R_rms) roughness) machined using process parameters selected for peak removal. It was also found that the roughness of reference channels could be reduced by about 78% by post-blasting with 3 μm diameter silicon carbide particles at 15° jet incidence. The smoothest post-blasted channels had an R_rms roughness of about 23 nm in glass, PMMA, and zirconium tin titanate, and 170 nm in aluminum nitride. Computational fluid dynamics was used to predict the particle impact conditions, which were then used in a model to predict the steady-state roughness due to ductile erosion with an average error of 12%. (C) 2017 Elsevier Inc. All rights reserved. A measuring method was studied to further improve the manufacturing accuracy of the lead screw. First, factors that can axially displace the screw shaft were analyzed and the relationship between these factors and the axial displacement was derived. The axial displacement of the screw shaft was measured using a laser interferometer; the results show that the variation amplitude of the screw-shaft axial displacement was about 80 nm, while the variation period was consistent with the rotation period of the screw shaft. Next, two methods of measuring the manufacturing accuracy of the lead screw were considered: a traditional method that measures the absolute position of the nut, and an improved method that measures the displacement between the nut and the screw shaft. A No. 4 screw shaft was manufactured under the guidance of results obtained with the improved measuring method.
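The advantage of the improved method can be sketched numerically: measuring the nut relative to the shaft cancels the shaft's own axial motion, which contaminates an absolute nut measurement. A minimal illustration (the ~80 nm shaft motion follows the abstract; the pitch, error profile, and all other values are invented for the sketch):

```python
import numpy as np

# Illustrative model of nut position along the screw over ten rotations.
revs = np.linspace(0.0, 10.0, 1000)              # shaft rotations
pitch = 1.0e-3                                   # assumed nominal pitch: 1 mm/rev
true_pitch_error = 0.2e-6 * np.sin(0.3 * revs)   # hypothetical manufacturing error (m)
shaft_axial = 40e-9 * np.sin(2 * np.pi * revs)   # ~80 nm peak-to-peak shaft axial motion

nut_absolute = pitch * revs + true_pitch_error + shaft_axial

# Traditional method: absolute nut position (shaft axial motion contaminates it).
error_traditional = nut_absolute - pitch * revs
# Improved method: nut position relative to the shaft removes the axial motion.
error_improved = (nut_absolute - shaft_axial) - pitch * revs

print(np.max(np.abs(error_traditional - true_pitch_error)))  # residual of tens of nm
print(np.max(np.abs(error_improved - true_pitch_error)))     # residual essentially zero
```

In this toy model the traditional estimate carries the full shaft axial motion as an error, which is consistent with the improved method yielding the smaller pitch displacement error in the study.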
Experimental measurements were then made; the results show that the pitch displacement errors obtained using the traditional and improved measuring methods were 0.6 and 0.4 μm, respectively, indicating that the improved measuring method is more exact. Finally, an echelle grating ruled by a grating ruling engine that used the No. 4 screw shaft as its macro-positioning element was introduced; its excellent parameters indirectly show that the improved measuring method has better accuracy. (C) 2017 Elsevier Inc. All rights reserved. Abrasive jet micro-machining (AJM) uses a high-speed jet of particles to mechanically etch features such as micro-channels into a wide variety of target materials. Since the resulting air-particle jet is divergent, erosion-resistant masks are required for patterning. Because of their ease of application, 50 and 100 μm thick commercially available ultraviolet (UV) light-curing self-adhesive masks are potentially very useful in AJM. However, optimum curing parameters have until now been specified as a curing time for a specific recommended curing unit, making extrapolation to other curing units impossible. Using masks to create straight 250-600 μm wide reference channels in borofloat glass, this paper quantified the optimum curing UV light energy density, and investigated the effect of differing UV exposure units (flat and cylindrical-backed), UV light energy densities, and mask configuration during curing on the pattern transfer accuracy (before AJM) and the eroded micro-channel feature size. As expected, as long as the masks were cured at the same energy density, the pattern transfer accuracy did not depend on the curing unit. The most accurate pattern transfer to the mask film (widths within 5-7% of design) corresponded to energy densities between 516-774 and 387-516 mJ/cm² for the thick (100 μm) and thin (50 μm) masks, respectively.
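Specifying the curing dose as an energy density is what makes it transferable between exposure units: for a lamp of known irradiance, the required time is simply t = E / I. A minimal sketch (the lamp irradiance is an assumed example value, not from the study):

```python
def curing_time_s(energy_density_mJ_cm2, irradiance_mW_cm2):
    """Exposure time needed to deliver a target UV dose.

    dose (mJ/cm^2) = irradiance (mW/cm^2) * time (s)
    """
    return energy_density_mJ_cm2 / irradiance_mW_cm2

# Example: deliver the mid-range dose for the 100 um mask (645 mJ/cm^2)
# with a hypothetical 10 mW/cm^2 lamp.
print(curing_time_s(645.0, 10.0))  # 64.5 s
```

The same dose on a lamp twice as bright simply needs half the time, which is why results quoted only as a curing time for one specific unit cannot be extrapolated.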
Under these conditions and for both exposure units, the average widths of the eroded channels after AJM were found to be within 3-9% of the intended design. Curing the masks outside this range resulted in eroded features that were approximately 15-20% and ~5% larger than intended for the thick and thin masks, respectively. The orientation of the channel patterns with respect to the curing cylinder axis did not affect the pattern transfer. However, compared with the cylinder ends, curing at the midpoint along the cylinder length improved the pattern accuracy by approximately 3%, resulting in eroded features that were 10-20% closer to the design width. Finally, it was found that patterning multiple layers of masks improved the erosion resistance without compromising the feature width, enabling the AJM of higher-aspect-ratio features. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we present a comprehensive approach for accurate measurement of high-bandwidth three-dimensional (3D) micromachining forces through dynamic compensation of dynamometers. Accurate measurement of micromachining forces is paramount to gaining a fundamental understanding of the process mechanics and dynamics of micromachining. Multi-axis dynamometers are used to measure 3D machining forces; however, the specified bandwidth of these devices is below the frequencies arising during micromachining with ultra-high-speed (UHS) spindles. This limitation arises from the effects of the dynamometer's structural dynamics on the measured forces. Therefore, accurate measurement of micromachining forces entails high-frequency correction of the signals acquired by the dynamometer, removing the influence of those effects.
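One common way to implement such a correction (a generic sketch of regularized inverse filtering, not necessarily the authors' exact filter) is to divide the measured spectrum by the identified frequency response function, with regularization so that noise is not amplified where the response is small:

```python
import numpy as np

def compensate(measured, frf, eps=1e-6):
    """Regularized inverse filtering of a distorted force signal.

    measured: time-domain signal distorted by the dynamometer dynamics
    frf: complex frequency response sampled on np.fft.rfftfreq bins
    eps: Tikhonov-style regularization constant
    """
    spectrum = np.fft.rfft(measured)
    # Pseudo-inverse of the FRF: conj(H) / (|H|^2 + eps)
    inverse = np.conj(frf) / (np.abs(frf) ** 2 + eps)
    return np.fft.irfft(spectrum * inverse, n=len(measured))

# Toy check: a hypothetical first-order low-pass FRF applied, then compensated.
n = 1024
t = np.arange(n) / n
true_force = np.sin(2 * np.pi * 50 * t)          # 50 Hz test force
freqs = np.fft.rfftfreq(n, d=1 / n)
frf = 1.0 / (1.0 + 1j * freqs / 30.0)            # assumed dynamometer dynamics
distorted = np.fft.irfft(np.fft.rfft(true_force) * frf, n=n)
recovered = compensate(distorted, frf)
print(np.max(np.abs(recovered - true_force)))    # small residual
```

The regularization constant trades residual distortion against noise amplification near the response roll-off; in practice the FRF would come from an identification experiment such as impact testing.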
The presented approach involves: (1) accurate identification of the 3D force measurement characteristics of the dynamometer within a 25 kHz bandwidth to capture the effects of the dynamometer dynamics; (2) design of a pseudo-inverse filter-based compensation technique to remove the influence of the dynamic response in 3D; and (3) validation of the compensation technique through custom-devised experiments. Subsequently, the compensation method is applied to the micromilling process to obtain accurate broadband 3D micromachining forces using a miniature multi-axis dynamometer. It is concluded that the presented approach enables accurate determination of 3D micromachining forces. The presented compensation technique is also readily applicable for expanding the bandwidth of large dynamometers. (C) 2017 Elsevier Inc. All rights reserved. Accurate identification and measurement of internal voids and porosity is an important step towards improving production processes to obtain high-quality materials and products. Recently, knowing the exact size, shape, volume, and location of defects has become even more important as tighter requirements and new standards have been introduced in industry. There are several well-established methods for defect evaluation based on various principles, both destructive and non-destructive. However, all conventional methods have deficiencies, and the information about internal voids and porosity that can be extracted is limited. Most of these drawbacks can be overcome by using X-ray computed tomography (CT). Unlike other methods, CT provides full three-dimensional information about the shape, size, and distribution of internal voids and porosity; however, the accuracy of such measurements is still under investigation. Hence, further evaluations of CT porosity measurements must be performed before X-ray computed tomography can be considered a reliable instrument for the assessment and detection of internal defects.
A reference object with artificial defects was used in this research work to evaluate the accuracy of porosity measurements by CT. The reference object was manufactured by ultra-precision micro milling. The object contains dismountable components with embedded internal hemispherical features that simulate internal porosity. The artificial porosity was micro-milled on the top surfaces of dismountable cylindrical inserts. The hemispherical calottes were then calibrated by traceable coordinate measuring systems, and the calibrated values were compared with the values measured by a CT system. The accuracy of CT porosity measurements was then evaluated based on results obtained for various measurands, using different software tools and measuring procedures, comparing real scans with numerical simulations, and investigating the influence of CT system parameter settings on the measurement results. (C) 2017 Elsevier Inc. All rights reserved. Low stiffness limits the application of industrial robots in the field of precision manufacturing. This paper focuses primarily on the stiffness properties of drilling robots by further studying the stiffness ellipsoid model. A Cartesian compliance model is proposed to describe the robot stiffness in Cartesian space. Based on the compliance model, a quantitative evaluation index of the robot's processing performance is defined. By choosing a proper drilling posture, the performance index in the cutting tool direction is optimized, so that higher accuracy of the countersink depth and the hole axial direction can be guaranteed. From the perspective of the robot processing mechanism, the key role of the pre-load pressing force is first indicated. By applying a pre-load pressing force, the performance index on the machining plane is enhanced, and hole diameter accuracy is improved significantly. A stiffness-improving factor used to evaluate the degree of stiffness promotion is also proposed.
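A common way to form this kind of Cartesian compliance model (a standard textbook construction, assumed here rather than taken from the paper) maps joint stiffness into Cartesian space through the manipulator Jacobian, C = J K⁻¹ Jᵀ; a directional performance index is then the compliance uᵀ C u seen along a unit direction u such as the tool axis:

```python
import numpy as np

def cartesian_compliance(J, joint_stiffness):
    """Cartesian compliance C = J K^-1 J^T from a joint stiffness diagonal."""
    K_inv = np.diag(1.0 / np.asarray(joint_stiffness))
    return J @ K_inv @ J.T

def directional_compliance(C, direction):
    """Compliance seen along a unit direction (e.g. the drilling axis)."""
    u = np.asarray(direction, dtype=float)
    u = u / np.linalg.norm(u)
    return u @ C @ u

# Hypothetical 2-joint planar arm: Jacobian (m/rad) and joint stiffnesses (N*m/rad).
J = np.array([[-0.5, -0.2],
              [ 0.8,  0.3]])
C = cartesian_compliance(J, [2.0e6, 1.5e6])
print(directional_compliance(C, [0.0, 1.0]))  # compliance along y (m/N)
```

Choosing a drilling posture then amounts to picking the joint configuration (hence the Jacobian) that minimizes this index along the tool direction.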
Finally, experiments were conducted to verify the correctness of the proposed model. Drilling experiments were performed to investigate the effectiveness of the methods for improving the robot processing performance index. A principle for selecting the pressing force in engineering applications is given based on the processing parameters. (C) 2017 Elsevier Inc. All rights reserved. Miniature positioning devices with large-stroke motion have attracted more and more attention in recent years because of intensive developments in precision engineering. In this paper, we achieve large-stroke actuation and high-precision positioning, and realize multi-degree-of-freedom in-plane motion, using the developed Galfenol impact drive mechanism (IDM) actuator. To enhance system robustness, two pieces of U-shaped Galfenol (iron-gallium alloy) are employed as the driving elements, with a bias magnetic field provided by a permanent magnet, to generate the swing motion that amplifies the propelling inertial force. Current amplitude modulation is applied for precision positioning of the actuator under quasi-static conditions because of the fineness of the motion step size. The results show that the actuator achieves sub-micrometer positioning accuracy, reaching the measurement limit of our setup. Meanwhile, the frequency modulation method is explored for large-stroke actuation at high motion speed. We found that the design is capable of accurate positioning without frequency modulation because of the intrinsically fine step size of the actuator. In addition, a rectangular in-plane motion is realized with image-based control for multi-degree-of-freedom positioning. The actuator has an inductive impedance with a resistance of 3.796 Ω and an inductance of 0.4697 mH.
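Given the coil values just quoted, the split between resistive and reactive behavior at a given drive frequency is easy to estimate from the impedance R + jωL (the 100 Hz drive frequency below is an assumed example, not from the study):

```python
import math

R = 3.796        # coil resistance (ohm), from the abstract
L = 0.4697e-3    # coil inductance (H), from the abstract
f = 100.0        # assumed drive frequency (Hz)

X = 2 * math.pi * f * L          # inductive reactance (ohm)
Z = math.hypot(R, X)             # impedance magnitude (ohm)
phase = math.degrees(math.atan2(X, R))

print(X)         # reactance, well below R at this frequency
print(phase)     # small phase angle, so power factor is close to 1
```

With the reactance an order of magnitude below the resistance, the reactive power is a small fraction of the dissipated power; the reactance grows linearly with frequency, so this estimate should be redone for faster drive waveforms.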
Under the present driving ratings, the power consumption is smaller than 1.97 W, and the reactive power can be ignored. Moreover, experimental load analysis indicates that the design achieves a maximum carry-load-to-weight ratio of 6.5. (C) 2017 Elsevier Inc. All rights reserved. The purpose of this study is to synthesize diamond on Si, Cu, and Fe (SUS632J2) substrates and to analyze the effect of carbon diffusion at their surfaces. Diamond was synthesized using in-liquid microwave plasma chemical vapor deposition (IL-MPCVD), a novel method for synthesizing diamond on various base materials. The IL-MPCVD method is superior in terms of cost, space, and speed compared with conventional gas-phase microwave plasma CVD (MPCVD). Microwaves at 2.45 GHz generated plasma in a solution of methanol and ethanol (M:E = 97:3). The deposited diamond films were evaluated by scanning electron microscopy (SEM) and Raman spectroscopy. The results show that the IL-MPCVD method can form diamond films on Cu, Si, and Fe substrates. The minimum film formation times for Cu, Si, and Fe are 2.5, 3.5, and 5 min, respectively. A material that forms carbide layers, such as Si, is a better substrate for forming diamond film by IL-MPCVD than metal substrates such as Cu and Fe. Synthesizing diamond directly on the Fe substrate results in poor-quality layers, as carbon diffusion influences diamond film nucleation and growth. To alleviate the carbon diffusion and improve the quality of the diamond film on the Fe substrate, Si was sputtered onto the Fe substrate as an interlayer. It is found that a diamond film can be formed on an Fe substrate using a Si interlayer, and that heat treatment and thickening of the interlayer improve its quality. (C) 2017 Elsevier Inc. All rights reserved. Brittle material removal fraction (BRF) is defined as the area fraction of brittle material removed on the machined surface.
In the present study, a novel theoretical model of BRF is proposed based on the indentation profile caused by the intersection of lateral cracks. The proposed model relates BRF to the surface roughness and the subsurface damage (SSD) depth of optical glass during precision grinding. To investigate the indentation profile, indentation tests of K9 optical glass were conducted using single random-shaped diamond grains. The experimental results indicate that the indentation profile follows an exponential function. To verify the proposed BRF model, the BRF, surface roughness, and SSD depth of K9 optical glasses were investigated in a series of grinding experiments with different cutting depths. The experimental results show that BRF depends on surface roughness and SSD depth, and the relationship between BRF, surface roughness, and SSD depth is in good accordance with the proposed theoretical model. The proposed BRF model is thus a reasonable approach for estimating surface roughness and SSD depth during precision grinding of optical glass. (C) 2017 Elsevier Inc. All rights reserved. Porous copper surfaces show great merit in applications such as chemical reaction, sound absorption, and heat transfer. In this study, a laser micromilling method is proposed to fabricate porous surfaces with homogeneous micro-holes and cavities of about 1-15 μm on pure copper plates in a one-step process. The laser micromilling was performed by a pulsed fiber laser using a multiple-pass reciprocating scanning strategy. Based on measurements with a scanning electron microscope (SEM) and a 3D laser scanning confocal microscope, the formation of the surface structures was investigated together with the laser ablation mechanisms. The effects of the laser processing parameters, i.e., laser fluence, scanning speed, number of scanning cycles, and scanning interval, on the formation and surface morphology of the porous surfaces were systematically assessed.
Furthermore, the wettability of the porous copper surfaces was evaluated by measuring the static contact angle of water. The results showed that the laser fluence played the most significant role in the formation of the porous copper surfaces. The average depth and surface roughness of the porous copper surfaces increased with increasing laser fluence and number of scanning cycles, and decreased with increasing scanning interval. The scanning speed had little influence on the formation of the porous copper surfaces. These results can be closely related to the variation of energy density and the re-melting process during laser micromilling. Moreover, all the porous copper surfaces were found to be hydrophobic. The contact angle of the porous copper surfaces depended significantly on laser fluence, but was only weakly affected by the scanning speed and number of scanning cycles. (C) 2017 Elsevier Inc. All rights reserved. Recent advances in versatile automated gauging have enabled accurate geometric tolerance assessment on the shop floor. This paper is concerned with the evaluation of the uncertainty associated with comparative coordinate measurement using the design of experiments (DOE) approach. It employs the Renishaw Equator, a software-driven comparative gauge based on the traditional comparison of production parts with a reference master part. The fixturing requirement of each production part relative to the master part is approximately 1 mm for a comparison process with an uncertainty of 2 μm. A number of experimental designs are therefore applied, with the main focus on the influence of part misalignment, i.e., rotation between the master and measured coordinate frames, on the comparator measurement uncertainty.
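The comparator principle behind such gauges can be sketched simply: the gauge never measures absolutely; each production part is measured relative to a master whose features were calibrated on a traceable CMM, so gauge bias common to both readings cancels (a generic sketch of comparator gauging, not Renishaw's implementation):

```python
def comparator_measurement(master_calibrated, master_on_gauge, part_on_gauge):
    """Comparator gauging: calibrated master value plus the gauge's observed
    difference between part and master. Bias common to both readings cancels."""
    return master_calibrated + (part_on_gauge - master_on_gauge)

# Toy example: a diameter feature with a hypothetical constant 3 um gauge bias.
bias = 0.003
master_cmm = 25.000              # traceable CMM value for the master (mm)
master_gauge = 25.001 + bias     # gauge reading of the master
part_gauge = 25.004 + bias       # gauge reading of the part
print(round(comparator_measurement(master_cmm, master_gauge, part_gauge), 3))  # 25.003
```

The cancellation only holds while the gauge bias stays common to both readings, which is why part misalignment relative to the master, the focus of the study, degrades the comparison.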
Other factors considered include the measurement mode, mainly scanning versus touch-trigger probing (TTP), and the alignment procedure used to establish the coordinate reference frame (CRF), with respect to the number of contact points used for each geometric feature measured. The measurement uncertainty analysis of the comparator technique used by the Equator gauge begins with a simple measurement task using a gauge block, evaluating the three-dimensional (3D) uncertainty of comparative length measurement influenced by an offset produced by tilt in one direction (two-dimensional angular misalignment). A specifically manufactured measurement object is then employed so that the comparator measurement uncertainty can be assessed for numerous measurement tasks within a satisfactory range of the working volume of the versatile gauge. Furthermore, in this second case study, different types of part misalignment, including both 2D and 3D angular misalignments, are applied. The time required to manage the re-mastering process is also examined. A task-specific uncertainty evaluation is completed using DOE, and the effects of process variations that such a device might experience in workshop environments are investigated. It is shown that the comparator measurement uncertainties obtained in all the experiments agree with the system specifications under the specified conditions. It is also demonstrated that when the specified conditions are exceeded, the comparator measurement uncertainty depends on the measurement task, the measurement strategy used, the feature size, and the magnitude and direction of the offset angles in relation to the reference axes of the machine. In particular, departures from the specified part fixturing requirement of the Equator have a more significant effect on the uncertainty of length measurement in comparator mode and a less significant effect on the diameter measurement uncertainty, for the specific Equator and test conditions.
Crown Copyright (C) 2017 Published by Elsevier Inc. The recently published standard ISO 25178-2 distinguishes between field parameters and feature parameters for surface texture characterisation. The main difference between the two types is that parameters of the first group are deduced from all points of a scale-limited surface, while parameters of the second group are deduced only from a subset of pre-defined topological surface features. As specified in ISO 25178-2, two prerequisites are indispensable for determining the feature parameters, viz., an adequate data structure for surface characterisation and a suitable formal method for surface generalisation, i.e., for the successive elimination of the less important surface features. Within ISO 25178-2, change trees are proposed for describing the surface topography, while Wolf pruning is suggested for surface simplification (cf. also ISO 16610-85). Apart from the techniques specified in ISO 25178-2 and ISO 16610-85, the present paper describes a second geometrical-topological approach for the characterisation and generalisation of surfaces that has its origin in the geosciences and is based on weighted surface networks and w-contractions. In addition, it is shown how the two approaches, both of which have their foundations in graph theory, are interrelated and how, from a historical point of view, the GIScience approach forms the basis of the one applied within surface metrology. Finally, some applications within precision engineering are described. (C) 2016 Elsevier Inc. All rights reserved. Using the theory of analytic functions of several complex variables, we prove that if an analytic function in several variables satisfies a system of partial differential equations, then it can be expanded in terms of products of the bivariate Hermite polynomials.
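For orientation, the classical single-variable Mehler formula, one of the identities that such Hermite expansion theorems recover, reads (standard form for the physicists' Hermite polynomials H_n, quoted here as background rather than from the paper):

```latex
\sum_{n=0}^{\infty} \frac{H_n(x)\,H_n(y)}{n!}\left(\frac{t}{2}\right)^{n}
  = \frac{1}{\sqrt{1-t^{2}}}
    \exp\!\left(\frac{2xyt-\left(x^{2}+y^{2}\right)t^{2}}{1-t^{2}}\right),
  \qquad |t|<1 .
```

At t = 0 both sides reduce to 1, and the bivariate identities mentioned below generalize this kernel to Hermite polynomials in two variables.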
This expansion theorem allows us to develop a systematic method for proving identities involving the bivariate Hermite polynomials. With it we can easily derive, among others, the Mehler formula, Nielsen's formulas, Doetsch's formula, the addition formula, Weisner's formulas, and Carlitz's formulas for the bivariate Hermite polynomials. (C) 2017 Elsevier Inc. All rights reserved. We consider the generalized Burgers equation in a class of Gevrey functions G^{σ,δ,s}(T). We show that the generalized Burgers equation is well-posed in this space. Furthermore, we show that the solution is Gevrey-σ in the spatial variable and Gevrey-2σ in the time variable. (C) 2017 Elsevier Inc. All rights reserved. This paper is motivated by a long-standing conjecture of Dinculeanu from 1967. Let X and Y be Banach spaces and let Ω be a compact Hausdorff space. Dinculeanu conjectured that there exist operators S ∈ L(C(Ω), L(X,Y)) which are not associated with any U ∈ L(C(Ω, X), Y). We study this existence problem systematically on three possible levels of generality: the classical case C(Ω, X) of continuous vector-valued functions, p-continuous vector-valued functions, and tensor products. On each level, we establish necessary and sufficient conditions for an L(X,Y)-valued operator to be associated with a Y-valued operator. Among other things, we see that examples proving Dinculeanu's conjecture arise on all three levels of generality. (C) 2017 Elsevier Inc. All rights reserved. Consider a complex normed linear space (X, ‖·‖), and let x, y ∈ X with y ≠ 0. Motivated by recent work on rectangular matrices and on normed linear spaces, we study the Birkhoff-James ε-orthogonality set of x with respect to y, give an alternative definition for this set, and explore its rich structure.
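For context, the underlying notion (in Chmieliński's widely used form, quoted here as background rather than from the paper): an element x is Birkhoff-James ε-orthogonal to y when

```latex
\|x+\lambda y\|^{2} \;\ge\; \|x\|^{2} - 2\,\varepsilon\,\|x\|\,\|\lambda y\|
\qquad \text{for all scalars } \lambda ,
```

which reduces at ε = 0 to classical Birkhoff-James orthogonality, ‖x + λy‖ ≥ ‖x‖ for all scalars λ; the ε-orthogonality set collects the scalars witnessing such relations.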
We also introduce the Birkhoff-James ε-orthogonality set of polynomials in one complex variable whose coefficients are members of X, and we survey and record extensions of results on matrix polynomials to these vector-valued polynomials. (C) 2017 Elsevier Inc. All rights reserved. In this paper, by Karamata regular variation theory and the construction of super- and subsolutions, we obtain the asymptotic boundary behavior of convex solutions of Monge-Ampère equations with singular right-hand sides. (C) 2017 Elsevier Inc. All rights reserved. The explosive solutions of a class of stochastic differential equations driven by Lévy processes are considered via the Lyapunov method, and the way the jump noise affects explosion is studied. (C) 2017 Elsevier Inc. All rights reserved. In this paper we consider continued fraction (CF) expansions on intervals different from [0,1]. For every x in such an interval we find a CF expansion with a finite number of possible digits. Using the natural extension, the density of the invariant measure is obtained in a number of examples. In cases where this method does not work, a Gauss-Kuzmin-Lévy-based approximation method is used. Convergence of this method follows from [32], but the speed of convergence remains unknown. For many known densities the method gives a very good approximation in a low number of iterations. Finally, a subfamily of the N-expansions is studied. In particular, the entropy as a function of a parameter α is estimated for N = 2 and N = 36. Interesting behavior can be observed in the numerical results. (C) 2017 Elsevier Inc. All rights reserved. We consider an infinite-dimensional optimization problem motivated by mathematical economics. Within the celebrated "Arbitrage Pricing Model", we use probabilistic and functional-analytic techniques to show the existence of optimal strategies for investors who maximize their expected utility. (C) 2017 Elsevier Inc. All rights reserved.
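In generic form (a standard formulation assumed for illustration, not the paper's precise setting), such a problem asks for a strategy attaining

```latex
u(x) \;=\; \sup_{H \in \mathcal{A}(x)} \mathbb{E}\!\left[\,U\!\big(x + (H \cdot S)_T\big)\right],
```

where U is a concave utility function, S the price process, (H·S)_T the terminal gain of trading strategy H, and 𝒜(x) the admissible strategies with initial capital x; the nontrivial point is that the supremum is attained in an infinite-dimensional space of strategies.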
In the present paper, we consider the following Kirchhoff-type problem with critical exponent: −(ε²a + εb ∫_{ℝ³} |∇u|² dx) Δu + V(x)u = f(u) in ℝ³, u ∈ H¹(ℝ³). By variational methods, we show the existence of positive solutions concentrating around the global minima of the potential V(x) as ε → 0. We do not need the monotonicity of the function ξ ↦ f(ξ)/ξ³. (C) 2017 Elsevier Inc. All rights reserved. The main purpose of this paper is to study two scaling methods developed by Pinchuk and by Frankel, respectively. We first introduce a continuously varying global coordinate system and give an alternative proof of the convergence of Pinchuk's scaling sequence (and of our modification) on bounded domains with finite-type boundaries in ℂ². Using this, we discuss the modification of the Frankel scaling sequence on nonconvex domains. We also observe that the two modified scalings are equivalent. (C) 2017 Elsevier Inc. All rights reserved. This paper demonstrates the advantages of using weighted Sobolev and weighted Lebesgue spaces in optimal control problems defined on an infinite time interval. Through numerous examples, it shows how the introduction of certain weight or density functions into the problem statement may influence the existence of an optimal solution and the optimality of a chosen candidate. The positive effects of a proper relation between the state and co-state functional spaces on the necessary optimality conditions, as well as on the development of numerical schemes, are discussed. This paper is based on over a decade of research in this field and summarizes the main findings concerning the use of weight functions in optimal control problems. (C) 2017 Elsevier Inc. All rights reserved. The full hydrodynamic equations with quantum effects are studied in this paper.
We obtain global solutions and optimal convergence rates by a pure energy method, provided the initial perturbation around a constant state is small enough. In particular, the optimal decay rates of the higher-order spatial derivatives of the solutions are obtained. (C) 2017 Elsevier Inc. All rights reserved. Let X be a Banach space with a 1-unconditional Schauder basis and without isomorphic copies of ℓ¹ ⊕ c₀. We obtain an equivalent condition for weak compactness by means of a fixed-point theorem, namely: a closed convex bounded subset C of X is weakly compact if and only if every cascading nonexpansive mapping T : C → C has a fixed point. We particularize our results to the case where C is the closed unit ball of the Banach space X, obtaining a new characterization of reflexivity. Note that weak compactness is independent of the underlying equivalent norm, and that every Banach space with an unconditional Schauder basis can be renormed so that the basis is 1-unconditional. (C) 2017 Elsevier Inc. All rights reserved. We study a nonlocal biharmonic MEMS equation with a fringing field, which arises in Micro-Electro-Mechanical System (MEMS) devices. First, we establish the local solution and extend it globally in time. Next, we discuss the dynamical properties of the ω-limit set; in particular, we show that the ω-limit set is contained in the set of stationary solutions. Finally, we consider the rate of convergence to the stationary solution. (C) 2017 Elsevier Inc. All rights reserved. According to the Riesz decomposition theorem, superharmonic functions on the punctured unit ball are represented as the sum of generalized potentials and harmonic functions. In this paper we study growth properties near the origin of spherical means of generalized Riesz potentials of functions belonging to central variable Morrey spaces. We also deal with monotone Sobolev functions. (C) 2017 Elsevier Inc. All rights reserved.
This paper introduces a new, transformation-free, generalized polynomial chaos expansion (PCE) comprising multivariate Hermite orthogonal polynomials in dependent Gaussian random variables. The second-moment properties of the Hermite polynomials reveal a weakly orthogonal system when obtained for a general Gaussian probability measure. Still, the exponential integrability of the norm allows the Hermite polynomials to constitute a complete set, and hence a basis, in a Hilbert space. Completeness is vitally important for the convergence of the generalized PCE to the correct limit. The optimality of the generalized PCE and the approximation quality due to truncation are discussed. New analytical formulae are proposed to calculate the mean and variance of a generalized PCE approximation of a general output variable in terms of the expansion coefficients and the statistical properties of the Hermite polynomials. However, unlike in the classical PCE, calculating the coefficients of the generalized PCE requires solving a coupled system of linear equations. Moreover, the variance formula of the generalized PCE contains additional terms due to the statistical dependence among the Gaussian variables. These additional terms vanish when the Gaussian variables are statistically independent, reverting the generalized PCE to the classical PCE. Numerical examples illustrate the generalized PCE approximation in estimating the statistical properties of various output variables. (C) 2017 Elsevier Inc. All rights reserved. In this paper we address local bifurcation properties of a family of networked dynamical systems, specifically those defined by a potential-driven flow on a (directed) graph. These network flows include linear consensus dynamics and Kuramoto models of coupled nonlinear oscillators as particular cases.
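The linear consensus case just mentioned can be made concrete: with graph Laplacian L, the dynamics x' = −Lx have every constant state as an equilibrium, since L annihilates the all-ones vector (a minimal illustration with an assumed 3-node graph with positive weights):

```python
import numpy as np

# Weighted adjacency of a hypothetical 3-node undirected graph (positive weights).
W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 0.5],
              [2.0, 0.5, 0.0]])
L = np.diag(W.sum(axis=1)) - W    # graph Laplacian: row sums are zero

ones = np.ones(3)
print(np.allclose(L @ ones, 0))   # True: constant states are equilibria of x' = -Lx

# For a connected graph with positive weights, the rank deficiency is exactly
# one, so the equilibria of the consensus dynamics form a line.
print(np.linalg.matrix_rank(L))   # 2
```

Replacing a positive weight with a negative one can increase this rank deficiency or destroy it altogether, which is the structural mechanism behind the bifurcation phenomena studied in the paper.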
As is well known for consensus systems, these networks exhibit a somewhat unconventional dynamical feature, namely, the existence of a line of equilibria, following from a well-known property of the graph Laplacian matrix in connected networks with positive weights. Negative weights, which arise in different contexts (e.g. in consensus models on signed graphs or in Kuramoto models with antagonistic actors), may on the one hand lead to higher-dimensional manifolds of equilibria and, on the other, be responsible for bifurcation phenomena. In this direction, we prove a saddle-node bifurcation theorem for a broad family of potential-driven flows in networks with one or more negative weights. The goal is to state the conditions in structural terms, that is, in terms of the expressions defining the flow rates and the graph-theoretic properties of the network. Not only the eigenvalue requirements but also the nonlinear transversality assumptions supporting the bifurcation motivate an analysis of independent interest concerning the rank degeneracies of nodal matrices arising in the linearized dynamics; this analysis is performed in terms of the contraction-deletion structure of spanning trees and uses several results from matrix analysis. Different examples illustrate the results; some linear problems (including signed graphs) are aimed at illustrating the analysis of nodal matrices, whereas in a nonlinear framework we apply the characterization of saddle-node bifurcations to networks with a sinusoidal (Kuramoto-like) flow. (C) 2017 Elsevier Inc. All rights reserved. We study quasi-Monte Carlo integration for twice differentiable functions defined over a triangle. We provide an explicit construction of infinite sequences of points, including one by Basu and Owen (2015) as a special case, which achieves an integration error of order N^(-1)(log N)^3 for any N >= 2. 
Since a lower bound of order N^(-1) on the integration error holds for any linear quadrature rule, the upper bound we obtain is best possible apart from the log N factor. The major ingredient in our proof of the upper bound is the dyadic Walsh analysis of twice differentiable functions over a triangle under a suitable recursive partitioning. (C) 2017 Elsevier Inc. All rights reserved. The fermionic signature operator is constructed in Rindler space-time. It is shown to be an unbounded self-adjoint operator on the Hilbert space of solutions of the massive Dirac equation. In two-dimensional Rindler space-time, we prove that the resulting fermionic projector state coincides with the Fulling-Rindler vacuum. Moreover, the fermionic signature operator gives a covariant construction of general thermal states, in particular of the Unruh state. The fermionic signature operator is shown to be well-defined in asymptotically Rindler space-times. In four-dimensional Rindler space-time, our construction gives rise to new quantum states. (C) 2017 Elsevier Inc. All rights reserved. In this paper we study a Neumann problem with non-homogeneous boundary conditions for the p(x)-Laplacian. In particular, we assume that p(.) is a step function defined on a domain Omega, equal to 1 in a subdomain Omega(1) and to 2 in its complement Omega(2). By considering a suitable sequence p(k) of variable exponents such that p(k) -> p and replacing p with p(k) in the original problem, we prove the existence of a solution u(k) for each of those intermediate problems. We also show that, under a hypothesis concerning the boundary data g, the limit of the sequence (u(k)) is a function u which belongs to the space of functions of bounded variation and is a solution to the original p(.)-problem. (C) 2017 Elsevier Inc. All rights reserved. A simple uniform bound for the ratio of two modified spherical Bessel functions is derived. The arguments of the two functions are complex but their ratio is real. 
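As a toy, real-argument illustration of such a ratio (the choice of orders 0 and 1 is an assumption made here for concreteness; the paper's complex-argument bound is not reproduced), the ratio i1(x)/i0(x) of the first two modified spherical Bessel functions of the first kind equals coth(x) - 1/x, the Langevin function, which is uniformly bounded between 0 and 1 for real x > 0:

```python
import math

def i0(x):
    # Modified spherical Bessel function of the first kind, order 0:
    # i0(x) = sinh(x)/x
    return math.sinh(x) / x

def i1(x):
    # Order 1: i1(x) = cosh(x)/x - sinh(x)/x^2
    return math.cosh(x) / x - math.sinh(x) / x ** 2

def ratio(x):
    # i1(x)/i0(x) simplifies to coth(x) - 1/x (the Langevin function),
    # which increases monotonically from 0 toward 1 on (0, infinity)
    return i1(x) / i0(x)

# The ratio stays inside the uniform bounds (0, 1) for real positive x
for x in (0.5, 1.0, 5.0, 20.0):
    assert 0.0 < ratio(x) < 1.0
```

The closed form coth(x) - 1/x makes the uniform bound transparent in the real case; for complex arguments whose ratio is real, as treated in the abstract, the analysis is more delicate.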
(C) 2017 Elsevier Inc. All rights reserved. The one-dimensional isothermal Euler equations are a well-known model for the flow of gas through a pipe. An essential part of the model is the source term that models the influence of gravity and friction on the flow. In general, the solutions of hyperbolic balance laws can blow up in finite time. We show the existence of initial data with arbitrarily large C-1-norm of the logarithmic derivative for which no blow-up in finite time occurs. The proof is based upon the explicit construction of product solutions. It is often desirable to have such analytical solutions for a system described by partial differential equations, for example to validate numerical algorithms, to improve the understanding of the system, and to study the effect of simplifications of the model. We present solutions of different types: in the first type of solution, both the flow rate and the density are increasing functions of time. We also present a second type of solution where, on a certain time interval, both the flow rate and the pressure decrease. In pipeline networks, the bi-directional use of the pipelines is sometimes desirable. In this paper we present a classical solution of the isothermal Euler equations where the direction of the gas flow changes. In this solution, immediately before the direction of the flow is reversed, the gas flow rate is zero everywhere in the pipe. (C) 2017 Elsevier Inc. All rights reserved. This paper describes and discusses the phenomenon of 'predatory publishing', in relation to both academic journals and books, and suggests a list of characteristics by which to identify predatory journals. It also raises the question of whether traditional publishing houses have accompanied rogue publishers down this path. It is noted that bioethics as a discipline does not stand unaffected by this trend. Towards the end of the paper, it is discussed what can and should be done to eliminate or reduce the effects of this development. 
The paper concludes that predatory publishing is a growing phenomenon that has the potential to greatly affect both bioethics and science at large. Publishing papers and books for profit, without any genuine concern for content, but with the pretence of applying authentic academic procedures of critical scrutiny, brings about a worrying erosion of trust in scientific publishing. Recent professional guidelines published by the General Medical Council instruct physicians in the UK to be honest and open in any financial agreements they have with their patients and third parties. These guidelines are in addition to a European policy addressing disclosure of physician financial interests in the industry. Similarly, in the US, a national open payments program as well as Federal regulations under the Affordable Care Act re-address the issue of disclosure of physician financial interests in America. These new professional and legal changes make us rethink the fiduciary duties of providers working under new organizational and financial schemes, specifically their clinical fidelity and their moral and professional obligations to act in the best interests of patients. The article describes the legal changes providing the background for such proposals and offers a prima facie ethical analysis of these evolving issues. It is argued that although disclosure of conflicting interests may increase trust, it may not necessarily be beneficial to patients or accord with their expectations and needs. Due to the extra burden associated with disclosure, as well as its implications for the medical profession and the therapeutic relationship, it should be held that transparency of physician financial interests should not result in mandatory disclosure of such interests by physicians. It could lead, as some initiatives in Europe and the US already demonstrate, to voluntary or mandatory disclosure schemes carried out by the industry itself. 
Such schemes should be in addition to medical education and to addressing the more general phenomenon of physician conflicts of interest in ethical codes and in the ethical training of the parties involved. This study examines the decisions of the French Conseil d'Etat (Supreme Administrative Court) and the European Court of Human Rights in the Lambert case concerning the withdrawal of life-sustaining treatments. After presenting the facts of this case, the main legal question will be analyzed from an ethical and medical standpoint. The decisions of the Conseil d'Etat and then of the European Court of Human Rights are studied from a comparative legal perspective. This commentary focuses on the autonomous will of an unconscious patient and on the judicial interpretation of the right to life as recognized in article 2 of the European Convention on Human Rights. Furthermore, it medically classifies artificial nutrition and hydration (ANH) as a "treatment", which has ethical and legal implications. While the majority of the bioethical community considers ANH a medical treatment, a minority argues that ANH is basic care. This classification is ambiguous and has conflicting legal interpretations. In the conclusion, the author highlights how French lawmakers, in February 2016, finally clarified the status of ANH as a medical treatment, which reconciled the different values at stake. Developments in non-invasive prenatal testing (NIPT) may soon provide couples with the opportunity to test for and diagnose a much broader range of heritable and congenital conditions than has previously been possible. Inevitably, this has prompted much ethical debate on the possible implications of NIPT for providing couples with opportunities for reproductive choice by way of routine prenatal screening. 
In view of the possibility of testing for a significantly broader range of genetic conditions with NIPT, the European Society of Human Genetics (ESHG) and American Society of Human Genetics (ASHG) recommend that, pending further debate, prenatal screening for reproductive choice should be offered only for serious congenital conditions and childhood disorders. In support of this recommendation, the ESHG and ASHG discuss a number of ethical issues on which they prompt further debate: the informational privacy of the future child, the trivialization of abortion, the risk of information overload, and issues of distributive justice. This paper responds to this call with further reflection on each ethical issue and how it relates to the moral justification of providing couples with opportunities for meaningful reproductive choice. The paper concludes that whilst there may be good reasons for limiting the scope of any unsolicited prenatal screening offer to serious congenital conditions and childhood disorders, if prenatal screening is justified for providing couples with opportunities for meaningful reproductive choice, then health services may have obligations to empower couples with the same opportunity for other conditions as well. Clowns seem suspect when it comes to respect. The combination of clowning and people with dementia may seem especially suspicious. In this argument, I take potential concerns about clowning in dementia care as an opportunity to explore the meaning of a respectful approach to people with dementia. Our word 'respect' is derived from the Latin respicere, meaning 'looking back' or 'seeing again', as well as 'looking after' or 'having regard' for someone or something. I build upon this double meaning of respicere by examining how we simultaneously look to and after people with dementia. I do so empirically by studying how miMakkus clowns in their practice learn to look with new eyes at the people and things around them. 
I call this the clown's view and differentiate it from the predominant way of observing people in dementia care. I argue that respicere comes in two guises, each of which merges specific forms of looking to and looking after the other. By making conventional, solidified ways of seeing the other fluid again, clowns remind us of the value that comes with a veiled way of paying respect to people with dementia. In this article we explore the ethical issues raised by permitting patients to pay for participation (P4) in clinical trials, and discuss whether there are any categorical objections to this practice. We address key considerations concerning payment for participation in trials, including patient autonomy, risk/benefit, and justice, taking account of two previous critiques of the ethics of P4. We conclude that such trials could be ethical under certain strict conditions, but only if other potential sources of funding have first been explored or are unavailable. Recent studies have revealed a drop in the ability of physicians to empathize with their patients. It is argued that empathy training needs to be provided to both medical students and physicians in order to improve patient care. While it may be true that empathy would lead to better patient care, it is important that the right theory of empathy is being encouraged. This paper examines and critiques the prominent explanation of empathy being used in medicine. Focusing on the component of empathy that allows us to understand others, it is argued that this understanding is accomplished through a simulation. However, simulation theory is not the best explanation of empathy for medicine, since it involves a limited perspective in which to understand the patient. In response to the limitations and objections to simulation theory, interaction theory is presented as a promising alternative. 
This theory explains the physician's understanding of patients from diverse backgrounds as an ability to learn and apply narratives. By explaining how we understand others without limiting our ability to understand various others, interaction theory is more likely than simulation theory to provide better patient care, and is therefore a better theory of empathy for the medical field. GPs usually care for their patients for an extended period of time; therefore, requests not only to discontinue a patient's treatment but to assist a patient in a suicide are likely to create intensely stressful situations for physicians. However, in order to ensure the best patient care possible, competent communication about the option of physician-assisted suicide (PAS), as well as assessment of the origin and sincerity of the request, is very important. This is especially true since patients' requests for PAS can also be an indicator of unmet needs or concerns. Twenty-three qualitative semi-structured interviews were conducted to explore this multifaceted, complex topic in depth while enabling GPs to express possible difficulties when being asked for assistance. The analysis of the gathered data shows three main themes explaining why GPs may find it difficult to communicate professionally about PAS: concerns for their own psychological well-being, conflicting personal values, or their understanding of their professional role. In the discussion part of this paper we re-assess these different themes in order to ethically discuss and analyse how potential barriers to professional communication concerning PAS could be overcome. Victims of disaster suffer not only at the very moment of the disaster: years after the disaster has taken place, they are still on an emotional journey. While many moral perspectives focus on the moment of the disaster itself, much work remains to be done years after the disaster. How do people go through their suffering and how can we take care of them? 
Research on human suffering after a major catastrophe, using an ethics of care perspective, is scarce. People suffering from disasters are often said to be in distress, and their emotional difficulties are 'medicalised'. This often brings them into a situation of long-term use of medication, and one can wonder whether medication is of help to them in the long run. In our paper, we will explore another moral perspective, focusing on the importance of the victims' narratives and their lived experiences. We will use Paul Ricoeur's phenomenological reflections from 'Suffering is not the same as pain' to conceptualize human suffering and apply it to victims of disaster. Ricoeur suggests that suffering is not a quantity that can be measured, but a characteristic that should be studied qualitatively in interpersonal and narrative contexts. Above all, the perspective of care and listening could offer an opportunity to reconcile people with their loss and suffering. Falls can occur in any occupational setting. Occupational health professionals may focus on creating a safe work environment and training programs to prevent falls. However, an important aspect of safety management is identifying at-risk employees. The purpose of this article is to identify personal risk factors and offer interventions to prevent falls in the workplace. Because of their social isolation, irregular and unpredictable schedules, limited access to health care, and long periods of travel, long-haul truckers may benefit from the use of mobile health applications on Internet-capable devices. The purpose of this study was to determine Internet access and usage among a sample of long-haul truck drivers. In this cross-sectional study, truck drivers completed a pencil-and-paper survey with questions on demographics, work and health histories, and Internet access and usage for both personal and job reasons. 
A total of 106 truck drivers were recruited from trucking industry trade shows, by word of mouth, and directly from trucking companies. Overall, the truck drivers' use of the Internet was limited. Their usage for personal and job-related reasons differed. Social connectivity and access to health and wellness information were important during personal usage time. Job-related Internet use was highly practical, and applied to seeking information for directions and maps, fuel stops and pricing, and communicating with employers or transmitting documents. Age and experience were associated with Internet use. Younger, less-experienced drivers used the Internet more than older, experienced drivers. Targeted mobile health messaging may be a useful tool to inform truck drivers of health conditions and plans, and may provide links to primary care providers needing to monitor or notify drivers of diagnostic results or treatment plans. The purpose of this investigation was to analyze online discussions about parental leave in relation to the work lives and private lives of new fathers. A netnographic study of nearly 100 discussion threads from a freely accessible online forum for fathers was conducted. Data were coded, sorted, and categorized by qualitative similarities and differences. The results of the study indicate that new fathers seek out Internet forums to discuss work-related topics. Parental leave can provoke worries and anxiety related to management and co-worker attitudes, which can create a feeling among fathers that they should be back at work. The results are presented in two categories: (a) attitudes expressed by employers and colleagues and (b) leaving work but longing to be back. The phenomenon of parental leave for fathers is more complex than simply being for or against it. Fathers can use Internet forums to discuss their experiences, fears, and anxiety, and such forums may help them negotiate reasonable accommodations for both work and family life. 
The purpose of this study was to identify factors predicting occupational health nurses' provision of smoking cessation services. Data were collected via a self-administered questionnaire distributed to 254 occupational health nurses in Thailand. Analysis by structural equation modeling revealed that self-efficacy directly and positively influenced smoking cessation services, and mediated the relationship between workplace factors, nurse factors, and smoking cessation services. The final model had good fit to the data, accounting for 20.4% and 38.0% of the variance in self-efficacy and smoking cessation services, respectively. The findings show that self-efficacy is a mediator that influences provision of smoking cessation services by occupational health nurses. Interventions to enhance nurses' self-efficacy in providing smoking cessation services are expected to promote provision of smoking cessation services to workers. Emergency departments are high-stress environments for patients and clinicians. As part of the clinical team, nurses experience this stress daily and are subject to high levels of burnout, which has been shown to lead to hypertension, depression, and anxiety. Presence of these diseases may also contribute to burnout, creating a cycle of stress and illness. This prospective qualitative study used a phenomenological approach to better understand factors associated with burnout among emergency department nurses. Burnout manifests itself in multiple modes, can affect nurses' decisions to leave the profession, and must be addressed to mitigate the phenomenon. Commercial workplace violence (WPV) prevention training programs differ in their approach to violence prevention and the content they present. 
This study reviews 12 such programs using criteria developed from training topics in the Occupational Safety and Health Administration's (OSHA) Guidelines for Preventing Workplace Violence for Healthcare and Social Service Workers and a review of the WPV literature. None of the training programs addressed all the review criteria. The most significant gap in content was the lack of attention to facility-specific risk assessment and policies. To fill this gap, health care facilities should supplement purchased training programs with specific training in organizational policies and procedures, emergency action plans, communication, facility risk assessment, and employee post-incident debriefing and monitoring. Critical to success is a dedicated program manager who understands risk assessment, facility clinical operations, and program management and evaluation. Global clothing production has given rise to the fast fashion strategies adopted by the majority of fashion retailers. However, there is a production network located in London, one of the most expensive areas in the world. The Savile Row tailors, using craft techniques that are slow by nature, have never outsourced production to remain competitive. In line with the resource-based view, the relational view, and global production network theories, the authors devise a conceptual framework as they seek to explore how competitiveness can be achieved within a slow production network. A single case study of London's Savile Row tailoring operations is adopted. This self-reliant network has managed to acquire capabilities and specialized knowledge and transform them into core competences, thus generating competitiveness. The perennial values of this slow craft and its recent international revival secure the tailors' longevity. Based on consumer behavior theories, the aim of this study is to provide a model, supported by empirical evidence, in order to improve knowledge of the antecedents of loyalty to online clothing retailers. 
The model has been verified through partial least squares analysis of data obtained from a survey of a sample of 412 online clothing shoppers. The results show that, firstly, the affective and cognitive experiences have a positive effect on the degree of satisfaction, and the affective experiences also have a positive impact on trust. Secondly, it has been demonstrated that consumer satisfaction with online clothing retailers can be increased by both the hedonic and utilitarian values of shopping. Thirdly, an indirect relationship has been established between satisfaction and loyalty through trust and the perceived value of service. These findings can improve our understanding of the determinants of online consumer loyalty. Discussion and implications are provided. During the 1800s, many textile manufacturers moved to the U.S. South for economic and geographic reasons, fueling economic growth in the South. Although extensive documentation exists about textile factories in the Northeast, limited documentation was found about the thousands of textile factories built in the South. This study examined over 150 textile mills and plants (i.e., factories) in two U.S. states. The qualitative study drew on on-site photographic evidence, historical documents, and other primary and secondary sources. The examination identified five groups of factories from 1815 to 2015. The factory architecture was observed to reflect technological and geographic influences. For example, the factory style from 1950 to 1969 was that of low rambling structures, reflecting the environmental and economic conditions of the South and the period's manufacturing technology. This study provides an outline for documenting these historic structures and a foundation for examining their characteristics and associated technologies. 
In this study, researchers provide a comprehensive model of how consumers process, remember, and evaluate positive fair-labor-related brand messages that are congruent or incongruent with their existing brand expectations, using both psychophysiological and self-reported measures. Data were collected across two different studies. Results indicated that consumers paid more attention to and better remembered incongruent messages than congruent ones. Also, attitude toward the message was highest for congruent messages, followed by incongruity resolution, and lowest for incongruity nonresolution. The combined study findings bridge the gap in the literature between human attitude and cognition, helping both brands and consumer researchers understand consumers' reactions to brand schema-message congruity/incongruity and guiding decisions when brands hope to revitalize or reinvent a brand's image. The present study was carried out to optimize the discharge printing process for fashionable denim garments. Response surface methodology, involving a central composite design with three key factors, namely potassium permanganate (KMnO4) concentration, pH of the printing paste, and reaction time, was successfully employed. The objective of this work was to develop a cost-effective, value-added process for denim fabric, where losses in tensile and tear strength were to be minimized, while the whiteness effect of the discharge was to be maximized. The optimum conditions for discharge printing with potassium permanganate were found to be pH 6, KMnO4 concentration of 42 g/kg, and treatment time of 15 min. The experimental values were found to be in good agreement with the optimized combination of the three variables. The western region of Saudi Arabia has its own unique traditional bridal garments. Little is known about these bridal costumes because they are handmade by a few families in the region. 
The purpose of this study was to investigate the history, significance, and meaning of the Hijazi bridal costumes. Symbolic interactionism was the theoretical starting point of this study. Qualitative data were collected via in-depth interviews with 22 married Saudi women. A purposive, snowball sampling strategy was used. The data were analyzed using the Miles and Huberman process. Four key themes emerged, including (a) physical appearance and the process of wearing the costumes, (b) meanings and beliefs related to the costumes' components, (c) appropriate occasions during which the costumes could be worn, and (d) motivation negotiated within families. The Hijazi bridal costumes have deep historical roots in Saudi culture, which continues to play a significant role in today's marriage rituals. Background: In simulation education, learning does not occur without certain debriefing activities. The purpose of this study was to identify debriefing practices in simulation-based nursing education in Korea. Method: Ninety-six nursing faculty members responsible for simulation education participated in this study from January to April, 2015. Data were collected using a revised version of Fey's Final Survey Questions: Debriefing Practices (2014) and analyzed by descriptive statistics. Results: Simulation education is a required course in the majority of Korean nursing colleges, and 52.7% of them have replaced the clinical practicum with simulation. Those who completed training for debriefing were more likely to support students' emotional reactions to simulation and provide feedback. The Gather-Analyze-Summarize model and the Debriefing Assessment for Simulation in Healthcare model were most frequently used for debriefing. Conclusions: There is a need to develop more systematic and effective training programs that encompass theories for implementing and evaluating debriefing practices in simulation-based nursing education in Korea. 
(C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. Background: Medication administration is an important part of the nurse's role. Students and new nursing graduates often lack the knowledge and competency to safely administer medications. Simulation can facilitate student learning about medication safety. Purpose: This simulation intervention study tested the differences in knowledge, competency, and perceptions of medication safety between students who did and did not participate in safety-enhanced medication administration simulations. Method: This was a two-group pretest-posttest design. Participants completed the Medication Safety Knowledge Assessment (MSKA) and the Healthcare Professionals Patient Safety Assessment (HPPSA) pretests at the start of the semester. The control group participated in the usual simulations/debriefings; the intervention group participated in one additional medication administration simulation, as well as medication-safety-enhanced simulations. During the final simulation of the semester, participants' competency in medication administration safety was rated using the Medication Safety Critical Element Checklist (MSCEC). All participants completed the MSKA and HPPSA posttests. Results: Data for the MSKA were analyzed using a knowledge pass/fail cut score of 21 or more correct answers to pass. The HPPSA scores were analyzed using t tests, and MSCEC scores were compared between groups. There were statistically significant differences in student knowledge (MSKA) and competency (MSCEC) for students who participated in the medication-safety-enhanced simulations. Conclusions/Implications: Medication safety is essential to ensuring patient safety; it is important to ensure that nursing graduates are well-prepared to provide safe care. Outcomes of this study support the evidence that simulation is an effective strategy to improve student learning. 
(C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. Background: Simulation-based interprofessional education programs can have variable objectives for different participating professional teams. Methods: In this study, through a qualitative research design, we report the medical and midwifery students' approach to their learning and attitude towards each other's team, assessed through thematic analysis of independently run focus groups three months after the attendance of the Women's Health Interprofessional Learning Through Simulation program and their respective clinical placements. Results: Medical students reported the importance of "learning by doing'' through simulation as the key theme. The feedback obtained from midwifery students was focused on "relationship of power'' compared with the other discipline. Conclusions: Interprofessional learning had a positive influence on the attitude of medical and midwifery students, in spite of the disparity in their background knowledge and experience. IPE competencies are better appreciated at a relatively mature level of clinical practice. Core skills in women's health taught through simulation were found to be helpful by both midwifery and medical students. However, the key learning was about developing respect and a supportive relationship "of equals'' with each other. (C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. Background: Increasing diversity of populations worldwide emphasizes the need for culturally appropriate communication that addresses the needs of health care consumer and provider. While cultural competency training using simulation is reported, the prevalence is low and only a few studies include simulated patients. Often these studies report small-scale interventions involving one or two simulated patients. 
The Cultural Respect Encompassing Simulation Training (CREST) program aimed to develop cultural communication education using simulation with simulated patients. The program, funded by the Commonwealth Government of Australia, recruited and trained 30 culturally and linguistically diverse simulated patients. Aim: This research, part of a larger study, aimed to evaluate the learning, training, and teaching experience of the culturally and linguistically diverse (CALD) simulated patients in the CREST project. Methods: Thirty simulated patients differentiated by age, gender, ethnicity, religion, sexual orientation, and arrival mechanism to Australia were recruited and trained. Through a co-construction process, simulation scenarios were developed with the SPs. Simulations were undertaken with entry-to-practice students across the disciplines of nursing, medicine, paramedicine, physiotherapy, and social work, as well as with practitioners from a range of health disciplines. Results: Evaluation data included a 14-item survey completed after simulation experiences, assessing aspects of the pre-simulation, simulation, and post-simulation experience using a 5-point Likert scale. A focus group discussion centered on the SP experience and was thematically analyzed. Conclusion: SPs felt well prepared for the simulation experience and were grateful for the opportunity to participate. The rehearsal held prior to each SP's portrayal was identified as particularly important and useful. SPs were surprised and pleased to find the simulation participants interested and engaged with them, focusing on issues of their culture and ethnicity. This was contrary to many of the real experiences the SPs had encountered, and these earlier experiences influenced both their desire to participate as SPs and their expectations. Portraying safe but authentic content was positive and empowering for CALD SPs. (C) 2017 Published by Elsevier Inc. 
on behalf of International Nursing Association for Clinical Simulation and Learning. Background: Increasingly, simulation is replacing some clinical hours as nursing schools struggle to find quality clinical placements for students. Methods: An experimental study was conducted to compare a virtual gaming simulation with a laboratory simulation on three outcomes: students' pediatric knowledge, self-efficacy, and satisfaction. Results: Both groups made modest knowledge gains. Both groups made significant gains in self-efficacy scores, with the gaming group making greater gains. Satisfaction survey scores were high for both groups. Conclusions: Virtual gaming simulation combined with hands-on simulation could become part of the suite of best teaching and learning practices we offer students. (C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. Supporting the initiation and uptake of simulation-based learning in university or hospital settings requires strategising for human as well as equipment resources. If activities require the use of highly technical simulation and audiovisual equipment, faculty may be reluctant to engage with learning strategies that rely on managing "complex" equipment. Sourcing technical support can be an expensive component of simulation business plans. An alternative source of technical support can be realised through undergraduate engineering students. Insights are shared about the initiation and current status of a symbiotic partnership between two university faculties to meet their respective needs for workplace experience and simulation technical support. (C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. Background: The International Nursing Association for Clinical Simulation and Learning (INACSL) has released Standards of Best Practice (TM) for simulation. 
The purpose of this paper is to discuss the use of the INACSL Simulation Design Standard (SDS) to create a novice simulation experience for sophomore nursing students. Method: This pilot study was conducted with sophomore nursing students over several semesters (n = 143) in a foundational nursing course to prepare students for fundamental respiratory concepts. Expert certified healthcare simulation educators facilitated and debriefed each simulation-based experience (SBE). The Creighton Clinical Evaluation Instrument (C-CEI (c)) was used to provide feedback to students. Students also completed a Simulation Perspective Survey after each experience. Results: Eighty percent of the students believed that the simulations were a valuable learning experience and helped to stimulate critical thinking. The SDS was an effective method, providing clear guidelines for designing SBE for novice nursing students. Conclusions: The SDS provides a clear format for designing SBE for novice nursing students. This article provides a concrete example of the implementation of the SDS framework in SBE, promoting the integration of best practice in simulation. (C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. Background: The National Survey on Drug Use and Health reported that the rate of illicit drug use among pregnant women ranged from 3.2% to 14.6%. Maternal drug use during pregnancy may result in neonatal abstinence syndrome (NAS). This study explored whether simulation training is more effective than traditional didactic and video instruction in teaching nursing students assessment skills for scoring NAS. Methods: Twenty-six nursing students participated in this randomized intervention study that incorporated simulation scenarios. The simulation/experimental group received a lecture/video and hands-on training using a high-fidelity simulation manikin. 
The didactic/control group received lecture/video training only. Both groups were asked to score the infant's withdrawal symptoms using the NAS scoring system. The student scores were compared with the nurse expert rater's NAS scores. Results: An independent-sample t-test was conducted to compare the tendency score, dangerous score, and overall discrepancy score between the experimental group and the control group. There were no significant differences in NAS scores between the experimental group and the control group. However, the overall discrepancy scores were lower in the experimental group, indicating that the experimental group scored closer to the nurse expert. Conclusion: The findings will be used to develop an evidence-based clinical experience using high-fidelity simulation training to enhance patient safety. (C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. Background: Veterans receive care in multiple health systems, requiring a nursing workforce that recognizes their unique needs. Methods: Through our University's relationship with the Veterans Administration Nursing Academic Partnership, an in situ simulation was designed and implemented to enhance nursing students' learning of veteran-centered care. Results: METI Simulation Effectiveness Tool responses (n = 23) were overwhelmingly positive. A paired t-test measured student knowledge gains. Post-simulation test scores were 16.6% (+/- 15.2%) higher than pre-test scores, a statistically significant difference (t = 4.09, df = 13, p = .001). Conclusion: Leveraging clinical partnerships to perform simulation experiences in situ is an effective strategy for teaching nursing students veteran-centered care. (C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. 
Background: Simulation learning outcomes and learner satisfaction may be influenced by the facilitation methods employed. This mixed-methods study explored differences between instructor-led simulation with in-scenario feedback and postscenario debriefing and student-led simulation with postscenario debriefing only. Methods: Novice nursing students experienced both facilitation methods and completed (a) the Health Assessment Educational Modality Evaluation Simulation Subscale, (b) the Facilitation Style Preference Survey, and (c) a multiple-choice quiz, and provided qualitative feedback on what they liked/disliked about each facilitation style. Results: Novice learners preferred instructor-led to student-led simulation (p < .001); there was no association between simulation facilitation method and knowledge scores. Four main themes emerged: (a) guidance and clarification, (b) avoiding error reinforcement, (c) realism, and (d) collaborative problem solving. Conclusion: Instructor-led simulation is the preferred facilitation method for novice nursing students; however, a progression from instructor-led to student-led simulation may enhance learning by providing increased autonomy as knowledge and confidence grow. Crown Copyright (C) 2017 Published by Elsevier Inc. on behalf of International Nursing Association for Clinical Simulation and Learning. All rights reserved. Background: The need to identify critical areas for faculty and program development in simulation-based education (SBE) is evident as nursing programs move toward clinical redesign. Method: The design was a descriptive mixed-methods study using a structured interview process with a survey based on the NCSBN Simulation Guidelines for Prelicensure Nursing Programs. Simulation coordinators were interviewed at 25 prelicensure nursing programs in one mid-Atlantic state. 
Results: The findings revealed key areas for faculty and program development: theory, standards, methods, curriculum integration, debriefing, and evaluation. Conclusions: The NCSBN guidelines were effective in assessing simulation programs, identifying important elements in the development of a statewide curriculum for SBE. (C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. Background: Simulation is a successful method to enhance learning, especially of how to handle infrequent high-risk events, such as patient codes. However, no studies were found that examined methods for educating nurses in code documentation. The purpose of this study was to evaluate the impact of simulation on nurses' knowledge, confidence, and accuracy of code documentation. Methods: A one-group pre- and posttest quasi-experimental design was used. Nurses completed a knowledge test and confidence survey before and after participating in two code simulations. Accuracy in documenting the simulated code events was also evaluated. The impact of the simulations on code documentation knowledge pre- and postsession and on the accuracy of code documentation for the two simulations per session was determined by paired-sample t-tests, and the impact on confidence pre- and postsession was determined by the Wilcoxon signed-rank test. Results: Forty-eight pediatric acute care nurses from three units participated in nine simulation sessions. There was a statistically significant increase in knowledge test scores from presimulation (M = 3.45, SD = 1.46) to postsimulation (M = 5.68, SD = 0.73) (t[47] = 9.47, p < .001 [one-tailed]). Documentation accuracy improved from the first simulation (M = 4.59, SD = 1.63) to the second simulation (M = 6.36, SD = 1.66) (t[43] = 8.33, p < .001). 
A statistically significant increase in confidence was found following participation in the simulation session (z = -6.206, p < .0001), with a large effect size (r = -0.64). The median confidence level also increased from presimulation (Mdn = a little confident) to postsimulation (Mdn = somewhat confident). Conclusion: Simulation can be used to increase nurses' knowledge, confidence, and accuracy with code documentation. Background: Neonatal critical care knowledge is beyond that required for nursing licensure. Knowledge gaps among neonatal intensive care unit (NICU) nurses in pulmonary, cardiovascular, and neurology areas were addressed using simulation in an effort to strengthen knowledge, improve clinical judgment, and affect patient outcomes. Methods: The Tanner Model of Clinical Judgment (2006) was used as the conceptual framework. The quasi-experimental pre-post test design included n = 130 participants. Participants received three sessions of simulated scenarios and structured debriefing. Results: Overall differences in knowledge scores were noted: p = .0167 (Year 1) and p = .0021 (Year 2). Trended patterns demonstrated improvement over time in clinical judgment (Year 2) for both self- and evaluator ratings using the Lasater Clinical Judgment Rubric (2007). Clinical outcome trends of decreased ventilator days, increased utilization of alternative oxygen delivery methods, and a stable intraventricular hemorrhage rate were realized. Conclusions: Simulation-based learning can be effective in advancing knowledge and clinical judgment for NICU nurses, as evidenced in pre-post assessment scores/ratings. Clinical outcomes have been favorably impacted in areas influenced by nursing knowledge and clinical judgment. (C) 2017 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved. Breast cancer is the most common type of cancer among women worldwide. 
Among medical treatments, endocrine therapy based on aromatase inhibitors (AI) is expected to be effective against not only post-menopausal but also pre-menopausal breast cancer. In this study, we examined the structure-activity relationship between the aromatase inhibitory effects of 7-diethylaminocoumarin derivatives with a substituent at position 3 and coumarin derivatives with a substituent at position 7. Consequently, we found that 7-(pyridin-3-yl)coumarin (IC50 value of 30.3 nM) and 7,7'-diethylamino-3,3'-biscoumarin (IC50 value of 28.7 nM) are the most potent inhibitors of aromatase. These inhibitors were found to be comparable to the existing CYP19 inhibitor exemestane (IC50 value of 42.5 nM). (C) 2017 Elsevier Ltd. All rights reserved. Factor VIIa (FVIIa) inhibitors have shown strong antithrombotic efficacy in preclinical thrombosis models with limited bleeding liabilities. Discovery of potent, orally active FVIIa inhibitors has been largely unsuccessful because of the requirement for a basic P1 group to interact with Asp189 in the S1 binding pocket, which limits membrane permeability. We have combined recently reported neutral P1 binding substituents with a highly optimized macrocyclic chemotype to produce FVIIa inhibitors with low nanomolar potency and enhanced permeability. (C) 2017 Elsevier Ltd. All rights reserved. The formation of 1,4-disubstituted 1,2,3-triazoles through copper-catalyzed azide-alkyne cycloaddition (CuAAC) in oligonucleotides bearing 1-deoxy-1-ethynyl-beta-D-ribofuranose (R-E) can have a positive impact on the stability of oligonucleotide duplexes and stem-loop structures. (C) 2017 Elsevier Ltd. All rights reserved. The ATM- and Rad3-related (ATR) kinase plays a key role in DNA repair processes, and thus ATR is an attractive target for cancer therapy. Here we designed and synthesized sulfilimidoyl- and sulfoximidoyl-substituted analogs of the sulfone VE-821, a reported ATR inhibitor. 
The properties of these analogs have been investigated by calculating physicochemical parameters and studying their potential to specifically inhibit ATR in cells. Prolonged inhibition of ATR by the analogs in a Burkitt lymphoma cell line resulted in enhanced DNA damage and a substantial amount of apoptosis. Together, our findings suggest that the sulfilimidoyl- and sulfoximidoyl-substituted analogs are efficient ATR inhibitors. (C) 2017 Elsevier Ltd. All rights reserved. Described herein are the design, synthesis, and biological evaluation of a series of N-(1H-pyrazol-3-yl)quinazolin-4-amines against a panel of eight disease-relevant protein kinases. The kinase inhibition results indicated that two compounds inhibited casein kinase 1 delta/epsilon (CK1 delta/epsilon) with some selectivity over related kinases, namely CDK5/p25, GSK-3 alpha/beta, and DYRK1A. Docking studies with 3c and 3d revealed the key interactions with the desired amino acids in the ATP binding site of CK1 delta. Furthermore, compound 3c also elicited selective cytotoxic activity against the pancreas ductal adenocarcinoma (PANC-1) cell line. Taken together, the results of this study establish N-(1H-pyrazol-3-yl)quinazolin-4-amines, especially 3c and 3d, as valuable lead molecules with great potential for CK1 delta/epsilon inhibitor development targeting neurodegenerative disorders and cancer. (C) 2017 Elsevier Ltd. All rights reserved. The estrogen receptor (ER) plays an important role in breast cancer development and progression and is a central target for anticancer drug discovery. In order to develop novel selective ER alpha modulators (SERMs), we designed and synthesized 18 novel 3-aryl-4-anilino-2H-chromen-2-one derivatives based on previously reported lead compounds. The biological results indicated that most of the compounds exhibited potent ER alpha binding affinity and possessed better anti-proliferative activities against MCF-7 and Ishikawa cell lines than the positive control tamoxifen. 
The piperidyl-substituted compounds 16d and 18d demonstrated excellent anti-proliferative activity and strong ER alpha binding affinity, respectively. Compound 18d displayed the most potent ER alpha binding affinity, with an RBA value of 2.83%, while 16d exhibited the best anti-proliferative activity against MCF-7 cells, with an IC50 value of 4.52 +/- 2.47 mu M. Further molecular docking studies were also carried out to investigate the binding patterns of the newly synthesized compounds with ER alpha. All these results, together with the structure-activity relationships (SARs), indicated that these 3-aryl-4-anilino-2H-chromen-2-one derivatives with a basic side chain could serve as promising leads for further optimization as novel SERMs. (C) 2017 Elsevier Ltd. All rights reserved. A novel series of coumarin derivatives 6a-o bearing isoxazole moieties was designed and synthesized. The compounds were then evaluated for effects on melanin synthesis in murine B16 cells and for inhibitory effects on the growth of Candida albicans (CA), Escherichia coli (EC), and Staphylococcus aureus (SA). It was found that eleven compounds (6b-f, 6j-o) showed better activity on melanin synthesis than the positive control (8-MOP). Among them, compounds 6d (242%) and 6f (390%), with nearly 1.6- and 2.6-fold potency compared with 8-MOP (149%), respectively, were recognized as the most promising candidate hits for further pharmacological study of anti-vitiligo agents. Seven halogen-substituted compounds exhibited moderate antimicrobial activity against CA. Interestingly, 6e-f and 6l-m, which have two halogens on the benzene ring, showed activity against CA comparable to that of Amphotericin B. The evaluation of melanin synthesis in B16 cells and of the inhibitory effects on microbes of the above structurally diverse derivatives also led to an outline of the structure-activity relationships. (C) 2017 Elsevier Ltd. All rights reserved. 
Guided by co-crystal structural information obtained from a different series we were exploring, a scaffold morphing and SBDD approach led to the discovery of the 1,4-disubstituted indazole series as a novel class of GKAs that potently activate GK in enzyme and cell assays. Anti-diabetic OGTT efficacy was demonstrated with 29 in a rodent model of type 2 diabetes. (C) 2017 Elsevier Ltd. All rights reserved. Studies on human genetics have suggested that inhibitors of the Na(v)1.7 voltage-gated sodium channel hold considerable promise as therapies for the treatment of chronic pain syndromes. Herein, we report novel, peripherally restricted benzoxazolinone aryl sulfonamides as potent Na(v)1.7 inhibitors with excellent selectivity against the Na(v)1.5 isoform, which is expressed in the heart muscle. Elaboration of initial lead compound 3d afforded exemplar 13, which featured attractive physicochemical properties, outstanding lipophilic ligand efficiency, and pharmacological selectivity against Na(v)1.5 exceeding 1000-fold. Key structure-activity relationships associated with oral bioavailability were leveraged to discover compound 17, which exhibited a comparable potency/selectivity profile as well as full efficacy following oral administration in a preclinical model indicative of antinociceptive behavior. (C) 2017 Elsevier Ltd. All rights reserved. A new class of betulin-derived alpha-keto amides was identified as HIV-1 maturation inhibitors. Through lead optimization, GSK8999 was identified, with IC50 values of 17 nM, 23 nM, 25 nM, and 8 nM for wild type, Q369H, V370A, and T371A, respectively. When tested in a panel of 62 HIV-1 isolates covering a diversity of CA-SP1 genotypes, including A, AE, B, C, and G, using a PBMC-based assay, GSK8999 was potent against 57 of 62 isolates, demonstrating an improvement over the first-generation maturation inhibitor BVM. 
The data disclosed here also demonstrated that the new alpha-keto amide GSK8999 has a mechanism of action consistent with inhibition of the proteolytic cleavage of CA-SP1. (C) 2017 Elsevier Ltd. All rights reserved. A series of substituted indoles was examined as selective inhibitors of tropomyosin-related kinase receptor A (TrkA), a therapeutic target for the treatment of pain. An SAR optimization campaign based on ALIS screening lead compound 1 is reported. (C) 2017 Elsevier Ltd. All rights reserved. Potent inhibitors of Trypanosoma brucei methionyl-tRNA synthetase were previously designed using a structure-guided approach. Compounds 1 and 2 were the most active compounds in the cyclic and linear linker series, respectively. To further improve cellular potency, an SAR investigation of a binding fragment targeting the "enlarged methionine pocket" (EMP) was performed. The optimization led to the identification of a 6,8-dichloro-tetrahydroquinoline ring as a favorable fragment to bind the EMP. Replacement of the 3,5-dichloro-benzyl group (the EMP binding fragment) of inhibitor 2 with this tetrahydroquinoline fragment resulted in compound 13, which exhibited an EC50 of 4 nM. (C) 2017 Elsevier Ltd. All rights reserved. The quinazoline scaffold is the main part of many marketed EGFR inhibitors. The development of resistance against those inhibitors has driven the search for novel structural lead compounds. We developed novel benzo-annulated 4-benzylamine pyrrolopyrimidines with varied substitution patterns at both the molecular scaffold and the attached residue in the 4-position. The structure-dependent affinities toward EGFR are discussed, and the first nanomolar derivatives have been identified. Docking studies were carried out for EGFR in order to explore the potential binding mode of the novel inhibitors. 
As the receptor tyrosine kinase VEGFR2 has recently gained increasing interest as an upregulated signaling kinase in many solid tumors and in tumor metastasis, we determined the ability of our compounds to inhibit VEGFR2. We thus identified novel dual-acting EGFR and VEGFR2 inhibitors, for which first anticancer screening data are reported. These data indicate a stronger antiproliferative effect of VEGFR2 inhibition compared with EGFR inhibition. (C) 2017 Elsevier Ltd. All rights reserved. Thiosemicarbazides and their analogs have shown potential medical applications as antiviral, antibacterial, and anticancer drugs. We designed and synthesized seven novel compounds, S-glycosylated thiosemicarbazones, and evaluated their in vitro anticancer activity against ovarian (A2780), cervical (HeLa), colon (LoVo), breast (MCF-7), and brain (MO59J) human cancer cell lines. We assessed the cyto- and genotoxic properties of all novel compounds using a variety of methods, including the comet assay, the XTT assay, various fluorescent assays, and a toxicology PathwayFinder expression array. We evaluated their possible mechanism of action, with particular attention to induction of DNA damage and repair, apoptosis, oxidative stress, and the cellular response in terms of changes in gene expression. The most sensitive cell line was the human ovarian cancer line. The results revealed that the major activity displayed by our compounds against the A2780 cancer cell line is induction of DNA damage. This effect is not associated with apoptosis or oxidative stress induction, and the resulting damage does not lead to cell cycle arrest. We also observed upregulation of heat shock-related genes and the NQO1 gene in response to our compounds. The second effect seems to be specific to glycosylated S-bonded compounds, as we have observed earlier. Upregulation of heat shock protein-encoding genes suggests that our compounds induce stressful conditions. 
The nature of this phenomenon (heat shock, pH shift, or hypoxia) needs further study. (C) 2017 Elsevier Ltd. All rights reserved. Interleukin-1 receptor associated kinase 4 (IRAK4) has been implicated in IL-1R- and TLR-based signaling. Therefore, selective inhibition of the kinase activity of this protein represents an attractive target for the treatment of inflammatory diseases. Medicinal chemistry optimization of high-throughput screening (HTS) hits with the help of structure-based drug design led to the identification of orally bioavailable quinazoline-based IRAK4 inhibitors with excellent pharmacokinetic profiles and kinase selectivity. These highly selective IRAK4 compounds show activity in vivo via oral dosing in a TLR7-driven model of inflammation. (C) 2017 Elsevier Ltd. All rights reserved. The reference standard methyl (2-amino-5-(benzylthio)thiazolo[4,5-d]pyrimidin-7-yl)-D-leucinate (5) and its precursor 2-amino-5-(benzylthio)thiazolo[4,5-d]pyrimidin-7-yl)-D-leucine (6) were synthesized from 6-amino-2-mercaptopyrimidin-4-ol and BnBr in overall chemical yields of 7% over five steps and 4% over six steps, respectively. The target tracer [C-11]methyl (2-amino-5-(benzylthio)thiazolo[4,5-d]pyrimidin-7-yl)-D-leucinate ([C-11]5) was prepared from the acid precursor with [C-11]CH3OTf through O-[C-11]methylation and isolated by HPLC combined with SPE in 40-50% radiochemical yield, based on [C-11]CO2 and decay-corrected to the end of bombardment (EOB). The radiochemical purity was >99%, and the specific activity (SA) at EOB was 370-1110 GBq/mu mol, with a total synthesis time of approximately 40 min from EOB. The radioligand depletion experiment with [C-11]5 did not display specific binding to CX(3)CR1, and the competitive binding assay of ligand 5 found much lower CX(3)CR1 binding affinity. (C) 2017 Elsevier Ltd. All rights reserved. 
In the course of our continuing studies on the 2-(benzo[b]thiophene-3'-yl)-6,8,8-triethyldesmosdumotin B (TEDB-TB) series, we designed and synthesized nine amino-TEDB-TB derivatives to improve pharmaceutical properties, identify structure-activity relationships, and discover novel antitubulin agents. Among all newly synthesized amino-TEDB-TBs, the 5'- and 6'-amino derivatives, 6 and 7, exhibited significant antiproliferative activity against five human tumor cell lines, including an MDR subline overexpressing P-gp. Their IC50 values of 0.50-1.01 mu M were 3-6 times better than those of the previously reported hydroxy-TEDB-TBs. Compounds 6 and 7 inhibited tubulin polymerization, induced both depolymerization of interphase microtubules and multiple spindle formations, and caused cell arrest at prometaphase. Among all compounds, compound 7 scored best both pharmaceutically, with a LogP of 2.11, and biologically, with greater antiproliferative activity and induction of cell cycle arrest at prometaphase. (C) 2017 Published by Elsevier Ltd. An unprecedented spinaceamine-bearing pregnane, scleronine (1), was isolated from the Chinese soft coral Scleronephthya sp. Its structure was determined on the basis of 1D and 2D NMR spectroscopic analyses in association with the HRESIMS data, while the absolute configurations were deduced by single-crystal X-ray diffraction analysis. In addition, a dehydrogenated analogue (3) was synthesized through six steps with pregna-1,20-dien-3-one (2) as a precursor. Significant inhibitory effects of 1 and 3 against the migration of A549 and B16 tumor cells, accompanied by down-regulation of key genes (TGF beta, TNF alpha, IL-1 beta, and IL-6), were observed. These findings suggest that both 1 and 3 are potential candidates for therapeutic use aimed at inhibiting cancer metastasis. (C) 2017 Elsevier Ltd. All rights reserved. 
We recently reported oxazatricyclodecane derivatives 1 as delta opioid receptor (DOR) agonists with a novel chemotype, but their DOR agonistic activities were relatively low. Based on the working hypothesis that the dioxamethylene moiety in 1 may be an accessory site and may interfere with the conformational change of the receptor required for exerting full agonistic responses, we designed and synthesized new oxazatricyclodecane derivatives 2-4 lacking the dioxamethylene moiety. As expected, the designed compounds 2-4 showed markedly improved agonistic activities toward the DOR. Compound 2a, bearing the 17-cyclopropylmethyl substituent, was a potent agonist with the highest selectivity for the DOR and is expected to serve as a lead compound for novel, selective DOR agonists. (C) 2017 Elsevier Ltd. All rights reserved. The synthesis and evaluation of phenylisoserine derivatives with a new scaffold bearing the essential functional groups against SARS-CoV 3CL protease are described. The phenylisoserine backbone was identified by simulation with GOLD software, and a structure-activity relationship study of the phenylisoserine derivatives gave SK80, with an IC50 value of 43 mu M against the SARS-CoV 3CL R188I mutant protease. (C) 2017 Elsevier Ltd. All rights reserved. Ipomoeassin F is a plant-derived macrocyclic glycolipid with single-digit nanomolar IC50 values against cancer cell growth. In previous structure-activity relationship studies, we demonstrated that certain modifications around the fucoside moiety did not cause significant loss of cytotoxicity. To further elucidate the effect of the fucoside moiety on the biological activity, we describe here the design and synthesis of several fucose-truncated monosaccharide analogues of ipomoeassin F. Subsequent biological evaluation strongly suggests that the 6-membered ring of the fucoside moiety is essential to the overall conformation of the molecule, thereby influencing bioactivity. (C) 2017 Elsevier Ltd. 
All rights reserved. A structure-activity relationship study of the K-Ras(G12D)-selective inhibitory cyclic peptide KRpep-2d was performed. Alanine scanning of KRpep-2d focusing on the cyclic moiety showed that Leu(7), Ile(9), and Asp(12) are the key elements for the K-Ras(G12D)-selective inhibition by KRpep-2d. Cysteine bridging was also examined to identify an analog of KRpep-2d stable under reductive conditions. As a result, the KRpep-2d analog 12, containing a mono-methylene bridge, showed potent K-Ras(G12D)-selective inhibition in both the presence and absence of dithiothreitol. This means that mono-methylene bridging is an effective strategy to obtain reduction-resistant analogs of parent disulfide cyclic peptides. Peptide 12 significantly inhibited the proliferation of K-Ras(G12D)-driven cancer cells. These results provide valuable information for further optimization of KRpep-2d to deliver novel anti-cancer drug candidates targeting the K-Ras(G12D) mutant. (C) 2017 Elsevier Ltd. All rights reserved. Natural products are an abundant source of structurally diverse compounds with antibacterial activity that can be used to develop new and potent antibiotics. One such class of natural products is the pseudopyronines. Here we present the isolation of pseudopyronine B (2) from a Pseudomonas species found in garden soil in Western North Carolina, and an SAR evaluation of C3 and C6 alkyl analogs of the natural product for antibacterial activity against Gram-positive and Gram-negative bacteria. We found a direct relationship between antibacterial activity and C3/C6 alkyl chain length. For inhibition of Gram-positive bacteria, analogs with alkyl chain lengths of 6-7 carbons were the most active (IC50 = 0.04-3.8 mu g/mL), whereas short-alkyl-chain analogs showed modest activity against Gram-negative bacteria (IC50 = 223-304 mu g/mL). 
This demonstrates the potential for this class of natural products to be optimized for selective activity against either Gram-positive or Gram-negative bacteria. (C) 2017 Elsevier Ltd. All rights reserved. Resveratrol is a common polyphenol of plant origin known for its cancer prevention and other properties. Its wider application is limited by poor water solubility, low stability, and weak bioavailability. To overcome these limitations, a series of 13 novel resveratrol triesters was synthesized previously. In this paper, we describe the synthesis of 3 additional derivatives and the activity of all 16 against primary acute lymphoblastic leukemia cells. Of these, 3 compounds were more potent than resveratrol (IC50 = 10.5 mu M), namely resveratryl triacetate (IC50 = 3.4 mu M), resveratryl triisobutyrate (IC50 = 5.1 mu M), and resveratryl triisovalerate (IC50 = 4.9 mu M); all other derivatives had IC50 values of >10 mu M. Further studies indicated that the active compounds caused G1 phase arrest, increased expression of p53, and induced characteristics of apoptotic cell death. Moreover, the compounds were only effective in cycling cells, with cells arrested in G1 phase being refractory. (C) 2017 Elsevier Ltd. All rights reserved. NTRK1/2/3 fusions have recently been characterized as low-incidence oncogenic alterations across various tumor histologies. Tyrosine kinase inhibitors (TKIs) of the tropomyosin receptor kinase family TrkA/B/C (encoded by NTRK1/2/3) are showing promise in the clinic for the treatment of cancer patients whose diseases harbor NTRK tumor drivers. We describe herein the development of [F-18] QMICF ([F-18]-(R)-9), a quinazoline-based type-II pan-Trk radiotracer with nanomolar potencies for TrkA/B/C (IC50 = 85-650 nM) and relevant TrkA fusions including TrkA-TPM3 (IC50 = 162 nM). 
Starting from a racemic FLT3 (fms-like tyrosine kinase 3) inhibitor lead with off-target TrkA activity ((+/-)-6), we developed and synthesized the fluorinated derivative (R)-9 in three steps and 40% overall chemical yield. Compound (R)-9 displays a favorable selectivity profile on a diverse set of kinases including FLT3 (>37-fold selectivity for TrkB/C). The mesylate precursor 16 required for the radiosynthesis of [F-18] QMICF was obtained in six steps and 36% overall yield. The results presented herein support the further exploration of [F-18] QMICF for imaging of Trk fusions in vivo. (C) 2017 Elsevier Ltd. All rights reserved. Anti-inflammatory effects of peroxisome proliferator-activated receptor gamma (PPAR gamma) ligands are thought to be largely due to PPAR gamma-mediated transrepression. Thus, transrepression-selective PPAR gamma ligands without agonistic activity, or with only partial agonistic activity, should exhibit anti-inflammatory properties with reduced side effects. Here, we investigated the structure-activity relationships (SARs) of the PPAR gamma agonist rosiglitazone, focusing on transrepression activity. Alkenic analogs showed slightly more potent transrepression with reduced transactivation (agonistic) efficacy. Removal of the alkyl group on the nitrogen atom improved selectivity for transrepression over transactivation. Among the synthesized compounds, 3l exhibited stronger transrepression activity (IC50: 14 mu M) and weaker agonistic efficacy (11%) than rosiglitazone or pioglitazone. (C) 2017 Elsevier Ltd. All rights reserved. Niemann-Pick disease type C is a fatal, progressive neurodegenerative disease mostly caused by mutations in Niemann-Pick type C1 (NPC1), a late endosomal membrane protein that is essential for intracellular cholesterol transport. The most prevalent mutation, I1061T (Ile to Thr), interferes with the protein folding process. 
Consequently, mutated but intrinsically functional NPC1 proteins are prematurely degraded via the proteasome, leading to loss of NPC1 function. Previously, we reported sterol derivatives as pharmacological chaperones for NPC1 and showed that these derivatives can normalize the folding-defective phenotypes of the I1061T NPC1 mutant by directly binding to, and stabilizing, the protein. Here, we report a series of compounds containing a phenanthridin-6-one scaffold as the first class of non-steroidal pharmacological chaperones for NPC1. We also examined their structure-activity relationships. (C) 2017 Elsevier Ltd. All rights reserved. On the basis of our prior structure-activity relationship (SAR) results, our current lead optimization of 1,5-diarylanilines (DAANs) focused on the 4-substituent (R-1) on the central phenyl ring as a modifiable position related simultaneously to improved drug resistance profiles and drug-like properties. Newly synthesized p-cyanovinyl-DAANs (8a-8g) with different R-1 side chains, together with the previously reported active p-cyanoethyl-DAANs (4a-4c), were evaluated not only for anti-HIV potency against both wild-type HIV and rilpivirine-resistant (E138K, E138K+M184I) viral replication, but also for multiple drug-like properties, including aqueous solubility, lipophilicity, and metabolic stability in human liver microsomes and human plasma. This study revealed that both ester and amide R-1 substituents led to low nanomolar anti-HIV potency against wild-type and rilpivirine-resistant viral strains (E138K-resistance fold changes < 3). The N-substituted amide-R-1 side chains were superior to the ester-R-1 side chains, likely owing to improved aqueous solubility and lipophilicity and higher in vitro metabolic stability. Thus, three amide-DAANs, 8e, 4a, and 4b, were identified with high potency against wild-type and rilpivirine-resistant viral strains and multiple desirable drug-like properties. (C) 2017 Elsevier Ltd. All rights reserved. 
The copper(II) cation, sucrose, and hydroxychloroquine were complexed with the chemotherapy agent paclitaxel and studied for medicinal activity. Data (GI(50), LD50) from single-dose and five-dose National Cancer Institute sixty-cell-line panels are presented. Analytical measurements of the different complexes were made using Nuclear Magnetic Resonance (H-1 NMR), Matrix-Assisted Laser Desorption Ionization-Time of Flight-Mass Spectrometry (MALDI-TOF-MS), and Fourier Transform-Ion Cyclotron Resonance (FT-ICR). Molecular modeling is utilized to better understand the impact that these species could have on physical parameters associated with Lipinski's Rule of Five, such as logP and TPSA. On average, Cu(II) and hydroxychloroquine decreased the GI(50) values of paclitaxel, while sucrose increased them. (C) 2017 Elsevier Ltd. All rights reserved. Fleximers, a novel type of flexible nucleoside that has garnered attention due to unprecedented activity against human coronaviruses, have now exhibited highly promising levels of activity against filoviruses. The Flex-nucleoside was the most potent against recombinant Ebola virus in Huh7 cells, with an EC50 of 2 mu M, while the McGuigan prodrug was most active against Sudan virus-infected HeLa cells, with an EC50 of 7 mu M. (C) 2017 Elsevier Ltd. All rights reserved. A series of deuterated apalutamide analogues was designed and prepared. Compared with the prototype compound 18, deuterated analogues 19 and 21 showed markedly higher plasma concentrations and better PK parameters after oral administration in mice. In rats, the N-trideuteromethyl compound 19 displayed a 1.8-fold higher peak concentration (C-max) and nearly doubled drug exposure in plasma (AUC(0-infinity)) compared with compound 18. Unsurprisingly, compounds 18 and 19 had similar affinity for AR in vitro. In summary, the deuteration strategy can markedly improve the PK parameters of apalutamide. (C) 2017 Elsevier Ltd. All rights reserved. 
New alkaloids, houttuynamides B and C (1, 2) and houttuycorine (14), were isolated from the aerial parts of Houttuynia cordata Thunb., in addition to eighteen known alkaloids. Their structures were elucidated through extensive spectroscopic analysis. All the isolates were tested for their inhibitory activity against NO production in RAW 264.7 cells stimulated by LPS. Of the tested compounds, compound 15 showed the most potent anti-inflammatory activity, with an IC50 value of 8.7 mu M. (C) 2017 Elsevier Ltd. All rights reserved. We successfully established an atherosclerosis (AS) model using thoracic aorta vascular rings, evaluated by the morphological changes of blood vessels, the proliferation of VSMCs, and the expression of the inflammation factors VEGF, CRP, JNK2, and p38. This AS model has the advantages of low cost, convenience, and a short establishment time. Moreover, we investigated the anti-AS activities of seven flavonoids from the flowers of Helichrysum arenarium L. MOENCH, Narirutin (1), Naringin (2), Eriodictyol (3), Luteolin (4), Galuteolin (5), Astragalin (6), and Kaempferol (7), by examining vascular morphology and the inhibition of the expression of the inflammation factors CRP, VEGF, JNK2, and p38. In addition, we investigated the anti-AS activities of these seven flavonoids by examining NO secretion of RAW264.7 cells in response to LPS. All of the above inflammation factors have been shown to be involved in the formation of AS. After a comprehensive analysis of all results to elucidate the structure-activity relationship, we drew the following conclusions: compounds 1-7 inhibited the expression of VEGF, CRP, JNK2, p38, and NO to different degrees; the flavonol aglycones showed stronger anti-inflammatory activity than their glycosides; and the anti-AS activity of the flavonols was stronger than that of the flavanones and flavones, suggesting that the substituent at the 3-position may be the effective group. 
We propose that the main anti-inflammatory mechanism of these compounds is to reduce CRP expression and inhibit the kinase activities of JNK2 and p38, thereby suppressing the MAPK pathway and decreasing NO synthesis, VEGF expression, and endothelial adhesion factor expression; ultimately, the formation of scar tissue and vascular stenosis is prevented. This conclusion suggests that flavonoids have the potential to prevent AS formation. (C) 2017 Elsevier Ltd. All rights reserved. A new series of deacetylsarmentamide A and B derivatives, amides and sulfonamides of 3,4-dihydroxypyrrolidines, was designed and synthesized as alpha-glucosidase inhibitors. Biological screening against alpha-glucosidase showed that some of these compounds have positive inhibitory activity. Saturated aliphatic amides were more potent than the olefinic amides. Among all the compounds, 5o/6o, bearing a polar -NH2 group, and 10f/11f, bearing a polar -OH group on the phenyl ring, were 3-4-fold more potent than the standard drugs. Acarbose, Voglibose, and Miglitol were used as standard references. The promising compounds 6i, 5o, 6o, 10a, 11a, 10f, and 11f were identified. Molecular docking simulations were performed to identify the binding modes responsible for the inhibition of alpha-glucosidase. (C) 2017 Elsevier Ltd. All rights reserved. Measurement is a fundamental but often overlooked component of research design and scientific inquiry. In quantitative study designs, the data that are collected, the statistical analyses conducted, and the conclusions drawn from those analyses all hinge on the validity, reliability, and appropriateness of the measurements taken during the investigation. Missteps in the measurement process can undermine the validity of a study's findings, and when they do, efforts to advance the science of nursing education fall short. 
This month's Methodology Corner article highlights several important measurement-related considerations for nursing education researchers seeking to build the evidence base (the body of knowledge about nursing education on which instructional, curricular, policy, and planning decisions are made). Background: The number of RN-to-baccalaureate nursing (BSN) programs is increasing; however, nurses continue to withdraw voluntarily at higher rates than expected. Method: A Heideggerian hermeneutic approach was used to interpret the meaning of the experience of RNs who voluntarily withdraw from their baccalaureate nursing programs. The research aims were to generate a comprehensive understanding of (a) the experiences of RN-to-BSN noncompleters, (b) the meaning noncompleters ascribe to the experience of dropping out, and (c) the interplay between factors that influence dropout decisions. Results: Two overarching patterns of understanding emerged: Withdrawing as Revisiting Failure, and Withdrawing as Impasse: On One Side of the Divide. The factors that influence whether an RN finishes a baccalaureate nursing program are many, but the effects on dignity and well-being are immeasurable. Conclusion: Voluntary withdrawal from an RN-to-BSN program leaves nurses professionally placebound, affecting not only the individual nurse but also the profession. Background: The Dual Degree Partnership in Nursing (DDPN) is a unique articulation model created in 2005 between two nursing programs that provides a seamless pathway for students to earn both an associate's degree and a bachelor's degree in nursing while benefiting from the strengths of each program. Method: Archival data have been systematically collected for a decade on admission, progression, retention, satisfaction, graduation, and NCLEX-RN pass rates to measure the reliability, validity, and integrity of this DDPN model for nursing education. 
Results: The findings demonstrate consistent performance and positive outcomes on all factors measured, which have been benchmarked against available state and national results. Conclusion: This innovative approach to academic progression in nursing is replicable and serves as a prototype to educate more nurses at the baccalaureate level, which directly contributes to the Institute of Medicine's goal of 80% of RNs having a minimum of a bachelor's degree by 2020. Background: The use of serious gaming in a virtual world is a novel pedagogical approach in nursing education. A virtual gaming simulation was implemented in a health assessment class that focused on mental health and interpersonal violence. The study's purpose was to explore students' experiences of the virtual gaming simulation. Method: Three focus groups were conducted with a convenience sample of 20 first-year nursing students after they completed the virtual gaming simulation. Results: Analysis yielded five themes: (a) Experiential Learning, (b) The Learning Process, (c) Personal Versus Professional, (d) Self-Efficacy, and (e) Knowledge. Conclusion: Virtual gaming simulation can provide experiential learning opportunities that promote engagement and allow learners to acquire and apply new knowledge while practicing skills in a safe and realistic environment. Background: Nursing faculty members strive to use optimal clinical learning environments that educate students for clinical competence and sense of salience. The purpose of this study was to offer insight into the perceptions of students, preceptors, and faculty in three clinical models: traditional, precepted, and a hybrid blend. Method: One hundred fifty students, seven preceptors, and 12 faculty members responded to open-ended survey questions about their experience in one of the models. Conventional content analysis revealed themes for each group and theme intersections across groups. 
Results: The students' themes included Making Connections (traditional), The Land of Opportunity (precepted), and The Total Package (hybrid). Preceptor themes included Giving of Self and Reflection on Practice. The Value of the Nurse theme emerged from faculty responses across all models. Students desired additional skill performance, and preceptors suggested improved communication and role clarity. Conclusion: Clinical models that maximize faculty and preceptor expertise should be formalized and studied. Background: It has been nearly a decade since findings revealed that a sample of U.S. nurses routinely used only 30 physical assessment techniques in clinical practice. In a time of differentiating nice-to-know from need-to-know knowledge and skills, what has changed in nursing education? Method: This cross-sectional, descriptive study examines the physical assessment skills taught and used among nursing students at one baccalaureate nursing education program located in the midwestern United States. Results: Findings highlight the similarities and differences from previous studies and offer insight into how closely nursing education mirrors the skills needed for clinical practice. Conclusion: Nurse educators must continue to discriminate the content taught in prelicensure nursing education programs and should consider the attainment of competency in those essential skills that most contribute to optimal patient outcomes. Background: Schools of nursing have moved to multiple-choice test questions to help prepare students for licensure and practice. However, students can buy test banks to help them "get through" nursing school. Accurate assessment of nursing students' knowledge and judgment is compromised by access to test banks. 
Method: The purpose of this exploratory study was to gain an understanding of nursing faculty's knowledge of test bank security issues, to assess whether publishers were aware of this issue, and to identify vendors' reasons for supplying test banks to students. Results: Overall, the results indicated that the majority of faculty were unaware of student access to test banks and, although most do not use test banks verbatim, a general consensus existed that test bank security is a concern. Conclusion: Implications include increasing faculty awareness of students' access to test banks, supporting educators in developing their own test bank items, and promoting the security of all examinations. Background: Although the number of men entering the nursing profession over the past century has increased incrementally, the proportion of men remains low relative to the U.S. population. On matriculation into nursing school, men face stereotypes about the nursing profession and the characteristics of the men who enter it. Men may also face a number of gender-based barriers, including a lack of history about men in nursing, lack of role models, role strain, gender discrimination, and isolation. Method: This article describes each of these barriers and provides strategies to improve male students' learning experience. Results: The efforts of one nursing school to address many of these barriers are also described. Conclusion: By acknowledging gender barriers and taking intentional steps to address them with prenursing and nursing students, schools of nursing may create a more inclusive environment and enhance the profession's diversity. Background: Effective communication with patients and families is essential for quality care in the pediatric environment. Despite this, the current structure and content of undergraduate nursing education often contribute to novice RNs feeling unprepared to manage complex pediatric communication situations. 
Method: By merging the characteristics of the Harlequin persona with the structure of story-based learning, undergraduate students can be introduced to increasingly advanced pediatric communication scenarios in the classroom. Whereas story-based learning encourages students to identify and address the contextual and emotional elements of a story, the Harlequin encourages educators to challenge assumptions and upset the status quo. Results: Nursing students can develop advanced communication abilities and learn to identify and cope with the emotions and complexities inherent in pediatric practice and communication. Conclusion: Harlequin-inspired story-based learning can enable nurse educators to create interesting, realistic, and challenging pediatric nursing stories designed to push students outside their comfort zones and enhance their advanced pediatric communication abilities. Background: Little is known about the teaching and learning implications of instructional storytelling (IST) in nursing education or its potential connection to nursing theory. Method: The literature establishes storytelling as a powerful teaching-learning method in the educational, business, humanities, and health sectors, but little exploration exists that is specific to nursing. Results: An example of a story demonstrating application of the domains of Tanner's clinical judgment model links storytelling with learning outcomes appropriate for the novice nursing student. Conclusion: Application of Tanner's clinical judgment model offers consistency of learning experience while preserving the creativity inherent in IST. Further research into student learning outcome achievement using IST is warranted as a step toward establishing best practices with IST in nursing education. Background: The SMILE: Student Managed Initiative in Lifestyle Education program is an arts and health workshop that runs for 2 hours per day for 8 weeks. 
Health care students and community members are invited to participate. SMILE was developed to provide undergraduate nursing and health care students with an opportunity to practice and improve their communication, group facilitation, and leadership skills. SMILE also provides community participants access to an arts and health education workshop. Method: The SMILE project was evaluated using a qualitative approach to identify effects on student and community participant learning. Results: The SMILE evaluation highlighted a key theme: Helping to Learn, Learning to Help. Students identified SMILE as an opportunity to learn how to help, and community members recognized and valued their role in helping students learn. Conclusion: This article provides an overview of the SMILE program and reports on the evaluation findings. Mentoring exists in formal and informal processes in many areas of nursing practice. This article explores the mentoring relationship between the two editors of this column. Nursing is a practice profession, and nurse educators have the responsibility to ensure learner attainment of competency at all levels of nursing education. This two-part article provides basic information about designing structured-option test questions to assess and evaluate competency attainment. Part one discusses the uses, advantages, and limits of using structured-option test questions and explains how to develop a structured-option test question. Part two provides information about developing a test blueprint, administering the test, interpreting test results, and revising the test questions. Background: It is unknown whether completing educational modules on understanding, reviewing, and synthesizing research literature is associated with higher value of, attitudes toward, and implementation of evidence-based practices. 
Method: Nurses completed valid, reliable questionnaires on the value of, attitudes toward, and implementation of evidence-based practice 6 months after four educational modules were introduced. Multivariable modeling was used to examine associations between education module completion and evidence-based practice themes. Results: Of 1,033 participants, 54% completed at least one education module; 22% completed all modules. Value and attitudes regarding evidence-based practice were moderately high, but implementation was low (mean = 15.15 +/- 15.72; range = 0 to 72). After controlling for nurse characteristics and experiences associated with evidence-based practice value, attitude, and implementation scores, education module completion was associated with implementation of evidence-based practice (p = .001), but not with value or attitude scores. Conclusion: Education on reviewing and synthesizing literature strengthened the implementation of evidence-based practices. Background: Studies reveal that most nurse preceptor preparation programs do not meet nurse preceptors' training needs. This study therefore developed a nurse preceptor-centered training program (NPCTP) in Taiwan. Method: The ADDIE model was used for the instructional design. On the basis of a needs assessment of nurse preceptors' training, the research team developed the NPCTP. Content was adapted from the authentic experiences of preceptors and new graduate nurses (NGNs), using interview data to create 81 videos with computer avatars and 10 live-actor films. Each course was taught as nine instructional events. The NPCTP was evaluated using reflection quizzes, preceptors' self-evaluations, NGNs' evaluations, and focus group interviews. Results: The NPCTP enhanced preceptors' clinical teaching behaviors and had a positive influence on NGNs. The NGNs' evaluations were even better than the preceptors' self-evaluations. Conclusion: This article provides the what and how of an NPCTP in Taiwan. 
Tracking the effectiveness of continuing education (CE) in nursing over time, beyond simply confirming its efficacy, is crucial. However, research evidence on analyzing change in the effectiveness of CE over time is limited, particularly in the context of case management. This methodological study aimed to introduce both growth curve modeling and an intra-individual variability index, and to demonstrate step-by-step procedures and interpretations for those analyses in assessing case manager competency over time, using secondary data analysis. Data were collected from 22 case managers affiliated with the Korean National Health Insurance Corporation who attended three series of CE to improve their competency between May 2008 and August 2009. Unexpected results revealed a negative fixed effect of education level in the overall estimation of case managers' competency trajectory and a negative correlation between education level and case managers' intra-individual competency inconsistency over time. Background: Delirium is an acute brain dysfunction associated with poor outcomes in intensive care unit (ICU) patients. Critical care nurses play an important role in the prevention, detection, and management of delirium, but they must be able to assess for it accurately. The Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) is a reliable and valid instrument for assessing delirium, but research reveals that most nurses need practice to use it proficiently. Method: A pretest-posttest design was used to evaluate the success of a multimodal educational strategy (i.e., an online learning module coupled with a standardized patient simulation experience) on critical care nurses' knowledge of and confidence in assessing and managing delirium using the CAM-ICU. Results: Participants (N = 34) showed a significant increase (p < .001) in confidence in their ability to assess and manage delirium following the multimodal education. 
No statistically significant change in knowledge of delirium was found following the education. Conclusion: A multimodal educational strategy that included simulation significantly increased critical care nurses' confidence in using the CAM-ICU. With a documented shortage in youth mental health services, pediatric primary care (PPC) providers face increased pressure to enhance their capacity to identify and manage common mental health problems among youth, such as anxiety and depression. Because 90% of U.S. youth regularly see a PPC provider, the primary care setting is well positioned to serve as a key access point for early identification, service provision, and connection to mental health services. In the context of task shifting, we evaluated a quality improvement project designed to assist PPC providers in overcoming barriers to practice-wide mental health screening through implementing paper and computer-assisted clinical care algorithms. PPC providers were fairly successful at changing practice to better address mental health concerns when equipped with screening tools that included family mental health histories, next-level actions, and referral options. Task shifting is a promising strategy to enhance mental health services, particularly when guided by computer-assisted algorithms. Introduction: This exploratory study examined maternal attitudes, normative beliefs, subjective norms, and meal selection behaviors of mothers of 2- and 3-year-old children. Methods: Guided by the Theory of Reasoned Action, we had mothers complete three surveys, two interviews, and a feeding simulation exercise. Data were analyzed using descriptive and bivariate statistics and multivariate linear regression. Results: A total of 31 mothers (50% Latino, 34% Black, 46.9% < high school education, 31.3% poor health literacy) of 32 children (37.5% overweight/obese) participated in this study. Maternal normative beliefs (knowledge of U.S. 
Department of Agriculture recommendations) did not reflect actual U.S. Department of Agriculture recommendations. Collectively, the regression models explained 13% (dairy) to 51% (vegetables) of the variance in behavioral intent, with normative belief an independent predictor in all models except grain and dairy. Discussion: Meal selection behaviors, on average, were predicted by poor knowledge of U.S. Department of Agriculture recommendations. Dietary guidance appropriate to health literacy level should be incorporated into well-child visits. Acute rheumatic fever continues to persist globally. Once thought to be eradicated in various parts of the world, the disease came back with a vengeance secondary to a lack of diligence on the part of providers. Today, the global burden of group A streptococcal infection, the culprit behind the numerous sequelae manifested in acute rheumatic fever, is considerable. Although a completely preventable disease, rheumatic fever continues to exist. It is a devastating disease that involves long-term, multisystem treatment and monitoring for patients who were unsuccessful at eradicating the precipitating group A streptococcal infection. Prevention is the key to resolving the dilemma of the disease's global burden, yet the method to achieve prevention remains unknown. Thus, meticulous attention to implementing proper treatment is the mainstay and remains a top priority. Purpose: Duchenne muscular dystrophy (DMD) is a rare neuromuscular disease with no known cure. We sought to update more than 30 years of research reporting on diagnostic delays in DMD. Methods: Through personal interviews, this study qualitatively explored parents' experiences regarding receipt of the DMD diagnosis and the guidance for care provided. Thematic analysis identified themes and provided answers to the research questions. 
Results: Four themes emerged: (a) Dismissive illustrates little consideration of parent concern in the diagnostic process; (b) Limited Knowledge describes misunderstandings about clinical signs, recommended screenings, and testing to achieve a diagnosis of DMD; (c) Careless Delivery reports on the manner in which the diagnosis was given; and (d) Lack of Guidance describes the follow-up that occurred after the diagnosis. Conclusion: Despite marked medical progress over the past several decades, substantial barriers to arriving at the diagnosis of DMD and the provision of care guidance remain. Introduction: Attrition in pediatric weight management is a substantial problem. This study examined factors associated with short- and long-term attrition from a lifestyle and behavioral intervention for parents of children with overweight or obesity. Method: Fifty-two families with children ages 6 to 12 years old and body mass index at or above the 85th percentile participated in a randomized controlled trial focused on parents, comparing parent-based cognitive behavioral therapy with parent-based psychoeducation for pediatric weight management. We examined program attrition using two clinical phases of the intervention: short-term and long-term attrition, modeled using the general linear model. Predictors included intervention type, child/parent weight status, sociodemographic factors, and health of the family system. Results: Higher self-assessed health of the family system was associated with lower short-term attrition; a higher percentage of intervention sessions attended by parents was associated with lower long-term attrition. Discussion: Different variables were significant in our short- and long-term models. Attrition might best be conceptualized based on short- and long-term phases of clinical, parent-based interventions for pediatric weight management. 
Introduction: The purpose of this study was to examine the effect of a youth-centered assessment, the Sexual Risk Event History Calendar (SREHC), compared with the Guidelines for Adolescent Preventive Services (GAPS) assessment, on sexual risk attitudes, intentions, and behaviors. Methods: The Interaction Model of Client Health Behavior guided this participatory research-based randomized controlled trial. Youth participants recruited from university and community clinics in the Midwestern United States were randomized to a health care provider visit using either the SREHC or GAPS and completed surveys at baseline, postintervention, and 3, 6, and 12 months. Results: Participants included 181 youth (15-25 years old) and nine providers. Findings showed that youth in the SREHC group reported stronger intentions to use condoms compared with those in the GAPS group. Age and race were also significant predictors of sexual experience. Discussion: This study highlights the importance of using a youth-centered, systematic approach in the assessment of sexual risk behaviors. Introduction: This project evaluated an evidence-based self-instructional program aimed at improving cardiopulmonary resuscitation (CPR) knowledge and confidence in parents with children in swim lessons. Method: A prospective, repeated-measures design evaluated the CPR Anytime Child program. Twenty-nine parents completed questionnaires before, immediately after, and 1 month after the program. Results: Knowledge and confidence scores improved significantly over time. Compared with a baseline knowledge mean score of 47.3%, the mean score immediately after the program was 93.5% (t = -12.176, p < .01) and at 1 month was 80.9% (t = -8.459, p < .01). Confidence in determining CPR need increased from a baseline of 2.52 to 3.18 points immediately after the program (t = -2.88, p = .013) and 3.20 at 1 month (t = 4.759, p < .01). 
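The knowledge and confidence gains above are tested with paired t-tests on pre/post scores. As a minimal sketch of that calculation (the scores below are hypothetical, not the study's raw data):

```python
import math

def paired_t(pre, post):
    # Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)), with d = post - pre.
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical pre/post knowledge scores (percent correct), illustration only.
pre = [40.0, 50.0, 45.0, 55.0, 48.0, 42.0]
post = [90.0, 95.0, 92.0, 96.0, 94.0, 91.0]
t = paired_t(pre, post)  # large positive t: post scores clearly exceed pre scores
```

A statistics package would pair this t with a p value from the t distribution on n - 1 degrees of freedom; the sign of t simply reflects the direction of subtraction, which is why published values may appear negative.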
Confidence in performing CPR increased from a baseline of 2.14 to 3.18 immediately after the program (t = 4.759, p < .01) and 2.73 at 1 month (t = -2.88, p = .013). Discussion: The CPR Anytime Child program had a significant sustained effect on improving knowledge and confidence in parents of children in swim lessons. The simplicity of this program makes it replicable and sustainable in this setting. Introduction: One third of the approximately 23,000 undergraduates in the United States are overweight or obese. College students appear to be more vulnerable to disproportionate weight gain during this time. Method: Cross-sectional design. Diet, body mass index, and appetitive responsiveness were assessed in 80 undergraduates enrolled in three different meal plans: unlimited access, points, and none. Results: Appetitive responsiveness was positively correlated with fat (r = 0.34, p = .002) but not added sugars across groups. Unlimited access-plan students had higher fat consumption than no-plan students, regardless of appetitive responsiveness. Unlimited access-plan students had higher fruit and vegetable consumption and higher dairy consumption than point-plan students. There were no group differences for body mass index. All groups were below the U.S. Department of Agriculture guidelines for dairy and fruit and vegetable intake. Discussion: Optimizing the college campus food environment toward healthful, affordable choices is likely to improve dietary habits and might minimize college weight gain. Introduction: Emerging adults (EA) with disordered eating behaviors (DEBs) and Type 1 diabetes (T1D) are at increased risk for severe complications of T1D, and these behaviors have been reported in EA women with T1D. Few studies, though, have included men. This study assessed the prevalence of DEB in both EA men and women with T1D. 
Methods: DEB was measured with the diabetes-specific Diabetes Eating Problem Survey-Revised (DEPS-R); scores of 20 or greater indicate need for further evaluation for DEB. Results: A total of 27 women and 33 men (mean age = 21 +/- 2.5 years) completed the DEPS-R; 27% of women and 18% of men had scores of 20 or greater (p = .23). Hemoglobin A1c level was significantly higher in subjects with elevated DEPS-R scores (10.4 +/- 2.1% vs. 7.8 +/- 1.3%; p < .001), and DEPS-R scores correlated with increased body mass index values (r = 0.27, p < .05). Discussion: Clinicians should assess for DEB in both male and female emerging adults with T1D, especially overweight patients with poor glycemic control. Nonsuicidal self-injury (NSSI) in youth is a major public health concern. A retrospective chart review was conducted within a hospital system to examine (a) youth self-reports of reasons for engaging in NSSI and (b) additional contextual circumstances that may contribute to youth NSSI. Detailed history, physical examination, and treatment/discharge data were extracted by thoroughly reviewing all electronic documents in each medical record. The final sample (N = 135) was predominantly female (71.1%), and well over half (63.8%) reported Medicaid or uninsured status. Qualitative content analysis of youth self-reports and hospital progress notes showed that NSSI served as an emotional and functional coping mechanism. Five primary themes characterized the contextual influences on youth engaging in NSSI: (1) Personal Emotions, (2) Trauma, (3) Relationship Quality, (4) Sense of Loss, and (5) Risk Behaviors. Practical clinical practice suggestions for working with youth are discussed using these themes as a template for assessing risk and protective factors. Introduction: Measurement of cotinine, a biomarker of tobacco smoke exposure, can accurately identify children at risk of health consequences from secondhand smoke. 
This study reports perspectives from pediatric health care providers on incorporating routine cotinine screening into well-child visits. Methods: Key informant interviews (N = 28) were conducted with pediatric primary care providers: physicians, nurse practitioners, and registered nurses. Results: Themes identified in the interviews included the following: (a) Cotinine screening would assess children's exposure to tobacco smoke more reliably than parental report; (b) Addressing positive cotinine screening results might require additional resources; (c) Wheezing and a history of emergency department visits increased the salience of cotinine screening; and (d) A better understanding of the significance of specific cotinine test values would improve utility. Discussion: Pediatric providers see advantages of biomarker screening for tobacco smoke exposure at well-child visits, especially for children with wheezing, but have concerns about limited capacity for follow-up with parents. Introduction: The purpose of this qualitative study was to understand the feasibility and acceptability of implementing eHealth Familias Unidas, an Internet-based, family-based, preventive intervention for Hispanic adolescents, in primary care. Methods: Semistructured individual interviews with clinic personnel and facilitators (i.e., physicians, nurse practitioners, administrators, and mental health workers; n = 9) and one focus group with parents (n = 6) were audiorecorded, transcribed verbatim, and analyzed using a general inductive approach. Results: Nine major themes emerged, including recommendations to minimize disruption to clinic flow, improve collaboration and training of clinic personnel and the research team, promote the clinic as a trusted setting for improving children's behavioral health, and highlight the flexibility and convenience of the eHealth format. 
Discussion: This study provides feasibility and acceptability findings, along with important considerations for researchers and primary care personnel interested in collaborating to implement an eHealth preventive intervention in pediatric primary care. Objective: Knowledge of asthma home management from the perspective of poor, minority children with asthma is limited. Method: Convenience sampling methods were used to recruit families of low-income children who are frequently in the emergency department for uncontrolled asthma. Thirteen youths participated in focus groups designed to elicit reflections on asthma home management. Data were analyzed using grounded theory coding techniques. Results: Participants (Mean age = 9.2 years) were African American (100%), enrolled in Medicaid (92.3%), averaged 1.4 (standard deviation = 0.7) emergency department visits over the prior 3 months, and resided in homes with at least 1 smoker (61.5%). Two themes reflecting multifaceted challenges to the proper development of self-management emerged in the analysis. Discussion: Findings reinforce the need to provide a multipronged approach to improve asthma control in this high-risk population including ongoing child and family education and self-management support, environmental control and housing resources, linkages to smoking cessation programs, and psychosocial support. Every child is a unique individual. This individuality is evident in children exposed to psychosocial trauma or adverse childhood experiences. There exists wide variation in the way children respond to toxic stressors in their lives. Some children appear to be relatively unaffected, while others develop a variety of psychological, behavioral, and physical consequences. What is the explanation for this phenomenon? Resiliency has been suggested to explain this variation in pathology expressions in trauma-exposed children. It is vital for pediatric nurse practitioners to understand the concept of resilience. 
This continuing education offering will define concepts of resilience and stress, explore the neurobiology of resilience, and examine interventions that promote resilience in children. This case study examines some common complementary and alternative treatments used in the management of behavioral and gastrointestinal symptoms associated with autism, including food selectivity, abdominal pain, nausea, gastroesophageal reflux, constipation, and diarrhea. The current literature on the safety and efficacy of these treatments for pediatric patients is reviewed. This study examines therapies including gluten-free and casein-free diets, probiotics, vitamin B12, omega-3 fatty acid supplementation, chelation therapy, acupuncture, and chiropractic manipulations used in treating these core symptoms of autism. Neuroscientists continue to document the behaviors and actions that cause the brain to release powerful chemicals, which in turn drive significant human actions and behaviors. Leaders who understand the power of connection to purpose, trust, and vulnerability can unleash the power of neurobiology to improve employee well-being and engagement. As noted in part one of this article, structured-option test questions are an efficient and effective way to evaluate learner attainment of competency. Part two of this article explains how to develop a test plan, administer the test, interpret the test results, and revise the test questions based on evidence. Background: Evidence-based practice (EBP) education is important to overcome barriers to evidence use in practice. Method: The authors conducted a cross-sectional study to evaluate the EBP knowledge, attitudes, and practices (KAP) of RNs and midwives who had participated in an EBP workshop and compared their results with those of nonparticipants. Results: A total of 198 nurses and midwives responded to the survey, 91 who had received EBP education and 107 who had not. 
The mean total KAP score was significantly higher in the education group, indicating greater KAP in those respondents than in those who had not received education (p = .004). Conclusion: This study has shown that participation in a single day of EBP education covering the basic steps of EBP results in nurses who have more positive attitudes and greater knowledge and practice abilities in EBP than those who had not participated. Background: The continuing education needs of rural nurses are not well understood. Rural hospitals face special challenges that serve as barriers to the attainment of continuing education. The purpose of this study was to assess the educational needs and barriers identified by rural nurses in two midwestern states. Method: A 12-item needs assessment survey developed by the researchers and administered at forums in nine rural hospitals was completed by 119 nurses. Results: The areas identified as the highest need included postpartum hemorrhage, preterm labor, pediatric care, preeclampsia, shoulder dystocia, and embolism. The barriers to obtaining updated education included distance to travel for educational opportunities and lack of staff for coverage while attending the programs. Conclusion: Identifying the learning needs, preferred learning strategies, and potential barriers to continuing education of rural nurses is a critical step in designing educational offerings that enhance their knowledge. Interprofessional simulation-based learning may be particularly helpful in enhancing the clinical competence and confidence necessary to provide safe patient care in rural settings. Background: Conferences (formal meetings for learning) are a common venue for nurses to receive continuing education. This study used multimodal strategies, such as storytelling, lecture, case presentation, and discussions, to deliver a conference presentation. 
Method: Seventy-five and 69 rehabilitation nurses completed pretest and posttest surveys, respectively. Using an evaluative research design, seven questions measured the change in knowledge of Parkinson's disease (PD) and PD patient caregivers' needs. Two additional questions measured the change in comfort level with both topics. Results: For the knowledge questions, the mean (+/- SD) number of correct answers significantly increased from 3.4 (+/- 1.0) to 5.2 (+/- 0.9) (t = -10.0, p < .001). Participants reported increased comfort with PD and caregivers' needs, which was also statistically significant. Conclusion: Multimodal education strategies can provide robust conference experiences and improve learning. For the successful transfer of knowledge to diverse learners, careful planning of conference content must include attention to diverse teaching strategies. Background: Medication management has long been a role for nurses. Yet how new graduate nurses apply pharmacology knowledge to practice has been an issue not well identified in the literature. This article reports a survey undertaken in 2013 in one large urban New Zealand hospital that explores new graduate nurses' perceptions of applying their pharmacology knowledge to the clinical practice of medication management. Method: Survey research was employed for this study, with a postal survey distributed in 2013 to 128 nurses who had graduated within the previous 24 months. Twenty-five questionnaires were returned, giving a response rate of 19.53%. Results: Newly graduated nurses were found to understand the importance of applying pharmacology knowledge in their practice but also acknowledged the need to increase their knowledge of the medications they are administering. However, understanding of how medications work and are eliminated was a concern. 
Conclusion: This study highlights the need for ongoing support and education for newly graduated nurses as they continue to develop and apply their pharmacology knowledge. Patients admitted to the acute care setting with persistent pain are at risk for inadequate pain control. Nursing education and pain management perceptions affect partnership with the patient to achieve effective pain management. Traditional educational approaches that are didactic in nature do not necessarily promote problem-solving and critical thinking skills. This pilot study used a mixed-media vignette approach to provide real-life pain examples to influence both the knowledge and attitudes of staff nurses through problem-based learning. This article describes and evaluates the feasibility of using mixed-media vignettes to deliver pain education to bedside nurses. Two mixed-media vignettes were delivered through nine in-service sessions provided to day, night, and weekend shifts in rolling 30-minute intervals. A feasibility framework was used to evaluate project implementation. The results from this pilot study suggest that this educational approach has the potential to improve patient satisfaction scores in relation to pain management. Multifunctional magnetic fluorescent nanoparticles Fe3O4@SiO2-NH2@MTX@PPa (MFMTXPPa), conjugated with the antitumor drug methotrexate (MTX) and the fluorescent photosensitizer pyropheophorbide-a (PPa), were designed and prepared for chemotherapy, photodynamic therapy (PDT) and medical fluorescence imaging. Fe3O4@SiO2-NH2 nanoparticles were first synthesized by a sol-gel process. Then methotrexate and pyropheophorbide-a were successively anchored on the surface of the core-shell Fe3O4@SiO2-NH2 nanoparticles via covalent imide bonds. 
The phase structure, composition, morphology, zeta potential, and magnetic properties of the nanoparticles were characterized by X-ray powder diffraction, Fourier transform infrared spectrometry, transmission electron microscopy, zeta potential measurement, vibrating sample magnetometry, and thermogravimetric analysis. Meanwhile, we studied the effect of pH and different polar solvents on the absorption and fluorescence spectra of MFMTXPPa. Our results indicated that MFMTXPPa preferentially aggregated under acidic conditions but maintained its monomeric form under alkaline conditions. Aggregation markedly influences the absorbance and fluorescence intensity. In addition, we found that MFMTXPPa has high magnetization (46.7 emu/g), good water solubility and dispersibility in polar solvents, strong fluorescence intensity in near-neutral conditions, and excellent stability in acid-base systems, making it a potential candidate in the fields of chemotherapy, PDT and fluorescence imaging. Here, we report the preparation of rather spherical akaganeite (beta-FeOOH) nanoparticles with a mean diameter of ~5 nm by means of a hydrolysis procedure. The obtained material was extensively characterized by several techniques such as X-ray powder diffraction (XRD), Fourier transform infrared spectroscopy (FTIR) and transmission electron microscopy (TEM). The textural and surface properties were studied through the analysis of the surface area (BET), pore size distribution (BJH) and the point of zero charge, pH(PZC). The beta-FeOOH nanoparticles were used to evaluate the removal of Cr(VI) from aqueous systems. The results reveal that the optimal adsorption conditions were under acidic pH values (pH ~4.5). The adsorption kinetic data fit well with a pseudo-second-order model, while the Langmuir isotherm describes the adsorption equilibrium conditions of the system. 
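The kinetic and isotherm fits mentioned above are commonly done on the linearized forms of the two models: t/qt = 1/(k2*qe^2) + t/qe for pseudo-second-order kinetics, and Ce/qe = Ce/qm + 1/(qm*KL) for the Langmuir isotherm. A minimal sketch (the data are synthetic, generated from the reported qm of 56.5 mg/g and a hypothetical KL, purely for illustration):

```python
def linfit(x, y):
    # Ordinary least-squares slope and intercept.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def pso_fit(t, qt):
    # Pseudo-second-order kinetics: t/qt = 1/(k2*qe^2) + t/qe -> slope = 1/qe.
    slope, intercept = linfit(t, [ti / qi for ti, qi in zip(t, qt)])
    qe = 1.0 / slope
    k2 = 1.0 / (intercept * qe ** 2)
    return qe, k2

def langmuir_fit(ce, qe):
    # Langmuir isotherm: Ce/qe = Ce/qm + 1/(qm*KL) -> slope = 1/qm.
    slope, intercept = linfit(ce, [c / q for c, q in zip(ce, qe)])
    qm = 1.0 / slope
    return qm, slope / intercept  # (qm, KL)

# Synthetic equilibrium data: qm = 56.5 mg/g (reported), KL = 0.1 L/mg (hypothetical).
ce = [5.0, 10.0, 20.0, 40.0, 80.0]
q = [56.5 * 0.1 * c / (1 + 0.1 * c) for c in ce]
qm_hat, KL_hat = langmuir_fit(ce, q)  # recovers ~56.5 mg/g and ~0.1 L/mg
```

Because the synthetic points lie exactly on the model, the linearized fits recover the generating parameters; with real data, nonlinear regression on the raw isotherm is often preferred to avoid the error distortion introduced by linearization.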
The maximum adsorption capacity of the beta-FeOOH was estimated at 56.5 mg/g from the Langmuir isotherm at 25 degrees C. Discrete, well-shaped monoclinic Fe-doped ZrO2 nanoparticles were prepared in the range of nominal compositions Zr1-xFexO2-x/2, with 0 <= x <= 0.1, by a hydrothermal approach, and their optical and chemical properties were evaluated for potential application as photocatalysts. Monoclinic Fe-containing ZrO2 nanoparticles sized around 20 nm were obtained straightaway from the reagent mixture through hydrothermal aging at 140 degrees C for 3 days. The thermal stability of those monoclinic Fe-ZrO2 solid solutions was followed by X-ray diffraction, infrared and Raman spectroscopies, and scanning and transmission electron microscopy. Further annealing at 800 degrees C brought monosized, monoclinic single-phase Fe-containing ZrO2 nanocrystals without significant microstructural changes. Compared with other non-conventional synthetic approaches to zirconia-based solid solutions, it is worth noting that the present hydrothermal approach leads directly to the monoclinic form of the zirconia solid solution. Some preliminary results on potential advanced applications of the prepared monoclinic Fe-ZrO2 solid solutions as photocatalysts will be discussed. CuFe2O4-encapsulated ZnO nanoplates have been synthesized by a hydrothermal procedure and characterized by scanning (SEM) and transmission (TEM) electron microscopies, selected area electron (SAED) and X-ray (XRD) diffractometries, vibrating sample magnetometry (VSM) and energy dispersive X-ray (EDS), Raman, solid state electrochemical impedance (EIS), UV-visible diffuse reflectance (DRS), photoluminescence (PL) and time-correlated single photon counting (TCSPC) lifetime spectroscopies. The lifetime of charge carriers produced in CuFe2O4/ZnO is not less than that in ZnO nanoplates, and neither is the photocatalytic activity. The magnetically recoverable CuFe2O4/ZnO nanoplates show sustainable photocatalytic activity on recycling. 
The composite nanoplates are bactericidal as well. They inactivate E. coli effectively, even in the absence of direct light. The synthesized photostable CuFe2O4-encapsulated ZnO nanoplates are thus a three-in-one material. An electrochemical method was used to synthesize hematite (alpha-Fe2O3) nanoparticles in 0.1 M NaCl solution. The alpha-Fe2O3 was synthesized by dissolving the iron anode to form Fe(OH)x, followed by its calcination at different temperatures. The samples were characterized by means of X-ray diffraction (XRD), Raman and UV-vis spectroscopy, electron microscopy, EDX and FTIR. The photocatalytic oxidation of the mordant dye Chrome blue (CB) using the electrochemically synthesized hematite was studied under ultraviolet light (UV-light) irradiation. The effects of hematite calcination temperature and hydrogen peroxide concentration on the photocatalytic degradation of Chrome blue were investigated. Increasing the annealing temperature increases the degree of dye decolorization, which is connected with the gradually increasing proportion of the crystalline phase at the expense of the amorphous phase. As electromagnetic pollution is becoming more and more serious, novel composite microwave absorbents are gaining much attention. In this work, Fe-Fe3C/C fibers were successfully synthesized by using low-cost cotton as the carbon source. The synthetic route was an easy and flexible two-step approach consisting of immersion and subsequent carbothermal reduction in N2 atmosphere. The results showed that the Fe-Fe3C nanoparticles were uniformly loaded on the cotton-based carbon fibers. Resulting from the synergistic effect of magnetic loss from the Fe-Fe3C nanoparticles and dielectric loss from the cotton-based carbon fibers, a wide region of microwave absorption was achieved. When the thickness is 1.5 mm, the minimum reflection loss (RL) is -28.3 dB, and the effective bandwidth of RL less than -10 dB could reach up to 4.2 GHz. 
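Reflection-loss figures such as the -28.3 dB quoted above are conventionally computed from the measured complex permittivity and permeability using the single-layer, metal-backed transmission-line model, RL(dB) = 20*log10|(Zin - Z0)/(Zin + Z0)|. A minimal sketch with hypothetical material parameters (not the measured values for Fe-Fe3C/C):

```python
import cmath
import math

C = 2.998e8  # speed of light, m/s

def reflection_loss_db(f_hz, d_m, eps_r, mu_r):
    # Normalized input impedance of a metal-backed absorber layer of thickness d_m,
    # given complex relative permittivity eps_r and permeability mu_r at f_hz.
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * f_hz * d_m * cmath.sqrt(mu_r * eps_r) / C
    )
    # Reflection loss relative to free space (more negative dB = stronger absorption).
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# Hypothetical complex parameters at 10 GHz for a 1.5 mm layer, illustration only.
rl = reflection_loss_db(10e9, 1.5e-3, 6 - 2j, 1.2 - 0.4j)  # negative dB value
```

Sweeping `f_hz` over the measurement band and `d_m` over candidate thicknesses reproduces the familiar RL maps from which minimum RL and the RL < -10 dB bandwidth are read off.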
Owing to the characteristics of a cost-effective synthetic route, low density, low filling rate and good microwave absorption at thin thickness, the Fe-Fe3C/C fibers could be used as a practical microwave absorbent. Hematite nanostructures have been electrochemically grown by ultrasound-assisted anodization of iron substrates in an ethylene glycol based medium. This hematite nano-architecture has been tuned from a 1-D nanoporous layer (grown onto a bare iron foil substrate) to a high-aspect-ratio self-organized nanotube one (grown onto a pretreated iron foil). Well-developed hematite nanotube arrays perpendicular to the substrate, approximately 1 mu m in length, have been obtained. The nanoporous sample was characterized by pores of a mean diameter of 30 nm and an interpore distance of 150 nm, whereas the self-organized nanotube layer consisted of nanotube arrays with a single-tube inner diameter of approximately 50 nm and average spacing of approximately 90 nm. The wall thickness of the hematite nanotubes was approximately 30 nm. The photoelectrochemical properties of these two different hematite nanostructures under water-splitting conditions have been comparatively studied through EIS and PEIS methods. The strong correlation between the C-SS increase, with the R-SS,R-ct decrease, and the photocurrent development as the potential is made more anodic indicated that hole transfer for the water-splitting reaction takes place through the surface states and not directly from valence-band holes. From the PEIS spectra the rate constants of the elementary reactions responsible for the competing processes of interfacial charge transfer (k(tr)) and electron-hole recombination (k(rec)) have been determined. Better photoresponse kinetics were observed for the hematite nanotubular structure as compared to the nanoporous one. 
The latter indicates that in the hematite nanotubular structure there exists a good length-scale match between the nanotube wall thickness and the hole diffusion length (maximizing light absorption while keeping the bulk within the hole collection length), thereby diminishing recombination processes. Sandwich-structured FeS2-graphite composites have been developed for the first time for application as anode materials for lithium-ion batteries. XRD and TEM techniques reveal that FeS2 nanoparticles are intercalated into the interlayers of graphite and well dispersed. Assured by the combined advantages of the high capacity of FeS2, the high conductivity of graphite, and the specific sandwiched structure, the obtained composite anode materials deliver excellent electrochemical lithium storage properties, such as high reversible capacity (640 mAh g(-1) at 100 mA g(-1)), stable cyclic performance and long cyclic life (capacity retention of 89% over 1000 cycles at 1 A g(-1)), as well as good rate capability. This paper reports a core-shell structured, site-specific, light-controlled magnetic nanocomposite for drug delivery. Its core was composed of superparamagnetic Fe3O4 nanoparticles for magnetic guiding purposes. Its outer shell consisted of the mesoporous silica molecular sieve MCM-41, which offered highly ordered hexagonal tunnels for drug molecules. A ligand, N1-(5H-cyclopenta [1,2-b:5,4-b'] dipyridin-5-ylidene) benzene-1,4-diamine (denoted as Dafo-Ph-NH2), was coupled to the MCM-41 shell. The Dafo end can flip over under 510 nm light and therefore acts as a light-stimulated acceptor. The final composite was analyzed by electron microscope images, XRD, IR spectra, thermogravimetry, MTT and N2 adsorption/desorption. Our Dafo-MCM-41@Fe3O4 composite shows light-controlled release of vitamin B12 in vitro. 
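As a side note on the cycling stability quoted above for the FeS2-graphite anode (89% capacity retention over 1000 cycles): assuming, purely for illustration, a constant multiplicative fade per cycle, the implied average loss per cycle is tiny:

```python
# Average per-cycle retention implied by 89% capacity retention after 1000 cycles,
# under the simplifying assumption of a constant multiplicative fade per cycle.
retention_total = 0.89
n_cycles = 1000
r_per_cycle = retention_total ** (1.0 / n_cycles)
fade_per_cycle_pct = (1.0 - r_per_cycle) * 100.0  # roughly 0.012% per cycle
```

Real cells rarely fade at a constant rate, so this geometric-mean figure is only a convenient summary statistic, not a degradation model.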
Novel redox-sensitive core cross-linked micelles based on a disulfide-linked PEG-polypeptide hybrid polymer were prepared and demonstrated for anticancer drug delivery, where N,N-bis(acrylate) cystamine (BAC) served as the cross-linker, and allyl-terminated poly(gamma-benzyl-L-glutamate) (PBLG) and polyethylene glycol (PEG) methyl ether methacrylate acted as comonomers. The molecular structure and characteristics of the cross-linked micelles and the precursor were confirmed by H-1 NMR and FT-IR. The cross-linked micelles could be easily degraded into individual linear short chains (M-n = 1800) in the presence of 20 mM glutathione (GSH) through cleavage of the disulfide linkages from the cross-linker BAC. Doxorubicin (DOX) was selected as a model anticancer drug and encapsulated into the micelles with a decent drug loading content of 21.6 wt%. Compared to the burst release of free DOX in the first 6 hours, the in vitro release studies revealed that the micelles exhibited a sustained and high cumulative drug release in GSH (up to 86%) within 24 h, rather than the relatively low release rate of 62% in PBS (pH 7.4). Cytotoxicity experiments showed that the blank micelles were nontoxic, while the drug-loaded micelles exhibited high anticancer efficacy. All the results showed that the designed cross-linked PEG-polypeptide hybrid micelles may be a promising vehicle for anticancer drug delivery, with stimuli-triggered drug release behavior in a reducing environment. Fabrication of glypican-3 (GPC3)-targeted nanobubbles for contrast-enhanced ultrasound imaging is reported in this work. The targeted nanobubbles were formulated with PLGA as the shell and perfluorocarbon as the core via a double-emulsion (water/oil/water) evaporation process, with a conjugated peptide as the targeting ligand. The Peptide/PLGA NBs were characterized in vitro. 
Results: The mean diameter of the Peptide/PLGA NBs was 773.6 +/- 185.4 nm and the zeta potential was -22.3 +/- 3.3 mV; their sizes were nanoscale overall, although the size uniformity was limited and the diameters were dispersed. Ultrasonic imaging evaluation of the Peptide/PLGA NBs in vitro showed significant differences in echo intensity across different concentration gradients. The Peptide/PLGA NBs could target GPC3 on HCC cells in vitro. Conclusions: GPC3-targeted nanobubbles (Peptide/PLGA NBs) have been developed, which have the potential to be developed into ultrasound contrast agents. Optimization of the Peptide/PLGA NBs is necessary to decrease the size of the NBs and to narrow their dispersity, and further research needs to be done to evaluate the characteristics of the Peptide/PLGA NBs in vivo. Gold nanoparticles (AuNPs) have been synthesized by a green chemical approach using the biopolymer pectin, which is isolated from orange peel. The pectin-AuNPs nanoconjugates are attached to the PLA-PEG-PLA triblock copolymer matrix by an ultrasonication-induced water-in-oil (W/O) emulsion method. The pectin-AuNPs-PLA-PEG-PLA nanoconjugates are characterized by UV-visible absorption spectroscopy, X-ray diffraction (XRD), transmission electron microscopy (TEM), selected area electron diffraction (SAED), energy dispersive X-ray spectroscopy (EDS) and X-ray photoelectron spectroscopy (XPS). The XPS results confirm the zero oxidation state of Au, while the TEM images confirm the spherical shape and 7-13 nm size of the AuNPs. The XRD analysis reveals that the AuNPs exhibit an fcc crystal structure with an average crystallite size of 12 nm. The biocompatibility of the pectin-AuNPs-PLA-PEG-PLA nanoconjugates has been investigated by an in vitro cell line study on South African green monkey kidney cells. The anti-inflammatory activity of the nanoconjugates has been established by the membrane stabilization and inhibition of protein denaturation methods. 
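Average crystallite sizes such as the 12 nm quoted above are typically extracted from XRD peak broadening via the Scherrer equation, D = K*lambda/(beta*cos(theta)). A minimal sketch with a hypothetical peak position and width (illustrative numbers, not the study's diffraction data):

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    # Scherrer equation: D = K * lambda / (beta * cos(theta)),
    # with beta the peak FWHM in radians and theta the Bragg angle.
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical Au(111) reflection near 2theta = 38.2 deg (Cu K-alpha radiation),
# with an assumed FWHM of 0.72 deg after instrumental correction.
d = scherrer_size_nm(38.2, 0.72)  # on the order of 12 nm
```

Narrower peaks imply larger crystallites (halving the FWHM roughly doubles D); in practice the FWHM must first be corrected for instrumental broadening, and strain broadening can also inflate the apparent width.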
The pectin-AuNPs-PLA-PEG-PLA nanoconjugates exhibit better anti-inflammatory activity than pectin and pectin-AuNPs. Amphiphilic self-assembled phytosterol-alginate nanoparticles (PA NPs) were prepared by hydrophobic modification of alginate with phytosterols. Lactobionate, a tumor-cell-specific ligand, was grafted to the PA NPs for targeting liver cancer cells (HepG2 cells) that overexpress the asialoglycoprotein receptor (ASGP-R). The physicochemical properties of the lactobionate-phytosterol-alginate (LPA) NPs were characterized. Doxorubicin (DOX), a broad-spectrum anticancer agent, was encapsulated into the prepared NPs by a dialysis method. The recognition of the prepared LPA NPs by HepG2 cells was ascertained by cytotoxicity and lactobionate competition assays. Because of the ASGP-R-mediated endocytosis process, the DOX/LPA NPs had a lower IC50 value for HepG2 cells than pure DOX and DOX/PA. The cytotoxicity of the DOX/LPA NPs toward HepG2 cells could be competitively inhibited by free lactobionate. The cellular uptake of pure DOX and DOX/LPA NPs was examined by confocal laser scanning microscopy imaging, and the quicker intracellular uptake of drug for the DOX/LPA NPs over pure DOX was confirmed. The LPA NPs thus show promise as a drug carrier for targeting drugs to cancer cells overexpressing ASGP-R while sparing the cells of normal tissues. In this work, graphene quantum dot (GQD)-based fluorescent sensors for the detection of clenbuterol (CLB), using the fluorescence resonance energy transfer (FRET) effect, were developed and characterized. In the sensors, the GQDs served as the donor, while the conjugated molecule formed from N-(1-naphthyl)ethylenediamine (NED) and diazonium-CLB served as the acceptor. A good overlap between the acceptor's UV-vis spectrum and the donor's PL spectrum was observed. The detection limit of the sensors was found to be a CLB concentration of 10^-10 g/ml. 
A wide linear relationship between the PL intensities of the sensors and the CLB concentrations in the range from 10(-4) g/ml to 10(-10) g/ml was observed. This report contributes to the design of GQD-based fluorescent sensors for the detection of growth promoters. The current work entails the development and assessment of a well-tolerated colloidal carrier system containing the immunosuppressant drug tacrolimus (TL) for transdermal delivery, to study its efficacy against arthritis. TL-NEs of different compositions from the phase diagram were prepared by high-shear homogenization, and a comprehensive physico-chemical characterization of the novel NEs was performed using different techniques in order to obtain the optimum NE. The optimized NE was incorporated into carbopol-934 gel and subjected to ex vivo skin permeation studies, droplet size analysis, zeta potential measurement, TEM examination, rheology, and stability studies. Moreover, we have evaluated the in vivo anti-arthritic potential of the same formulation and compared it with a marketed ointment (Protopic (R), 0.03%) for the first time. The optimized TL-NG formulation was composed of Capryol 90 (5.0% w/w), tween-20 (15.0% w/w), Transcutol HP (7.5% w/w), water (72.5% w/w) and carbopol-934 (1.0%), and was found to have a permeation flux of 68.88 mu g/cm(2)/h, release of 1621.46 mu g/24 h, a small droplet size (12.72 nm), and a viscosity of 1.07 Pa-s. The ZP results indicated that the formulation was stable, and the shelf life at room temperature was calculated as 1.59 years. The in vivo investigation provided direct evidence of a significant reduction (41.80%) in inflammation over a period of 24 h. On the basis of these preliminary findings, we conclude that the developed TL-NG has good anti-inflammatory action and may provide a promising perspective for the treatment of arthritis. Hydrogels based on chitosan and gelatin were prepared by freeze-drying and ionic crosslinking with sodium tripolyphosphate. 
The anionic crosslinker interacts with the protonated amino groups of chitosan and gelatin via electrostatic attractions. The hydrogels, with a three-dimensional structure, are macroporous with elongated or spherical pores, randomly or uniformly distributed as a function of the crosslinking degree; the uniformity increases with crosslinking time. The hydrogels absorb large quantities of phosphate buffered solution and simulated wound fluid. They are degraded by collagenase, the enzyme specific for collagens. The antimicrobial drug norfloxacin was loaded with good efficiency into the hydrogel network, and kinetic studies of drug release were carried out in phosphate buffered solutions. Analysis of the drug release data with mathematical models suggested a combination of swelling and matrix relaxation as the major drug release mechanisms. Cytotoxicity assays were performed in order to establish the possibility of using the hydrogels in contact with the human body; the prepared hydrogels are biocompatible and stimulate cell proliferation. Controlled material distribution is important for creating anisotropic building blocks and introduces an extra design parameter. Recently, Janus particles have garnered much interest due to their anisotropic properties for many applications in research and technology. In this paper, we report a one-step, two-phase emulsion synthesis approach as a universal route to prepare Janus nanoparticles with multiple functional (plasmonic, magnetic and fluorescent) properties. By using magnetic and fluorescent Janus nanoparticles to replace, respectively, the primary capture antibodies fixed on microtiter plates and the enzyme-linked secondary antibodies for colorimetric detection, the in vitro detection of prostate-specific antigen was conducted with the aim of simplifying the traditional enzyme-linked immunosorbent assay, and the linear concentration-dependent detection range meets current clinical diagnostic requirements. 
We hope this high-yield method for multifunctional Janus nanoparticles can shed light on both traditional colorimetric biosensing and modern precision medicine. This article describes a new alternative approach to the fabrication of electrochemical sensors based on graphene quantum dots (GQDs) functionalized with chitosan and beta-cyclodextrin. The functionalized GQDs are shown to exhibit electrochemical performance that rivals that of a GQD-modified glassy carbon electrode and to perform well under demanding electrochemical sensing conditions. The functionalized GQD electrodes are further extended to the demonstration of novel electrochemical sensors through transfer of the electrode fabricated from GQDs. For comparison of the recognition efficiency, other electrodes, including a bare glassy carbon electrode (GCE) and a GQD-modified glassy carbon electrode (GQDs-GCE), were used for control experiments. The resulting sensors demonstrate a wide range of usability, from the detection of various physiological analytes, including uric acid, ascorbic acid and dopamine, to the identification of some amino acids. Comparison of the cyclic voltammograms recorded in the presence of L-cysteine, dopamine, uric acid, L-tyrosine, L-phenylalanine, and ascorbic acid using GQD-CS-GCE and beta-CD-GQDs-GCE shows that Delta E-p decreased in the order beta-CD-GQDs-GCE > CS-GQD-GCE > GQDs-GCE. The larger lowering of the overvoltage observed with beta-CD-GQDs-GCE clearly indicates the essential role of beta-CD in the observed electrocatalytic behaviour. In general, the attachment of chitosan and beta-cyclodextrin to the structure of GQDs provides new opportunities within the personal healthcare, fitness, forensics, homeland security, and environmental monitoring domains. 
Salmonella is one of the most common foodborne pathogens, and Salmonella outbreaks are mostly associated with the intake of contaminated food or drink. Therefore, rapid and sensitive on-site detection of Salmonella is very important. We report a naked-eye detection method for Salmonella typhimurium using a scanometric antibody probe. The antibody-attached glass substrate was treated with Salmonella typhimurium and the scanometric antibody probe was applied. After Ag enhancement of the probe, Salmonella typhimurium could be detected with the naked eye. The scanometric antibody probe was prepared by simply mixing Au nanoparticles, gold-binding peptide-protein G, and an antibody against Salmonella typhimurium. This probe acts as a signal enhancer and thus allows an extremely simple, rapid, and efficient analysis of Salmonella typhimurium by the naked eye. We detected Salmonella typhimurium at a concentration as low as 10(3) CFU/ml and clearly distinguished this bacterium from other foodborne pathogens. Furthermore, we successfully detected Salmonella typhimurium in milk, suggesting that this method can be useful for real-life samples. Because the scanometric antibody probe can be adapted to various types of antibodies, this naked-eye detection method could be employed for the detection of various types of pathogens. Silver nanoparticles (AgNPs) have been widely used in the field of medicine. This in vitro study aimed to test (1) bacterial viability after treatment with AgNPs and chlorhexidine (CHX) against E. faecalis biofilm and (2) the antibacterial mechanism of action of AgNPs. Sixty single-rooted mandibular premolars were selected. The cylindrical mid-root sections were enlarged with a Gates Glidden drill no. 3. Dentin blocks were contaminated with E. faecalis (ATCC 29212). The samples were divided into three groups based on the medicament that was packed: Group I, saline (control); Group II, 2% CHX; and Group III, AgNPs. 
At the end of Days 1 and 3, assessment of live and dead cells was carried out using confocal laser scanning microscopy (CLSM). The mechanism of antibacterial action was studied through the membrane damage of E. faecalis, using scanning electron microscopy; it was also confirmed by a membrane permeabilization assay. AgNPs showed a higher percentage of dead cells compared to CHX. There was no statistically significant difference between placing AgNPs as a medicament for Day 1 and Day 3. The membrane damage of E. faecalis after treatment with AgNPs was demonstrated. The current study proved the antibacterial efficacy of AgNPs in the reduction of E. faecalis biofilm. The most common treatment for trichomoniasis is the use of metronidazole; however, several studies have shown that at least 5% of clinical cases of trichomoniasis are caused by parasites resistant to the drug. Lipophilic bismuth nanoparticles (BisBAL NPs) have important antimicrobial activity; however, the influence of BisBAL NPs on human parasites has not been studied. The objective of this study was to determine the effect of lipophilic bismuth nanoparticles on Trichomonas vaginalis growth. The bismuth nanoparticles synthesized by the colloidal method are composed of >= 100 nm crystallites and have a spherical structure, agglomerating into clusters of small nanoparticles. Based on cell viability assays and fluorescence microscopy, Trichomonas vaginalis growth was inhibited by the addition of 62.5-125 mu g/mL of BisBAL NPs. Fluorescence micrographs showed live T. vaginalis in the absence of any drug treatment, whereas after 24 h of exposure to 500 mu g/mL of BisBAL NPs or 1.3 mu g/mL metronidazole, a dark background with stained cellular debris was observed. In summary, we present evidence for the first time of the antiparasitic activity of lipophilic bismuth nanoparticles, which are as effective as metronidazole in interfering with T. vaginalis growth. 
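Growth-inhibition windows such as the 62.5-125 mu g/mL range reported above are commonly summarized by an IC50, the dose at which viability drops to 50% of the untreated control. A minimal sketch using log-linear interpolation on hypothetical viability data (not the study's measurements):

```python
import numpy as np

# Hypothetical viability readings (% of untreated control) at increasing
# nanoparticle doses; the values are illustrative, not the study's data.
dose = np.array([15.6, 31.25, 62.5, 125.0, 250.0, 500.0])   # ug/mL
viability = np.array([95.0, 82.0, 61.0, 34.0, 15.0, 6.0])   # %

# IC50: the dose at which viability crosses 50%, interpolated on a log scale.
# np.interp needs increasing x, so the decreasing viability curve is reversed.
log_dose = np.log10(dose)
ic50 = 10 ** np.interp(50.0, viability[::-1], log_dose[::-1])
print(f"IC50 ~ {ic50:.0f} ug/mL")
```

For publication-grade values a four-parameter logistic fit would normally replace the simple interpolation shown here.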
BisBAL nanoparticles could be an interesting alternative to treat and prevent parasitic infections. Nanoemulsions (NEs) applied as drug delivery systems are particularly convenient for increasing the solubility and release of poorly water-soluble drugs. The aim of this study was to prepare NEs containing ketoconazole (KTZ), a poorly water-soluble drug, characterize them, and evaluate their antifungal activity against Candida albicans. The solubility of KTZ was studied in different essential oils. Clove and sweet fennel essential oils were chosen as the oily phase, and Pluronic F127 (R) and Cremophor RH40 (R) as the non-ionic surfactants in the water phase. Formulations were prepared by a high-energy emulsification method and were evaluated in terms of physical appearance, mean droplet size and polydispersity index. The NEs were successfully obtained and the mean droplet size was less than 100 nm. To verify the efficacy of the NEs, the viability and growth of C. albicans were measured by propidium iodide influx analysis and by counting colony-forming units (CFU/mL). The results showed that loss of fungal membrane integrity and decreased growth were obtained after treatment with the KTZ NEs containing clove oil (NEs-CL-KTZ) and with free KTZ. However, the number of non-viable cells was significantly (p < 0.01) higher for cells treated with NEs-CL-KTZ compared to free KTZ. The in vitro drug release profile demonstrated that the NEs-CL-KTZ formulation could increase the release of KTZ more than ninefold compared to KTZ cream. To conclude, NEs-CL-KTZ proved to be the most efficient formulation tested in the present study for the treatment of C. albicans infections. Atorvastatin, an inhibitor of 3-hydroxy-3-methylglutaryl coenzyme A reductase, is known to exert lipid-lowering, but also anti-inflammatory, effects in extra-arterial locations. 
In order to avoid the toxicity associated with long-term oral treatment, in this study we propose novel lipid nanostructures containing atorvastatin to improve its efficiency and bioavailability after local application in periodontitis inflammation therapy. The physico-chemical characterization of the nanostructures was performed using the dynamic light scattering technique, and morphological observations were made by light microscopy. The encapsulation efficiency was determined by high performance liquid chromatography analysis of loaded atorvastatin. In vitro cytotoxicity and anti-inflammatory activity were evaluated in the human premonocytic THP-1 cell line and in a model of lipopolysaccharide-induced inflammation in macrophages, respectively. The results showed that the population of atorvastatin lipid nanostructures presented a mean diameter of 178 nm and good homogeneity after application of sonication and extrusion treatments, as indicated by the low polydispersity index of 0.223. The efficiency of atorvastatin encapsulation was high (87.3%), and the cytotoxicity of the nanostructures was low for lipid concentrations ranging from 50 mu M to 500 mu M. Experiments in THP-1 cells differentiated to macrophages demonstrated that the atorvastatin liposomal formulation led to a higher inhibition of the release of lipopolysaccharide-induced proinflammatory cytokines (interleukin 6, tumor necrosis factor alpha and interleukin 8) compared to the free drug. In conclusion, atorvastatin lipid nanostructures could be used to develop an efficient local treatment of periodontitis inflammation. It has been reported that lysosomal intensity increases with aging. Lysosomes and melanosomes are related, because melanosomes share a biogenesis pathway with lysosomes. We therefore evaluated lysosomal intensity in relation to the efficiency of melanin reduction. We found that lysosomes can decrease the amount of melanin through their specialized functions in vivo. 
The melanin quantity was compared in melanocytes after exposure to ultraviolet (UV) radiation and alpha-melanocyte stimulating hormone (alpha-MSH) to evaluate expression levels under different stimuli. Additionally, we compared the ability to reduce the amount of melanin after exposure to oxidative stress. Lastly, we applied lysosomes isolated from HeLa cells to reduce melanin in melanocytes. Lysosomes at 2% were effective in reducing the amount of melanin in cells. We therefore conclude that lysosomes are a useful bio-agent for decreasing the amount of melanin. Ion exchange hydrogels are widely used in the pharmaceutical field due to their high loading capacity and controlled delivery of drugs. In this paper, cation exchange hydrogels based on natural polymers (sulfopropyl dextran, SPD; carboxymethyl dextran, CMD; dextran sulfate, SD) or a synthetic matrix (styrene-divinylbenzene, Vionit CS-3) were evaluated as supports for the loading and controlled delivery of the hypoglycemic basic drug buformin, taken as a model drug. The hydrogels, except dextran sulfate, showed high drug loading capacity and efficiency, due to strong drug/cation exchange hydrogel interactions and good accessibility of the drug to the loading sites. All hydrogels released buformin over a prolonged time. The results concerning LD50 in rats and mice demonstrated that there is no significant difference between the toxicity of buformin itself and that of its complexes with the studied cation exchange hydrogels. It was also demonstrated that the buformin-cation exchange complexes are pharmacodynamically active and induce a significant decrease in glycemia. This paper describes a trans-deposition method for the size-controlled preparation of metal oxide-supported gold nanoclusters (AuNCs) varying from 1 to 8 nm in diameter. Colloidal AuNCs of various sizes were successfully deposited on hydroxyapatite (HAP) without undergoing a significant change in size from the initial polymer-supported AuNCs. 
The resulting AuNCs deposited on HAP (Au:HAP) were characterized by transmission electron microscopy, powder X-ray diffraction, X-ray photoelectron spectroscopy, inductively coupled plasma atomic emission spectroscopy, and X-ray absorption spectroscopy. The Au:HAP thus obtained catalyzed the aerobic oxidation of 1-indanol to give 1-indanone with excellent yield and recyclability. BaTiO3 and polypyrrole (PPy)-BaTiO3 hybrid nanocomposites have been synthesized by a chemical oxidative polymerization method. The microstructure and crystallinity of the hybrids are studied using field emission scanning electron microscopy (FE-SEM), high resolution transmission electron microscopy (HRTEM) and the X-ray diffraction (XRD) technique. The as-prepared BaTiO3 particles are rod-like, while the PPy-BaTiO3 nanocomposites show the formation of bulging agglomerates of spherical particles of various sizes (40-50 nm). The room-temperature dielectric constants of the composites are greatly enhanced (up to 6000). The hybrid composite shows grain boundary relaxation in the frequency range 42 Hz-5 MHz. Three-dimensional (3D) variable range hopping (VRH) with high localization of charge carriers (Mott temperature approximately 8725658 K) is observed in the temperature-dependent conductivity evaluation of the composite system. Negative magnetoresistance (MR of approximately 4.3%) has been measured at 1 T. The observed MR is explained with the help of the forward interference model. Understanding the innovation trends and strategies of a significant scientific instrument is an important issue not only for researchers in technology development, but also for company directors for the successful management of product competitiveness. Therefore, we took mass spectrometry as a case study. Multiple quantitative patent indexes and several quality indicators were introduced. 
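Patent-based technology forecasting of the kind described in the mass spectrometry study above often rests on a logistic (S-curve) model of cumulative patent counts, from which a time-to-saturation can be read off. A minimal sketch with assumed parameters K, r and t0 (none are given in the abstract):

```python
import math

# Logistic (S-curve) model of cumulative patents: P(t) = K / (1 + exp(-r*(t - t0)))
# K (saturation count), r (growth rate), t0 (inflection year) are assumed values.
K, r, t0 = 10000.0, 0.15, 2010.0

def cumulative_patents(year):
    """Cumulative patent count predicted by the logistic model."""
    return K / (1.0 + math.exp(-r * (year - t0)))

def year_reaching(fraction):
    """Year at which P(t) reaches the given fraction of the saturation K."""
    return t0 + math.log(fraction / (1.0 - fraction)) / r

print(f"90% of the limit around {year_reaching(0.90):.0f}")
```

In practice K, r and t0 are fitted to the observed patent time series rather than assumed; the closed-form inversion above is the step that turns a fitted curve into a "years until the limit" statement.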
The methods of patent bibliometrics, technology field analysis, technology decomposition, and growth scenarios were chosen to investigate the trends for this technology. The patent layout was expressed at three levels: micro (inventors), meso (organizations), and macro (countries). The results show that the key points of mass spectrometry research are ionization and separation in the mass analyzer. The application of mass spectrometry was the major area. The US owned the most patents on raising efficiency and expanding applications of mass spectrometry, while Japan owned the most patents on improving reliability and convenience of use. The forecasting showed that this technology has entered a fast growth stage; substantial accumulation in mass spectrometry has been achieved, and it would be another 20 years before the limit for mass spectrometry technology could be reached. The total patent collaboration ratio was 74.84%: inventors kept close cooperation in the field of mass spectrometry, and inventor cooperation is a common form for the research and development of mass spectrometry worldwide. All the organizations from the US had higher external collaboration ratios, but Japan paid less attention to external cooperation. The US, Japan, and developed European countries were the main technological markets. The fabrication of polymer nanocomposites based on polycaprolactone (PCL) and titanium dioxide (TiO2) nanoparticles was successfully carried out via a solution casting technique. The structural properties of the prepared nanocomposites, such as lattice parameters and crystallite size, have been studied by X-ray diffraction (XRD); the interactions between PCL and TiO2 were confirmed by Fourier transform infrared (FT-IR) spectroscopy, and the surface morphology was evaluated by field emission scanning electron microscopy (FE-SEM) analysis. In this study, the inorganic nanofiller TiO2 was successfully incorporated into the PCL matrix at different weight percentages: 1, 3 and 5 wt%. 
The electrical properties of the corresponding nanocomposites (PCL + 1 wt%, PCL + 3 wt% and PCL + 5 wt% TiO2) were investigated and showed improved values compared to pure PCL. The dielectric constant, dielectric loss, electric modulus and electrical conductivity were measured in the frequency range of 10(-2) to 10(7) Hz. Hydrogenated silicon nitride (SiNx:H) films were fabricated using plasma-enhanced chemical vapor deposition at high (13.56 MHz), low (400 kHz), and dual (13.56 MHz + 400 kHz) radio frequencies. The antireflection and passivation qualities of each film were investigated before their application as the front surface layer of crystalline silicon solar cells. The use of a high radio frequency was observed to have a positive effect on cell performance, which increased by 0.4% in comparison with the minimum value obtained for a 156 mm x 156 mm cell. We report a simple synthetic route for the preparation of titanium dioxide (TiO2), cobalt ferrite (CoFe2O4) nanoparticle and reduced graphene oxide (RGO) based nanocomposites, TiO2/CoFe2O4/RGO. This is the first time a non-hydrothermal technique has been reported for the preparation of TiO2/CoFe2O4/RGO nanocomposites. Moreover, unlike other reported methods, only water was used as the reaction medium. The catalytic activity of the synthesized nanocomposites towards the degradation of various dyes, such as Methyl Orange, Rhodamine B and Methylene Blue, under visible light was investigated. The catalytic reactions were monitored using UV-vis spectroscopy. TiO2/CoFe2O4/RGO exhibited high capability as an excellent magnetically separable catalyst for the degradation of various dyes under visible light. The high catalytic activity and simple preparation methodology make TiO2/CoFe2O4/RGO nanocomposites an attractive catalyst for the degradation of hazardous organic dyes. 
In the present work, cobalt nanoparticles were synthesised in the range of 4 to 72 nm through the chemical reduction method, in which cobalt nitrate, oleic acid and sodium borohydride were used as the precursor, stabiliser and reducing agent, respectively. The syntheses were carried out by varying synthesis parameters such as the amount of reducing agent, the amount of stabiliser and the turbulence level of the reaction medium, with the aim of evaluating their influence on the control of the cobalt nanoparticle size distribution. The nanoparticle size decreases at high concentrations of reducing agent and stabiliser, due to an increase in the reaction rate and in the viscosity of the medium, respectively. It was also verified that the high level of turbulence used during the synthesis could give rise to anomalous diffusion of cobalt nanoparticles in the reaction medium. Under this condition, the growth kinetics may vary significantly between two distinct reaction paths, causing growth behaviour opposite to that of syntheses yielding a normal particle size distribution. In this paper, a process scheme for polymer TSV fabrication is proposed to solve the difficulty of polymer liner formation in TSV technology. By using ring trench etching and BCB deposition in a vacuum environment, a uniform BCB liner can be successfully fabricated. Electrical measurement using a Kelvin structure was completed to verify the performance of the fabrication. However, a new phenomenon causing tiny resistance changes with the arrangement of TSVs was discovered during measurement of the TSV chain. In this research, this effect is discussed in detail by constructing different designs of experiment (DOE) to pursue good electrical connection and reduce the variance of the delay property of each TSV. Based on statistical results, the impacts of TSV manufacturing process variance are discussed and a guideline for high density TSV design is established. 
Silver-modified TiO2 photocatalysts were prepared by a rapid sol-gel method and used for the photocatalytic oxidation of different pollutants. The photocatalytic activity of the as-prepared samples was evaluated by photo-degradation of organic methylene blue (MB), toluene, nitric oxide (NO) and phenol. The addition of Ag to TiO2 leads to enhanced photocatalytic degradation of different pollutants, attributable to prolonged lifetimes of the photogenerated electron-hole pairs. As a result, we found that the photocatalytic activity of the Ag-TiO2 (1:25) powder was superior to that of pure TiO2 under both simulated sunlight and visible light irradiation. Furthermore, Ag-TiO2 (1:25) also exhibits the highest photocatalytic activity for the degradation of toluene, NO and phenol. Under similar conditions, degradation efficiencies of 80% for toluene and 92% for NO gas-phase pollutants were achieved after 100 min of irradiation. In this paper, the objective was to explore the probability of amylose inclusion complex formation with linoleic acid and to estimate the thermodynamic compatibility of the amylose-linoleic acid complex in water. We used Gromacs to perform molecular dynamics simulations of the behavior of the amylose-linoleic acid complex in water. First, amylose alone in water was simulated with Gromacs; it was found that amylose alone at 373 K had a more extended chain than at 300 K, and the former had a larger radius of gyration than the latter, which means the former hydrated more adequately than the latter. 
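The radius-of-gyration comparison above (a larger Rg at 373 K indicating a more extended, better-hydrated chain) reduces to a mass-weighted second moment of the atomic coordinates, the same quantity GROMACS reports from a trajectory. A minimal numpy sketch on toy coordinates:

```python
import numpy as np

def radius_of_gyration(coords, masses=None):
    """Rg = sqrt( sum_i m_i |r_i - r_cm|^2 / sum_i m_i ) for an N x 3 array."""
    coords = np.asarray(coords, dtype=float)
    if masses is None:
        masses = np.ones(len(coords))          # equal masses by default
    masses = np.asarray(masses, dtype=float)
    r_cm = np.average(coords, axis=0, weights=masses)   # center of mass
    sq_dist = np.sum((coords - r_cm) ** 2, axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

# Toy check: four equal-mass sites at the corners of a 2 x 2 square
square = [(1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0)]
print(round(radius_of_gyration(square), 4))    # sqrt(2) ~ 1.4142
```

Applied frame by frame to the two trajectories, this yields the 300 K versus 373 K Rg comparison the abstract describes.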
Then the amylose and linoleic acid system was simulated with Gromacs at 300 K and 373 K. After 1 ns of simulation (1 ns = 10(-9) s), it could be clearly seen that linoleic acid was included into the amylose helical cavity at 373 K but not at 300 K. By comparing, for the complexes, the configurations after 1 ns with and without solvent, the root-mean-square deviation, the solvent accessible surface area, the molecular separation distance, and the trajectories of molecular movement, it was concluded that complexation of amylose and linoleic acid was much more thermodynamically favored at 373 K than at 300 K. These results suggest that the proposed method could help to probe the probability of amylose inclusion complex formation with small ligands and to estimate the thermodynamic compatibility of complexes of amylose with different small ligands in water. Aminotriazole is among the most popular herbicides that are widely used to boost food production, and its residues pose a great threat to human health and the environment. Herein, we present a new colorimetric sensor for the detection of aminotriazole residues based on the aggregation of glutathione-functionalized gold nanoparticles (GSH-AuNPs). The AuNPs were synthesized by the reduction of chloroauric acid (HAuCl4 center dot 4H(2)O) with trisodium citrate (C6H5Na3O7 center dot 2H(2)O) and characterized by UV-visible spectrometry and TEM. The GSH-AuNPs were prepared using a simple green one-pot method. Determination of aminotriazole was carried out under optimal detection conditions based on the aggregation of AuNPs induced by the synergistic effects of electrostatic interaction, hydrogen bonding and size selectivity between GSH and aminotriazole, which gave clear color and UV-vis spectrum changes. 
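Colorimetric sensors like the one described above are typically characterized by a linear calibration slope and a 3-sigma/slope detection limit computed from blank replicates. A minimal sketch with illustrative absorbance data (not the study's measurements):

```python
import numpy as np

# Assumed calibration of signal vs aminotriazole concentration (uM),
# plus blank replicates; all numbers are illustrative.
conc = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])          # uM
signal = np.array([0.050, 0.139, 0.278, 0.502, 0.731, 0.951])
blanks = np.array([0.046, 0.054, 0.050, 0.055, 0.045, 0.050])

slope, intercept = np.polyfit(conc, signal, 1)
lod = 3.0 * np.std(blanks, ddof=1) / slope                  # 3-sigma/slope rule
print(f"slope = {slope:.4f} per uM, LOD ~ {lod:.2f} uM")
```

The 3-sigma/slope convention is the standard way such detection limits are reported; with real data one would also verify linearity over the claimed range before quoting it.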
The sensor showed good anti-interference ability and displayed a wide linear relationship for aminotriazole concentrations varying from 0.59 to 21.40 mu M, with a low detection limit of 0.27 mu M. The recoveries in spiked cabbages and apples were measured to be between 91.96% and 105.90%. These preliminary results demonstrate that the as-prepared AuNP sensor would be a promising candidate for fast aminotriazole analysis in real applications. Tobacco mosaic virus (TMV) is used as an organic biotemplate to prepare hybrid materials with silver nanoparticles (AgNPs). In this study, the native rod-like TMV structure was transformed into spherical particles using a thermal treatment; subsequently, the external surfaces of the thermally-treated TMV (TT-TMV) were covered in situ with AgNPs through cycles of sequential reduction using sodium borohydride. The as-grown nanoparticles show a bimodal distribution on the surface of the TT-TMVs after the fifth nucleation cycle. Characterization of the metallized TT-TMV by dynamic light scattering, transmission electron microscopy, atomic force microscopy, and energy-dispersive X-ray spectroscopy consistently shows the effect of cycling on the morphology of the AgNPs/TT-TMV hybrids. A thin Ti-metal-protected stainless steel mesh coated with mesoporous TiO2 has been used as a flexible photoanode to fabricate back-contact, transparent-conductive-oxide-less dye-sensitized solar cells. TiO2 nanoparticles with a particle size of 15-20 nm, sensitized with a dye cocktail of the two indoline dyes D-205 and D-131, were first utilized owing to their complementary light harvesting properties. The short-circuit photocurrent density (J(sc)) for the dye cocktail combination of D-205 and D-131 (1:1) was found to be increased due to the enhanced photon harvesting in the 400-500 nm region, mainly associated with the contribution from the D-131 dye. 
In addition, electron recombination was suppressed when the dye cocktail was employed, as confirmed by dark current measurement, leading to a higher open-circuit voltage (V-oc). The enhanced J(sc) accompanied by the increased V-oc resulted in an improved efficiency of 3.59% for this cocktail combination. To enhance the efficiency even further, we utilized TiO2 nanoparticles with a larger particle size of 30 nm, facilitating the mass transport of the bulky [Co(bpy)(3)](3+/2+) redox species. In order to widen the photon harvesting window, the TiO2 nanoparticles were sensitized with a porphyrin dye (YD2-o-C8) along with different dye cocktail combinations with another complementary dye (Y123). Utilization of a dye cocktail of YD2-o-C8 and Y123 (4:1) led to an improved photoconversion efficiency of 5.25% under simulated solar irradiation. A rapid combustion process for the preparation of magnetic MgFe2O4 nanoparticles was introduced; the structure and properties of the as-prepared product were investigated by XRD, SEM, TEM and VSM techniques. The experimental results revealed that the volume of absolute alcohol and the calcination temperature were two key parameters for the preparation of MgFe2O4 nanoparticles. With the volume of absolute alcohol increasing from 10 mL to 25 mL, the particle size of the MgFe2O4 nanoparticles decreased from 70 nm to 52 nm, while with the calcination temperature increasing from 400 degrees C to 800 degrees C, the particle size increased from 52 nm to 123 nm. The magnetic MgFe2O4 nanoparticles calcined at 400 degrees C for 2 h had an average particle size of around 50 nm and a specific magnetization of 247.2 Am(2)/kg. 
The magnetic MgFe2O4 nanoparticles were employed to remove methyl blue (MB) from aqueous solution. The adsorption kinetics and adsorption isotherm of MB onto MgFe2O4 nanoparticles at room temperature were investigated; the regression was found to be in good agreement with the pseudo-second-order model for initial MB concentrations of 100-400 mg/L. The adsorption equilibrium data of MB onto MgFe2O4 nanoparticles at room temperature were analyzed with the Langmuir, Freundlich and Temkin models, and the adsorption isotherm was most effectively described by the Temkin model, based on the value of the correlation coefficient (0.9981). Precursors of CuCo2O4 (CCO) were synthesized by a ball-milling method and a solvothermal method, respectively. The precursors were further heat-treated at a low temperature (450 degrees C) in air to obtain CuCo2O4. The SEM results showed that the CCO whose precursor was synthesized by the ball-milling method (B-CCO) presents uneven particles, while the CCO whose precursor was synthesized by the solvothermal method (S-CCO) exhibits a microsphere morphology. When tested as anode materials for lithium ion batteries (LIBs), the as-prepared S-CCO, synthesized by the solvothermal method followed by pyrolysis, exhibits better electrochemical performance than the B-CCO, synthesized by the ball-milling method followed by pyrolysis. Both B-CCO and S-CCO exhibit high discharge capacities, good cycling stability and good rate performance as anode materials for LIBs. In particular, S-CCO delivers a large initial capacity of 1230 mAh/g, whose reversible capacity can be maintained as high as 635 mAh/g after 100 cycles, showing better rate performance compared with B-CCO. The good electrochemical performance suggests that CuCo2O4 could be a promising candidate as a novel anode material for LIB applications, and our present work provides a new and simple approach to the fabrication of binary metal oxides (BMOs). 
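The pseudo-second-order kinetic fit reported above for MB adsorption is usually performed on the linearized form t/qt = 1/(k2*qe^2) + t/qe, so that a straight-line fit of t/qt against t recovers the equilibrium capacity qe and the rate constant k2. A sketch on synthetic data with assumed parameters (not the study's values):

```python
import numpy as np

# Pseudo-second-order model: qt = qe^2 * k2 * t / (1 + qe * k2 * t)
# qe (mg/g) and k2 (g/(mg*min)) below are assumed, for illustration only.
qe_true, k2_true = 180.0, 5.0e-4
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])   # min
qt = qe_true**2 * k2_true * t / (1.0 + qe_true * k2_true * t)

# Linearized form: t/qt = 1/(k2*qe^2) + t/qe  ->  slope = 1/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2_fit = slope**2 / intercept      # since intercept = 1/(k2 * qe^2)
print(f"qe = {qe_fit:.1f} mg/g, k2 = {k2_fit:.1e} g/(mg*min)")
```

On real adsorption data the goodness of this linear fit (the correlation coefficient) is what justifies statements like "in good agreement with the pseudo-second-order model"; the isotherm models (Langmuir, Freundlich, Temkin) are compared by the same criterion.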
This study presents rheological investigations concerning the temperature-induced gelation of mixtures of Pluronic F127 and poly(vinyl alcohol) in aqueous solutions. At room temperature the polymer mixtures behave as viscous fluids, while under physiological conditions they exhibit solid-like properties (elastic hydrogels). For a better understanding of the interactions between the two polymers in aqueous solutions, viscometric measurements were performed at 37 degrees C for different polymer compositions. It was found that the polymer-polymer interactions between like or unlike macromolecules influence the final properties of the hydrogels. The transition from the sol state to a soft or hard hydrogel depends on the polymer mixture composition and the thermodynamic conditions. Ni-Co (9:1) alloy nanowires (NWs) synthesized using TiO2 nanotube templates were treated by N-2/H-2 annealing to enhance the immobilization of penta-histidine-tagged (5xHis-tagged) biotin. Based on the theory of immobilized metal affinity chromatography (IMAC), chelation of transition-metal ions, including Ni+2 and Co+2, was used to capture the 5xHis-tagged biotin on the nanowire structure in this study. The physical properties and one-dimensional electrical resistivity of the annealed Ni-Co alloy NWs were characterized, and the protein capture efficiencies were then assessed by measuring fluorescence intensities. With appropriate treatment of the metallic Ni-Co NW ligand surfaces, it was found that a lower electrical resistivity combined with a higher saturation magnetic flux density (Bs) may be useful for immobilizing 5xHis-tagged biotins. Furthermore, the higher contact probability and larger surface-to-volume ratio of the one-dimensional Ni-Co alloy NW ligands provide higher fluorescence detection sensitivity than a Ni-Co film. The submerged arc in liquid nitrogen method was used to produce carbon nanohorns (CNH) for hydrotreating application. 
In this paper, the effects of current, time and system design modification were investigated to maximize CNH production. A current setting of 90 A was found to be the best condition for CNH production. Additionally, CNH production was impacted by the processing time and the design setup used in synthesizing the samples. For each batch, 0.12 g of CNH was obtained for 30 min of processing time. The properties of the CNH were evaluated using the Brunauer-Emmett-Teller (BET) method, transmission electron microscopy (TEM), Fourier transform infrared (FTIR) spectroscopy, Raman spectroscopy and X-ray diffraction (XRD). BET results revealed mesoporous pore diameters for all pristine CNH samples under the different current (50-100 A) settings. Dahlia-like and bud-like structures with aggregate diameters ranging from similar to 50-110 nm were observed in the TEM images. Although current and processing time were found to be two crucial parameters affecting the yield of CNH production, the overall equipment design was a major factor in improving the yield by retaining more CNH particles during production. A series of flower-like AgBr/BiOCOOH composite photocatalysts have been successfully synthesized by a simple two-step solvothermal method. The techniques of XRD, SEM, TEM, DRS, XRF and XPS were used to characterize the phases, compositions and microstructures of the composite photocatalysts. The photocatalytic activities of the composites were evaluated by the degradation of Rhodamine B (RhB) dye and Methylene Blue (MB) dye as simulated reactions under visible light irradiation. The results indicated that the AgBr/BiOCOOH composite photocatalysts had higher photodegradation efficiency than pure AgBr and BiOCOOH under visible light irradiation. Among them, the 20% AgBr/BiOCOOH sample exhibits the best photocatalytic activity. 
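Photodegradation efficiencies like those compared above are conventionally computed from the dye concentration (or absorbance) before and after irradiation, and visible-light dye degradation is often summarized by an apparent pseudo-first-order rate constant. A small illustrative sketch; the formulas are standard, but the numbers are hypothetical and not from the study:

```python
import math

def degradation_efficiency(c0, c_t):
    """Percent dye removed: 100 * (C0 - Ct) / C0."""
    return 100.0 * (c0 - c_t) / c0

def apparent_rate_constant(c0, c_t, t_min):
    """Apparent pseudo-first-order constant from ln(C0/Ct) = k_app * t."""
    return math.log(c0 / c_t) / t_min

# Hypothetical RhB run: 10 mg/L reduced to 2 mg/L after 60 min of irradiation
eff = degradation_efficiency(10.0, 2.0)              # percent removal
k_app = apparent_rate_constant(10.0, 2.0, 60.0)      # per minute
```

Comparing k_app (or the percent removal at a fixed time) across samples is the usual way a "best" photocatalyst such as the 20% AgBr/BiOCOOH composite is ranked.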
Addressed herein, monodisperse Pt nanoadsorbents consisting of activated carbon and platinum, called Mw-Pt NPs@AC nanocomposites, have been reproducibly prepared by a microwave-assisted method. The Mw-Pt NPs@AC nanocomposites were found to have an extraordinary adsorption capacity for methylene blue (MB) removal from aqueous solutions. Their characterization was performed with the help of X-ray diffraction (XRD), transmission electron microscopy (TEM), high resolution transmission electron microscopy (HR-TEM) and X-ray photoelectron spectroscopy (XPS). The results revealed that the Mw-Pt NPs@AC nanocomposites are highly efficient adsorbents for MB removal from aqueous solutions and provide remarkable adsorption capacity (195.69 mg/g). The MB adsorption equilibrium was attained in similar to 75 min. In addition, reusability tests showed that the Mw-Pt NPs@AC nanocomposites retain 91.9% of their initial efficiency after 5 cycles of adsorption-desorption, making them quite promising for MB removal. In this work, the influence of UV irradiation (254 nm) on the precipitation of ZnO nanoparticles was studied. UV irradiation was found to induce photocorrosion of the originating ZnO nanoparticles, which resulted in the formation of oxygen and zinc vacancies confirmed by photoluminescence and positron annihilation spectroscopy. The oxygen vacancies were formed by the reactions of hydroxyl radicals with oxygen atoms of the ZnO lattice. Positron annihilation spectroscopy also revealed the presence of oxygen and zinc divacancies and clusters of 5 zinc vacancies and 10 oxygen vacancies, which enhanced the orange-red photoluminescence of the ZnO nanoparticles. Using transmission electron microscopy and high resolution transmission electron microscopy, the median ZnO nanoparticle size was estimated at about 17 nm, while the median ZnO nanocrystal size was about 7 nm. No effects of the photocorrosion on the nanoparticle and nanocrystal sizes were observed. 
A new mechanism of Zn(OH)(2) formation based on the reactions of hydroxyl radicals with zinc precursors was suggested. In dye-sensitized solar cells (DSCs), TiO2 nanoparticles (TNPs) have conventionally been used as the photoelectrode. This photoelectrode suffers from an electron scattering problem. TiO2 nanofibers (TNFs) and carbon nanotubes (CNTs), which have a one-dimensional structure, high surface area, and good electrical and catalytic properties, are expected to greatly improve the performance of DSCs. To decrease photoelectron scattering and electron-hole pair recombination in the TNP photoelectrode, electrospun TNFs and multi-wall CNTs were added to the TNP paste. The diameters of the TNFs and multi-wall CNTs were similar to 400 nm and similar to 20 nm, respectively. The short circuit current and power conversion efficiency were increased by adding TNFs and decreased by adding CNTs. In DSCs with 15 wt% TNF, the best values of 12.92 mA/cm(2) and 4.79% were observed, and the electron lifetime was 0.09 s. Organic-inorganic halide perovskite solar cells have extremely high power conversion efficiencies of 22.1%, as officially recognized by the US National Renewable Energy Laboratory (NREL). These solar cells are therefore attractive candidates for use in next generation photovoltaic applications. Perovskite materials for incorporation into solar cells are prepared by a two-step solution fabrication process to generate polycrystalline structures with diverse grain sizes. PbI2 is the main material used in the synthesis of organic-inorganic hybrid perovskite, particularly when the sequential deposition method is employed, as it facilitates the formation of a well-organized perovskite layer. To improve the performance of these devices, the properties of PbI2 need to be investigated. We therefore examined the effects of PbI2 concentration in mesoporous perovskite solar cells prepared using a two-step solution deposition method. 
Samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-Vis spectrometry, and atomic force microscopy (AFM). The results of this work confirm the importance of the PbI2 layer in high efficiency perovskite solar cells. Non-stoichiometric indium zinc tin oxide (IZTO) thin films were deposited onto glass substrates at 400 degrees C by DC magnetron sputtering using non-stoichiometric (In0.5Zn0.25-xSn0.25+xO1.5 and In0.4Zn0.3-xSn0.3+xO1.5, where x = +/- 0.05) IZTO ceramic targets. After deposition, all of the films were annealed at 450 degrees C in argon. The crystallization behavior was examined by X-ray diffraction, while the optical properties were measured by ultraviolet-visible spectroscopy; average transmittance values higher than 80% were observed for all films. The minimum resistivity of approximately 6.3x10(-4) Omega center dot cm was observed for the Sn-rich In0.5Zn0.2Sn0.3O1.5 film. The resistivity values of the Sn-rich IZTO thin films were slightly smaller than those of the Zn-rich IZTO films. An OPV cell with the so-called inverted structure was fabricated using various IZTO films deposited under optimized conditions as the cathode electrode. It was found that the solar cell efficiency could reach up to 7.9%, the same as that of an OPV cell with a conventional ITO film. The present work deals with obtaining spherical particles containing brewer yeast cells immobilized in a hydrogel matrix based on gellan and gellan/carboxymethyl cellulose (sodium salt). The particles were obtained by ionic gelation, with magnesium acetate as the crosslinking agent, through an extrusion process through a capillary. The influence of several process factors on particle properties such as structural stability, morphological characteristics, and behavior in aqueous and alcoholic media, for particles with or without immobilized yeast cells, was studied. 
The particles were obtained from mixtures of the two polysaccharides, and their properties were compared to those based only on gellan. A certain influence of the CMCNa amount on the above-mentioned properties can be observed, essentially determined by the higher crosslinking density of the polymer mixture compared with particles prepared only with gellan. The biocatalytic properties of the particles with immobilized yeast cells also depend on the crosslinking density; the bioreactors with a higher concentration of CMCNa present a slightly lower biocatalytic activity. The particles with immobilized yeast cells can be used for a higher number of hydrolytic cycles; the number of cycles decreases when the CMCNa content in the particle composition increases, due to the higher crosslinking degree leading to inhibition of yeast cell proliferation. Wires of aluminum alloy 6101 (AA-6101) used in power cables were covered with carbon nanotubes (CNTs) and graphite powders, and then subjected to a solubilization heat treatment at a temperature of 550 degrees C and aged at 180 degrees C. The effects of the processing temperature on the mechanical and electrical properties of the wires based on CNTs@AA-6101 and graphite@AA-6101 composites were investigated by electron microscopy, thermogravimetric analysis, tensile tests, conductor tests and Raman spectroscopy. The results show that CNTs were successfully incorporated on the surface of the aluminum wires; the tensile strength of CNTs@AA-6101 increased by 30% and 34% as compared to graphite@AA-6101 and standard AA-6101 wire without CNTs, respectively. Moreover, the resistivity decreased by 13.7% compared with conventional wires. The solubilization process combined with the incorporation of CNTs represents a new way of manufacturing nanostructured power cables to achieve high-performance energy transmission lines. 
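The 13.7% resistivity reduction reported above for the CNT-covered wires translates directly into conductor resistance through R = rho*L/A. A quick sketch with hypothetical wire dimensions and an assumed, illustrative resistivity (neither taken from the study):

```python
import math

def wire_resistance(rho_ohm_m, length_m, diameter_m):
    """Resistance of a round conductor: R = rho * L / A."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return rho_ohm_m * length_m / area

# Hypothetical 1 km run of 3 mm diameter wire; rho ~3.2e-8 Ohm*m is an assumed
# ballpark for an aluminum conductor alloy, not a value from the study.
r_std = wire_resistance(3.2e-8, 1000.0, 3e-3)
r_cnt = wire_resistance(3.2e-8 * (1.0 - 0.137), 1000.0, 3e-3)  # 13.7 % lower rho
```

Because R scales linearly with rho at fixed geometry, a 13.7% drop in resistivity gives exactly a 13.7% drop in line resistance, and hence in resistive loss at a given current.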
In this work, zinc oxide nanostructured films were developed by a facile treatment of sputter-deposited zinc thin films in boiling de-ionized water. Arrays of zinc oxide nanostructures with rod- and flower-like shapes were obtained by changing the boiling water treatment time and manipulating the thickness of the zinc film. Results of scanning electron microscopy, energy-dispersive X-ray spectroscopy, X-ray diffraction, ultraviolet-visible-near infrared spectroscopy, and photoluminescence techniques indicate that the obtained nanostructures are crystallized stoichiometric zinc oxide. Zinc oxide nanorods and nanoflowers start to form on the surface and progress across the zinc film's thickness as a function of treatment time in boiling de-ionized water, which allows a controllable method of fabricating semiconducting/metal multilayer coatings. The obtained nanostructured films showed different optical properties depending on the starting zinc film thickness and the boiling water treatment time. In addition, photoluminescence analysis showed that the nanostructured films possess a major energy band gap of 3.28 eV along with two other secondary defect-related energy band gaps of 2.27 and 1.67 eV. Ethambutol, an antituberculosis drug, was used as a reducing agent in the fabrication of gold nanoparticles via a one-step, one-pot process. The resulting pink-colored colloidal solution exhibited the characteristic surface plasmon resonance band at 529 nm, which clearly indicated the presence of gold nanoparticles. High-resolution transmission electron microscopy images revealed spherical nanoparticles with an average diameter of 45.95 +/- 7.21 nm. High-resolution X-ray diffraction analysis confirmed the face-centered cubic structure of the gold nanoparticles. Thermogravimetric analysis revealed that the prepared product consisted of 30.5% organic components and 69.5% gold nanoparticles. 
Fourier transform infrared spectroscopy showed that hydroxyl and amine functional groups contributed to the fabrication process. The colloidal stability assessment indicated that the prepared gold nanoparticles retained the lambda(max) of the surface plasmon resonance band and their hydrodynamic size for up to 28 days in cell culture medium. This synthetic strategy affords the possibility of using gold nanoparticles as an ethambutol delivery vehicle through a facile, one-step process. We present a molecular dynamics analysis of an H-2 molecule interacting with a carbon nanotube section at a low initial simulation temperature of 10(-3) K under constant electric field effects, in order to verify the performance of the carbon nanotube as an H-2 sensor and, consequently, to indicate its use as an effective internal coating for hydrogen gas storage tanks. During the simulations, the H-2 was relaxed for 40 ps inside and outside the carbon nanotube, describing each possible arrangement for the capture of H-2, and an electric field was applied over the system, longitudinally to the carbon nanotube length, promoting the rise of an evanescent field able to trap H-2, which orbited the carbon nanotube. Simulations for electric field intensities in the range of 10(-8) au up to 10(-6) au were performed, and the mean orbit radius was estimated, as well as some physical quantities of the system. The quantities calculated were the kinetic energy, potential energy, total energy and in situ temperature, along with the molar entropy variation. Our results indicate that the combination of the electric field and the van der Waals interactions deriving from the carbon nanotube is enough to create an evanescent field with an attractive potential, showing this system to be a good H-2 sensor. In this study, we investigate the electrical and optoelectronic properties of an amorphous indium zinc oxide (a-IZO) thin-film phototransistor. 
We realize an ultraviolet (UV) photodetector by using a wide-bandgap a-IZO thin film as the channel layer material of the TFT. The fabricated device has a threshold voltage of -1.5 V, an on-off current ratio of 10(4), a subthreshold swing of 0.4 V/decade, and a mobility of 12 cm(2)/Vs in a dark environment. As a UV photodetector, the responsivity of the device is 5.87 A/W and the rejection ratio is 5.68x10(5) at a gate bias of -5 V under 300 nm illumination. Photoresponsive delivery systems that are activated by high energy photo-triggers have been accorded much attention because of their capability to achieve reliable photoreactions at short irradiation times. However, the application of a high energy photo-trigger (UV light) is not clinically viable. Meanwhile, the process of photon-upconversion is an effective strategy to generate a high energy photo-trigger in situ through exposure to clinically relevant near-infrared (NIR) light. In this regard, we synthesized photon upconverting nanocrystals (UCNCs) that were subsequently loaded into photoresponsive nanoparticles (NPs) prepared using an alkoxyphenacyl-based polycarbonate homopolymer (UCNC-APP-NPs). UCNC loading affected the resultant NP size, size distribution, and colloidal stability but not the zeta potential. The efficiency of NIR-modulated drug delivery was impacted by the heterogeneous nature of the resultant UCNC-APP-NPs, which plausibly formed through a combination of UCNC entrapment within the polymeric NP matrix and nucleation of polymer coating on the surface of the UCNCs. The biocompatibility of UCNC-APP-NPs was demonstrated through cytotoxicity, macrophage activation, and red blood cell lysis assays. Studies in tumor-bearing (nu/nu) athymic mice showed negligible distribution of UCNC-APP-NPs to reticuloendothelial tissues. Further, the distribution of UCNC-APP-NPs to various tissues was in the order (highest to lowest): Lungs > Tumor > Kidneys > Liver > Spleen > Brain > Blood > Heart. 
In all, the work highlighted some important factors that may influence the effectiveness, reproducibility and biocompatibility of drug delivery systems that operate on the process of photon-upconversion. The development of nanosized drug delivery and targeting systems without side effects is a main concern of researchers in the field of biomedical applications. In this work, polyoxometalate-based one-dimensional nano-tubular arrays have been designed by the layer-by-layer (LBL) deposition method. These nanotubes were characterized by FT-IR and UV-vis techniques, and the morphology was studied by using SEM. A particle size analyzer was used to obtain the size range of the nanotubes, which was found to be 80 to 200 nm. These polyoxometalate-based nanotubes were further investigated to test the cytotoxicity and changes in cell morphology at different concentrations. The results indicate that the synthesized polyoxometalate-based one-dimensional nano-tubular arrays are feasible as effective drug carriers without any lethal effect on cells and could be potentially useful vectors for the delivery of genes, antibodies, and proteins in the future. Liposomes (phospholipid vesicles) including Chlorophyll a (Chla) have practically been utilized as a simplified photochemical energy conversion system. However, poor knowledge about the detailed liposomal membrane properties has hampered progress in this field. In this paper, we therefore clarified the liposomal membrane properties based on analyses of the membrane fluidity, the Chla absorption bands, and the Chla polarization. Moreover, we examined the influence of the membrane properties on the photoreduction with methylviologen. Consequently, it was revealed that the kinetic constant of the reduced species formation decreased with increasing temperature above the phase transition temperature of the membrane, which means that the membrane properties largely influence the photochemical reaction. 
Based on our results, we conclude that the behaviors and orientations of Chla molecules on the liposomal membrane are of significant importance in the photochemical energy conversion system. Graphene, as a new nanomaterial with excellent mechanical properties, has raised great interest among researchers. In this work, 9,10-dihydro-9-oxa-phosphaphenanthrene-10-oxide (DOPO) has been covalently grafted onto the surface of graphene through two different methods to obtain a new modified graphene (G-DOPO) that possesses good mechanical properties and flame retardancy. Fourier transform infrared (FTIR) spectroscopy, Raman spectroscopy, and X-ray photoelectron spectroscopy (XPS) confirmed that DOPO covalently bonded to the surface of graphene. Transmission electron microscopy (TEM) was used to observe the microstructure of G-DOPO. The thermal stability of G-DOPO was evaluated by thermogravimetric analysis (TGA). The G-DOPO showed a higher thermal stability than graphite oxide (GO), and this material could be utilized as an excellent potential flame retardant. In this work, tigapotide (PCK3145) was incorporated into novel block copolymer polyelectrolyte nanocarriers in order to maximize the advantages of the delivery process and possibly its biological properties. PCK3145 was incorporated into complexes with the block polyelectrolyte poly(vinyl benzyl trimethylammonium chloride)-b-poly(oligo(ethylene glycol) methyl ether methacrylate) (PVBTMAC-POEGMA). Light scattering techniques (dynamic, static and electrophoretic) were used in order to examine the size, size distribution, morphology and zeta-potential of the nanocarriers incorporating PCK3145 in different aqueous media. The concentration of PVBTMAC-POEGMA was kept constant throughout the series of aqueous solutions of different ionic strength investigated. 
The values of the scattering intensity, I-90, which is proportional to the mass of the species in solution, increase gradually as a function of C-PCK3145, providing proof of complexation. It was shown that the structure and solution behavior of the formed complexes depend on the ratio of the two components, as well as on the pH and the ionic strength of the solution. The physicochemical characteristics of the complexes, as a function of PCK concentration, remained unaffected at higher ionic strength (0.154 M). This investigation provides interesting and useful new insights into the interaction mechanism between oppositely charged block polyelectrolytes and therapeutic peptides. Ni has been widely used as one of the most important metal catalysts for graphene growth by the chemical vapor deposition (CVD) method. Despite the many advantages, it is difficult to obtain high quality graphene from Ni due to the complexity of controlling the uniformity and thickness of multilayer graphene films. In this study, the thickness of multilayer graphene films grown on Ni was controlled by single-step oxygen plasma etching. The results show that the thickness of the graphene films decreased proportionally with increasing plasma etching time, so the thickness could be controlled from thick graphite films down to thin graphene films with a transparency over 80%. The surface topology after oxygen plasma etching showed that uniform etching was achieved; the topmost surface of the resulting thin graphene is hydrophilic because of the oxygen plasma irradiation, which is advantageous for coating organic layers in device fabrication. Moreover, because of the absence of a metal etching process, the Ni catalysts can be recycled, increasing the cost efficiency. 
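A rough way to connect the etched film thickness to the >80% transparency quoted above is the standard single-layer optical absorption of graphene (pi*alpha, about 2.3% per layer). This estimate is common knowledge about graphene optics, not a value taken from the study:

```python
import math

ABS_PER_LAYER = 0.023  # ~pi*alpha, optical absorption of one graphene layer

def transmittance(n_layers):
    """Approximate optical transmittance of an n-layer graphene stack."""
    return (1.0 - ABS_PER_LAYER) ** n_layers

def max_layers_above(t_min):
    """Largest layer count whose transmittance still exceeds t_min."""
    return math.floor(math.log(t_min) / math.log(1.0 - ABS_PER_LAYER))

n_max = max_layers_above(0.80)  # thickest stack still above 80 % transparency
```

Under this estimate a stack stays above 80% transmittance up to roughly nine layers, so plasma thinning of a thick graphite film into that range is consistent with the reported transparency.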
Analysis of the emitter properties of solar cells is important because the emitter doping characteristics can affect the surface recombination velocity, contact resistance, emitter saturation current density, and cell efficiency. To analyze the emitter quality, we used the following methods: the four-point probe method, quasi-steady-state photoconductance (QSSPC), and secondary ion mass spectroscopy (SIMS). The four-point probe method is used to measure the doping dose in the emitter. Using QSSPC, we can characterize the emitter quality, including the lifetime of the emitter, and using SIMS, we can measure the concentration of dopants as a function of depth in the emitter. However, SIMS measurement is destructive and limited to planar surface wafers. To solve this problem, we investigated the relationship between the minority carrier lifetime and the emitter doping profile using the QSSPC. The relationship between the lifetime and the emitter doping profile showed that the lifetime of the emitter decreases as the emitter doping concentration increases. From this result, we performed a lifetime analysis for differently doped POCl3-diffused emitters. The results obtained using the theoretical model for the lifetime agreed with the experimental SIMS measurement results, indicating that the model can be used as a quantitative model for comparing emitter doping characteristics. Hydrogenated polymorphous silicon (pm-Si:H) is a material consisting of a small volume fraction of nanocrystals embedded in an amorphous matrix, which can be grown at high deposition rates by increasing the radio-frequency power. When grown at high deposition rates, pm-Si:H films show a shift of their infrared (IR) absorption stretching band peak to higher wavenumbers and a sudden increase in their optical bandgap. 
The IR absorption spectrum was analyzed by deconvolution into three bands, including a medium stretching mode positioned at 2030 cm(-1), which has been attributed to Si-H bonds at silicon nanocrystal surfaces. Secondary ion mass spectrometry measurements confirmed that an excess of hydrogen is incorporated in pm-Si:H grown at high deposition rate, leading to a sharp increase in the optical bandgap. We suggest that this sharp increase can be used as a simple tool to detect the deterioration of material quality when using high deposition rate processes. Carboxymethyl cellulose-g-acrylamide, a superabsorbent hydrogel, was prepared from carboxymethyl cellulose (CMC, DS = 0.7-0.8) and acrylamide (AA) monomer through graft copolymerization using ceric ammonium nitrate (CAN), i.e. Ce4+, as an initiator in aqueous medium. The CMC and the hydrogel were characterized using Fourier transform infrared (FTIR) spectroscopy and atomic force microscopy (AFM). FTIR spectra established the crosslinked polymeric network structure of the hydrogel formed through the graft copolymerization reaction between CMC and acrylamide, indicating incorporation of the acrylamide monomer and the resulting formation of the carboxamide group (>C=O). AFM revealed the surface morphology of the superabsorbent hydrogel. The hydrophilic properties of the superabsorbent hydrogel were quantified by the swelling percentage, or degree of swelling. Various physical properties were also determined in terms of water desorption, water absorption (hydration) of the hydrogel, and gel content in the hydrogel, with corresponding values of 86%, 524% and 70%, respectively. Transparent, stretchable strain sensors have numerous applications in structural health monitoring, smart clothing, human motion detection, etc. In this research, we report a versatile, simple method for transparent, stretchable strain sensors based on a sandwich structure of a single-walled carbon nanotube (SWCNT) network and a polydimethylsiloxane (PDMS) substrate. 
The SWCNT network showed isotropy due to surfactant-driven random self-assembly at the air/water interface. The two-layer SWCNT film presents high transparency (>90%) and good conductivity (12.1 k Omega/cm(2)) without strain. During the first 50 cycles of stretching/releasing, the resistance increased due to the irrecoverable loss of junctions between nanotubes. After the 50th cycle, the conductivity of the SWCNT thin film remains stable and shows good linearity due to the strong inter-bundle junctions. The growth of TiO2 nanotube arrays (TNTAs) on non-native substrates is essential for exploiting the full potential of this nanoarchitecture in applications such as gas sensing, biosensing, antifouling coatings, low-cost solar cells, drug-eluting biomedical implants and stem-cell differentiators. The direct formation of anodic TNTAs on non-native substrates requires the vacuum deposition of a thin film of titanium on the substrate followed by subsequent electrochemical anodization of the film. In this report, we studied the effect of atomic peening on the formation of Ti thin films on technologically important non-native substrates. We compared the structure and morphology of evaporated and sputtered Ti films, and correlated them with the morphology of the vertically oriented TiO2 nanostructures that resulted from the anodization of those films. We calculated a minimum value of 1.33 eV/atom for the energy of reflected neutral Ar species arriving at the substrate when a chamber pressure of 1 mTorr is used during the sputter deposition of Ti. Previous approaches relied on substrate heating to elevated temperatures during Ti thin film deposition, or on ion-beam assisted Ti thin film deposition, as a prerequisite to form TNTAs. We demonstrated TNTAs on a variety of substrates at room temperature using both evaporated and sputtered Ti films without recourse to ion-beam sources. 
Evaporated Ti films were found to possess small grain size and high local surface roughness, which resulted in nanotubes with extremely rough sidewalls. Ti thin films formed by Ar+ ion sputtering at commonly used chamber pressures of 7-20 mTorr, at substrate temperatures ranging from room temperature to 250 degrees C, possessed a highly rough surface and three-dimensional grains, which precluded the formation of ordered nanotubes upon anodization due to highly non-uniform pore nucleation processes. In contrast, Ti thin films sputtered at low chamber pressures of 0.8-2 mTorr had a low surface roughness due to the atomic peening process. Such films, even when deposited at room temperature, resulted in ordered nanotube arrays upon anodization. TiO2 coatings were obtained by a combination of a sol-gel process with the addition of commercial TiO2 particles (Degussa-Evonik P25). Simple and composite (with P25) sols were deposited by dip-coating on glass plates, giving simple and composite coatings. The order of addition of reagents in the preparation of the sols, the incorporation of P25 into the sols, and the thermal treatments were analyzed as variables to improve the photocatalytic activity. The samples were completely characterized with respect to the amount of deposited TiO2, the crystal structure (XRD) and the morphology (SEM). The activity of the supported photocatalysts (simple, composite and P25 coatings) was evaluated using 4-chlorophenol degradation and Cr(VI) reduction in the presence of EDTA as model photocatalytic systems. The stability of the sols and the photocatalytic activity of the resulting coatings depended strongly on the order of addition of the reagents for the simple sol. Sols prepared by adding the alcohol-water-HCl mixture to the precursor presented very good stability, which permitted the incorporation of powdered P25 without the addition of additives. 
The composite coatings prepared from these simple sols were the most active in both photocatalytic systems and also presented the highest scratch resistance. In this work, the in vitro cytotoxicity of poly(urea-urethane) nanoparticles obtained by miniemulsion polymerization against the murine fibroblast line L929, the fibroblast line NIH3T3 and the MRC-5 cell line derived from normal lung tissue, as well as their hemocompatibility and in vitro macrophage uptake, were evaluated. The poly(urea-urethane) nanoparticles obtained in this work showed spherical morphology, negative zeta potential and a mean diameter of 85 +/- 4 nm. In vitro cytotoxicity assays against the fibroblast cell lines L929, NIH-3T3, and MRC-5 showed no reduction of cell viability, which is a critical parameter for drug carrier biocompatibility. In addition, in in vitro macrophage uptake studies, the poly(urea-urethane) nanoparticles showed a high phagocytic uptake of 72%. Hemolysis assays showed high human blood biocompatibility, and the results indicate that the poly(urea-urethane) nanoparticles obtained in this work have excellent properties without secondary effects, with potential application as carriers for drug delivery in the treatment of diseases in which macrophages act as host cells. A series of electrochemically deposited palladium nanotubes (e-PdNTs) on a nanocluster (e-PdNC) mosaic base have been prepared with variation of cycle number and scan rate through a facile electrochemical deposition method using the cyclic voltammetry (CV) technique in 20 mM HCl solution, and have been used as an anode electrocatalyst for the hydrazine (N2H4) oxidation reaction. The morphology and structure of the self-formed e-PdNT catalysts have been characterized by scanning electron microscopy. As observed, the size and amount of e-PdNTs increased with increasing cycle number and decreased with increasing scan rate. 
Across all experimental parameters, the e-PdNTs deposited with 6 cycles at a scan rate of 50 mV s(-1) showed superior direct electrocatalytic N2H4 oxidation with lower NH3 formation in 0.1 M KOH solution than the other prepared e-PdNTs and Pd/C, with respect to onset potential, current intensity, and durability. A lower detection limit (3.6 mu M) and the linear range for N2H4 (up to 14 mM) were also determined on the e-PdNTs via amperometry for further sensor application. In the present study, oil palm mesocarp fibers (OPMF), an agroindustrial residue from the production of palm oil, were used to obtain cellulose nanowhiskers. They were obtained by bleaching of the fibers, followed by hydrolysis using sulfuric acid and microfluidization, to control the length of the cellulose nanowhiskers and avoid the decrease in thermal stability associated with extended acid hydrolysis time. The results showed that the nanowhiskers obtained by acid hydrolysis for 105 min were structures with an average length (L) of 117 +/- 54 nm and diameter (D) of 10 +/- 5 nm. After 105 min of acid hydrolysis, the suspension was dialyzed and the neutral suspension was subjected to microfluidization. The nanowhiskers then presented the same dimensions, even with the disintegration of fibrils of both the amorphous and crystalline phases during microfluidization. However, after microfluidization, the sample presented a more stable suspension, but the crystallinity decreased. Increasing the hydrolysis time from 105 to 140 min yielded more sulfonated nanowhiskers, presenting lower thermal stability but higher crystallinity than the microfluidized sample. Furthermore, this study proved that it is possible to obtain cellulose nanowhiskers from oil palm mesocarp fibers, an agroindustrial residue from palm oil production, helping to reduce the environmental impact of this waste as well as providing a high value-added product. 
Three types of NASICON-type ceramic materials, Li1.3Al0.3Ti1.7(PO4)(3), Li1.3Sc0.15Y0.15Ti1.7(PO4)(3), and Li1.3Al0.3Zr1.7(PO4)(3), are prepared by a solid-state reaction for surface modification of LiCoO2 by mechano-chemical fusion coating. The ionic conductivities of the prepared ceramic electrolytes are evaluated as a function of temperature: Li1.3Al0.3Ti1.7(PO4)(3), Li1.3Sc0.15Y0.15Ti1.7(PO4)(3), and Li1.3Al0.3Zr1.7(PO4)(3) exhibit high ionic conductivities of 6.49x10(-4) S cm(-1), 5.61x10(-4) S cm(-1), and 4.85x10(-4) S cm(-1), respectively, at room temperature. Under a high cut-off potential for utilizing a large amount of lithium, the surface modification by ionic conducting coatings improves the cycleability and rate capability of the lithium ion batteries. As the ionic conductivity of the coating material increases, the coated LiCoO2 exhibits better electrochemical performance. AC impedance analyses elucidate that the NASICON surface coatings greatly reduce the interfacial resistance between the LiCoO2 electrode and the electrolyte. Mesoporous cerium oxide (CeO2) nanoparticles act as an effective heterogeneous catalyst for the transamidation reaction of amides with amines. The mesoporous CeO2 nanoparticles were prepared by a hydrothermal method using different cerium precursors: cerium(III) chloride heptahydrate [CeCl3 center dot 7H(2)O], cerium nitrate hexahydrate [Ce(NO3)(3)center dot 6H(2)O], ceric ammonium nitrate [(NH4)(2)Ce(NO3)(6)], and cerium(III) acetate [Ce(C2H3O2)(3)center dot 1.5H(2)O]. The catalyst shows the highest activity for the transamidation of acetamide with N-octylamine under solvent-free conditions. This is the first example of a heterogeneous catalyst for transamidation using aliphatic amines as substrates. X-ray diffraction, BET surface area analysis, and FT-IR characterization of the CeO2 suggested the basis of its excellent catalytic activity for the transamidation reaction. 
Nanostructures have been widely employed to reduce optical reflection in various optoelectronic devices. In this work, aligned nanostructure arrays with desirable diameter and density were obtained. We demonstrate the potential of nanosphere lithography for the fabrication of Si nanoholes (SiNHs) and Si nanowires (SiNWs) with excellent broadband antireflectance. Both nanostructures drastically suppress reflection across the whole measured spectrum at photon energies above the Si band gap (1.12 eV) compared with a planar Si wafer. The reflectance of the SiNHs and SiNWs is less than 6% over the broad range of 400 to 1000 nm. The nanosphere lithography technique is expected to have potential applications in the fabrication of nanostructures with various properties. In the Jurihe area of Inner Mongolia, lamellar hydrothermal sediments of Permian age were recognized. These rocks were inter-bedded with volcanic rocks and altered by hot water (hydrothermal fluid) during the synsedimentary and quasi-synsedimentary stages. The lithogeochemistry of the rock components differs from that of typical sedimentary rocks. The particles and mineralogical components cannot be identified by optical microscope. By scanning electron microscopy (SEM) observation and rock mineral micro-district analysis (QEMSCAN), we found that the mineralogical components of the rocks are mainly illite and quartz, plagioclase and alkali feldspar, biotite, montmorillonite, chlorite, glauconite, apatite, hornblende, kaolinite and pyrite, and zoisite, with small amounts of dolomite, calcite, rutile, gypsum/anhydrite, diopside, zircon, sphene, pyrophyllite, muscovite, epidote, siderite, ilmenite, sphalerite, paragonite, etc. Amongst these, biotite, epidote, and zoisite are hydrothermal alteration minerals. Total organic carbon (TOC) within the sedimentary rocks is 0.27-0.41%. Crystalline minerals are generally of micro- and nano-size, and amorphous minerals account for a large portion. 
The fineness of particles in the rock results from heat inhibiting crystallization and from regulation by organic molecules and macromolecules (microbe-mediated mineralization). Oriented large granules are silicified organic matter from a hydrothermal source. Another form of organic matter, taking the shape of clouds or clumps, distributed in layers and rich in heavy elements, contains filamentous carbon-bearing material and is associated with microbiological activity. If, as postulated above, there are two types of organic matter, the theory of petroleum formation can be enriched to a certain extent. Al2O3/TiO2 stacks formed by atomic layer deposition are known to provide a high level of passivation for boron-doped silicon. In previous works, the TiO2 layer was deposited on a pre-annealed Al2O3 layer; however, this stack showed passivation degradation after post-deposition annealing. This work presents an alternative using as-deposited Al2O3 for the Al2O3/TiO2 stack, which shows no degradation of passivation after post-deposition annealing up to 400 degrees C. This approach simplifies the processing, allowing continuous layer deposition, and eliminates the undesirable vacuum break. The simplified processing leads to better thermal stability of the Al2O3/TiO2 stacks and a low emitter saturation current density. To understand the underlying mechanism of surface passivation, the effect of thermal SiO2 on the passivation of the Al2O3/TiO2 stack was investigated, which indicates that the TiO2 capping layer enhances the field-effect passivation for both the Si/Al2O3 and SiO2/Al2O3 structures. A superabsorbent polymer (SAP)-based electrically conducting hydrogel, a polyaniline (PANI)-impregnated polyacrylate (PAANa) hydrogel, was synthesized via a two-step interpenetrating polymer network (IPN) process. 
This work deals with the optimization of preparation conditions (concentrations of crosslinker, aniline monomer, and initiator) for PANI-impregnated PAANa conducting hydrogels with improved conductivity. Microstructural characterization of the synthesized conducting hydrogel was carried out using Fourier transform infrared spectroscopy (FTIR), UV-visible spectroscopy, scanning electron microscopy (SEM), and X-ray diffraction (XRD). A detailed study of the PAANa/PANI hydrogel with the HCl dopant showed good conductivity, thermal stability, and pH-sensitive swelling behavior in comparison with the other dopants used. Upon optimization of the synthesis conditions, the conducting hydrogel showed good conductivity on the order of 3.71 mS/cm, and the electrical conductivity of the hydrogel samples was also found to be higher than that of their corresponding filtrates. A multilayered FePdB(X angstrom)/ZnO(500 angstrom)/FePdB(X angstrom) film (X = 25, 50, 75, and 100 angstrom) was sputtered onto a glass substrate at room temperature (RT). X-ray diffraction (XRD) measurements revealed main peaks of highly crystalline ZnO(002) and FePd(200). Minor crystalline peaks, namely those of ZnO(004), FePd(111), and FePd(002), were also detected. As the FePdB thickness increased, the crystallinity of ZnO(002) and FePd(200) increased. The strong FePd(200) peak indicates magnetocrystalline anisotropy induced by the crystallinity of ZnO(002). This increased the low-frequency alternating-current magnetic susceptibility (chi(ac)) and decreased the electrical resistivity (rho) of the film. Thus, the chi(ac) value increased with the FePdB thickness because of magnetocrystalline anisotropy. The highest chi(ac) value, 1.9, was obtained at a FePdB thickness of 100 angstrom with the highest spin sensitivity at an optimal resonance frequency of 30 Hz. 
rho decreased as the FePdB thickness increased, probably because of electron scattering at grain boundaries or impurities within the material; rho ranged from 1650 to 720 mu Omega cm. The FePdB(100 angstrom)/ZnO(500 angstrom)/FePdB(100 angstrom) sample had the maximum chi(ac) of 1.9 and a low rho (720 mu Omega cm) and is therefore suitable for magnetic and electrical applications. In addition, the surface energy is related to the adhesion of the film; a high surface energy corresponds to strong adhesion. The surface energy decreased with increasing degree of FePdB crystallization and increasing FePdB thickness, resulting in weaker adhesion. The recently developed magnetic field-assisted electroless anodization technique enables more morphological tunability by expanding the parameter space available during nanostructure synthesis and limiting the effects of copper electromigration. We synthesized CuO nanodendrites, nanowalls, and nanospheres on both transparent and non-transparent substrates using the magnetic field-assisted anodization of copper foils and of vacuum-deposited Cu thin films on glass substrates, respectively. Non-linear optical characterization of the resulting partially transparent Cu/CuO nanostructures on glass substrates revealed clear optical limiting behavior. When the fluence of the incident 800 nm radiation from a Ti:sapphire laser was increased from 20 mJ cm(-2) to 2 J cm(-2), the optical transmittance of the Cu/CuO nanostructures decreased by nearly an order of magnitude. Microwaves are routinely used in the contemporary chemical synthesis of nanomaterials to increase the reaction kinetics and reduce the reaction time. In the present investigation, microwaves have been used in a two-step solvothermal and combustion method for the synthesis of iron oxide (alpha-Fe2O3) nanostructures. Structural examination reveals the formation of the hematite phase of iron oxide. 
Morphological analysis shows the formation of sheet- and rod-like morphologies depending on the reaction conditions. Owing to the relatively large surface-to-volume ratios such morphologies enable, these nanostructures can find useful applications in supercapacitors, photocatalysis, and sensors. This work determines optimal settings for the deposition parameters of hardened CrWN nanocomposite films grown on tungsten carbide tools and glass substrates by direct current (DC) reactive magnetron co-sputtering. Metallic Cr and W targets were used, with argon as the plasma gas and nitrogen as the reactive gas. An orthogonal array, the signal-to-noise ratio, and analysis of variance are used together to analyze the effect of the deposition parameters. The Taguchi method was used to design a robust experiment. The deposition parameters of the CrWN hard films, including Cr DC power (Watts), W DC power (Watts), N-2/(Ar+N-2) flow-rate ratio (%), and deposition time (min), were then optimized with reference to the structure of the films, dry turning tests on AISI 316 stainless steel (50 mm diameter), and wear tests of the CrWN films. This study uses a grey-based Taguchi method to determine the parameters of the coating process for CrWN hard films by considering multiple performance characteristics. In the confirmation runs, according to grey relational analysis, improvements of 6.17% in surface roughness, 26.27% in flank wear, and 6% in the coefficient of friction were noted. The wear resistance of CrWN films can be evaluated by considering other mechanical properties, including elastic recovery, hardness, and Young's modulus, which are obtained from nanoindentation testing. Chitin nanofibers were prepared by grinder pretreatment and subsequent mechanical treatment using a high-pressure water-jet (HPWJ) system. SEM observations showed that grinder pretreatment improved disintegration efficiency in the subsequent HPWJ treatment. 
That is, with pretreatment, excellent nanofiber networks were observed after a single HPWJ treatment, and sufficient nano-fibrillation was accomplished with 5 cycles of HPWJ treatment. The characteristic crystalline structure of alpha-chitin and the degree of relative crystallinity were maintained after the series of mechanical treatments. UV-Vis spectra showed that grinder pretreatment improved the transparency of the chitin nanofiber slurry, indicating that the pretreatment facilitated subsequent HPWJ mechanical disintegration. Viscosities of the chitin nanofiber slurry also showed that the grinder pretreatment and HPWJ treatment disintegrated the chitin into nanofibers effectively. However, excessive HPWJ treatment decreased nanofiber length, resulting in lower viscosity. ZnS thin films were deposited onto glass substrates by radio frequency (RF) magnetron sputtering at room temperature. The films were then annealed in vacuum at temperatures in the range of 100-350 degrees C for 60 min. The effects of annealing temperature on the structural and optical properties of the annealed samples were investigated with X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), and UV-visible transmission spectra. XRD patterns of the ZnS thin films annealed in vacuum show a single peak at 2 theta = 28.91 degrees and a preferred orientation along the (111) reflection plane, indicating a zinc blende structure. The intensity of the diffraction peaks increases with annealing temperature. FESEM images reveal that the particle size grows from 22.6 nm (at 100 degrees C) to 48.7 nm (at 350 degrees C) with increasing annealing temperature. The optical band gap of the films shifts to higher energy, with the calculated value increasing from 3.27 to 3.51 eV as the annealing temperature increases. The structural and optical properties of ZnS thin films are thus significantly affected by the annealing temperature. 
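Optical band gaps like those quoted above are commonly extracted from UV-visible transmission data via a Tauc plot: for a direct-gap material such as ZnS, (alpha*h*nu)^2 is plotted against photon energy and the linear region is extrapolated to zero. A minimal sketch of that extrapolation, using synthetic absorption data (the prefactor k and the 3.27 eV gap below are illustrative assumptions for the sketch, not data from the study):

```python
import numpy as np

def tauc_direct_gap(hv, alpha):
    """Estimate a direct optical band gap (eV) from a Tauc plot.

    For a direct allowed transition, (alpha*hv)^2 is linear in hv above
    the gap; the x-intercept of a line fit to that region gives Eg.
    """
    y = (alpha * hv) ** 2
    # Fit only the linear region, where absorption is significant.
    mask = y > 0.2 * y.max()
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)
    return -intercept / slope  # x-intercept = band gap

# Synthetic direct-gap absorption: alpha ~ sqrt(hv - Eg)/hv above Eg = 3.27 eV.
hv = np.linspace(3.0, 4.0, 200)  # photon energy, eV
eg_true, k = 3.27, 1.0e5
alpha = np.where(hv > eg_true,
                 k * np.sqrt(np.clip(hv - eg_true, 0.0, None)) / hv,
                 0.0)

eg_est = tauc_direct_gap(hv, alpha)
print(round(eg_est, 2))  # prints 3.27
```

With real transmittance data, alpha would first be computed from the film thickness via the Beer-Lambert law, and the choice of fitting window is the main source of uncertainty in the extracted gap.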
ZnS thin films were deposited on glass substrates by radio frequency (RF) magnetron sputtering. The sputtering power was varied from 60 to 120 W in 20 W increments. The effects of sputtering power on the structural and optical properties of the ZnS thin films were characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), and UV-visible spectroscopy. XRD showed that the films were polycrystalline with a cubic structure and a preferential orientation along the (111) direction. ZnS thin films deposited at higher sputtering power showed better crystallinity than films deposited at lower sputtering power. FESEM images of the surface morphology clearly show that a high sputtering power enhanced the particle size and nucleation of the ZnS thin films. The optical band gap was reduced from 3.96 to 3.47 eV, and the optical transmission improved, as the sputtering power increased from 60 to 120 W. Because of its high passivation quality, Al2O3 is used as the front passivation layer in commercial n-type silicon solar cells. The front passivation layer of a solar cell should have both passivation and antireflection properties. For process efficiency and protection, SiNx is used as a capping layer on the Al2O3. However, the Al2O3/SiNx stack, like the Al2O3 layer alone, has an issue with firing stability during screen printing of the cell, because the Al2O3/SiNx stack must be fired to form the metal contact. In this study, the firing stability of the Al2O3/SiNx stack is investigated and the relation between blister formation and passivation is elucidated. Annealing improves the passivation quality of the layers after firing. The order of annealing and stack formation is also one of the parameters governing firing stability. We used thermal atomic layer deposition to form the Al2O3 and plasma-enhanced chemical vapor deposition to form the SiNx. 
The refractive indices of the Al2O3 and SiNx layers are 1.6 and 2.0, respectively, and their thicknesses are 10 nm and 70 nm, respectively. Rapid thermal processing was used for the annealing and firing. Passivation quality was determined by quasi-steady-state photoconductance, and blistering was observed by optical microscopy. Although there was some blistering on the sample annealed at 600 degrees C, good passivation and good firing stability were observed. We identified that a low density of blisters formed during the annealing step improves the firing stability of the passivation layer by preventing abrupt blister formation during the firing step, which is the cause of thermal degradation. We investigated the stress distribution and electrical characteristics according to changes in the process parameters of a vertical NAND (VNAND) flash cell with a poly-Si channel. We used technology computer-aided design to confirm that process parameter changes affect the stress distribution in a VNAND flash cell and the stress in the poly-Si channel. We also found that, as the stress distributions changed, the electrical characteristics depended significantly on the annealing temperature, channel hole angle, and tungsten intrinsic stress in a VNAND flash cell. Thus, the industry needs to develop and apply better process parameters and acquire a better understanding of how the electrical characteristics of a VNAND flash cell depend on those parameters. 1D ZnO nanostructures have attracted considerable interest due to their unique structures and properties as well as their potential applications as building blocks of nanodevices. Zn1-xLixO NR arrays were grown on p-Si substrates by an inexpensive, environmentally friendly hydrothermal process, followed by the formation of Au/Zn1-xLixO NR Schottky photodiodes. The diodes show excellent rectifying behavior in the dark, with rectification ratios of 1.19 x 10(2) and 6.26 x 10(2) at 2.4 V for the Au/ZnO and Au/LZO NR diodes, respectively. 
Interestingly, the Li-doped diode shows a negative shift of the turn-on voltage (V-on), a rapidly increasing forward current, and a low leakage current, which are of great benefit for minimizing power loss and improving the switching characteristics of rectifier devices. The obtained ideality factor is greater than unity. The barrier height obtained from I-V measurements is smaller than the value obtained from C-V measurements. The responsivity is higher for the Li-doped diode (0.13 A/W) than for the undoped diode (0.05 A/W) and even the commercial GaN UV detector (0.1 A/W). Thus, the good crystallinity and excellent rectifying properties of the Au/ZnLiO NR Schottky diode on a p-Si substrate enable efficient electronic nanodevices and nanoscale UV-photodetection sensors. Carbon dioxide (CO2) resulting from the burning of organic carbon has aroused serious concern because of its increasing concentration in the atmosphere and its effect on global warming. Therefore, finding an efficient method for CO2 utilization is important. In this research, CO2 underwent efficient reduction when graphene (GN)-TiO2 was used as a catalyst under visible light irradiation. The radicals involved in the reaction as intermediates were observed by ESR to identify the reduction pathways. Furthermore, the hypothetical pathways were verified by a steady-state approximation model. The calculated rate constants of the final products and intermediates (including radicals) indicated that the formation of CH3OH was mainly derived from center dot CO- rather than from HCOOH. Furthermore, the effects of GN loading, catalyst loading, and light source were also discussed. Optimal performance was achieved at 0.4 g L-1 loading of 40% GN-TiO2 under visible light irradiation. Cell-specific gene delivery through conjugating targeting molecules to gene carriers is a promising strategy for gene therapy. Click chemistry is a convenient tool for such conjugations. 
We have developed siloxane-based amphiphilic polymers with alkyne-functionalized and quaternized imidazolium salts (PIm) for forming nanoemulsions capable of conjugating azide-functionalized targeting molecules by click chemistry. Being positively charged, these polymers were expected to be applicable to targeted gene delivery. In this study, PIm was conjugated with lactose, which is recognized by asialoglycoprotein receptors (ASGP-Rs) on hepatocytes, using click chemistry, and was examined for DNA binding, cytotoxicity, and in vitro transfection in HepG2 and NIH3T3 cells. An agarose gel retardation assay using a 5.3-kbp plasmid DNA (pDNA) confirmed complex formation between the lactose-conjugated polymers and DNA at a nitrogen/phosphorus (N/P) ratio of 8. The polymers exhibited no significant hemolytic activity up to 50 mu g/mL. The polymer-pDNA complexes showed low cytotoxicity, maintaining a cell survival rate greater than 70% at N/P ratios of up to 12 under transfection conditions. The luciferase assay revealed that PIm with a low level of lactose conjugation showed a 29-fold higher transfection efficiency than unmodified PIm at an N/P ratio of 12 in HepG2 cells, but not in NIH3T3 cells, which do not express ASGP-Rs. These results demonstrate that lactose conjugation confers on PIm significantly enhanced gene transfer capability for HepG2 cells, most likely due to binding of the polymer to ASGP-Rs on HepG2 cells. Therefore, PIm could be a promising gene delivery vehicle clickable for conjugating cell-specific targeting molecules. In this study, a novel method to immobilize Pseudozyma jejuensis for polycaprolactone (PCL) biodegradation was developed using Fe3O4 nanoparticles. The Fe3O4 nanoparticles were encapsulated with silica and functionalized with NH2 groups to enhance their capacity to adsorb on the cell surface. The results show that the NH2-functionalized silica-encapsulated Fe3O4 nanoparticles strongly adsorbed on the cell surface of P. 
jejuensis without interrupting normal cell growth. There was a significant increase in the total organic carbon (TOC) concentration for P. jejuensis cells coated with the NH2-functionalized silica-encapsulated Fe3O4 nanoparticles after 10 days of PCL biodegradation at 30 degrees C. Concerning reusability, the coated cells could completely degrade PCL during the first 2 cycles, and retained similar to 80% activity in the third cycle and 75% PCL-biodegradation activity in the fourth, fifth, and sixth cycles. In this work, polypropylene/clay nanocomposites were synthesized by in situ polymerization employing an isospecific metallocene catalyst supported on montmorillonite clay prepared under different conditions. The study aimed to evaluate the influence of the treatment process of the catalyst supports, using different aluminum compounds, on their activity and on the polymer properties. They were evaluated in propylene polymerization at 60 degrees C under different reaction conditions, using toluene or hexane as the reaction medium and methylaluminoxane or its mixture with triisobutylaluminium as the cocatalyst. Afterwards, another clay-supported catalyst was prepared at a lower concentration of metallocene in order to produce a PP/clay nanocomposite with a high concentration of clay. This is the first report on the influence of polymerization time, evaluated by taking aliquots during the reaction, on the polymer properties during the exfoliation process. PP/clay-based nanocomposites were obtained with a high concentration of exfoliated clay, showing mechanical and thermal properties superior to those of the homogeneous polypropylene. Ni-Mo alloy films were prepared by an electrodeposition technique in a sulfate-chloride bath. 
The influence of the main technical parameters, such as bath temperature, pH, and current density, on the microstructure and surface morphology of the Ni-Mo alloy films was characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The electrochemical properties of the electrodeposited Ni-Mo alloy films for hydrogen evolution were investigated by electrochemical methods using cathodic polarization curves. Results showed that the best electrochemical properties of the Ni-Mo alloy films were obtained when the current density, bath temperature, and pH were 8 A/dm(2), 30 degrees C, and 5.0, respectively. A disposable electrochemical immunosensor for the detection of human alpha-fetoprotein (AFP) has been developed in this study, using a monoclonal antibody against AFP and horseradish peroxidase co-modified colloidal gold nanoparticles (Ab-Au-HRP) as the trace label and screen-printed carbon electrodes (SPCEs) as signal transducers. The immunosensor was based on a sandwich immunoassay scheme in which the nanoprobe Ab-Au-HRP was dispensed on a polyester membrane serving as the conjugate pad. The other monoclonal antibody against AFP, the capture antibody, was immobilized onto the nitrocellulose membrane, which was mounted on the carbon surface of the screen-printed electrodes. An absorbent paper was mounted downstream to draw the sample to flow successively through the conjugate, capture, and absorbent pads. The resulting sandwich immune reaction was evaluated by electrochemical detection through the addition of a single-component substrate for HRP. Under optimum conditions, the current increased linearly with AFP concentration from 0.5 to 350 ng/mL, with a relatively low detection limit of 0.25 ng/mL at 3 sigma. The excellent analytical performance, fabrication reproducibility, and operational stability of the proposed immunosensor indicate its promising application in clinical diagnostics and self-exams. 
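A detection limit quoted "at 3 sigma", as above, is conventionally computed as LOD = 3*s_blank/m, where s_blank is the standard deviation of replicate blank signals and m is the slope of the linear calibration curve. A minimal sketch of that calculation (the currents and blank readings below are invented for illustration, not data from the study):

```python
import statistics

def lod_3sigma(blank_signals, conc, signal):
    """Limit of detection as 3 * (blank std dev) / (calibration slope)."""
    s_blank = statistics.stdev(blank_signals)
    # Least-squares slope of signal vs. concentration.
    n = len(conc)
    mean_c = sum(conc) / n
    mean_s = sum(signal) / n
    slope = (sum((c - mean_c) * (s - mean_s) for c, s in zip(conc, signal))
             / sum((c - mean_c) ** 2 for c in conc))
    return 3 * s_blank / slope

# Hypothetical calibration: current (uA) vs. AFP concentration (ng/mL),
# perfectly linear with slope 0.12 uA per ng/mL.
conc = [0.5, 5, 50, 150, 350]
signal = [0.06, 0.6, 6.0, 18.0, 42.0]
blanks = [0.010, 0.012, 0.008, 0.011, 0.009]  # replicate blank currents, uA

print(round(lod_3sigma(blanks, conc, signal), 3))  # LOD in ng/mL
```

In practice the blank standard deviation, the number of replicates, and the linear fit range all affect the reported LOD, which is why the measurement conditions are stated alongside the 3-sigma figure.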
Co/CoO bilayer thin films were sputter-deposited on a substrate with periodically close-packed truncated 500 nm void arrays. After field cooling from room temperature to 10 K, the hysteresis loops measured at 10 K show two distinct asymmetries: the unidirectional anisotropy resulting from the exchange bias (EB) effect and a step in the second quadrant. This step is believed to be caused by unbiased free magnetic moments. The hysteresis loop representing the magnetization change leading to the step shows hard-axis behavior, which strongly suggests that the unbiased free moments come from magnetization with perpendicular anisotropy. We report a p-n heterojunction solid-state solar cell comprised of p-type CdTe electrochemically deposited onto hydrothermally synthesized, vertically oriented arrays of n-type single-crystal rutile TiO2 nanowires (RNWAs). The nanowires were grown on fluorine-doped tin oxide (FTO)-coated transparent conductive glass substrates by hydrothermal synthesis. Morphological studies revealed the TiO2 nanowires to be completely infiltrated by the CdTe. The solid-state solar cell exhibited an open-circuit voltage of 0.6 V, a short-circuit current density of 3.57 mA cm(-2), a fill factor of 44%, and a power conversion efficiency of 0.96%. Fibrillar collagen hydrogels have been used widely as bioactive scaffolds for multiple applications in biomedical and tissue engineering. The physical functions of collagen fibrils are regulated by their underlying microstructure, represented by fibrillar density and orientation. The inconsistent extent to which characterization techniques are used for imaging collagen fibrillar networks has significantly reduced the usefulness of published data for biomedical engineers. This short communication explains the level of uncertainty surrounding fibrillar orientation measurements. 
We discuss how correlating the orientation of collagen fibrils with a normal distribution function can provide a robust baseline for comparative research in collagen imaging. We fabricated p-n-junction-type and Schottky barrier (SB)-type lead telluride (PbTe) mid-infrared focal plane arrays (FPAs) using a flip-chip bonder. The detection wavelength peak of the SB-type FPA shifted from 6.10 mu m to 4.95 mu m as the ambient temperature was increased from -258.15 degrees C to -148.15 degrees C. At -196.5 degrees C, the detection wavelength peak and cut-off wavelength were 5.68 mu m and 6.16 mu m, respectively. The p-n-junction-type FPA yielded the sharpest photoconductivity spectrum among the investigated devices, presumably because of the Tl-related deep levels in its p-type PbTe epitaxial layer. These deep levels absorb the mid-infrared light and cut off the detection wavelength range. Thus, the p-n-junction-type FPAs are expected to be employed in vision cameras and sensors with selectable mid-infrared wavelengths. Meanwhile, the SB-type FPA has a high dielectric constant; consequently, it develops a wide depletion layer that detects a wide range of mid-infrared wavelengths. FPAs based on the PbTe system are potentially implementable in simple, highly sensitive mid-infrared imaging devices with a wide scan range. In this short review, we summarize our recent findings about hydrogen water, focusing on the methods for generating hydrogen water, in vivo and in vitro safety tests, and the antioxidant and anticancer effects of hydrogen water. Our findings indicate that a portable, inexpensive apparatus for generating hydrogen water in the laboratory is necessary and useful in the field of hydrogen medicine. The United Nations Millennium Development Goals initiative, designed to meet the needs of the world's poorest, ended in 2015. 
The purpose of this article is to describe the progress made through the Millennium Development Goals and the additional work needed to address vulnerable populations worldwide, especially women and children. A description of the subsequent Sustainable Development Goals, enacted to address the root causes of poverty and the universal need for development for all people, is provided. Innovative programs introduced in response to the Millennium Development Goals show promise to reduce the global rate of maternal mortality. The Sustainable Development Goals, introduced in 2015, were designed to build on this progress. In this article, we describe the global factors that contribute to maternal mortality rates, outcomes of the implementation of the Millennium Development Goals, and the new, related Sustainable Development Goals. Implications for clinical practice, health care systems, research, and health policy are provided. The health and productivity of a global society are dependent upon the elimination of gender inequities that prevent girls from achieving their full potential. Although some progress has been made in reducing social, economic, and health disparities between men and women, gender equality continues to be an elusive goal. The Millennium Development Goals (2000-2015) and the Sustainable Development Goals (2015-2030) include intergovernmental aspirations to empower women and stress that change must begin with the girl child. Objective: To describe the reports of young women in their senior college years related to alcohol and tobacco use and to describe their health screening experiences in college health centers. Design: A secondary analysis of data collected as part of a cross-sectional study of college women. Setting: For the original study, women were recruited from two accredited 4-year universities in the Northeastern United States. The first was a private university, and the second was a public university; both had on-campus health centers. 
Participants: The participants were 615 female undergraduate students enrolled in their senior year of college. Methods: A Web-based survey was sent to approximately 1,200 women at each university. Women were asked about their alcohol and tobacco use and about screening experiences in college health centers. The mean response rate was 25.8%. Results: Nearly 90% (n = 550) of the women reported drinking alcohol in the last 3 months, and of those, more than two thirds (n = 370) met the Centers for Disease Control and Prevention definition of hazardous drinking. However, only 21.5% (n = 56) reported being screened for alcohol use. Similarly, only 19.7% (n = 52) reported being screened for tobacco use. Conclusion: College health centers are ideally positioned to screen and provide interventions for young women who are at high risk for alcohol misuse and tobacco use. Despite prevalence of use and importance of screening, reported screening is low. Future research is needed to understand barriers to screening and implement recommendations for college health centers. Objective: To examine the consistency and adequacy of nutritional intake in a population of Black women in the second and third trimesters of pregnancy. Design: This was a longitudinal descriptive study. Data were collected from women with low-risk pregnancies at 22- to 24-week prenatal visits and two subsequent visits. Setting: Participants were recruited from urban prenatal clinics in one city in the Northeastern United States. Participants: Pregnant women who self-identified as Black (N = 195). Methods: A 24-hour diet recall was obtained at each of the three study time points. Food models and measuring cups were used to improve the accuracy of portion size reporting. Data from diet recalls were manually entered in Food Processor software to compute nutritional content. Results: A linear mixed-effects model was used to examine dietary intake. 
Dietary patterns were stable from the second to the third trimesters, and caloric intake was inadequate. Women met minimal daily requirements for carbohydrate and protein intake, but the overall percentages of fat, protein, and carbohydrates indicated that additional calories needed to come from protein. Although more than 80% of women regularly took prenatal vitamins, micronutrient and fiber intake were consistently inadequate. Conclusion: Prenatal care to help women identify foods that are rich in fiber, protein, and micronutrients is important for the health of women and newborns. Knowing that nutritional intake is consistently inadequate, nurses can counsel pregnant women whenever they have contact with them to attempt to improve nutritional intake and make women aware of inexpensive nutrient sources. Objective: To determine pregnant women's preferences for the treatment of insomnia: cognitive behavioral therapy (CBT-I), pharmacotherapy, or acupuncture. Design: A cross-sectional survey of pregnant women. Setting: We recruited participants in person at a low-risk maternity clinic and a pregnancy and infant trade show and invited them to complete an online questionnaire. Participants: The sample (N = 187) was primarily White (70%), married or common-law married (96%), and on average 31 years of age; the mean gestational age was 28 weeks. Methods: Participants read expert-validated descriptions of CBT-I, pharmacotherapy, and acupuncture and then indicated their preferences and perceptions of each approach. Results: Participants indicated that if they experienced insomnia, they preferred CBT-I to other approaches, χ²(2) = 38.10, p < .001. They rated CBT-I as the most credible treatment (partial η² = .22, p < .001) and had stronger positive reactions to it than to the other two approaches (partial η² = .37, p < .001). Conclusion: Participants preferred CBT-I for insomnia during pregnancy. 
This preference is similar to previously reported preferences for psychotherapy for treatment of depression and anxiety during pregnancy. It is important for clinicians to consider women's preferences when discussing possible treatment for insomnia. Objective: To describe the adaptation and psychometric testing of the Picker Employee Questionnaire to measure work environment, work experience, and employee engagement with midwives. Design: Expert interviews, cognitive testing, and online survey for data collection. Setting: Obstetric departments in Germany. Participants: Midwives employed in German obstetric departments: 3,867 were invited to take part, and 1,692 (44%) responded to the survey. Methods: Questionnaire adaptation involved expert interviews and cognitive testing. Psychometric evaluation was done via exploratory factor analysis, reliability analysis, and construct validity assessment. Results: The adaptation of the Picker Employee Questionnaire resulted in a tool with 75 closed questions referring to central aspects of work environment, experience, and engagement. Factor analysis yielded 10 factors explaining 51% of the variance. Themes covered were Support from Management (Immediate Superior and Hospital Management), Workload, Overtime, Scheduling, Education and Training, Interaction with Colleagues (Midwives, Physicians, and Nurses), and Engagement. Eight scales had a Cronbach's alpha coefficient of 0.7 or greater; the remaining two were 0.6 or less. The questionnaire distinguished between different subgroups of midwives and hospitals. Conclusion: The questionnaire is well suited for the measurement of midwives' work experience, environment, and engagement. It is a useful tool that supports employers and human resource managers in shaping and motivating an efficient work environment for midwives. 
Objective: To compare the efficacy of umbilical cord sponging with 70% alcohol, sponging with 10% povidone-iodine, and dry care on the time to umbilical cord separation and bacterial colonization. Design: Prospective, interventional experimental study design. Setting: Three different family health centers in Istanbul, Turkey. Participants: In total, 194 newborns were enrolled in one of three study groups: Group 1, 70% alcohol (n = 67); Group 2, 10% povidone-iodine (n = 62); and Group 3, dry care (n = 65). Methods: Data were collected between January 2015 and July 2015. Umbilical separation time and umbilical cord bacterial colonization were considered as the study outcomes. Results: The most commonly isolated bacteria were Staphylococcus aureus, Escherichia coli, and enterococci. There was no significant difference among the groups for umbilical cord separation times (p > .05). Conclusion: Dry care may be perceived as an attractive option because of cost benefits and ease of application. Clinicians may face new ethical considerations when parents continue pregnancies after receiving life-limiting fetal diagnoses and desire palliative care. In this article we present four ethical considerations in perinatal palliative care: ambiguous terminology in relation to diagnosis or prognosis, differences between bereavement support and palliative care, neonatal organ donation, and postdeath cooling. In this article, we enable readers to consider current topics from different perspectives and reflect on care when confronted with sensitive clinical scenarios. We propose a cycles-breaking conceptual framework to guide perinatal research, interventions, and clinical innovations that can prevent or disrupt intergenerational cycles of childhood maltreatment and psychiatric vulnerability. The framework is grounded in literature, clinical observations, team science collaboration, and empirical research from numerous disciplines and is specific to the childbearing year. 
Adoption of the framework has the potential to speed the progress of research on the social problems of intergenerational childhood maltreatment and psychiatric vulnerability. Objective: To examine the effects of intimate partner violence (IPV) at varied time points in the perinatal period on inadequate and excessive gestational weight gain. Design: Retrospective cohort using population-based secondary data. Setting: Pregnancy Risk Assessment Monitoring System and birth certificate data from New York City and 35 states. Participants: Data were obtained for 251,342 U.S. mothers who gave birth from 2004 through 2011 and completed the Pregnancy Risk Assessment Monitoring System survey 2 to 9 months after birth. Methods: The exposure was perinatal IPV, defined as experiencing physical abuse by a current or ex-partner in the year before or during pregnancy. Adequacy of gestational weight gain (GWG) was categorized using 2009 Institute of Medicine guidelines. Weighted descriptive statistics and multivariate logistic regression models were used. Results: Approximately 6% of participants reported perinatal IPV, 2.7% reported IPV in the year before pregnancy, 1.1% reported IPV during pregnancy only, and the remaining 2.5% reported IPV before and during pregnancy. Inadequate GWG was more prevalent among participants who experienced IPV during pregnancy and those who experienced IPV before and during pregnancy (23.3% and 23.5%, respectively) than in participants who reported no IPV (20.2%; p < .001). Participants who experienced IPV before pregnancy only were significantly more likely to have excessive GWG (p < .001). Results were attenuated in the multivariate modeling; only participants who experienced IPV before pregnancy had weakly significant odds of excessive GWG (adjusted odds ratio = 1.14, 95% CI [1.02, 1.26]). 
Conclusion: The association between perinatal IPV and inadequate GWG was explained by confounding variables; however, women who reported perinatal IPV had greater rates of GWG outside the optimal range. Future studies are needed to determine how relevant confounding variables may affect a woman's GWG. Objective: To describe the use of hydrotherapy for pain management in labor. Design: This was a retrospective cohort study. Setting: Hospital labor and delivery unit in the Northwestern United States, 2006 through 2013. Participants: Women in a nurse-midwifery-managed practice who were eligible to use hydrotherapy during labor. Methods: Descriptive statistics were used to report the proportion of participants who initiated and discontinued hydrotherapy and duration of hydrotherapy use. Logistic regression was used to provide adjusted odds ratios for characteristics associated with hydrotherapy use. Results: Of the 327 participants included, 268 (82%) initiated hydrotherapy. Of those, 80 (29.9%) were removed from the water because they met medical exclusion criteria, and 24 (9%) progressed to pharmacologic pain management. The mean duration of tub use was 156.3 minutes (standard deviation = 122.7). Induction of labor was associated with declining the offer of hydrotherapy, and nulliparity was associated with medical removal from hydrotherapy. Conclusion: In a hospital that promoted hydrotherapy for pain management in labor, most women who were eligible initiated hydrotherapy. Hospital staff can estimate demand for hydrotherapy by being aware that hydrotherapy use is associated with nulliparity. Objective: To describe the maternity care nurse staffing in rural U.S. hospitals and identify key challenges and opportunities in maintaining an adequate nursing workforce. Design: Cross-sectional survey study. Setting: Maternity care units within rural hospitals in nine U.S. states. Participants: Maternity care unit managers. 
Methods: We calculated descriptive statistics to characterize the rural maternity care nursing workforce by hospital birth volume and nursing staff model. We used simple content analysis to analyze responses to open-ended questions and identified themes related to challenges and opportunities for maternity care nursing in rural hospitals. Results: Of the 263 hospitals, 51% were low volume (<300 annual births) and 49% were high volume (>= 300 annual births). Among low-volume hospitals, 78% used a shared nurse staff model. In contrast, 31% of high-volume hospitals used a shared nurse staff model. Respondents praised the teamwork, dedication, and skill of their maternity care nurses. They did, however, identify significant challenges related to recruiting nurses, maintaining adequate staffing during times of census variability, orienting and training nurses, and retaining experienced nurses. Conclusion: Rural maternity care unit managers recognize the importance of nursing and have varied staffing needs. Policy implementation and programmatic support to ameliorate challenges may help ensure that an adequate nursing staff can be maintained, even in small-volume rural hospitals. Objectives: To measure the cultural competence level of obstetric and neonatal nurses, explore relationships among cultural competence and selected sociodemographic variables, and identify factors related to cultural competence. Design: Descriptive correlational study. Setting: Online survey. Participants: A convenience sample of 132 obstetric and neonatal registered nurses practicing in the United States. Methods: Nurse participants completed the Cultural Competence Assessment (CCA) instrument, which included Cultural Awareness and Sensitivity (CAS) and Cultural Competence Behaviors (CCB) subscales, and a sociodemographic questionnaire. Correlation and regression analyses were conducted. Results: The average CCA score was 5.38 (possible range = 1.00-7.00). 
CCA scores were negatively correlated with age and positively correlated with self-ranked cultural competence, years of nursing experience, years of experience within the specialty area, and number of types of previous cultural diversity training. CCB subscale scores were correlated positively with age, years of nursing experience, years of experience within the specialty area, and number of types of previous diversity training. CAS subscale scores were positively correlated with number of types of previous diversity training. Standard multiple linear regression explained approximately 10%, 12%, and 11% of the variance in CCA, CAS, and CCB scores, respectively. Conclusion: Obstetric and neonatal registered nurses should continue to work toward greater cultural competence. Exposing nurses to more types of cultural diversity training may help achieve greater cultural competence. Objective: To develop the theme of Resilience of mothers of very-low-birth-weight infants in the NICU from a qualitative study on maternal role attainment. Design: Secondary analysis using retrospective interpretation, that is, the further development and refinement of content related to resilience that was identified in the original data. Setting: A tertiary NICU in Chicago. Participants: Twenty-three English-speaking, predominantly single (74%), minority (Black [57%], Hispanic [17%]), low-income (78%), primiparous (78%) mothers of very-low-birth-weight infants. Methods: Narrative analysis and core story creation were used to analyze the data related to resilience. A narrative of each participant's birth and NICU story was constructed and recurring meanings were analyzed. Identified patterns were compared across narratives to create one coherent core story that synthesized themes common to all stories. 
Results: Participants found meaning in redefining their priorities to become advocates for their infants and to "pick themselves up for their babies" by using resources that actively promoted their mental health. NICU-based breastfeeding peer counselors and bedside nurses helped guide participants through their NICU experiences, provided support so participants could gain confidence and competence, and allowed participants to cope with their long-term psychological distress. Conclusion: Participants demonstrated resilience as they learned to live with what was beyond their control. NICU nurses are ideally positioned to capitalize and expand on mothers' health-promoting strengths, resources, and coping strategies to help them further decrease their NICU-related stress and better integrate mothering behaviors into their lives long after NICU discharge. Objective: To examine associations among parent perceptions of infant symptoms/suffering, parent distress, and decision making about having additional children after an infant's death in the NICU. Design: Mixed-methods pilot study incorporating mailed surveys and qualitative interviews. Setting: Midwestern Level IV regional referral NICU. Participants: Participants were 42 mothers and 27 fathers whose infants died in the NICU. Methods: Parents reported on infant symptoms/suffering at end of life and their own grief and posttraumatic stress symptoms. Qualitative interviews explored decision making about having additional children. Results: Approximately two thirds of bereaved parents had another child after their infant's death (62% of mothers, 67% of fathers). Mothers who had another child reported fewer infant symptoms at end of life compared with mothers who did not (p = .002, d = 1.28). 
Although few mothers exceeded clinical levels of prolonged grief (3%) and posttraumatic stress symptoms (18%), mothers who had another child endorsed fewer symptoms of prolonged grief (p = .001, d = 1.63) and posttraumatic stress (p = .009, d = 1.16). Differences between fathers mirrored these effects but were not significant. Parent interviews generated themes related to decision making about having additional children, including Impact of Infant Death, Facilitators and Barriers, Timing and Trajectories of Decisions, and Not Wanting to Replace the Deceased Child. Conclusion: Having another child after infant loss may promote resilience or serve as an indicator of positive adjustment among parents bereaved by infant death in the NICU. Prospective research is necessary to distinguish directional associations and guide evidence-based care. Objectives: To investigate maternal anxiety in women with pregnancies complicated by fetal anomalies that require surgery. Design: Prospective comparison pilot study. Setting: A fetal care center in a Northeastern U.S. academic medical center. Participants: Women in their second or early third trimesters of pregnancy; 19 with pregnancies complicated by fetal anomalies and 25 without. Methods: After ultrasonography, all participants completed the Spielberger State-Trait Anxiety Inventory and a sociodemographic questionnaire. Participants with pregnancies complicated by fetal anomalies also answered questions about the causes of their anxiety, their awareness of the nurse care coordinator service, and desired methods of emotional support. Obstetric and mental health history data were abstracted from the medical records of both groups. Results: Participants with pregnancies complicated by fetal anomalies had greater mean state anxiety scores than those without (43.58 vs. 29.08, p = .002). Maternal age was positively correlated with the state anxiety in women with fetuses with anomalies (r = 0.59, p = .008). 
Participants with histories of mental health issues had greater mean trait anxiety scores than those without (39.2 vs. 32.2, p = .048). Most participants (68%) reported that knowledge of the fetal care center's nurse care coordinator decreased their anxiety. Participants wanted the opportunity to speak with families who had similar experiences as a source of emotional support. Conclusion: Older maternal age may be a risk factor for anxiety in this population. Knowledge of the fetal care center nurse care coordinator service may have a positive effect and should be studied further. Objective: To describe the prevalence and predictors of breastfeeding intent and outcomes in women with histories of childhood maltreatment trauma (CMT), including those with posttraumatic stress disorder (PTSD). Design: Secondary analysis of a prospective observational cohort study of the effects of PTSD on perinatal outcomes. Setting: Prenatal clinics in three health systems in the Midwestern United States. Participants: Women older than 18 years expecting their first infants, comprising three groups: women who experienced CMT but did not have PTSD (CMT-resilient), women with a history of CMT and PTSD (CMT-PTSD), and women with no history of CMT (CMT-nonexposed). Methods: Secondary analysis of an existing data set in which first-time mothers were well-characterized on trauma history, PTSD, depression, feeding plans, feeding outcomes, and several other factors relevant to odds of breastfeeding success. Results: Intent to breastfeed was similar among the three groups. Women in the CMT-resilient group were twice as likely to breastfeed exclusively at 6 weeks (60.5%) as women in the CMT-PTSD group (31.1%). Compared with women in the CMT-nonexposed group, women in the CMT-resilient group were more likely to exclusively breastfeed. 
Four factors were associated with increased likelihood of any breastfeeding at 6 weeks: prenatal intent to breastfeed, childbirth education, partnered status, and a history of CMT. Four factors were associated with decreased odds of breastfeeding: African American race, PTSD, major depression, and low level of education (high school or less). Conclusion: Posttraumatic stress disorder is more important than childhood maltreatment trauma history in determining likelihood of breastfeeding success. Further research on the promotion of breastfeeding among PTSD-affected women who have experienced CMT is indicated. Barriers to breastfeeding in women with substance use disorders (SUDs) often exist. Neonatal abstinence syndrome-related feeding difficulties, maternal SUD-related maladaptive behaviors, and psychological comorbidities can adversely affect breastfeeding. A neglected barrier that frequently occurs in women with SUDs is a history of sexual abuse. It is important that nurses and providers understand each maternal and/or infant factor that can affect the breastfeeding course to assist effectively with lactation support for these frequently misunderstood dyads. Problem: Improvements in staff training, identification, and treatment planning for children with special health care needs who have behavioral issues are routinely recommended, but a literature review revealed no coherent plans targeted specifically toward pediatric ED staff. Methods: An educational module was delivered to emergency staff along with a survey administered before, immediately after, and 1 month after the intervention to examine comfort in working with children with behavioral special needs and the ability to deliver specialized care. Child life consultations in the pediatric emergency department were measured 3 months before and 3 months after the education was provided. Results: A total of 122 staff participated and reported clinically significant improvements across all areas of care that were maintained at 1 month. 
Implications for practice: To the best of our knowledge, this project represents the first quality improvement project offering behavioral needs education to emergency staff at a large pediatric hospital with an examination of its impact on staff competence, comfort, and outcomes. A large-scale educational module is a practical option for improvement in pediatric ED staff competence in caring for patients with behavioral special needs. Problem: Rapid diagnosis of seasonal influenza leads to optimized clinical care and reduces the spread of infection. The collection of adequate cellular material can be facilitated by the presence of moisture in the nares. The specific aim of this project was to determine if the instillation of sterile saline into the nares prior to specimen collection would improve the quality of the specimen. Methods: This quasi-experimental single group design tested an initial "dry swab" specimen against a second swab after instillation of sterile saline solution using a nasal atomizer, a "wet swab." Results: A total of 80 paired specimens were collected and analyzed between December 7, 2015, and April 21, 2016, with an 11.25% infection rate in those tested. Of 9 positive tests, 6 subjects tested positive for influenza A or B for both the dry swab and the wet swab. Three subjects tested positive for influenza A or B for only the wet swab, and these subjects had experienced their symptoms longer than did subjects who tested positive for both methods (mean symptom onset of 72 hours vs 66 hours). We found an important inconsistency between manufacturers' recommendations and typical hospital practice. Implications for Practice: The results appear somewhat equivocal. Because viral shedding declines after the first 48 to 72 hours in adults, the wet swab method may be clinically superior for detecting influenza in adults presenting later in the course of their illness. 
Hospital policy was revised for consistency in using the gel medium before sampling in accordance with manufacturer recommendations. Introduction: Many patient visits to emergency departments result in the patient dying or being pronounced dead on arrival. The numbers of deaths in emergency departments are likely to increase as a significant portion of the U.S. population ages. Consequently, emergency nurses face many obstacles to providing quality end-of-life (EOL) care when death occurs. The purpose of this study was to identify suggestions that emergency nurses have to improve EOL care, specifically in rural emergency departments. Methods: A 57-item questionnaire was sent to 53 rural hospitals in 4 states in the Intermountain West, plus Alaska. One item asked nurses to identify the one aspect of EOL care they would change for dying patients in rural emergency departments. Each qualitative response was individually reviewed by a research team and then coded into a theme. Results: Four major themes and three minor themes were identified. The major themes were providing greater privacy during EOL care for patients and family members, increasing availability of support services, additional staffing, and improved staff and community education. Discussion: Providing adequate privacy for patients and family members was a major obstacle to providing EOL care in the emergency department, largely because of poor department design, especially in rural emergency departments where space is limited. Lack of support services and adequate staffing were also obstacles to providing quality EOL care in rural emergency departments. Consequently, rural nurses are commonly pulled away from EOL care to perform ancillary duties because additional support personnel are lacking. Providing EOL care in rural emergency departments is a challenging task given the limited staffing and resources, and thus it is imperative that nurses' suggestions for improvement of EOL care be acknowledged. 
Because of the current lack of research in rural EOL care, additional research is needed. Introduction: The National Institutes of Health Stroke Scale (NIHSS) is commonly used in Comprehensive Stroke Centers, but it has not been easily implemented in smaller centers. The aim of this study was to assess whether nurse providers who were naive to stroke assessment scales could obtain accurate stroke severity scores using our previously validated NIH Stroke Scale in Plain English (NIHSS-PE) with minimal or no training. Methods: We randomly assigned 122 nursing students who were naive to stroke assessment scales to 1 of 4 groups: trained on the NIHSS, untrained on the NIHSS, trained on the NIHSS-PE, or untrained on the NIHSS-PE. The Trained/NIHSS and Trained/NIHSS-PE groups watched assessment scale-specific training DVDs. All 4 study groups scored the same 3 patients from the National Institute of Neurological Disorders and Stroke certification DVD, in randomly assigned order. Two-way repeated measures analysis of variance was used to compare group scores with those obtained by a consensus panel of NIHSS-certified expert users, and with each other. Results: NIHSS-PE users had scores significantly closer to the expert scores compared with NIHSS users (F(1, 118) = 4.656, P = .033). Trained users had scores significantly closer to the expert scores than untrained users (F(1, 118) = 6.607, P = .011). Scores from untrained users of the NIHSS-PE did not differ from those of trained users of the NIHSS (F(1, 59) = 0.08, P = .780). Discussion: With minimal or no training, novice nurse users of the NIHSS-PE can do as well as, if not better than, novice users of the NIHSS, making this tool useful for facilities pursuing Acute Stroke-Ready certification. Introduction: Unrelieved acute musculoskeletal pain continues to be a reality of major clinical importance, despite advancements in pain management. Accurate pain assessment by nurses is crucial for effective pain management. 
Yet inaccurate pain assessment is a consistent finding worldwide in various clinical settings, including the emergency department. In this study, pain assessments by nurses and by patients with acute musculoskeletal pain after extremity injury were compared to assess discrepancies. A second aim was to identify patients at high risk for underassessment by emergency nurses. Methods: The prospective PROTACT study included 539 adult patients who were admitted to the emergency department with musculoskeletal pain. Data on pain assessment and patient characteristics, including demographic, pain and injury, psychosocial, and clinical factors, were collected using questionnaires and the hospital registry. Results: Nurses significantly underestimated patients' pain with a mean difference of 2.4 and a 95% confidence interval of 2.2-2.6 on an 11-point numerical rating scale. Agreement between nurses' documented and patients' self-reported pain was only 27%, and 63% of the pain was underassessed. Pain was particularly underassessed in women, in persons with a lower educational level, in patients who used prehospital analgesics, in smokers, in patients with injury to the lower extremities, in anxious patients, and in patients with a lower urgency level. Discussion: Underassessment of pain by emergency nurses is still a major problem and might result in undertreatment of pain if the emergency nurses rely on their assessment to provide further pain treatment. Strategies that focus on awareness among nurses of which patients are at high risk of underassessment of pain are needed. Introduction: Each year, more than 130,000 children younger than 13 years are treated in the emergency department after evaluation of injuries sustained from motor vehicle crashes (MVCs). Many of these injuries can be prevented with use of child restraints. 
In this study we sought to assess emergency nurses' knowledge of child passenger safety (CPS) and its use to keep children safe while traveling in motor vehicles. Methods: A cross-sectional anonymous survey was distributed electronically to 530 emergency nurses, who were asked to forward the survey link to other emergency nurses through snowball sampling. The target population included full-time and part-time emergency nurses, including nurse practitioners caring for pediatric patients. Emergency nurses' CPS knowledge, attitudes, and practices were ascertained. Results: Nine hundred eighty-four emergency nurses completed the Web-based survey. All 6 CPS knowledge and scenario-based items were answered correctly by only 18.8% of the sample; these respondents were identified as the "high knowledge" group. Similarly, ED nurses rarely addressed CPS during ED visits in the prior 6 months. Those with high knowledge were more likely to be confident about providing recommendations for CPS topics. Discussion: Emergency nurses can improve their knowledge and provision of CPS in the emergency department, particularly for children presenting for care following MVCs. These results identify opportunities to increase the knowledge and confidence of emergency nurses in providing CPS information to parents seen in the emergency department, especially those involved in MVCs. The gap in knowledge can be overcome by providing the nurses with increased CPS-focused educational opportunities. Introduction: Medication errors are one of the most frequently occurring errors in health care settings. The complexity of the ED work environment places patients at risk for medication errors. Most hospitals rely on nurses' voluntary medication error reporting, but these errors are under-reported. 
The purpose of this study was to examine the relationships among work environment (nurse manager leadership style and safety climate), social capital (warmth and belonging relationships and organizational trust), and nurses' willingness to report medication errors. Methods: A cross-sectional descriptive design using a questionnaire with a convenience sample of emergency nurses was used. Data were analyzed using descriptive, correlation, Mann-Whitney U, and Kruskal-Wallis statistics. Results: A total of 71 emergency nurses were included in the study. Emergency nurses' willingness to report errors decreased as their years of experience increased (r = -0.25, P = .03). Their willingness to report errors increased when they received more feedback about errors (r = 0.25, P = .03) and when their managers used a transactional leadership style (r = 0.28, P = .01). Discussion: ED nurse managers can modify their leadership style to encourage error reporting. Timely feedback after an error report is particularly important. Engaging experienced nurses in understanding the root causes of errors could increase voluntary error reporting. Introduction: Pneumatic tube systems (PTSs) are widely used in many hospitals because they reduce turnaround times and are cost efficient. However, PTSs may affect the quality of the blood samples transported to the laboratory. The aim of this study was to investigate the effect of the PTS used in our hospital on the hemolysis of biochemical blood samples transported to the laboratory. Methods: A total of 148 samples were manually transported to the laboratory by hospital staff, 148 samples were transported with the PTS, and 113 were transported with the PTS without sponge-rubber inserts (PTSws). Hemolysis rates and the levels of biochemical analytes were compared across the transportation methods.
Results: No significant difference was found between the samples transported manually and those transported with the PTS with regard to hemolysis rate and the levels of biochemical analytes. However, the samples transported with the PTSws differed significantly from those transported manually and with the PTS with regard to hemolysis rate and potassium and lactate dehydrogenase levels. The percentages of samples exceeding the permissible hemolysis threshold were 10% for manual transport, 8% for the PTS, and 47% for the PTSws. Discussion: A PTS can be used safely for transporting biochemistry blood samples to the laboratory. However, a sponge-rubber insert that holds the sample tubes must be used with the PTS to prevent hemolysis of the blood samples. The aim of this article is to propose a classifier based on neural networks that is applicable to the machine digitization of incomplete, inaccurate, or noisy data for the purpose of classification (pattern recognition). The article focuses on the possibility of increasing the efficiency of the algorithms through their appropriate combination, in particular increasing their reliability and reducing their time demands. Time demands here mean neither runtime nor its growth, but the time required to apply the algorithm to a particular problem domain; in other words, the amount of professional labour needed for such an implementation. The article concentrates on methods from the field of pattern recognition, primarily various types of neural networks. The proposed approaches are verified experimentally. (C) 2017 Elsevier Inc. All rights reserved. The electrohydrodynamic instability of a vertical dielectric fluid saturated Brinkman porous layer whose vertical walls are maintained at different temperatures is considered.
An external AC electric field is applied across the vertical porous layer to induce an unstably stratified electrical body force. The stability eigenvalue equation is solved numerically using the Chebyshev collocation method. The presence of inertia is found to induce instability in the system, and the value of the modified Darcy-Prandtl number Pr-D at which the transition from stationary to travelling-wave mode takes place is independent of the AC electric field but increases considerably with an increase in the value of the Darcy number Da. The presence of the AC electric field promotes instability, but its effect is found to be only marginal. Although the flow is stabilized against stationary disturbances with increasing Da, the effect is noted to be dual in nature if the instability is via the travelling-wave mode. The streamlines and isotherms for various values of the physical parameters at their critical state are presented and analyzed. In addition, the energy norm at the critical state is computed, and it is found that the disturbance kinetic energy due to surface drag, viscous force, and dielectrophoretic force has no significant effect on the stability of the fluid flow. (C) 2017 Elsevier Inc. All rights reserved. In this paper, the error analysis of a two-grid method (TGM) with a backward Euler scheme is discussed for a semilinear parabolic equation. In contrast to conventional finite element analysis, the error between the exact solution and the finite element solution is split into two parts (a temporal error and a spatial error) by introducing a corresponding time-discrete system. This makes the spatial error independent of tau (the time step). Second, based on this technique, optimal error estimates in the L-2 and H-1 norms of the TGM solution are deduced unconditionally, whereas previous works always require a certain time-step size condition. Finally, a numerical experiment is provided to confirm the theoretical analysis. (C) 2017 Elsevier Inc. All rights reserved.
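The two-grid idea summarized in the preceding abstract can be illustrated with a minimal finite-difference sketch (a stand-in for the paper's finite element setting): each backward Euler step is solved nonlinearly on a coarse grid only, and the fine-grid step is a single linearized solve around the interpolated coarse solution. The model equation u_t = u_xx + u - u^3, the grid sizes, and all function names below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def lap(n, h):
    # 1D Dirichlet Laplacian on n interior nodes, spacing h
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

f  = lambda u: u - u**3          # illustrative semilinear term
df = lambda u: 1.0 - 3.0 * u**2  # its derivative

def be_newton(u_prev, A, tau, iters=5):
    # full Newton solve of backward Euler: u - tau*(A u + f(u)) = u_prev
    u = u_prev.copy()
    for _ in range(iters):
        G = u - tau * (A @ u + f(u)) - u_prev
        J = np.eye(len(u)) - tau * (A + np.diag(df(u)))
        u = u - np.linalg.solve(J, G)
    return u

def tgm_step(uh_prev, uH_prev, AH, Ah, xH, xh, tau):
    # (1) nonlinear solve on the coarse grid
    uH = be_newton(uH_prev, AH, tau)
    # (2) interpolate coarse solution to the fine grid (zero boundary values)
    uHh = np.interp(xh, np.r_[0.0, xH, 1.0], np.r_[0.0, uH, 0.0])
    # (3) one linearized solve on the fine grid around the interpolant:
    #     u - tau*(Ah u + f(uHh) + df(uHh)*(u - uHh)) = uh_prev
    J = np.eye(len(uh_prev)) - tau * (Ah + np.diag(df(uHh)))
    rhs = uh_prev + tau * (f(uHh) - df(uHh) * uHh)
    return np.linalg.solve(J, rhs), uH

NH, Nh, tau = 8, 64, 0.01
xH = np.linspace(0, 1, NH + 1)[1:-1]
xh = np.linspace(0, 1, Nh + 1)[1:-1]
AH, Ah = lap(NH - 1, 1.0 / NH), lap(Nh - 1, 1.0 / Nh)
uH, uh = np.sin(np.pi * xH), np.sin(np.pi * xh)
uref = uh.copy()
for _ in range(10):
    uh, uH = tgm_step(uh, uH, AH, Ah, xH, xh, tau)
    uref = be_newton(uref, Ah, tau)  # fully nonlinear fine solve, for comparison
gap = np.max(np.abs(uh - uref))      # TGM vs. full Newton on the fine grid
```

The point of the construction is that the fine grid never sees a nonlinear solve, yet the TGM iterate stays close to the fully nonlinear fine-grid solution, consistent with the unconditional optimal-order estimates claimed in the abstract.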
Since the donation list contains a great deal of information, cooperation may be promoted if the list is skillfully applied. If the donation list is published completely, it may be perceived as moral coercion. However, it is unfair to cooperators who contribute more money if the organizers do not publish the list at all. Thus, how to publish the donation list properly is a subject worth studying. In this paper, we take reputation, behavior diversity, and face culture into account simultaneously to study the role of the donation list in the public goods game. The results of numerical simulations show that publishing the list incompletely works better than publishing it completely or keeping it secret. Furthermore, there exists an optimal threshold that yields the best results, and reasonable neighborhood relations are needed to promote cooperation. In addition, some personal attributes, such as habits of data selection and mental capacity, influence cooperation. (C) 2017 Elsevier Inc. All rights reserved. This paper investigates the global asymptotic stability and stabilization of memristive neural networks (MNNs) with communication delays via event-triggered sampling control. First, based on the novel approach in Lemma 1, the MNNs concerned are converted into traditional neural networks with uncertain parameters. Next, a discrete event-triggered sampling control scheme, which only requires supervision of the system state at discrete instants, is designed for MNNs for the first time. Thanks to this controller, the number of control updates can be greatly reduced. Then, by making the most of the available information on the actual sampling pattern, a newly augmented Lyapunov-Krasovskii functional (LKF) is constructed to formulate stability and stabilization criteria. It should be mentioned that the LKF is positive definite only at the endpoints of each subinterval of the holding intervals and not necessarily positive definite inside the holding intervals.
Finally, the feasibility and effectiveness of the proposed results are tested by two numerical examples. (C) 2017 Elsevier Inc. All rights reserved. In this article, a new two-step hybrid block method for the numerical integration of ordinary differential initial value systems is presented. The method is obtained by considering two intermediate points, approximating the true solution by an adequate polynomial, and imposing collocation conditions. The proposed method has tenth algebraic order of convergence and is A-stable. The numerical experiments considered reveal the superiority of the new method for solving this kind of problem, in comparison with methods of similar characteristics that have appeared in the literature. (C) 2017 Elsevier Inc. All rights reserved. NURBS surfaces are very useful in geometric modeling, animation, image morphing, and deformation. Constructing non-self-intersecting (injective) NURBS surfaces is an important process in surface and solid modeling. In this paper, injectivity conditions for tensor product NURBS surfaces are studied based on the geometric positions of the control points; these conditions are equivalent to the surface being non-self-intersecting for all positive weights. Finally, some representative examples are provided. (C) 2017 Elsevier Inc. All rights reserved. This paper considers the stochastic Korteweg-de Vries equation on a bounded domain. The existence of a weak martingale solution is established by the Galerkin approximation and the compactness method. Based on this result, we establish the unique continuation property for the stochastic Korteweg-de Vries equation. (C) 2017 Elsevier Inc. All rights reserved. We describe two-phase flows by a six-equation single-velocity two-phase compressible flow model with stiff mechanical relaxation. In particular, we are interested in the simulation of liquid-gas mixtures such as cavitating flows.
For the numerical approximation of the homogeneous hyperbolic portion of the model equations we have previously developed two-dimensional wave propagation finite volume schemes that use Roe-type and HLLC-type Riemann solvers. These schemes are well suited to simulating the dynamics of transonic and supersonic flows. However, these methods suffer from the well-known difficulties of loss of accuracy and efficiency encountered by classical upwind finite volume discretizations in low Mach number regimes. This issue is particularly critical for liquid-gas flows, where the Mach number may range from very low to very high values, owing to the large and rapid variation of the acoustic impedance. In this work we focus on the loss of accuracy of standard schemes related to the spatial discretization of the convective terms of the model equations. To address this difficulty, we consider the class of preconditioning strategies that correct the numerical dissipation tensor at low Mach number. First we extend the approach of the preconditioned Roe-Turkel scheme of Guillard-Viozat for the Euler equations [Computers & Fluids, 28, 1999] to our Roe-type method for the two-phase flow model, by defining a suitable Turkel-type preconditioning matrix. A similar low Mach number correction is then devised for the HLLC-type method, thanks to a novel reformulation of the HLLC solver. We present numerical results for two-dimensional liquid-gas channel flow tests that show the effectiveness of the proposed preconditioning techniques. In particular, we observe that the order of the pressure fluctuations generated at low Mach number by the preconditioned methods agrees with the theoretical results inferred for the continuous relaxed two-phase flow model by an asymptotic analysis. (C) 2017 Elsevier Inc. All rights reserved.
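For readers unfamiliar with the HLLC building block that the low-Mach correction above modifies, the following sketch implements a standard single-phase HLLC flux for the 1D Euler equations with Davis wave-speed estimates. It illustrates only the baseline solver, not the authors' two-phase scheme or their preconditioned reformulation; the gamma value and the test states are assumptions chosen for illustration.

```python
import numpy as np

GAM = 1.4  # ideal-gas ratio of specific heats (assumed)

def prim(U):
    # conservative state U = (rho, rho*u, E) -> primitive (rho, u, p)
    rho, m, E = U
    u = m / rho
    p = (GAM - 1.0) * (E - 0.5 * rho * u**2)
    return rho, u, p

def flux(U):
    # exact 1D Euler flux
    rho, u, p = prim(U)
    return np.array([rho * u, rho * u**2 + p, u * (U[2] + p)])

def hllc(UL, UR):
    rL, uL, pL = prim(UL)
    rR, uR, pR = prim(UR)
    aL, aR = np.sqrt(GAM * pL / rL), np.sqrt(GAM * pR / rR)
    SL = min(uL - aL, uR - aR)   # Davis wave-speed estimates
    SR = max(uL + aL, uR + aR)
    # contact wave speed
    Ss = (pR - pL + rL * uL * (SL - uL) - rR * uR * (SR - uR)) \
         / (rL * (SL - uL) - rR * (SR - uR))
    if SL >= 0.0:
        return flux(UL)
    if SR <= 0.0:
        return flux(UR)
    def star(U, r, u, p, S):
        # intermediate (star-region) state behind wave S
        c = r * (S - u) / (S - Ss)
        return c * np.array([1.0, Ss, U[2] / r + (Ss - u) * (Ss + p / (r * (S - u)))])
    if Ss >= 0.0:
        return flux(UL) + SL * (star(UL, rL, uL, pL, SL) - UL)
    return flux(UR) + SR * (star(UR, rR, uR, pR, SR) - UR)

# consistency check: identical states must return the exact flux
U = np.array([1.0, 0.1, 2.505])          # rho=1, u=0.1, p=1
F_same = hllc(U, U)
# Sod-like interface: flow should be driven toward the low-pressure side
UL = np.array([1.0, 0.0, 2.5])           # rho=1,     u=0, p=1
UR = np.array([0.125, 0.0, 0.25])        # rho=0.125, u=0, p=0.1
F_sod = hllc(UL, UR)
```

It is the upwind dissipation implicit in this flux selection that over-damps the solution as the Mach number tends to zero, which is what the Turkel-type preconditioning discussed in the abstract rescales.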
In this paper, we present a simple yet effective model to promote cooperation in a selfish population, namely a spatial evolutionary public goods game with three kinds of players: cooperators, defectors, and loners. In the spatial setting, players are located on a regular lattice, and each player randomly selects one strategy; each player then acquires a payoff from interactions with his/her four nearest neighbors, after which the focal player chooses a neighbor according to the logit selection model and updates his/her strategy in accordance with a random sequential simulation procedure. The Monte Carlo simulation results demonstrate that the ruthless invasion of defectors can be efficiently prevented by the loners, especially when the enhancement factor r is low. More interestingly, the introduction of the logit selection model, which makes the fittest neighbors more likely to act as sources of adopted strategies, effectively promotes the evolution of cooperation even when loners are absent. (C) 2017 Elsevier Inc. All rights reserved. In this paper, we propose a new numerical scheme to solve the time-fractional nonlinear Klein-Gordon equation. The fractional derivative is described in the Caputo sense. The method consists of expanding the required approximate solution in terms of Sinc functions in the space direction and shifted Chebyshev polynomials of the second kind in the time variable. The proposed scheme reduces the solution of the main problem to the solution of a system of nonlinear algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique. The method is easy to implement and produces accurate results. (C) 2017 Elsevier Inc. All rights reserved. Based on an improved element-free Galerkin (EFG) method and geometric nonlinear elastic theory, an element-free numerical approach for the rigid-flexible coupling dynamics of a rotating hub-beam system is implemented.
Using the Hamilton principle, the coupled nonlinear integral-differential governing equations are derived, in which the dynamic stiffening effect is expressed through the longitudinal shrinking of the beam induced by the transverse displacement. By the EFG method, in which global interpolating moving least squares (IMLS) and interpolating generalized moving least squares (IGMLS) are used to discretize the longitudinal and transverse deformation variables, respectively, the spatially discretized element-free dynamic equations of the system are obtained, and the Newmark scheme is selected as the time integration method for the numerical computation. The global interpolating property makes it easy to impose both displacement and derivative boundary conditions. The transverse bending vibration and structural dynamics of the flexible beam in a non-inertial system are analyzed in the numerical simulations, and the influence parameters of the EFG method are fully discussed. Simulation results verify the feasibility of the improved EFG approach for rigid-flexible coupling dynamics. (C) 2017 Elsevier Inc. All rights reserved. The finite-time stochastic boundedness (FTSB) and finite-time strictly stochastically exponential dissipative (FTSSED) control problems for stochastic interval systems subject to time delay and Markovian switching are investigated in this paper. The stochastic delayed interval systems with Markovian switching (SDISs-wMS) are equivalently transformed into a kind of stochastic uncertain time-delay system with Markovian switching by an interval matrix transformation. Some sufficient conditions for FTSB and FTSSED of the stochastic delayed interval systems with Markovian switching are obtained, and the FTSB and FTSSED controllers are designed by solving a series of linear matrix inequalities, which are solvable with the LMI toolbox.
Finally, a numerical example with simulations is given to illustrate the correctness of the obtained results and the effectiveness of the designed controller. (C) 2017 Elsevier Inc. All rights reserved. The basic characteristics of extreme events are their infrequency and their potential damage to the human-nature system. It is difficult for people to design comprehensive policies for dealing with such events because of time pressure and their limited knowledge of rare and uncertain sequential impacts. Recently, online media have provided digital sources of individual and public information for studying collective human responses to extreme events, which can help us reduce the damage of an extreme event and improve the efficiency of disaster relief. More specifically, there are different emotional responses (e.g., anxiety and anger) to an event and its subevents, which are reflected to a certain degree in the contents of public news and social media. Therefore, an online computational method for extracting these contents can help us better understand human emotional states at different stages of an event, reveal underlying reasons, and improve the efficiency of event relief. Here, we first employ tweets and reports extracted from Twitter and ReliefWeb for text analysis of three distinct events. Then, we detect textual contents with a sentiment lexicon to find human emotional responses over time. Moreover, a clustering-based method is proposed to detect emotional responses to a particular episode during an event, based on the co-occurrences of words used in tweets and/or articles. Taking the 2011 Japanese earthquake, the 2010 Haiti earthquake, and the 2009 swine influenza A (H1N1) pandemic as case studies, we reveal the underlying reasons for the distinct patterns of human emotional responses to the whole events and their episodes. (C) 2017 Elsevier Inc. All rights reserved.
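As a minimal illustration of the lexicon-based detection step described in the preceding abstract, the sketch below counts hits from a toy emotion lexicon in a handful of posts. The lexicon, the two categories, and the example texts are invented for illustration; they are not the study's data, its actual lexicon, or its clustering method.

```python
import re
from collections import Counter

# toy emotion lexicon (illustrative only, not the study's lexicon)
LEXICON = {
    "anxiety": {"worried", "scared", "afraid", "fear", "nervous"},
    "anger":   {"angry", "furious", "outrage", "blame", "unacceptable"},
}

def emotion_counts(texts):
    """Count lexicon hits per emotion category over a list of posts."""
    counts = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        for emotion, words in LEXICON.items():
            counts[emotion] += sum(t in words for t in tokens)
    return counts

# hypothetical posts standing in for tweets collected during an event
tweets = [
    "So scared and worried about the aftershocks",
    "This response is unacceptable, people are angry",
    "Fear of another quake keeps everyone awake",
]
print(emotion_counts(tweets))
```

Binning posts by timestamp and applying the same counting per bin would yield the emotion-over-time curves the abstract refers to; the co-occurrence clustering would then operate on the token sets produced inside the loop.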
In this study, we present a conservative Fourier spectral collocation (FSC) method to solve the two-dimensional nonlinear Schrodinger (NLS) equation. We prove that the proposed method preserves the mass and energy conservation laws in the semi-discrete formulation. Using the spectral differentiation matrices, the NLS equation is reduced to a system of nonlinear ordinary differential equations (ODEs). A compact implicit integration factor (cIIF) method is then developed for the nonlinear ODEs. In this approach, the storage and CPU cost are significantly reduced, making the cIIF method attractive for the two-dimensional NLS equation. Numerical results are presented to demonstrate the conservation, accuracy, and efficiency of the method. (C) 2017 Elsevier Inc. All rights reserved. Simpson's rule for the computation of supersingular integrals in boundary element methods is discussed, and the asymptotic expansion of the error function is obtained. A series approaching the singular point is constructed. An extrapolation algorithm is presented and its convergence rate is proved. Some numerical results are reported to confirm the theoretical results and show the efficiency of the algorithms. (C) 2017 Elsevier Inc. All rights reserved. Background: Isothiocyanates derived from Brassicaceae plants possess chemopreventive and anticancer activities. One of them is sulforaphene (SF), which is abundant in Raphanus sativus seeds. The underlying mechanism of its anticancer activity is still underexplored. Purpose: SF's properties make it an interesting candidate for cancer prevention and therapy; it is therefore crucial to characterize the mechanism of its activity. Study design: We investigated the mechanism of the antiproliferative activity of SF in breast cancer cells differing in growth factor receptor status and lacking functional p53. Methods: The viability of SKBR-3 and MDA-MB-231 breast cancer cells treated with SF was determined by SRB and clonogenic assays.
Cell cycle, cell death, and oxidative stress were analyzed by flow cytometry or microscopy. The levels of apoptosis and autophagy markers were assessed by immunoblotting. Results: SF efficiently decreased the viability of breast cancer cells, whereas normal cells (MCF10A) were less sensitive to the analyzed isothiocyanate. SF induced G2/M cell cycle arrest, disturbed cytoskeletal organization, and reduced the clonogenic potential of the cancer cells. SF induced apoptosis in a concentration-dependent manner, which was associated with oxidative stress, mitochondrial dysfunction, an increased Bax:Bcl2 ratio, and increased ADRP levels. SF also potentiated autophagy, which played a cytoprotective role. Conclusions: SF exhibits cytotoxic activity against breast cancer cells even at relatively low concentrations (5-10 mu M). This is associated with induction of cell cycle arrest and apoptosis. SF might be considered a potent anticancer agent. Background: RecA is a bacterial multifunctional protein essential to genetic recombination, error-prone replicative bypass of DNA damage, and regulation of the SOS response. The activation of the bacterial SOS response is directly related to the development of intrinsic and/or acquired resistance to antimicrobials. Although recent studies directed towards RecA inactivation via ATP binding inhibition have described a variety of micromolar-affinity ligands, inhibitors of the DNA binding site are still unknown. Purpose: Twenty-seven secondary metabolites classified as anthraquinones, depsides, depsidones, dibenzofurans, diphenyl-butenolides, paraconic acids, pseudo-depsidones, triterpenes, and xanthones were investigated for their ability to inhibit RecA from Escherichia coli. They were isolated from 14 families and 19 genera of lichens collected in various Chilean regions. Methods: The ATP hydrolytic activity of RecA was quantified by detecting the generation of free phosphate in solution.
The percentage of inhibition was calculated with the concentration of the compounds fixed at 100 mu M. Deeper investigation was reserved for those compounds showing an inhibition higher than 80%. To clarify the mechanism of inhibition, semi-log plots of the percentage of inhibition vs. ATP and vs. ssDNA were evaluated. Results: Only nine compounds showed a percentage of RecA inhibition higher than 80% (divaricatic, perlatolic, alpha-collatolic, lobaric, lichesterinic, protolichesterinic, and epiphorellic acids, sphaerophorin, and tumidulin). The half-inhibitory concentrations (IC50) calculated for these compounds ranged from 14.2 mu M for protolichesterinic acid to 42.6 mu M for sphaerophorin. Investigations of the mechanism of inhibition showed that all compounds behaved as uncompetitive inhibitors of the ATP binding site, with the exception of epiphorellic acid, which clearly acted as a non-competitive inhibitor of the ATP site. Further investigation demonstrated that epiphorellic acid competitively binds the ssDNA binding site. The kinetic data were confirmed by molecular modelling binding predictions, which show that epiphorellic acid is expected to bind the ssDNA site in the L2 loop of the RecA protein. Conclusion: In this paper the first RecA ssDNA binding site ligand is described. Our study establishes epiphorellic acid as a promising hit for the development of more effective RecA inhibitors. In our drug discovery approach, natural products in general and lichens in particular represent a successful source of active ligands and structural diversity. Background: Flacourtia indica is especially popular among various communities in many African countries, where it is used traditionally for the treatment of malaria. In our previous report, we identified some phenolic glycosides from the aerial parts of F. indica as promising antiplasmodial agents under in vitro conditions. Purpose: Antimalarial bioprospection of F.
indica derived phenolic glycoside in Swiss mice (in vivo), with special emphasis on its mode of action. Methods: A chloroquine-sensitive strain of Plasmodium falciparum was routinely cultured and used for the in vitro studies. The in vivo antimalarial potential of the phenolic glycoside was evaluated against P. berghei in Swiss mice through an array of parameters, viz. hematological and biochemical parameters, chemo-suppression, and mean survival time. Results: 2-(6-benzoyl-beta-D-glucopyranosyloxy)-7-(1 alpha, 2 alpha, 6 alpha-trihydroxy-3-oxocyclohex-4-enoyl)-5-hydroxybenzyl alcohol (CPG), a phenolic glycoside isolated from the aerial parts of F. indica, was found to exhibit promising antiplasmodial activity by arresting P. falciparum growth at the trophozoite stage. Spectroscopic investigations reveal that CPG possesses a strong binding affinity for free heme moieties. These interactions lead to the inhibition of heme polymerization in the malaria parasite, augmenting oxidative stress and delaying the rapid growth of the parasite. Under in vivo conditions, CPG exhibited significant antimalarial activity against P. berghei at 50 and 75 mg/kg body weight through chemo-suppression of parasitemia and amelioration of the parasite-induced inflammatory and oxidative (hepatic) imbalance in the experimental mice. Conclusion: CPG was found to be a potential antimalarial constituent of F. indica with an explored mechanism of action, which also offers options for developing CPG-based antimalarial chemotypes. Background: Astragaloside IV (ASG-IV, Fig. 1) is the most active component of the Chinese species Astragalus membranaceus Bunge (Fabaceae), which has shown antioxidant, antiapoptotic, and antiviral activities, among others. It is reported to play an important role in cardiac fibrosis (CF), but the mechanism remains unclear. Purpose: To investigate the mechanism by which ASG-IV inhibits myocardial fibrosis induced by hypoxia.
Study design: We studied the relationship between the anti-fibrotic effect of ASG-IV and the transient receptor potential cation channel, subfamily M, member 7 (TRPM7) by in vivo and in vitro experiments. Methods: In vivo, CF was induced by subcutaneous isoproterenol (ISO) for 10 days. Rat hearts were resected for histological examination and reverse transcription real-time quantitative polymerase chain reaction (RT-qPCR). In vitro, molecular and cellular biology techniques were used to confirm the anti-fibrotic effect of ASG-IV and its underlying mechanism. Results: Histological findings and the collagen volume fraction showed that ASG-IV decreased fibrosis in heart tissues. Hypoxia stimulated the proliferation and differentiation of cardiac fibroblasts, indicating that the degree of fibrosis increased significantly. Anoxic treatment also markedly up-regulated TRPM7 protein expression and current, whereas the ASG-IV groups showed the opposite results. A TRPM7 knock-down experiment further confirmed the role of the TRPM7 channel in hypoxia-induced cardiac fibrosis. Conclusion: Our results suggest that the inhibition of hypoxia-induced CF in vivo and in vitro by ASG-IV is associated with reduced expression of TRPM7. Moderate inhibition of the TRPM7 channel may be a new strategy for treating cardiac fibrosis. Background: In the traditional application of traditional Chinese medicines (TCMs), Ephedra Herba (EH) is used to cure cold fever by inducing sweating, whereas Ephedra Radix (ER) is used to treat hyperhidrosis. Although they come from the same plant, Ephedra sinica Stapf, they play opposing roles in clinical applications. EH is known to contain ephedrine alkaloids, which drive the physiological changes in sweating, heart rate, and blood pressure. However, the active pharmacological ingredients (APIs) of ER and the mechanisms by which it restricts sweating remain unknown.
Purpose: The current work aims to discover the hidroschesis APIs of ER and to establish their mechanism of action. Methods: UPLC-Q/TOF-MS, PCA, and heat maps were utilized to identify the differences between EH and ER. HPLC integrated with a beta(2)-adrenoceptor (beta(2)-AR) activity luciferase reporter assay system was used to screen active inhibitors; molecular docking and a series of biological assays centered on beta(2)-AR-related signaling pathways were carried out to understand the roles of the APIs. Results: The opposite effects of EH and ER on sweating can be attributed to their respective APIs, amphetamine-type alkaloids and flavonoid derivatives. Mahuannin B is an effective anti-hydrotic agent, inhibiting the production of cAMP via suppression of adenylate cyclase (AC) activity. Conclusion: The effects of EH and ER on sweating and the beta(2)-AR-related signaling pathway are opposite because of the different alkaloid and flavonoid APIs in EH and ER. The present work not only sheds light on the hidroschesis action of mahuannin B, but also identifies AC as a potential target in the treatment of hyperhidrosis. (C) 2017 Elsevier GmbH. All rights reserved. Background: Renal tubulointerstitial fibrosis (TIF) is commonly the final result of a variety of progressive injuries and leads to end-stage renal disease. Few therapeutic agents are currently available for retarding the development of renal TIF. Purpose: The aim of the present study was to evaluate the role of arctigenin (ATG), a lignan component derived from dried burdock (Arctium lappa L.) fruits, in protecting the kidney against injury by unilateral ureteral obstruction (UUO) in rats. Methods: Rats were subjected to UUO and then administered vehicle, ATG (1 and 3 mg/kg/d), or losartan (20 mg/kg/d) for 11 consecutive days. The renoprotective effects of ATG were evaluated by histological examination and multiple biochemical assays.
Results: Our results suggest that ATG significantly protected the kidney from injury by reducing tubular dilatation, epithelial atrophy, collagen deposition, and tubulointerstitial compartment expansion. ATG administration dramatically decreased macrophage (CD68-positive cell) infiltration. Meanwhile, ATG down-regulated the mRNA levels of the pro-inflammatory chemokine monocyte chemoattractant protein-1 (MCP-1) and of cytokines, including tumor necrosis factor-alpha (TNF-alpha), interleukin-1 beta (IL-1 beta), and interferon-gamma (IFN-gamma), in the obstructed kidneys. This was associated with decreased activation of nuclear factor kappa B (NF-kappa B). ATG attenuated UUO-induced oxidative stress by increasing the activity of renal manganese superoxide dismutase (SOD2), leading to reduced levels of lipid peroxidation. Furthermore, ATG inhibited the epithelial-mesenchymal transition (EMT) of renal tubules by reducing the abundance of transforming growth factor-beta 1 (TGF-beta 1) and its type I receptor, suppressing Smad2/3 phosphorylation and nuclear translocation, and up-regulating Smad7 expression. Notably, the efficacy of ATG in renal protection was comparable or even superior to that of losartan. Conclusion: ATG could protect the kidney from UUO-induced injury and fibrogenesis by suppressing inflammation, oxidative stress, and tubular EMT, thus supporting a potential role for ATG in renal fibrosis treatment. (C) 2017 Elsevier GmbH. All rights reserved. Background: Cancer stem cells (CSCs) are a subset of cells within the bulk of a tumor that have the ability to self-renew and differentiate and are thus associated with cancer invasion, metastasis, and recurrence. Phenethyl isothiocyanate (PEITC) is a natural compound found in cruciferous vegetables such as broccoli and is used as a cancer chemopreventive agent; however, little is known about its effects on CSCs. Purpose: To evaluate the effect of PEITC on CSCs by examining CSC properties.
Methods: NCCIT human embryonic carcinoma cells were treated with PEITC, and the expression of the pluripotency factors Oct4, Sox-2, and Nanog was evaluated by luciferase assay and western blot. The effects of PEITC on self-renewal capacity and clonogenicity were assessed with sphere formation, soft agar, and clonogenic assays in an epithelial cell adhesion molecule (EpCAM)-expressing CSC model derived from HCT116 colon cancer cells using a cell sorting system. The effect of PEITC was also investigated in a mouse xenograft model obtained by injecting nude mice with EpCAM-expressing cells. Results: We found that PEITC treatment suppressed the expression of all three pluripotency factors in the NCCIT cells, in which pluripotency factors are highly expressed. Moreover, PEITC suppressed self-renewal capacity and clonogenicity in the EpCAM-expressing CSC model; EpCAM was used as a specific CSC marker in this study. Importantly, PEITC markedly suppressed both tumor growth and the expression of the three pluripotency factors in the mouse xenograft model. Conclusion: These results demonstrate that PEITC might be able to slow down or prevent cancer recurrence by suppressing CSC stemness. (C) 2017 Elsevier GmbH. All rights reserved. Background: Most studies reveal that the mechanism of action of propolis against bacteria is functional rather than structural and is attributed to synergism between the compounds in the extracts. Hypothesis/Purpose: Propolis is said to inhibit bacterial adherence and division, water-insoluble glucan formation, and protein synthesis. However, it has been shown that the mechanism of action of Russian propolis ethanol extracts is structural rather than functional and may be attributed to the metals found in propolis. If the metals found in propolis are removed, cell lysis still occurs, and these modified extracts may be used in the prevention of medical and biomedical implant contamination.
Study design: The antibacterial activity of metal-free Russian propolis ethanol extracts (MFRPEE) on two biofilm-forming bacteria, penicillin-resistant Staphylococcus aureus and Escherichia coli, was evaluated using MTT and a Live/Dead staining technique. Toxicity studies were conducted on mouse osteoblast (MC-3T3) cells using the same viability assays. Methods: In the MTT assay, biofilms were incubated with MTT at 37 degrees C for 30 min. After washing, the purple formazan formed inside the bacterial cells was dissolved by SDS and then measured using a microplate reader with the detecting and reference wavelengths set at 570 nm and 630 nm, respectively. Live and dead distributions of cells were studied by confocal laser scanning microscopy. Results: Complete biofilm inactivation was observed when biofilms were treated for 40 h with 2 mu g/ml of MFRPEE. Results indicate that the metals present in propolis possess antibacterial activity but do not have an essential role in the antibacterial mechanism of action. Additionally, metals at the same concentration as found in propolis samples were toxic to tissue cells. Comparable to samples with metals, metal-free samples caused damage to the cell membrane structures of both bacterial species, resulting in cell lysis. Conclusion: Results suggest that the structural mechanism of action of Russian propolis ethanol extracts stems predominantly from the organic compounds. Further studies revealed drastically reduced toxicity to mammalian cells when metals were removed from Russian propolis ethanol extracts, suggesting a potential for medical and biomedical applications. Published by Elsevier GmbH. Background: Human noroviruses (HuNoV), which are responsible for acute gastroenteritis, are becoming a serious public health concern worldwide. Since no effective antiviral drug or vaccine for HuNoV has been developed yet, some natural extracts and their active components have been investigated for their ability to inhibit noroviruses. 
However, their exact antiviral mechanisms have not been investigated. Purpose: This study was performed to investigate the expression of interferon (IFN)-alpha, IFN-lambda, tumor necrosis factor-alpha (TNF-alpha), Mx, zinc finger CCCH-type antiviral protein 1 (ZAP), 2'-5' oligo (A) synthetase (OAS), and inducible nitric oxide synthase (iNOS) in RAW 264.7 cells pre-treated with fisetin, daidzein, quercetin, epigallocatechin gallate (EGCG), and epicatechin gallate (ECG), which have anti-noroviral activity. Study design: Based on the antiviral activity of the five flavonoids, recently reported by our group, the expression of antiviral factors such as IFN-alpha, IFN-lambda, TNF-alpha, IL-1 beta, IL-6, Mx, ZAP, OAS, and iNOS was investigated in RAW 264.7 cells pre-treated with these flavonoids. Methods: The anti-noroviral effect was determined by performing a plaque assay on cells treated with each flavonoid. RAW 264.7 cells were treated with fisetin, daidzein, quercetin, EGCG, and ECG. Then, mRNA levels of IFN-alpha, IFN-lambda, TNF-alpha, IL-1 beta, IL-6, Mx, ZAP, OAS, and iNOS were measured by real-time RT-PCR. IFN-alpha, TNF-alpha, IL-1 beta, and IL-6 proteins were measured by ELISA. Results: Pre-treatment with fisetin (50 mu M), fisetin (100 mu M), EGCG (100 mu M), quercetin (100 mu M), daidzein (50 mu M), and ECG (150 mu M) significantly reduced murine norovirus (MNoV) by 50.00 +/- 7.14 to 60.67 +/- 9.26%. The mRNA levels of IFN-alpha, IFN-lambda, TNF-alpha, Mx, and ZAP were upregulated in RAW 264.7 cells pre-treated with fisetin, quercetin, and daidzein, but not in those pre-treated with EGCG or ECG. Regarding protein levels, IFN-alpha was significantly induced in cells pre-treated with fisetin, quercetin, and daidzein, whereas TNF-alpha was significantly induced only in cells pre-treated with daidzein. 
Conclusion: Pre-treatment of RAW 264.7 cells with the five flavonoids inhibited MNoV by upregulating the expression of antiviral cytokines (IFN-alpha, IFN-lambda, and TNF-alpha) and interferon-stimulated genes (Mx and ZAP). Background: The search for novel antitrypanosomal agents had previously led to the isolation of ellagic acid as a bioactive antitrypanosomal compound in in vitro studies. However, it was not known whether this compound would elicit antitrypanosomal activity under in vivo conditions, which is usually the next step in the drug discovery process. Purpose: Herein, we investigated the in vivo activity of ellagic acid against the bloodstream form of Trypanosoma congolense and its ameliorative effects on trypanosome-induced anemia and organ damage, as well as its inhibitory effects on trypanosomal sialidase. Methods: Rats were infected with T. congolense and were treated with 100 and 200 mg/kg body weight (BW) of ellagic acid for fourteen days. The levels of parasitemia, packed cell volume, and biochemical parameters were measured. Subsequently, T. congolense sialidase was partially purified on a DEAE cellulose column and the mode of inhibition of T. congolense sialidase by ellagic acid was determined. A molecular docking study was also conducted to determine the mode of interaction of ellagic acid with the catalytic domain of T. rangeli sialidase. Results: At doses of 100 and 200 mg/kg (BW), ellagic acid demonstrated a significant (P < 0.05) trypanosuppressive effect for most of the 24-day experimental period. Further, ellagic acid significantly (P < 0.05) ameliorated the trypanosome-induced anemia, hepatic and renal damage, as well as hepatomegaly, splenomegaly, and renal hypertrophy. The trypanosome-associated upsurge in free serum sialic acid, alongside the accompanying reduction in membrane-bound sialic acid, was also significantly (P < 0.05) prevented by the ellagic acid treatment. The T. congolense sialidase was purified 6.6-fold with a yield of 83.8%. 
The enzyme had a K-M and V-max of 70.12 mg/ml and 0.04 mu mol/min, respectively, and was inhibited in a non-competitive pattern by ellagic acid with an inhibition binding constant of 1986.75 mu M. In the molecular docking study, ellagic acid formed hydrogen bonding interactions with the major residues R-39, R-318, and W-124 at the active site of T. rangeli sialidase, with a predicted binding free energy of -25.584 kcal/mol. Conclusion: We conclude that ellagic acid possesses trypanosuppressive effects and could ameliorate the trypanosome-induced pathological alterations. Background: gamma-Tocotrienol, a vitamin E isomer, possesses pronounced in vitro anticancer activities. However, its in vivo potency has been limited by hardly achievable therapeutic levels owing to inefficient high-dose oral delivery, which leads to subsequent metabolic degradation. Jerantinine A, an Aspidosperma alkaloid originally isolated from Tabernaemontana corymbosa, has proved to possess interesting anticancer activities. However, jerantinine A also induces toxicity to non-cancerous cells. Purpose: We adopted a combinatorial approach with the joint application of gamma-tocotrienol and jerantinine A at lower concentrations in order to minimize toxicity towards non-cancerous cells while improving the potency on brain cancer cells. Methods: The antiproliferative potency of gamma-tocotrienol and jerantinine A, individually and in low-concentration combination, was first evaluated on U87MG cancer and MRC5 normal cells. Morphological changes, DNA damage patterns, cell cycle arrests, and the effects of the individual and combined low-concentration compounds on microtubules were then investigated. Finally, the potential roles of caspase enzymes and apoptosis-related proteins in mediating the apoptotic mechanisms were investigated using an apoptosis antibody array, ELISA, and Western blotting analysis. 
Results: A combinatorial study between gamma-tocotrienol over a concentration range (0-24 mu g/ml) and a fixed IC20 concentration of jerantinine A (0.16 mu g/ml) induced a potent antiproliferative effect on U87MG cells and led to a reduction in the new half maximal inhibitory concentration of gamma-tocotrienol (IC50 = 1.29 mu g/ml) as compared to that of individual gamma-tocotrienol (IC50 = 3.17 mu g/ml). A reduction in undesirable toxicity to MRC5 normal cells was also observed. G0/G1 cell cycle arrest was evident in U87MG cells receiving the IC50 of individual gamma-tocotrienol and the combined low-concentration compounds (1.29 mu g/ml gamma-tocotrienol + 0.16 mu g/ml jerantinine A), whereas a profound G2/M arrest was evident in cells treated with the IC50 of individual jerantinine A. Additionally, individual jerantinine A and the combined compounds (but not individual gamma-tocotrienol) caused a disruption of microtubule networks, triggering Fas- and p53-induced apoptosis mediated via the death receptor and mitochondrial pathways. Conclusions: These findings demonstrate that the combined use of lower concentrations of gamma-tocotrienol and jerantinine A induced potent cytotoxic effects on U87MG cancer cells, resulting in a reduction in the required individual concentrations, thereby minimizing the toxicity of jerantinine A towards non-cancerous MRC5 cells and probably overcoming the high-dose limitation of gamma-tocotrienol. The multi-targeted mechanisms of action of the combination approach have shown therapeutic potential against brain cancer in vitro, and therefore further in vivo investigations using a suitable animal model should be the way forward. (C) 2017 Elsevier GmbH. All rights reserved. In this work we study the bifurcations from the trivial equilibrium of the equation partial derivative u/partial derivative t (x,t) = -u(x,t) + tanh(beta (J*u)(x,t)), in the space of 2T-periodic functions. 
This is accomplished with the help of the global bifurcation equivariant branching lemma, which allows us to take into account the symmetries present in the model. We show that the phenomenon of 'spontaneous symmetry breaking' occurs here, that is, the bifurcating solutions are less symmetric than the trivial one. We also prove that, under certain conditions, these equilibria can be globally continued. (C) 2017 Elsevier Ltd. All rights reserved. In this paper, we investigate the Keller-Segel-Stokes system (K-S-S): n_t + u . del n = Delta n - del . (n del c), T c_t + u . del c = Delta c - c + n, u_t + del P = Delta u + n del phi, del . u = 0, for x is an element of Omega and t > 0, with no-flux boundary conditions for n and c as well as a no-slip boundary condition for u, in a bounded domain Omega subset of R-2 with smooth boundary. For T = 0, it was shown by Lorz (2012) that the corresponding initial value problem has global-in-time solutions for initial mass of cells below some specified value, which may be larger than the well-known critical mass of the corresponding fluid-free system. For the case T = 1, i.e., the parabolic-parabolic Keller-Segel-Stokes system, we show, by some new energy methods that differ from those of known related results, that there also exists a particular value: if the initial mass of cells is below it, then the corresponding initial boundary value problem has global-in-time, bounded solutions. (C) 2017 Elsevier Ltd. All rights reserved. A zero-Hopf equilibrium is an isolated equilibrium point whose eigenvalues are +/- omega i with omega not equal 0, and 0. In general, for such an equilibrium there is no theory for knowing when small-amplitude limit cycles bifurcate from it as the parameters of the system are varied. Here we study the zero-Hopf bifurcation using the averaging theory. 
We apply this theory to a Chua system depending on 6 parameters, but the approach followed for studying the zero-Hopf bifurcation can be applied to any other differential system of dimension 3 or higher. In this paper, we first show that there are three 4-parameter families of Chua systems exhibiting a zero-Hopf equilibrium. Then, by using the averaging theory, we provide sufficient conditions for the bifurcation of limit cycles from these families of zero-Hopf equilibria. For one family we can prove that 1 limit cycle bifurcates, and for the other two families we can prove that 1, 2 or 3 limit cycles bifurcate simultaneously. (C) 2017 Elsevier Ltd. All rights reserved. We investigate the existence of positive ground states for pseudo-relativistic nonlinear Choquard equations. Our results are based on the Nehari manifold technique and rearrangement methods. Furthermore, we also obtain the nonrelativistic limit of ground states in the H-l(R-N) space. (C) 2017 Elsevier Ltd. All rights reserved. In this paper we study the existence of a global solution for a diffusion problem of Kirchhoff type driven by a nonlocal integro-differential operator. As a particular case, we consider the following parabolic equation involving the fractional p-Laplacian: partial derivative_t u + [u]_{s,p}^((lambda-1)p) (-Delta)_p^s u = |u|^(q-2) u in Omega x R^+, with u(x,0) = u_0(x) in Omega and u(x,t) = 0 in (R^N \ Omega) x R_0^+, where partial derivative_t u = partial derivative u/partial derivative t, [u]_{s,p} is the Gagliardo p-seminorm of u, Omega subset of R^N is a bounded domain with Lipschitz boundary partial derivative Omega, p < q < Np/(N - sp) with 1 < p < N/s and s is an element of (0,1), 1 <= lambda < N/(N - sp), and (-Delta)_p^s is the fractional p-Laplacian. Under some appropriate assumptions, we obtain the existence of a global solution for the problem above by the Galerkin method and potential well theory. 
It is worth pointing out that the main result covers the degenerate case, that is, the coefficient of (-Delta)_p^s can vanish at zero. (C) 2017 Elsevier Ltd. All rights reserved. In this paper we prove the existence of a solution to a mathematical model for gas transportation networks on non-flat topography. Firstly, the network topology is represented by a directed graph, and then a nonlinear system of numerical equations is introduced whose unknowns are the pressures at the nodes and the mass flow rates at the edges of the graph. This system is written in a compact vector form in terms of the vector of the square pressures at the nodes, and then an existence result is proved under some simplifying assumptions. The proof is done in two steps: the first one uses convex analysis tools and the second one the Brouwer fixed-point theorem. (C) 2017 Elsevier Ltd. All rights reserved. We consider compressible flow with periodic boundary conditions. In a neighborhood of a state of constant density and nonzero velocity, we prove exact null controllability of the system with a control localized in space and acting only on the momentum equation. (C) 2017 Elsevier Ltd. All rights reserved. Based on the possible mechanism of the recent global resurgence of mumps, a novel multi-group SVEIAR epidemic model with infinite distributed delays of vaccination and latency, asymptomatic infection, and nonlinear incidence is formulated, which is also equivalent to the multi-group version with ages of vaccination and latency. Here, it is assumed that the vaccine confers complete protection against infection. By constructing Lyapunov functionals, it is shown that the disease will die out if the vaccinated reproduction number R-v <= 1 and that the disease becomes endemic if R-v > 1. Moreover, the threshold dynamics of the multi-group SVEIAR epidemic model of ordinary differential equations (ODEs) with imperfect vaccination are also established. 
Our main conclusions generalize and improve some existing results. Finally, by comparing our results with the existing literature, we find that possible reasons for the occurrence of backward bifurcation in epidemic models with vaccination include partial immunity and standard incidence. (C) 2017 Elsevier Ltd. All rights reserved. This paper is devoted to the existence of traveling waves for equations describing a diffusive SEIR model with non-local reaction between the infected and the susceptible. The existence of traveling waves depends on the minimal speed c* and the basic reproduction rate beta/gamma. We use the Laplace transform and the Schauder fixed point theorem to establish the existence and non-existence of traveling waves. We also give some numerical results on the minimal wave speed. (C) 2017 Elsevier Ltd. All rights reserved. We provide high-order approximations to periodic travelling wave profiles through a novel expansion which incorporates the variation of the total mechanical energy of the water wave. We show that these approximations can be extended to any finite order. Moreover, we provide the velocity field and the pressure beneath the waves, in flows with constant vorticity over a flat bed. (C) 2017 Elsevier Ltd. All rights reserved. We study the motion of isentropic gas in nozzles. This is a major subject in fluid dynamics; in fact, the nozzle is utilized to increase the thrust of rocket engines. Moreover, nozzle flow is closely related to astrophysics. These phenomena are governed by the compressible Euler equations, which are among the crucial equations in inhomogeneous conservation laws. In this paper, we consider the unsteady flow and devote ourselves to proving the global existence and stability of solutions to the Cauchy problem for a general nozzle. The theorem has been proved in Tsuge (2013); however, that result is limited to small data. 
Our aim in the present paper is to remove this restriction; that is, we consider large data. Although the subject is important in mathematics, physics, and engineering, it has remained open for a long time. The key difficulty is a bounded estimate of the approximate solutions, because we have methods only for investigating their behavior with respect to the time variable. To solve this, we first introduce a generalized invariant region. Compared with the existing ones, its upper and lower bounds are extended from constants to functions of the space variable. However, we cannot apply the new invariant region to the traditional difference method. Therefore, we introduce a modified Godunov scheme. The approximate solutions consist of functions corresponding to the upper and lower bounds of the invariant regions. These methods enable us to investigate the behavior of the approximate solutions with respect to the space variable. The ideas are also applicable to other nonlinear problems involving similar difficulties. (C) 2017 Elsevier Ltd. All rights reserved. Three main results concerning the infinity-Laplacian are proved. Theorem 1.1 shows that some overdetermined problems associated to an inhomogeneous infinity-Laplace equation are solvable only if the domain is a ball centered at the origin: this is the reason why we speak of constrained radial symmetry. Theorem 1.2 deals with a Dirichlet problem for infinity-harmonic functions in a domain possessing a spherical cavity. The result shows that, under suitable control on the boundary data, the unknown part of the boundary is relatively close to a sphere. Finally, Theorem 1.4 gives boundary conditions implying that the unknown part of the boundary is exactly a sphere concentric to the cavity. Incidentally, a boundary-point lemma of Hopf's type for the inhomogeneous infinity-Laplace equation is obtained. (C) 2017 Elsevier Ltd. All rights reserved. 
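For readers who want the operator written out: the infinity-Laplacian referred to in the preceding abstract is conventionally defined as below (standard notation; the inhomogeneity f here is generic, not the specific right-hand side of the paper).

```latex
% Infinity-Laplacian of a smooth function u : Omega -> R, and the
% inhomogeneous equation underlying the overdetermined problems above:
\Delta_\infty u \;:=\; \sum_{i,j=1}^{n}
  \frac{\partial u}{\partial x_i}\,\frac{\partial u}{\partial x_j}\,
  \frac{\partial^2 u}{\partial x_i \partial x_j},
\qquad \Delta_\infty u = f \quad \text{in } \Omega.
```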
The purpose of this paper is to study the well-posedness of the initial value problem (IVP) for the inhomogeneous nonlinear Schrodinger equation (INLS) i u_t + Delta u + lambda |x|^(-b) |u|^alpha u = 0, where lambda = +/- 1 and alpha, b > 0. We obtain local and global results for initial data in H^s(R^N), with 0 <= s <= 1. To this end, we use the contraction mapping principle based on the Strichartz estimates related to the linear problem. (C) 2017 Elsevier Ltd. All rights reserved. In this article we prove the existence of multi solitary waves of a fourth-order Schrodinger equation (4NLS) which describes the motion of a vortex filament. These solutions behave at large time as a sum of stable Hasimoto solitons. The result is obtained by solving the system backward in time around a sequence of approximate multi solitary waves and showing convergence to a solution with the desired property. The new ingredients of the proof are modulation theory, a virial identity adapted to 4NLS, and energy estimates. Compared to NLS, 4NLS does not preserve the Galilean transform, which constitutes the main difficulty in the spectral analysis of the corresponding linearized operator around the Hasimoto solitons. (C) 2017 Elsevier Ltd. All rights reserved. In James et al. (1995), the authors established a compact framework for the general n x n system of chromatography (1.1) by using the kinetic formulation coupled with the compensated compactness method. However, how to construct suitable approximate solutions {u_i^l} of system (1.1) and then to prove the compactness of eta(u_i^l)_t + q(u_i^l)_x in H_loc^(-1), for the entropy-entropy flux pairs (eta, q) constructed by the kinetic formulation, with respect to the sequence {u_i^l}, was an open problem. In this paper, we construct the approximate solutions {u_i^epsilon} by using the parabolic viscosity method. 
By carefully calculating the Riemann invariants of system (1.1), we obtain all the necessary estimates in the compact framework of James et al. (1995), and give a complete proof of the global existence of weak solutions for the Cauchy problem (1.1) with the bounded, nonnegative initial data (1.2). As a direct by-product, when the total variation of the initial data is bounded, we obtain a simple proof of the existence of global weak solutions by applying the Div-Curl lemma in the compensated compactness theorem to some pairs of functions (c, f(u_i)), where c is a constant. (C) 2017 Elsevier Ltd. All rights reserved. We consider the Cucker-Smale flocking model with a singular communication weight psi(s) = s^(-alpha) with alpha > 0. We provide a critical value of the exponent alpha in the communication weight leading to global regularity of solutions or finite-time collision between particles. For alpha >= 1, we show that there is no collision between particles in finite time if they are placed in different positions initially. For alpha >= 2 we investigate a version of the Cucker-Smale model with expanded singularity, i.e. with weight psi_delta(s) = (s - delta)^(-alpha), delta >= 0. For such a model we provide an estimate, uniform with respect to the number of particles, that controls the delta-distance between particles. In the case delta = 0 it reduces to the collision-avoidance estimate. (C) 2017 Elsevier Ltd. All rights reserved. Reaction-advection-diffusion systems are widely used to model the population dynamics of mutually interacting species in ecology, where diffusion describes the random dispersal of species, advection accounts for the directed dispersal due to interspecific population pressures, and the reaction term represents the population growth of species. 
This paper is devoted to the study of global-in-time solutions and nonconstant positive steady states of a class of two-species Lotka-Volterra competition systems with advection over multi-dimensional bounded domains. We prove the global existence and boundedness of positive classical solutions to the system provided that the sensitivity function decays super-linearly. The existence and stability of nonconstant positive steady states are obtained through rigorous bifurcation analysis. Moreover, numerical simulations of the emergence and evolution of spatial patterns are performed to illustrate and verify our theoretical results. These nonconstant solutions can be used to model the segregation phenomenon arising through inter-specific competition. (C) 2017 Published by Elsevier Ltd. In this paper, we investigate an initial boundary value problem for the 1D compressible Navier-Stokes/Allen-Cahn system, which describes the motion of a mixture of two viscous compressible fluids. We establish the global existence and uniqueness of strong and classical solutions of the Navier-Stokes/Allen-Cahn system by using energy estimates, the structure of the equations, and the properties of one dimension. We emphasize that initial vacuum is allowed. (C) 2017 Elsevier Ltd. All rights reserved. A system of renewal equations on a graph provides a framework to describe the exploitation of a biological resource. In this context, we formulate an optimal control problem, prove the existence of an optimal control, and ensure that the target cost function is polynomial in the control. In specific situations, further information about the form of this dependence is obtained. As a consequence, in some cases the optimal control is proved to be necessarily bang-bang, while in other cases the computations necessary to find the optimal control are significantly reduced. (C) 2017 Elsevier Ltd. All rights reserved. 
In this paper, we consider an interaction system in which a wave equation and a viscoelastic wave equation evolve in two bounded domains, with natural transmission conditions at a common interface. We show the lack of uniform decay of solutions in general domains. The method is based on the construction of ray-like solutions by means of geometric optics expansions and a careful analysis of the transfer of energy at the interface. (C) 2017 Elsevier Ltd. All rights reserved. Mosquito-borne diseases are global health problems which mainly affect low-income populations in the tropics and subtropics. In order to prevent the transmission of mosquito-borne diseases, the intracellular symbiotic bacterium Wolbachia is becoming a promising candidate for interrupting virus transmission. In this paper, an impulsive mosquito population model with general birth and death rate functions is established to study the cytoplasmic incompatibility (CI) effect caused by the mating of Wolbachia-infected males with uninfected females. The dynamics of the spread of Wolbachia in the mosquito population are studied, and strategies for mosquito extinction or for replacing Wolbachia-uninfected mosquitoes with Wolbachia-infected mosquitoes are analyzed. Moreover, the results are applied to models with specific birth and death rate functions. It is shown that strategies may differ depending on the birth and death rate functions, the type of Wolbachia strain, and the initial number of Wolbachia-infected mosquitoes. Furthermore, numerical simulations are conducted to illustrate our conclusions. (C) 2017 Elsevier Ltd. All rights reserved. In this paper, we investigate the periodic homogenization of a nonlinear parabolic equation arising from heat exchange in composite material problems. This problem, defined in a periodic domain, is nonlinear at the interface. 
This nonlinearity models the heat radiation on the interface, which constitutes the transmission boundary conditions between the two components of the material. The main challenge is, first, to show the well-posedness of the microscopic problem using Leray-Schauder topological degree tools. Then, we apply two-scale convergence to identify the equivalent macroscopic model using homogenization techniques. Finally, in order to confirm the efficiency of the homogenization process, we present some numerical results obtained via finite element approximation. (C) 2017 Elsevier Ltd. All rights reserved. In this paper we mainly investigate the Cauchy problem of the finite extensible nonlinear elastic (FENE) dumbbell model in dimension d >= 2. We first prove the local well-posedness for the FENE model in Besov spaces by using the Littlewood-Paley theory. Then, by a refined estimate, we obtain a blow-up criterion. Moreover, if the initial data is a small perturbation around equilibrium, we obtain a global existence result. Our results generalize and cover recent results in Masmoudi (2008). (C) 2017 Elsevier Ltd. All rights reserved. Classical global solutions are established for a model of compressible radiative flow in a slab under semi-reflexive boundary conditions, using energy-entropy estimates and a homotopic version of the Leray-Schauder fixed point theorem together with classical Friedman-Schauder estimates for linear second-order parabolic equations in boundary Holder spaces. (C) 2017 Published by Elsevier Ltd. The present study was designed to assess the bioequivalence of two agomelatine formulations (25-mg tablets) in healthy Chinese male subjects. This single-dose, open-label, randomized, four-way replicate study with a 1-week washout period was conducted in 60 healthy Chinese male volunteers under fasting conditions. 
Blood samples were collected over a 12-h period after a single dose of the 25-mg agomelatine test (T) formulation or a reference (R) formulation, and the drug concentrations were assayed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Pharmacokinetic parameters were calculated using a noncompartmental model. Bioequivalence between the formulations was assessed. Tolerability and safety were monitored by physical examination, electrocardiogram (12-lead ECG), clinical laboratory tests, and adverse events (AEs). A total of 56 out of 60 subjects completed the study. No AEs were observed. The values of maximum plasma concentration (C-max), time to maximum concentration (T-max), area under the curve (AUC)(0-t), and t(1/2) were 12.032 ng/mL, 0.658 h, 12.637 ng.h/mL, and 0.813 h, respectively, for the test formulation, and 10.891 ng/mL, 0.709 h, 11.572 ng.h/mL, and 0.96 h, respectively, for the reference formulation. The intra-individual variabilities of C-max and AUC(0-t) were 78.3 and 61.8%, respectively. The inter-individual coefficients of variance (CVs) of C-max and AUC(0-t) were approximately 100%. The 90% confidence intervals for the ratios of means for the log-transformed C-max (97.7-124.9%), AUC(0-t) (98.2-118%), and AUC(0-infinity) (97.8-117.2%) were within the guideline range for bioequivalence (80-125%). The test and reference formulations of agomelatine met the regulatory criteria for bioequivalence of the Chinese Food and Drug Administration. Significant intra-individual and inter-individual variations were found. RSPP050 (AG50) is a semi-synthetic analogue of andrographolide, which is isolated from Andrographis paniculata NEES (Acanthaceae). The anti-proliferative effects of AG50 against cholangiocarcinoma (HuCCT1) cells showed high cytotoxicity. Unfortunately, the poor water solubility of AG50 has limited its clinical applications. 
This study aimed to increase the concentration of AG50 in water and to study drug loading and release in phosphate-buffered saline (PBS) in the absence/presence of pig liver esterase enzyme. The cytotoxicity of AG50-loaded polymeric micelles was evaluated against HuCCT1 cells. AG50-loaded micelles were prepared by film sonication and encapsulated by polymers, namely poly(ethylene glycol)-b-poly(epsilon-caprolactone) (PEG-b-PCL) or poly(ethylene glycol)-b-poly(D,L-lactide) (PEG-b-PLA). Micelle properties such as solubility, drug loading, drug release, and in vitro cytotoxicity against HuCCT1 cells were characterized. AG50 was successfully loaded into both types of polymeric micelles. The best drug-polymer (D/P) ratio was 1:9. AG50/PCL- and AG50/PLA-micelles had small particle sizes (36.4 +/- 5.1 and 49.0 +/- 2.7 nm, respectively) and high yields (58.2 +/- 1.8 and 58.8 +/- 2.9, respectively). AG50/PLA-micelles (IC50 = 2.42 mu g/mL) showed higher cytotoxicity against HuCCT1 than AG50/PCL-micelles (IC50 = 4.40 mu g/mL) due to the higher amount of AG50 released. Nanoencapsulation of AG50 could provide a promising development in clinical use for cholangiocarcinoma treatment. Streptococcus pneumoniae (pneumococcus) is an important causative agent of acute invasive and non-invasive infections. Pneumolysin is one of a considerable number of virulence traits produced by pneumococcus that exhibits a variety of biological activities, thus making it a target of small-molecule drug development. In this study, we show that morin, a natural compound that has no antimicrobial activity against S. pneumoniae, is a potent neutralizer of pneumolysin-mediated cytotoxicity and genotoxicity, acting by impairing oligomer formation, and that it possesses the capability of mitigating tissue damage caused by pneumococcus. 
These findings indicate that morin could be a potent candidate for a novel therapeutic or auxiliary substance to treat infections for which there are inadequate vaccines and that are resistant to traditional antibiotics. The objective of this study was to develop sustained-release matrix tablets of atenolol (AT) using different concentrations of a polyvinyl acetate-polyvinylpyrrolidone mixture (KSR) (20, 30, or 40%) with various types of fillers such as spray-dried lactose (SP.D.L), Avicel PH 101 (AV), and Emcompress (EMS). The physical characteristics of the prepared tablets were evaluated. Characterization of the optimized formulation was performed using Fourier transform (FT)-IR spectroscopy and differential scanning calorimetry (DSC). Moreover, the in vitro release profiles of AT formulations were investigated in dissolution media of different pH. Drug release kinetics and mechanisms were also determined. The results revealed no potential incompatibility of the drug with the polymer. The release profiles of AT were affected by the concentration of KSR, the fillers used, and the pH of the dissolution media. The drug release kinetics of most of the formulations obeyed the Higuchi diffusion model. The selected formulae were investigated for stability by storage at 30 and 40 degrees C with atmospheric humidity and 75% relative humidity (RH), respectively. The results demonstrated no change in the physicochemical properties of the tablets stored at 30 degrees C/atmospheric RH, in contrast to some changes at 40 degrees C/75% RH. Finally, the in vivo study provided evidence that the optimized AT tablet containing 40% KSR and SP.D.L exhibited markedly higher oral bioavailability and a more efficient sustained-release effect than the drug alone or the commercial tablet product. It is noteworthy that KSR could be considered a promising release retardant for the production of AT sustained-release matrix tablets.
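The Higuchi diffusion model invoked above describes cumulative release as Q(t) = k*sqrt(t). A minimal sketch of fitting this model to release data follows; the data points are hypothetical, for illustration only:

```python
import math

def fit_higuchi(times_h, released_pct):
    """Least-squares fit of the Higuchi model Q = k*sqrt(t) through the
    origin; returns the rate constant k and the coefficient of
    determination R^2."""
    x = [math.sqrt(t) for t in times_h]
    k = sum(xi * qi for xi, qi in zip(x, released_pct)) / sum(xi * xi for xi in x)
    mean_q = sum(released_pct) / len(released_pct)
    ss_res = sum((qi - k * xi) ** 2 for xi, qi in zip(x, released_pct))
    ss_tot = sum((qi - mean_q) ** 2 for qi in released_pct)
    return k, 1.0 - ss_res / ss_tot

# Hypothetical cumulative-release data (time in h, % released)
t = [1, 2, 4, 6, 8]
q = [20.1, 28.0, 40.2, 49.0, 56.5]
k, r2 = fit_higuchi(t, q)  # a high R^2 supports a diffusion-controlled mechanism
```

In practice the Higuchi fit would be compared against alternative models (zero-order, first-order, Korsmeyer-Peppas) to identify the release mechanism.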
In this study, three new spectrophotometric methods have been developed and validated for the simultaneous determination of a ternary mixture of metronidazole (MET), diloxanide (DLX), and mebeverine HCl (MEB) without prior separation steps. The newly introduced methods consist of several steps utilizing either zero-order or ratio spectra without the need for derivatization. The developed methods are called the area under the curve ("AUC"), modified absorption factor (MAFM), and modified amplitude center (MACM) spectrophotometric methods. The selectivity and validity of the methods were checked using different synthetic mixtures and by analysis of their combined dosage form, where low standard deviation values and good percentage recoveries were obtained. Additionally, method linearity, accuracy, and precision were determined following United States Pharmacopoeia (USP) recommendations. The obtained results were found to agree with the reported ones when statistically compared using a one-way ANOVA test. These methods are easily applied during drug quality control studies and in laboratories lacking the facilities for chromatographic methods of analysis. The data manipulation steps are very simple; hence, these methods can be considered time- and money-saving compared to chromatographic methods of analysis. Sulfoquinovosyl acylpropanediol (SQAP), a chemically modified analogue of sulfoquinovosyl acylglycerol (SQAG) that occurs in sea algae, has been reported to show a variety of biological activities, including accumulation in tumor cells and the inhibition of tumor cell growth. We report herein a new, concise, and versatile synthesis of SQAP itself and of derivatives bearing iodoaryl groups and boron clusters. This method should be useful for the design and synthesis of SQAG/SQAP derivatives for the diagnosis and treatment of cancer and related diseases.
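The area-under-the-curve ("AUC") spectrophotometric method described earlier in this section rests on integrating the absorption spectrum between two wavelengths. A minimal sketch of that integration by the trapezoidal rule, using a hypothetical Gaussian absorption band (all values illustrative, not from the reported methods):

```python
import math

def band_auc(wavelengths_nm, absorbance, lo_nm, hi_nm):
    """Trapezoidal area under an absorption spectrum between two wavelengths."""
    pts = [(w, a) for w, a in zip(wavelengths_nm, absorbance)
           if lo_nm <= w <= hi_nm]
    return sum((w2 - w1) * (a1 + a2) / 2.0
               for (w1, a1), (w2, a2) in zip(pts, pts[1:]))

# Hypothetical spectrum: a Gaussian band centered at 280 nm, 1-nm sampling
wl = [float(w) for w in range(220, 341)]
ab = [0.8 * math.exp(-((w - 280.0) / 15.0) ** 2) for w in wl]
area = band_auc(wl, ab, 260.0, 300.0)  # integrated absorbance over 260-300 nm
```

In AUC-type methods, such band areas measured at selected wavelength ranges are related to analyte concentrations through calibration, allowing components of a mixture to be quantified without separation.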
We describe herein a manganese(IV) oxide-mediated oxidation of N-p-methoxyphenyl (PMP)-protected glycine derivatives for the synthesis of alpha-imino carboxylic acid derivatives. Using this methodology, the use of unstable glyoxylic acid derivatives was avoided. Furthermore, using this methodology we synthesized novel alpha-imino carboxylic acid derivatives such as an alpha-imino phenyl ester, perfluoroalkyl esters, imides, and a thioester. The asymmetric Mannich reaction of these novel imine derivatives with 1,3-dicarbonyl compounds is also described, and the novel alpha-imino imide gave improved chemical yield and stereoselectivity compared with those obtained by use of the conventional alpha-imino ester-type substrate. An efficient synthesis of ODM-201's diastereomers has been developed from (R)-methyl 3-hydroxybutanoate or (S)-methyl 3-hydroxybutanoate, respectively, with high overall yield and excellent diastereomeric purity. The key step in this synthesis is the preparation of the key intermediate (R)-5-(1-((tert-butyldimethylsilyl)oxy)ethyl)-1H-pyrazole-3-carboxylic acid or (S)-5-(1-((tert-butyldimethylsilyl)oxy)ethyl)-1H-pyrazole-3-carboxylic acid through intramolecular 1,3-dipolar cycloaddition of the vinyl diazo carbonyl compounds. A new pyranonaphthoquinone derivative, named 4-oxo-rhinacanthin A (1), was isolated from the roots of the Indonesian Rhinacanthus nasutus together with two known congeners, rhinacanthin A (2) and 3,4-dihydro-3,3-dimethyl-2H-naphtho[2,3-b]pyran-5,10-dione (3). The structure of 1 was elucidated based on its spectroscopic data. The absolute configuration of 1 was assigned by comparing its experimental electronic circular dichroism (ECD) spectrum with the calculated ECD spectrum. Compounds 2 and 3 inhibited the growth of Staphylococcus aureus with inhibition zones of 16 and 20 mm at 25 mu g/disc, respectively. Compound 3 also exhibited inhibitory activity against Mycobacterium smegmatis (20 mm at 25 mu g/disc).
Two new naphthoquinones (smenocerones A and B, 1 and 2) and four known sesquiterpene cyclopentenones (dactylospongenones A-D, 3-6) were isolated from the sponge Smenospongia cerebriformis living in the East Sea of Vietnam. Their chemical structures were determined by high-resolution electrospray ionization (HR-ESI)-MS and NMR spectroscopic analysis, and by comparison with reported data. The chiroptical properties of compounds 3-6 were examined by experimental measurement and theoretical calculation of circular dichroism (CD) spectra to establish their absolute configurations. Compound 2 exhibited significant cytotoxic activity towards lung carcinoma (LU-1), hepatocellular carcinoma (HepG-2), promyelocytic leukemia (HL-60), breast carcinoma (MCF-7), and melanoma (SK-Mel-2) human cancer cells with IC50 values of 5.5 +/- 0.8, 3.2 +/- 0.2, 4.0 +/- 0.7, 4.1 +/- 0.8, and 5.7 +/- 1.1 mu g/mL, respectively. Fifteen steroids, including two new compounds, leptosteroid (1) and 5,6 beta-epoxygorgosterol (2), were isolated and structurally elucidated from the Vietnamese soft coral Sinularia leptoclados. Their cytotoxic effect against a panel of eight human cancer cell lines was evaluated using the sulforhodamine B (SRB) method. Significant cytotoxicity was observed for 1 against hepatoma (HepG2, IC50=21.13 +/- 0.70 mu M) and colon adenocarcinoma (SW480, IC50=28.65 +/- 1.53 mu M) cell lines, and for ergost-5-en-3 beta,7 beta-diol (8) against acute leukemia (HL-60, IC50=20.53 +/- 2.26 mu M) and SW480 (IC50=26.61 +/- 1.59 mu M) cells. In addition, 3 beta,7 beta-dihydroxyergosta-5,24(28)-diene (13) showed significant cytotoxic activity against all tested cell lines with IC50 values ranging from 13.45 +/- 1.81 to 29.01 +/- 3.21 mu M. In water, the diketopiperazines cyclo(L-Pro-L-Xxx) and cyclo(L-Pro-D-Xxx) (Xxx=Phe, Tyr) formed an intramolecular hydrophobic interaction between the main skeleton and the benzene ring, and both cyclo(L-Pro-L-Xxx) and cyclo(L-Pro-D-Xxx) adopted a folded conformation.
The conformational change from a folded to an extended conformation upon addition of several deuterated organic solvents (acetone-d(6), methanol-d(4), and dimethyl sulfoxide-d(6) (DMSO-d(6))) and upon a rise in temperature was investigated using H-1-NMR spectra. The results suggested that the intramolecular hydrophobic interaction of cyclo(L-Pro-D-Xxx) was stronger than that of cyclo(L-Pro-L-Xxx). Under basic conditions (1.0x10(-1) mol/L potassium deuteroxide), enolization of the O-1-C-1-C-9-H-9 moiety of cyclo(L-Pro-L-Xxx) occurred, while that of the O-4-C-4-C-3-H-3 moiety did not. Cyclo(L-Pro-L-Xxx) epimerized to cyclo(D-Pro-L-Xxx), while cyclo(L-Pro-D-Xxx) did not change. Extracellular vesicles (EVs) are membrane-bound intercellular communication vehicles that transport proteins, lipids, and nucleic acids with regulatory capacity between cells. RNA profiling using microarrays and sequencing technologies has revolutionized the discovery of EV-RNA content, which is crucial to understanding the molecular mechanisms of EV function. Recent studies have indicated that EVs are enriched with specific RNAs compared to the originating cells, suggestive of an active sorting mechanism. Here, we present a comparative transcriptome analysis of human osteoblasts and their corresponding EVs using next-generation sequencing. We demonstrate that osteoblast EVs are specifically depleted of cellular mRNAs that encode proteins involved in basic cellular activities, such as cytoskeletal functions, cell survival, and apoptosis. In contrast, EVs are significantly enriched with 254 mRNAs that are associated with protein translation and RNA processing. Moreover, mRNAs enriched in EVs encode proteins important for communication with neighboring cells, in particular with osteoclasts, adipocytes, and hematopoietic stem cells.
These findings provide the foundation for understanding the molecular mechanism and function of EV-mediated interactions between osteoblasts and the surrounding bone micro-environment. The GC-rich Binding Factor 2/Leucine Rich Repeat in the Flightless 1 Interaction Protein 1 gene (GCF2/LRRFIP1) is predicted to be alternatively spliced into five different isoforms. Although important peptide sequence differences are expected to result from this alternative splicing, to date, only the gene transcription regulator properties of LRRFIP1-Iso5 have been unveiled. Based on molecular, cellular, and biochemical data, we show here that the five isoforms define two molecular entities with different expression profiles in human tissues, subcellular localizations, oligomerization properties, and enhancer properties on transcription by the canonical Wnt pathway. We demonstrate that LRRFIP1-Iso3, -4, and -5, which share over 80% sequence identity, are primarily located in the cell cytoplasm and form homo- and hetero-multimers with each other. In contrast, LRRFIP1-Iso1 and -Iso2 are primarily located in the cell nucleus, in part thanks to their shared C-terminal domain. Furthermore, we show that LRRFIP1-Iso1 is preferentially expressed in the myocardium and skeletal muscle. Using the in vitro TOPflash reporter assay, we reveal that among LRRFIP1 isoforms, LRRFIP1-Iso1 is the strongest enhancer of the beta-catenin Wnt canonical transcription pathway, thanks to a specific N-terminal domain harboring two critical tryptophan residues (W76, W82). In addition, we show that the Wnt enhancer properties of LRRFIP1-Iso1 depend on its homodimerization, which is governed by its specific coiled-coil domain. Together, our study identifies LRRFIP1-Iso1 as a critical regulator of the Wnt canonical pathway with a potential role in myocyte differentiation and myogenesis.
The orexin (OX1R) and cholecystokinin A (CCK1R) receptors play opposing roles in the migration of the human colon cancer cell line HT-29, and may be involved in the pathogenesis and pathophysiology of cancer cell invasion and metastasis. OX1R and CCK1R belong to family A of the G-protein-coupled receptors (GPCRs), but the detailed mechanisms underlying their functions in solid tumor development remain unclear. In this study, we investigated whether these two receptors heterodimerize, and the results revealed novel signal transduction mechanisms. Bioluminescence and Förster resonance energy transfer, as well as proximity ligation assays, demonstrated that OX1R and CCK1R heterodimerize in HEK293 and HT-29 cells, and that peptides corresponding to transmembrane domain 5 of OX1R impaired heterodimer formation. Stimulation of OX1R and CCK1R heterodimers with both orexin-A and CCK decreased the activation of G alpha q, G alpha i2, G alpha 12, and G alpha 13 and the migration of HT-29 cells in comparison with stimulation with orexin-A or CCK alone, but did not alter GPCR interactions with beta-arrestins. These results suggest that OX1R and CCK1R heterodimerization plays an anti-migratory role in human colon cancer cells. (C) 2017 Elsevier B.V. All rights reserved. The paradigm of a cytoplasmic methionine cycle synthesizing/eliminating metabolites that are transported into/out of the nucleus as required has been challenged by the detection of significant nuclear levels of several enzymes of this pathway. Here, we show that betaine homocysteine S-methyltransferase (BHMT), an enzyme that exerts a dual function in the maintenance of methionine levels and osmoregulation, is a new component of the nuclear branch of the cycle. In most tissues, low expression of Bhmt coincides with a preferential nuclear localization of the protein. Conversely, the liver, with very high Bhmt expression levels, presents a mainly cytoplasmic localization.
Nuclear BHMT is an active homotetramer in normal liver, although the total enzyme activity in this fraction is markedly lower than in the cytosol. N-terminal basic residues play a role in cytoplasmic retention, and the ratio of glutathione species regulates nucleocytoplasmic distribution. The oxidative stress associated with D-galactosamine (Gal) or buthionine sulfoximine (BSO) treatment induces BHMT nuclear translocation, an effect that is prevented by administration of N-acetylcysteine (NAC) and glutathione ethyl ester (EGSH), respectively. Unexpectedly, the hepatic nuclear accumulation induced by Gal is associated with reduced nuclear BHMT activity and a trend towards increased protein homocysteinylation. Overall, our results support the involvement of BHMT in nuclear homocysteine remethylation, although moonlighting roles unrelated to its enzymatic activity in this compartment cannot be excluded. (C) 2017 Elsevier B.V. All rights reserved. Understanding the mechanisms underlying abnormal egg production and pregnancy loss is significant for human fertility. SENP7, a SUMO poly-chain editing enzyme, has been regarded as a mitotic regulator of heterochromatin integrity and DNA repair. Herein, we report the roles of SENP7 in the mammalian reproductive scenario. Mouse oocytes deficient in SENP7 experienced meiotic arrest at the prophase I and metaphase I stages, causing a substantial decrease in mature eggs. Hyperacetylation and hypomethylation of histone H3 and up-regulation of Cdc14B/C, accompanied by down-regulation of CyclinB1 and CyclinB2, were further recognized as contributors to defective M-phase entry and spindle assembly in oocytes. The spindle assembly checkpoint, activated by defective spindle morphogenesis that was in turn caused by mislocalization and ubiquitylation-mediated proteasomal degradation of gamma-tubulin, blocked oocytes at the meiosis I stage.
SENP7-depleted embryos exhibited a severely defective maternal-to-zygotic transition and progressive degeneration, resulting in nearly no blastocyst production. The disrupted epigenetic landscape on histone H3 restricted Rad51C loading onto DNA lesions due to elevated HP1 alpha euchromatic deposition, and reduced DNA 5hmC challenged the permissive status for zygotic DNA repair, inducing embryo death. Our study pinpoints SENP7 as a novel determinant in epigenetic programming and in the major pathways that govern oocyte and embryo developmental programs in mammals. (C) 2017 Elsevier B.V. All rights reserved. Renal fibrosis is a common pathological feature of chronic kidney diseases (CKD), and its development and progression are significantly affected by epigenetic modifications such as aberrant miRNA expression and DNA methylation. Klotho is an anti-aging and anti-fibrotic protein, and its early decline after renal injury is reportedly associated with aberrant DNA methylation. However, the key upstream pathological mediators and the molecular cascade leading to epigenetic Klotho suppression have not been conclusively established. Here we investigate the epigenetic mechanism of Klotho deficiency and its functional relevance in renal fibrogenesis. Fibrotic kidneys induced by unilateral ureteral occlusion (UUO) displayed marked Klotho suppression and promoter hypermethylation. These abnormalities were likely due to deregulated transforming growth factor-beta (TGF beta), since TGF beta alone caused similar epigenetic aberrations in cultured renal cells and TGF beta blockade prevented the alterations in UUO kidneys. Further investigation revealed that TGF beta enhanced DNA methyltransferase (DNMT) 1 and DNMT3a via inhibition of miR-152 and miR-30a in both renal cells and fibrotic kidneys. Accordingly, blockade of either TGF beta signaling or DNMT1/3a activity significantly recovered the Klotho loss and attenuated pro-fibrotic protein expression and renal fibrosis.
Moreover, Klotho knockdown by RNA interference abolished the anti-fibrotic effects of DNMT inhibition in both TGF beta-treated renal cells and UUO kidneys, indicating that TGF beta-mediated miR-152/30a inhibition, DNMT1/3a aberrations, and the subsequent Klotho loss constitute a critical regulatory loop that eliminates Klotho's anti-fibrotic activities and potentiates renal fibrogenesis. Thus, our study delineates a novel epigenetic cascade of renal fibrogenesis and reveals potential therapeutic targets for treating renal fibrosis-associated kidney diseases. (C) 2017 Elsevier B.V. All rights reserved. The human pathogen Pseudomonas aeruginosa induces phosphorylation of the adaptor protein CrkII by activating the non-receptor tyrosine kinase Abl to promote its uptake into host cells. So far, the specific factors of P. aeruginosa that induce Abl/CrkII signalling have been entirely unknown. In this research, we employed human lung epithelial H1299 cells, Chinese hamster ovary cells, and the P. aeruginosa wild-type strain PAO1 to study the invasion of P. aeruginosa into host cells using microbiological, biochemical, and cell biological approaches such as Western blotting, immunofluorescence microscopy, and flow cytometry. Here, we demonstrate that the host glycosphingolipid globotriaosylceramide, also termed Gb3, represents a signalling receptor for the P. aeruginosa lectin LecA to induce CrkII phosphorylation at tyrosine 221. Alterations in Gb3 expression and LecA function correlate with CrkII phosphorylation. Interestingly, phosphorylation of CrkII(Y221) occurs independently of Abl kinase. We further show that Src family kinases transduce the signal induced by LecA binding to Gb3, leading to CrkII(Y221) phosphorylation. In summary, we identified LecA as a bacterial factor that utilizes a so far unrecognized mechanism for phospho-CrkII(Y221) induction by binding to the host glycosphingolipid receptor Gb3.
The LecA/Gb3 interaction highlights the potential of glycolipids to mediate signalling processes across the plasma membrane and should be further elucidated to gain deeper insights into this non-canonical mechanism of activating host cell processes. Patients with inflammatory bowel disease (IBD) often suffer from chronic and relapsing intestinal inflammation that favors the development of colitis-associated cancer. The alteration of epithelial intestinal barrier function observed in IBD is thought to be a consequence of stress. It has been proposed that corticotrophin-releasing factor receptor 2 (CRF2), one of the two receptors of CRF, the principal neuromediator of stress, acts on cholinergic nerves to induce stress-mediated epithelial barrier dysfunction. Non-neuronal acetylcholine (Ach) and muscarinic receptors (mAchR) also contribute to alterations of epithelial cell functions. In this study, we investigated the mechanisms through which stress and Ach modulate epithelial cell adhesive properties. We show that Ach-induced activation of mAchR in HT-29 cells results in cell dissociation together with changes in cell-matrix contacts, which correlates with the acquisition of invasive potential consistent with a matrix metalloproteinase (MMP) mode of invasion. These processes result from subsequent mAchR-mediated stimulation of the Src/Erk and FAK activation cascade. Ach-induced secretion of laminin 332 leads to alpha 3 beta 1 integrin activation and RhoA-dependent reorganization of the actin cytoskeleton. We show that Ach-mediated effects on cell adhesion are blocked by astressin 2b, a CRF2 antagonist, suggesting that Ach action depends partly on CRF2 signaling. This is reinforced by the fact that Ach-mediated activation of mAchR stimulates both the synthesis and the release of CRF2 ligands in HT-29 cells (effects blocked by atropine).
In summary, our data provide evidence for a novel intracellular circuit involving mAchR acting on CRF2 signaling that could mediate colonic mucosal barrier dysfunction and exacerbate mucosal inflammation. Synthetic triterpenoids are a class of anti-cancer compounds that target many cellular functions, including apoptosis and cell growth, in both cell culture and animal models. We have shown that triterpenoids inhibit cell migration in part by interfering with Arp2/3-dependent branched actin polymerization in lamellipodia (To et al., 2010). Our current studies reveal that Glycogen Synthase Kinase 3 beta (GSK3 beta), a kinase that regulates many cellular processes, including cell adhesion dynamics, is a triterpenoid-binding protein. Furthermore, triterpenoids were observed to inhibit GSK3 beta activity and increase cellular focal adhesion size. To further examine whether these effects on focal adhesions in triterpenoid-treated cells were GSK3 beta-dependent, GSK3 beta inhibitors (lithium chloride and SB216763) were used to examine cell adhesion and morphology as well as cell migration. Our results showed that GSK3 beta inhibitors also altered focal adhesion size. Moreover, these inhibitors blocked cell migration and displaced proteins at the leading edge of migrating cells, consistent with what was observed in triterpenoid-treated cells. Therefore, the triterpenoids may affect cell migration via a mechanism that targets and alters the activity and localization of GSK3 beta. The molecular action of artemisinins (ARTs) is not well understood. To determine the molecular and cellular basis that might underlie their differential effects observed in anti-malarial and anti-cancer studies, we utilized the yeast Saccharomyces cerevisiae to examine their toxicity profiles and properties.
Previously, we reported that while both low levels (2-8 mu M) of artemisinin (ART) and dihydroartemisinin (DHA) partly depolarize the mitochondrial membranes, inhibiting yeast growth on non-fermentable media, only DHA at moderate levels (such as 40 mu M) potently represses yeast growth on fermentable media via a heme-mediated pathway. Here we show that the lack of toxicity of ART even at high levels (200-400 mu M) on fermentable medium is due to the presence of Sod1. While we expected this normally Sod1-suppressed action to be heme-mediated like that of DHA, surprisingly, this toxicity of ART is due to further depolarization of the mitochondrial membrane. We also found that for DHA, the Sod1-suppressible anti-mitochondrial action is hidden by its heme-mediated cytotoxicity, and becomes readily noticeable only when the heme-mediated action is compromised and Sod1 is inactivated. Based on these findings, we propose that depending on the cell type and particular compound, ARTs work via one or more of three types of activities: a Sod1-independent, partial mitochondria-depolarizing action; a Sod1-suppressible, more severe mitochondria-depolarizing action; and a heme-mediated general cytotoxicity. These action properties may underlie the disparities seen in the efficacy and toxicity of various ARTs, and additionally suggest that it is important for researchers to clearly identify the particular compound when reporting on the effects of ARTs. Endoplasmic reticulum (ER) stress is characterized by an accumulation of misfolded proteins, and ER stress reduction is essential for maintaining tissue homeostasis. However, the molecular mechanisms that protect cells from ER stress are not completely understood. The present study investigated the role of sestrin 2 (SESN2) in ER stress and sought to elucidate the mechanism responsible for the hepatoprotective effect of SESN2 in vitro and in vivo. Treatment with tunicamycin (Tm) increased SESN2 protein and mRNA levels and reporter gene activity.
Activating transcription factor 6 (ATF6) bound to unfolded protein response elements of the SESN2 promoter, transactivated SESN2, and increased SESN2 protein expression. In addition, a dominant negative mutant of ATF6 alpha and siRNA against ATF6 alpha blocked the ER stress-mediated SESN2 induction, whereas chemical inhibition of PERK or IRE1 did not affect SESN2 induction by Tm. Ectopic expression of SESN2 in HepG2 cells inhibited Tm-induced CHOP and GRP78 expression. Moreover, SESN2 decreased JNK and p38 phosphorylation and PARP cleavage, and blocked the cytotoxic effect of excessive ER stress. In a Tm-induced liver injury model, adenoviral delivery of SESN2 in mice decreased serum ALT, AST, and LDH activities and the mRNA levels of CHOP and GRP78 in hepatic tissues. Moreover, SESN2 reduced the number of degenerating hepatocytes and inhibited caspase 3 and PARP cleavage. These results suggest that ATF6 is essential for ER stress-mediated SESN2 induction, and that SESN2 acts as a feedback regulator to protect the liver from excess ER stress. Parkin/PINK1-mediated mitophagy is implicated in the pathogenesis of Parkinson's disease (PD). Prior to the elimination of damaged mitochondria, Parkin translocates to mitochondria and induces mitochondrial clustering. While the mechanism of PINK1-dependent Parkin redistribution to mitochondria is now becoming clear, the role of mitochondrial clustering has been less well understood. In our study, we found that loss of p62 disrupted mitochondrial aggregation and specifically sensitized Parkin-expressing cells to apoptosis induced by mitochondrial depolarization. Notably, altering mitochondrial aggregation through p62 regulation or other means was sufficient to affect such apoptosis. Moreover, disruption of mitochondrial aggregation promoted proteasome-dependent degradation of outer mitochondrial membrane (OMM) proteins. The accelerated degradation in turn facilitated cytochrome c release from mitochondria, leading to apoptosis.
Together, our study demonstrates a protective role of mitochondrial clustering in mitophagy and helps in understanding how aggregation defends cells against stress. Dysregulation of G protein-coupled receptors (GPCRs) is known to be involved in the pathogenesis of a variety of diseases, including cancer initiation and progression. Within this family, approximately 140 GPCRs have no known endogenous ligands, and these "orphan" GPCRs remain poorly characterized. The orphan GPCR GPR19 was identified and cloned two decades ago, but relatively little is known about its physiopathological relevance. We observed its expression to be elevated in breast cancers and therefore sought to investigate its potential role in breast cancer pathology. In this work, we show that overexpression of GPR19 drives mesenchymal-like breast cancer cells to adopt an epithelial-like phenotype, as demonstrated by the upregulation of E-cadherin expression and changes in functional behavior. We confirm a previous report that the peptide adropin is an endogenous ligand for GPR19. We further show that adropin-mediated activation of GPR19 activates the MAPK/ERK1/2 pathway, which is essential for the observed upregulation of E-cadherin and the accompanying phenotypic changes. The recapitulation of epithelial characteristics at secondary tumor sites is now understood to be an essential step in the colonization process. Taken together, our work shows for the first time that GPR19 plays a potential role in metastasis by promoting the mesenchymal-epithelial transition (MET) through the ERK/MAPK pathway, thus facilitating colonization by metastatic breast tumor cells. If no fertilization occurs for a prolonged time following ovulation, oocytes experience a time-dependent deterioration in quality both in vivo and in vitro due to a process called postovulatory aging.
Because the postovulatory aging of oocytes has marked detrimental effects on embryo development and offspring, many efforts have been made to unveil the underlying mechanisms. Here we show that translationally controlled tumor protein (TCTP) regulates spindle assembly during postovulatory aging and prevents deterioration in mouse oocyte quality. Spindle dynamics decreased with reduced TCTP levels during aging of mouse oocytes. Knockdown of TCTP accelerated the reduction of spindle dynamics, accompanied by aging-related deterioration of oocyte quality. Conversely, overexpression of TCTP prevented the aging-associated decline of spindle dynamics. Moreover, the aging-related abnormalities in oocytes were rescued after TCTP overexpression, thereby improving fertilization competence and subsequent embryo development. Therefore, our results demonstrate that TCTP-mediated spindle dynamics play a key role in maintaining oocyte quality during postovulatory aging and that overexpression of TCTP is sufficient to prevent aging-associated abnormalities in mouse oocytes. Cell death depends on the balance between the activities of pro- and anti-apoptotic factors. X-linked inhibitor of apoptosis protein (XIAP) plays an important role in the cytoprotective process by inhibiting the caspase cascade and regulating pro-survival signaling pathways. While searching for novel interacting partners of XIAP, we identified Fas-associated factor 1 (FAF1). Contrary to XIAP, FAF1 is a pro-apoptotic factor that also regulates several signaling pathways in which XIAP is involved. However, the functional relationship between FAF1 and XIAP is unknown. Here, we describe a new interaction between XIAP and FAF1 and the functional implications of their opposing roles in cell death and NF-kappa B signaling. Our results clearly demonstrate the interaction of XIAP with FAF1 and define the specific region of the interaction.
We observed that XIAP is able to block FAF1-mediated cell death by interfering with the caspase cascade, and that it directly interferes with NF-kappa B pathway inhibition by FAF1. Furthermore, we show that XIAP promotes ubiquitination of FAF1. Conversely, FAF1 does not interfere with the anti-apoptotic activity of XIAP, despite binding to the BIR domains of XIAP; however, FAF1 does attenuate XIAP-mediated NF-kappa B activation. Altered expression of both factors has been implicated in degenerative and cancerous processes; therefore, studying the balance between XIAP and FAF1 in these pathologies will aid in the development of novel therapies. Recent work has shown that deregulation of the transcription factor Myb contributes to the development of leukemia and several other human cancers, making Myb and its cooperation partners attractive targets for drug development. By employing a myeloid Myb-reporter cell line, we have identified Withaferin A (WFA), a natural compound that exhibits anti-tumor activities, as an inhibitor of Myb-dependent transcription. Analysis of the inhibitory mechanism of WFA showed that WFA is a significantly more potent inhibitor of C/EBP beta, a transcription factor cooperating with Myb in myeloid cells, than of Myb itself. We show that WFA covalently modifies specific cysteine residues of C/EBP beta, resulting in disruption of the interaction of C/EBP beta with the co-activator p300. Our work identifies C/EBP beta as a novel direct target of WFA and highlights the role of p300 as a crucial co-activator of C/EBP beta. The finding that WFA is a potent inhibitor of C/EBP beta suggests that inhibition of C/EBP beta might contribute to the biological activities of WFA.
Resorcinol-formaldehyde xerogels are ideal desiccant materials: the high concentration of hydroxyl groups on their surfaces confers on them a high hydrophilicity, enabling them to adsorb moisture from their surroundings, while their large porosity provides them with a high water sorption capacity. In this study, the porosity of organic xerogels was tailored by adjusting the proportion of methanol in the precursor solution in order to optimize their desiccant capability. It was found that, although an increase in microporosity improves the performance of the desiccant, mesoporosity is a more important property for this application. When their porosity is optimized, organic xerogels are excellent desiccants that are able to adsorb more than 80% of their own weight in moisture and function efficiently for more than 3000 h. This is a great improvement on the commonly used silica gels, which become saturated after only 150 h and can adsorb a maximum of only 40% of their own weight in moisture. Moreover, RF xerogels have the advantage of being organic materials resistant to acid attack, which allows them to be used in processes where conventional inorganic desiccants would rapidly deteriorate. (C) 2017 Elsevier Inc. All rights reserved. Acidic beta zeolites with a wide range of Si/Al ratios were desilicated and dealuminated using an alkali-acid treatment approach involving hydrothermal alkali pre-etching. The beta zeolites subjected to mesopore modification were characterized by XRD, N-2 physisorption, SEM, TEM, EDX, ICP-AES, NH3-TPD, pyridine-adsorption FTIR, and immersion porosimetry to examine changes in structural/textural and acidic properties with initial Si/Al ratio. Benzene alkylation with 1-dodecene was used as a model reaction to elucidate the influence of the mediated acidity and texturally mesoporous structure on bulky hydrocarbon transformations. 
The resulting beta zeolites were shown to have hierarchical porous structures and regular mesopore size distributions. The initial Si/Al ratio appeared to dominate mesoporosity development, although no clear threshold value could be identified as being more instrumental in forming mesopores. The volumetric fraction of intraparticle to total BJH mesopores decreased with increasing initial Si/Al ratio. The modified zeolites showed more than 95% desilication selectivity and less than 5.0% dealumination selectivity over the overall treatment process. T-atom removal efficiencies relative to areal, volumetric, and diametrical factors were consistent with the T-atom removal selectivities. Moreover, the dependency of benzene alkylation activity on mesopore size distribution was observed from changes in 1-dodecene conversion and LAB selectivity, both of which were promoted by the hierarchical zeolites. Although both samples had comparable acidic properties, the sample subjected to the combined alkali-acid treatments had a narrower mesopore size distribution and hence exhibited enhanced catalytic performance for the selective production of linear alkylbenzenes compared with the purely desilicated sample, which had a broader mesopore size distribution. (C) 2017 Elsevier Inc. All rights reserved. Hierarchical Beta zeolite composed of uniform nanocrystals with high pore volume and external surface area was rapidly synthesized within 4 h at high yield using a layered silicate precursor (H-kanemite) as the silica source. The transformation process of the layered silicate precursor into Beta zeolite was investigated carefully by XRD, FT-IR, and NMR spectroscopies and by SEM and TEM techniques. The TEM and SEM images indicated that the obtained Beta zeolite consists of self-sustaining macrosized zeolitic aggregates assembled from uniform nanosized crystals and possesses intercrystal mesopores with a relatively narrow distribution. 
The hierarchically structured Beta zeolite, with enhanced accessibility for bulky molecules to acid sites in the framework, shows better catalytic behavior in the industrially relevant Friedel-Crafts acylation reaction in comparison with commercially available or traditional Beta zeolite. (C) 2017 Elsevier Inc. All rights reserved. This computational study focuses on the adsorption and diffusion of ethane and ethylene in IsoReticular Metal-Organic Frameworks (IRMOFs). We selected the IRMOF family for the diversity of its linkers, which allows an understanding of the effect that functionalized groups, ligand length, cyclic groups, or interpenetrating cavities have on the accessible pore volume of the structures and on the selective behavior towards the components of the mixture. At atmospheric pressure and 298 K, we found that the smaller interpenetrated structures (IRMOF-9, -11, and -13) exhibit larger adsorption selectivity than their non-interpenetrated counterparts (IRMOF-10, -12, and -14, respectively). Based on these findings we discuss the advantages of using interpenetrated structures for ethane capture. On the other hand, structures with large pore volume such as IRMOF-16 seem to reverse the adsorption selectivity in favor of ethylene. (C) 2017 Elsevier Inc. All rights reserved. In this work, the effect of a cell culture medium on the integrity of the mesoporous silica shell of gold-core nanoparticles (Au@SiO2 NPs) and their behavior in HeLa, MDA-MB, and BT cells were studied using transmission electron microscopy. It was shown that degradation of the Au@SiO2 NPs takes place in the culture medium. The presence of cells and serum does not influence the degradation of the mesoporous silica layer. The presence of both damaged and undamaged NPs in endosomes and lysosomes is related to differences in the time the NPs spend outside the cells, i.e., in the culture medium. 
The analysis of the cell ultrastructure leads to the conclusion that the internal conditions in endosomes/lysosomes do not influence the integrity of the Au@SiO2 NPs. (C) 2017 Elsevier Inc. All rights reserved. We report on femtosecond (fs) studies of HBA-4NP interacting with MCM41, Al-MCM41 and SBA15 materials, and discuss the dynamics of caged monomers and J-aggregates. For MCM41 and Al-MCM41 composites, upon excitation at 380 nm (monomers region), the ultrafast dynamics (250 fs and 2.5 ps for HBA-4NP/MCM41, 350 fs and 3.5 ps for HBA-4NP/Al-MCM41) is slower than that observed in solution due to aggregate formation inside the material pores. Upon excitation at 410 nm, we observed even slower behaviour. However, for HBA-4NP/SBA15 composites with a low loading, where the caged monomers are the main guests, we recorded faster dynamics (200 fs and 2 ps) independently of the pumping wavelength. These results show how the properties of mesoporous materials, especially their pore size and Al doping, affect the nature of the formed composites and their ultrafast photodynamics. (C) 2017 Elsevier Inc. All rights reserved. In this work, an SBA-15 support was impregnated with nickel in order to study the influence of different factors (metal content, support, method of preparation) on its hydrogen storage capacity. H-2 adsorption was measured at low and high pressures (up to 10 and 40 bar) at 77 and 293 K, evaluating the influence of metallic nickel on the adsorption capacity. The properties of the prepared materials were studied by N-2 adsorption-desorption, XRD, TPR, UV-Vis, TEM, SEM and XPS techniques. The results indicated the importance of nickel dispersion on the support for improving hydrogen storage. Thus, the Ni/SBA-15 (2.5) sample, in contrast to Ni/SBA-15 (2.5)-R, shows the higher hydrogen adsorption capacity at both 77 K and 293 K. (C) 2017 Elsevier Inc. All rights reserved. 
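Pressure-dependent H-2 uptake data such as those described above are commonly summarized by fitting a simple isotherm model. A minimal Python sketch, fitting a Langmuir isotherm q = q_max*b*P/(1 + b*P) via its standard linearization P/q = 1/(q_max*b) + P/q_max; the data points and parameter values below are purely illustrative assumptions, not the measurements reported in this work:

```python
import numpy as np

def fit_langmuir(P, q):
    """Fit the Langmuir isotherm q = q_max*b*P/(1+b*P) using the
    linearised form P/q = 1/(q_max*b) + P/q_max (linear least squares)."""
    slope, intercept = np.polyfit(P, P / q, 1)
    q_max = 1.0 / slope               # saturation uptake
    b = 1.0 / (intercept * q_max)     # affinity constant
    return q_max, b

# Hypothetical uptake (wt.%) vs pressure (bar), generated from
# q_max = 4.0 wt.% and b = 0.15 bar^-1 -- illustrative values only.
P = np.array([1, 2, 5, 10, 20, 40], dtype=float)
q = 4.0 * 0.15 * P / (1 + 0.15 * P)

q_max, b = fit_langmuir(P, q)
print(round(q_max, 2), round(b, 3))  # recovers 4.0 and 0.15
```

The linearized fit is convenient at low data counts, though a nonlinear least-squares fit of the original form is usually preferred when measurement noise is significant.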
MIL-101(Cr) and activated carbon (AC) doped MIL-101(Cr) materials were synthesized under mild conditions, avoiding the use of hydrofluoric acid, and characterized by X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and physisorption. It was shown that the AC doping enhanced the specific surface area and increased the pore volume of the adsorbent. Hydrogen adsorption isotherms and kinetics were measured at 77 K up to 50 bar by using a volumetric method. A hydrogen uptake of 9.3 wt.% was measured for the hybrid material, significantly higher than that of pristine MIL-101(Cr), which reached an uptake of 6.2 wt.% under the same conditions of temperature and pressure. In addition, effective diffusion coefficients were extracted from the experimental kinetic curves, using the Linear Driving Force (LDF) model as a first approach. However, this model failed to describe the experimental kinetic data correctly, as it does not explicitly consider the external mass transfer resistance, nor the effects of temperature and surface loading on the intra-crystalline diffusion process. To overcome these limitations, a more detailed model was proposed, based on the evaluation of both an external mass transfer coefficient and an internal surface diffusivity in the adsorbed phase that accounts for the effects of temperature and adsorbent surface coverage. This model was shown to predict the hydrogen adsorption rates well in both the MOF and the hybrid AC-MOF material at 77 K, in the pressure range up to 20 bar. (C) 2017 Elsevier Inc. All rights reserved. 
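The Linear Driving Force model referenced above has a single-parameter closed form, which is precisely why it cannot reproduce film resistance or loading-dependent diffusivity: it forces a single exponential approach to equilibrium. A minimal Python sketch with hypothetical parameters (the equilibrium uptake and rate constant below are illustrative, not the fitted values from this work):

```python
import math

def ldf_uptake(t, q_eq, k):
    """Uptake predicted by the Linear Driving Force (LDF) model:
    dq/dt = k*(q_eq - q), integrated to q(t) = q_eq*(1 - exp(-k*t))."""
    return q_eq * (1.0 - math.exp(-k * t))

# Hypothetical parameters for illustration only:
q_eq = 9.3   # equilibrium uptake, wt.%
k = 0.05     # LDF rate constant, 1/s

for t in (0, 20, 60, 120):
    print(t, "s ->", round(ldf_uptake(t, q_eq, k), 2), "wt.%")
```

Fitting k to a measured uptake curve amounts to a one-parameter regression; any systematic deviation of the data from this exponential shape (e.g. an initial lag from external film resistance) signals that a more detailed model, like the one proposed in the abstract, is needed.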
The effect of energy variation and structure change on the adsorption of a group of volatile organic compounds, including methane, chloromethane, dichloromethane, trichloromethane and carbon tetrachloride (CH4, CH3Cl, CH2Cl2, CHCl3 and CCl4), in zeolitic imidazolate framework-8 was investigated by the density functional theory method, focusing on the swing effect implemented by the reorientation of the 2-methylimidazole linkers. It was shown that the attachment energy is in the order CCl4 (-19.5 kJ/mol) < CHCl3 (-17.0 kJ/mol) < CH4 (-16.5 kJ/mol) < CH2Cl2 (-15.4 kJ/mol) < CH3Cl (-13.8 kJ/mol) and the order of the diffusion energy barrier is CH4 (3.3 kJ/mol) < CH2Cl2 (6.5 kJ/mol) < CH3Cl (13.5 kJ/mol) < CHCl3 (56.9 kJ/mol). It is hoped that this thermodynamic characterization of the adsorption process can serve as a stepping stone toward understanding the adsorption process during uptake experiments and thereby support the theoretical explanation and prediction of the performance of porous materials. (C) 2017 Elsevier Inc. All rights reserved. Inverse liquid chromatography experiments were performed on five mesoporous alumina catalyst supports with similar porosity and different pore size distributions. By varying the size of the molecular tracer, it was shown that the diffusion regime under our conditions is molecular diffusion. Hindered diffusion was not observed even for squalane, a C-30 molecule. Using the slope of the Van Deemter equation, the tortuosity of each alumina support was determined. The results are in disagreement with literature correlations: although all alumina supports had similar total porosities, the measured tortuosity values differ markedly from one another and are much higher than those predicted by these theoretical models. This discrepancy has been resolved by assuming a two-level pore network organization, whose characteristics can be entirely estimated from a classical nitrogen adsorption isotherm. 
This simple methodology makes it possible to evaluate mass transfer in mesoporous alumina supports from their textural properties, which is an important issue for the design and optimization of numerous catalytic processes. (C) 2017 Elsevier Inc. All rights reserved. Hierarchical porous nitrogen-doped carbon (HPNC) derived from chitosan was synthesized through a novel metal ion-templated strategy. In this process, the absorbed Cu(II)-loaded chitosan was calcined at various temperatures, yielding carbon/Cu composites. After removal of the Cu particles in acidic medium, a hierarchical porous carbon forms that includes macropores, mesopores and micropores. The specific surface area of HPNC900 reaches 755 m(2) g(-1). The XPS results show that all the carbon materials contain nitrogen as a doping element owing to the use of chitosan as a nitrogen-containing carbon source. Furthermore, highly dispersed Pt nanoparticles were successfully deposited on the as-prepared HPNC through an in-situ reduction method. The average size of the Pt nanoparticles anchored on HPNC900 is only 2.76 +/- 0.33 nm. The synthesized products were characterized by TEM, SEM and XRD analysis techniques. Among the studied electrocatalysts, the as-prepared Pt/HPNC900 electrocatalyst (HPNC fabricated at 900 degrees C) shows superior electrocatalytic activity and stability towards methanol oxidation in comparison with the Pt/Vulcan XC-72 catalyst. The current density of methanol oxidation on Pt/HPNC900 is approximately 2.5 times higher than that of the Pt/Vulcan XC-72 catalyst. The enhanced electrocatalytic activity is ascribed to the hierarchical porous structure, nitrogen-doped surface characteristics and high BET specific surface area of the HPNC. Therefore, the present method is very useful for the fabrication of high-performance electrocatalysts for methanol oxidation. (C) 2017 Elsevier Inc. All rights reserved. 
Alkali-treated zeolite material was prepared using 0.1-0.4 mol/L NaOH aqueous solution and characterized by XRD, SEM, the N-2 adsorption-desorption technique and ammonia temperature-programmed desorption (NH3-TPD). The as-prepared zeolite was found to be an effective catalyst for the alcoholysis of urea and disubstituted urea to produce alkyl carbamates, achieving 91.8-100% conversion of urea and substituted urea with >98.0% selectivity to alkyl carbamates. The results indicated that the high reactivity of the zeolites for the alcoholysis of urea and N-substituted urea could be mainly ascribed to the enhanced specific surface area and the balance between acidic and basic sites on the HZSM-5 surface caused by the alkali treatment. This catalyst can be easily recovered and reused. (C) 2017 Elsevier Inc. All rights reserved. Lithium concentrations [Li] and isotopic ratios [Li-7]/[Li-6] were measured for effluent fractions from a biphasic zeolite column. The biphasic state was ascribed to a mixture of hydrated Linde Type A (LTA) zeolites, [Li-0.008(NH4)(0.92)]A and [Li-0.33(NH4)(0.67)]A, which were formed by Li ion exchange from the hydrated ammonium form (NH4)(12)[Al12Si12O48]center dot nH(2)O (NH(4)A). The biphasic Li band of the column was displaced by ion exchange with a solution of NH4NO3. A constant [Li], at a much lower level than the concentration of NH4+ in the displacer (NH4NO3), was observed for the effluent from a short column. This constant lower level of [Li] was attributable to the biphasic state. On this [Li] plateau of the effluent, the level of [Li-7] shifted higher than the original isotopic composition of the Li feed, whereas Li-6 was concentrated on the biphasic zeolite solid. The accumulation of Li-6 in the zeolite proceeded by a mechanism of differential elution of Li-7 from the biphasic zeolite. In the long-column experiment, a significant enrichment of Li-6 in the zeolite was observed, whereby a triadic band of Li was probably formed in the column. 
Two monophasic states and a biphasic state were assigned. The biphasic band was deemed to push the monophasic bands forward, thereby enriching the monophasic bands with Li-7, while Li-6 accumulated at the end of the biphasic band. The trio structure of the Li band and the isotopic discrimination within the band were analyzed. (C) 2017 Elsevier Inc. All rights reserved. A mixture of triphenylmethane (CHPh3, 1) and zeolite Y was heated to 423 K. The treatment resulted in the inclusion of 1 in the supercages of zeolite Y, up to 18 wt% (1.2 molecules of 1 per supercage). Similar phenomena were observed not only for 1 but also for derivatives of 1 and for triphenylsilane (7). The inclusion of 1 was completed in less than 0.5 h, while the inclusion of 7 proceeded slowly (similar to 12 h), probably due to the bulk of 7. The included amounts of 1 and 7 tended to increase with decreasing concentration of Al in zeolite Y. The Fourier-transform infrared spectra of 7-Y reveal the presence of three types of guest molecules, based on the appearance of three Si-H peaks in the Si-H stretching region. Molecular dynamics calculations revealed that each of the three phenyl groups of 1 was inserted into the 12-membered rings of zeolite Y to form inclusion compounds. The catalytic activity of leucomalachite green and leucocrystal violet for the Knoevenagel reaction was significantly enhanced by the inclusion of these molecules into the supercages. (C) 2017 Elsevier Inc. All rights reserved. The crystallization of alumino-silicate zeolites of CHA structure in the system Na2O-K2O-Al2O3-SiO2-H2O is kinetically controlled, and occurs only in very narrow windows of composition and hydrothermal conditions. This paper elaborates a new method to synthesize CHA zeolites with Si/Al ratios in a wide range, by adding weak bases to modify the crystallization kinetics in favor of the desired CHA framework. A number of organic and inorganic bases are feasible for this purpose. 
Pure crystalline CHA materials with Si/Al = 2-30 have been obtained. These weak bases, either organic or inorganic, are neither "templates" nor "structure-directing agents". The powder XRD patterns of the obtained CHA-type materials feature both broad and narrow peaks, along with additional peaks belonging to low-symmetry structures. These observations coincide with the earlier reported zeolite Phi, indicating the existence of stacking faults in the ABC-D6R sequences. (C) 2017 Elsevier Inc. All rights reserved. Chemical vapor deposition (CVD) is known for its atom economy, its simple catalyst preparation steps, and the high catalytic activity of the resulting catalysts. In this work, it was applied to the preparation of copper-modified catalysts supported on a Linde Type A zeolite membrane on paper-like stainless steel fibers (LTA/PSSF). A series of catalysts were prepared for acetone oxidation. Several techniques were used to characterize the catalysts, including X-ray diffraction (XRD), scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS) mapping, X-ray photoelectron spectroscopy (XPS), N-2 adsorption and desorption isotherms, and H-2 temperature-programmed reduction (H-2-TPR). The SEM results showed that the zeolite membrane grew successfully on the surface of the PSSF support, and the EDS maps illustrated that the copper oxides coated the support materials uniformly. The Cu/LTA/PSSF catalyst prepared by CVD was tested in a zeolite membrane reactor. The experimental results revealed that higher loading results in better catalytic activity, as the 10% Cu/LTA/PSSF-CVD presented the highest catalytic activity in acetone oxidation (90% of acetone converted at 300 degrees C) when compared with 5% Cu/LTA/PSSF-CVD (322 degrees C) and 1% Cu/LTA/PSSF-CVD (305 degrees C). The CVD method proved to be a feasible and powerful way to prepare the catalyst. 
The experimental results suggested that the T90 of the 10% Cu/LTA/PSSF-CVD is slightly higher than that of 10% Cu/LTA/PSSF prepared by impregnation (IM, 310 degrees C) and much better than that of the granular Cu/LTA catalyst (346 degrees C), although its T50 (268 degrees C) is slightly higher than that of 10% Cu/LTA/PSSF-IM (260 degrees C). (C) 2017 Elsevier Inc. All rights reserved. Ethylenediaminetetraacetic acid functionalized mesoporous silica (EDTA-SBA-15) was synthesized and used to efficiently remove corrosion products such as Co(II), Ni(II), and Cu(II) generated in nuclear reactor coolants. The mesoporosity, long-range order, and functional group loadings were confirmed by N-2 adsorption/desorption, small-angle X-ray scattering (SAXS), transmission electron microscopy (TEM), and Fourier transform infrared spectroscopy (FT-IR) techniques. The improved loading of EDTA on the SBA-15 surface significantly increased its adsorption capacity (1.33-1.44 mmol/g) for the above-mentioned corrosion products. Additionally, fast adsorption equilibrium kinetics (<20 min) were observed, and the experimental data fitted a pseudo-second-order model. Complete sequestration of Co(II), Ni(II), and Cu(II) from their equimolar mixture was achieved with the optimum adsorbent dose. EDTA-SBA-15 had good stability over a wide pH range from 1 to 6 as well as in concentrated ionic media and at temperatures up to 75 degrees C. This study further evaluated the performance of EDTA-SBA-15 under gamma irradiation. The results suggest that it has potential for use in the decontamination of radioactive corrosion products from the primary coolant system of nuclear power plants. (C) 2017 Elsevier Inc. All rights reserved. Multinuclear pulsed field gradient (PFG) NMR was used to study the self-diffusion of carbon dioxide and methane as equimolar mixtures and pure gases inside ZIF-11 crystals. 
Diffusion measurements were performed under conditions where the length scales of the displacements were comparable with or smaller than the mean size of the ZIF-11 crystals. It was found for both carbon dioxide and methane that the intracrystalline diffusivity decreases with increasing diffusion time. Quantitative analysis of these time dependencies using a previously reported model attributed them to the reflection of gas molecules from the external crystal surface. This analysis also allowed the intracrystalline gas diffusivities that are not perturbed by the influence of the external crystal surface to be determined. The ratio of such true intracrystalline diffusivities for CO2 and CH4 is found to be greater in the mixed-sorbate sample than in the single-sorbate samples when the total gas concentration in each sample was the same within experimental error. (C) 2017 Elsevier Inc. All rights reserved. In this work, 8 carbon aerogel samples with resorcinol/catalyst (R/C) ratios ranging from 25 to 1500 were prepared. The textural properties were evaluated from N-2 adsorption isotherms at 77 K and the results were compared with those retrieved from immersion calorimetry using several molecular probes with different diameters. The experimental results show that the aerogel samples can be divided into two series with different properties: Series I (low R/C ratio), mainly microporous, and Series II (high R/C ratio), mainly microporous but with a contribution of mesoporosity. The specific surface area varied between 64 and 990 m(2) g(-1), and the QSDFT model fitted best with a cylinder-slit pore kernel. The experimental results show good concordance between the results obtained from immersion calorimetry in dichloromethane (rather than in benzene) and the corresponding values obtained from the BET method applied to the N-2 adsorption isotherms. 
Similarly, a good match between the pore size distribution (PSD) data obtained from the N-2 adsorption isotherms and from immersion calorimetry was observed, except for a couple of aerogels. Finally, the experimental and modelling results were critically discussed. (C) 2017 Elsevier Inc. All rights reserved. A series of MOx/HZSM-5 catalysts (MOx: PbO, Sb2O3 and Bi2O3) for the catalytic conversion of methanol to propylene were prepared by a wet impregnation method using M(NO3)(x) (M: Pb, Sb and Bi). The catalytic reactions were carried out at 470 degrees C, a weight hourly space velocity (WHSV) of 4 h(-1), p = 0.1 MPa, and a MeOH/H2O molar ratio of 1 in a fixed-bed reactor. The results indicated that the propylene/ethylene (P/E) ratio increased when Pb, Bi, and Sb were introduced into HZSM-5, which resulted in a reduction of the apparent pore size and of the number and strength of the strong acid sites in the HZSM-5. Under the optimized conditions, the 2% PbO/HZSM-5 catalyst exhibited excellent catalytic performance, with a propylene selectivity of 46.05%, an average P/E ratio of 8.80 and a catalyst life of more than 445 h. (C) 2017 Elsevier Inc. All rights reserved. Mesoporous alumina was synthesized in one pot via a polymeric-surfactant-templated sol-gel route with Evaporation-Induced Self-Assembly (EISA), using aluminum isopropoxide (Al(O-i-Pr)(3)) as the aluminum source. Among the several experimental parameters affecting the properties of the synthesized aluminas, the effects of the calcination temperature and the triblock copolymer template (Pluronic P123) content in the gel were studied. The effects of calcination temperature in the range of 700 degrees C-1050 degrees C on parameters such as surface area, pore volume, pore size distribution, phase transformation, and crystal size were experimentally investigated. The influence of the P123/Al(O-i-Pr)(3) mass ratio in the range of 0.49-1.96 on the textural properties of the synthesized alumina was also studied. 
The synthesized mesoporous alumina was characterized using N-2 adsorption-desorption analysis, TGA/DTG and DSC/DTA. The phase identification and crystal domain size were determined using XRD. The morphology of the calcined samples was analyzed using SEM and TEM. The TEM observations and SAXS measurements revealed the meso-structured pore lattice of the samples. The pore diameter distributions were estimated by numerical image analysis of the TEM images. The pore length distributions were also estimated using the scale bars of the TEM images and found to average around 800 nm. The calculated volume pore size distributions (PSD) were found to be in reasonable agreement with the PSDs obtained from NLDFT analysis of the nitrogen adsorption-desorption isotherms. The synthesized gamma-aluminas exhibited thermal stability up to 950 degrees C. At higher temperatures, the alpha-phase was formed and the mesostructure collapsed. Samples obtained with P123/Al(O-i-Pr)(3) = 0.98 and 1.47, calcined at 700 degrees C with a 3 h soaking time, exhibited surface areas of 259 m(2)/g and 238 m(2)/g, respectively. (C) 2017 Elsevier Inc. All rights reserved. Intracrystal-mesoporous ZSM-5 zeolites of varying SiO2/Al2O3 molar ratios and different morphologies have been synthesized by employing a novel organosiloxane (OSO) as the mesopore-directing template. In comparison with previously reported ZSM-5 zeolites with mesoporosity, the mesoporous ZSM-5 zeolites obtained in this paper have a high hierarchical factor (HF, 0.13-0.18), meaning that little microporosity was sacrificed while abundant mesopores were introduced within the zeolite frameworks. The effects of tetrapropylammonium hydroxide (TPAOH), SiO2/Al2O3 molar ratios, crystallization temperature and crystallization time on the morphologies and textural properties of the products are discussed in detail. 
The synthesized mesoporous ZSM-5 zeolites were characterized by X-ray diffraction (XRD), nitrogen sorption, scanning electron microscopy (SEM), transmission electron microscopy (TEM), nuclear magnetic resonance (NMR) spectroscopy, inductively coupled plasma (ICP) optical emission spectroscopy, temperature-programmed desorption of NH3 (NH3-TPD) and thermogravimetric (TG)/differential scanning calorimetry (DSC) techniques. The outstanding hydrothermal stability of the mesoporous ZSM-5 zeolites has been demonstrated by a hydrothermal treatment test. In the Na+ ion-exchange test, the mesoporous zeolites show clear superiority over traditional ZSM-5 zeolite. Preliminary catalytic tests on the cracking of 1,3,5-tri-isopropylbenzene also show excellent conversion activity and product selectivity. (C) 2017 Elsevier Inc. All rights reserved. Aminothermal synthesis, recently reported by our group, in which an organic amine is used as both the solvent and the template, can markedly enhance the solid yield and crystallization rate of SAPO-34 compared with the corresponding hydrothermal synthesis. Herein, the aminothermal crystallization process of SAPO-34 is investigated to gain insights into this novel synthetic method by XRD, SEM, IR, UV-Raman and various solid-state NMR techniques. It is found that the aminothermal environment facilitates the fast formation of a semicrystalline AlPO layered phase, which further promotes the quick activation of the Al source. The lamellar phase, which is water-soluble and contains double 6-rings in the framework, can be stabilized by protonated triethylamine (TEA(+)). SAPO-34 nucleates from the rearrangement of the double 6-rings in the layered phase through bond breaking, reforming and Si incorporation after heating at 160 degrees C for 3 h. For the hydrothermal process, the interaction between TEA(+) and the inorganic species is weak, and the formation of the layered phase is retarded due to its high solubility in water. 
Correspondingly, the activation of the Al source is slow and SAPO-34 appears after 24 h. This work demonstrates the importance of the water concentration in the synthetic system, which may influence the formation of intermediate species and alter the crystallization process and rate. (C) 2017 Elsevier Inc. All rights reserved. The synthesis of zeolites from alternative sources of silicon and aluminium is a promising route to obtaining zeolitic materials. Such materials are typically applied in catalytic and adsorptive processes, in the production of new products, and in separation and purification processes. In order to obtain a material with environmentally friendly features, this research focused on the study of an effective and viable route for the synthesis of Y zeolite from different sources of silicon and aluminium. Two types of metakaolin were used: metakaolin residue (MICR) and metakaolin (MK). An experimental design was carried out as a tool to evaluate the effects of the time and temperature parameters and their influence on the crystallization of zeolites with analytical reagents. The raw materials and the products obtained were characterized by a series of techniques: X-Ray Fluorescence (XRF), X-Ray Diffraction (XRD), Fourier Transform Infrared (FTIR) spectroscopy, BET surface area analysis, and Scanning Electron Microscopy (SEM). The results obtained from the analysis and characterization showed that the route developed through hydrothermal reaction for the synthesis of Y zeolite is significantly efficient. The synthesized zeolites were compared with a commercial zeolite and yielded promising results, thus proving the efficiency of the proposed method. As a result of the experimental planning, time was verified to be a key factor in the crystallinity of the zeolitic material. (C) 2017 Elsevier Inc. All rights reserved. Biomimetic sol-gel synthesis was used to prepare a new FeO(OH)-zeolite (clinoptilolite tuff) adsorbent effective for antimony removal. 
The product was compared with other commercially available natural or synthetic adsorption materials, such as granulated ferric hydroxide (GEH), zero-valent iron (ZVI) powder (nanofer) and the newly synthesized oxy(hydr)oxide FeO(OH), and was characterized by XRD, XPS, Raman, FT-IR, TG, DTA, DTG, TEM and SEM techniques. Based upon the SEM analysis, the oxidized nanofer sample revealed the existence of hematite and goethite, and the morphology of the FeO(OH) dopant confirmed the presence of ferrihydrite and, to a lesser extent, magnetite and hematite. The exothermic maxima recorded on the DTA curves, at 460 degrees C for the powdered FeO(OH) zeolite and at 560 degrees C for the pure FeO(OH) component, indicate a 100 degrees C shift of the exothermic effect, which reflects a strong chemical interaction of FeO(OH) with the zeolite structure. Based upon the XPS analyses, the difference between the Fe species in the raw and FeO(OH)-doped zeolite was also reflected in an increased Si/Al ratio, though only within approximately 5 nm of the surface, measured as 3.94 for the raw sample and 5.38 for the sample treated with alkaline solution. The adsorption isotherms plotted for the system studied clearly showed that the uptake capacity of the adsorbents towards antimony increases with the S(BET) data (GEH > FeO(OH) > FeO(OH) zeolite > nanofer). (C) 2017 Elsevier Inc. All rights reserved. Structured hybrid materials based on an adsorbent-photocatalyst combination have large surface areas, suitable average pore diameters, good dispersion and strong adsorptivity, showing promising performance for environmental applications. In this study, honeycomb-like micro-mesoporous TiO2/sepiolite composites were fabricated using TiCl4 and modified sepiolite (MS) nanofibers as the precursors by a hydrolysis-precipitation method. The effect of the textural, morphological and structural properties of the TiO2/sepiolite composites on the adsorption and photocatalytic degradation of formaldehyde (HCHO) in the gas phase was analyzed. 
The results indicate that thermal treatment at 550 degrees C and a TiO2/MS weight ratio of 1/2 ensure an optimum combination of textural, optical, HCHO adsorption and photocatalytic properties. At ambient temperature (relative humidity of 45% and initial HCHO concentration of 6.56 mg/m(3)), the optimal catalyst could efficiently decompose HCHO into CO2 and H2O, and still showed good performance even after 4 cycles under UV irradiation. The excellent performance of the TiO2/sepiolite is primarily ascribed to the synergistic effect between the adsorbability of the sepiolite nanofibers and the catalytic ability of the TiO2 nanoparticles. (C) 2017 Elsevier Inc. All rights reserved.

In this work, MD simulation was used to determine how the diffusion of acetylene molecules is affected by the type of MOF structure, temperature and loading. Diffusion of 4, 6, and 8 acetylene molecules in MOF-508a and MOF-508b was investigated in the temperature range of 300-900 K. The mean square displacement, the self-diffusion coefficients and the activation energies were calculated for each loading at different temperatures. The results indicated that the self-diffusion in MOF-508a is much higher than in MOF-508b and increases with increasing temperature and loading. The calculated binding energy decreases with temperature and increases with loading. The height of the RDF peak of the acetylene molecules decreases with temperature and loading and also shifts slightly toward larger distances in both MOFs. Comparing our results with similar studies on zeolites indicated that the self-diffusion of acetylene molecules in MOF-508a and MOF-508b is of the same order as the self-diffusion in zeolites. (C) 2017 Published by Elsevier Inc.

Magnetic nanozeolites were synthesized through a new method and their abilities were evaluated toward adsorption of Pb(II) and Cu(II) from aqueous solutions.
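The Einstein-relation analysis behind self-diffusion coefficients like those in the MD study above can be sketched in a few lines; the trajectory below is synthetic Brownian motion and every number is illustrative, not a value from the paper:

```python
import numpy as np

def mean_square_displacement(traj, max_lag):
    """MSD versus lag time for a (n_frames, n_atoms, 3) trajectory,
    averaged over all time origins and all atoms."""
    msd = np.zeros(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=-1))
    return msd

def self_diffusion(msd, dt):
    """Einstein relation in 3D: D = slope(MSD vs t) / 6."""
    t = dt * np.arange(1, len(msd) + 1)
    return np.polyfit(t, msd, 1)[0] / 6.0

# synthetic Brownian trajectory: per-dimension step variance 2*D*dt
rng = np.random.default_rng(0)
D_true, dt = 1.0e-9, 1.0e-12          # m^2/s, s (1 ps per frame)
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(2000, 16, 3))
traj = np.cumsum(steps, axis=0)
msd = mean_square_displacement(traj, max_lag=200)
D_est = self_diffusion(msd, dt)        # fit over the early linear regime
```

In a real MD analysis the fit window would be chosen from the linear regime of the MSD, and activation energies would then follow from an Arrhenius fit of D(T) across the simulated temperatures.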
A multivariate optimization method using central composite design (CCD) was employed to obtain the best adsorption conditions. NaA zeolite was selected because of its high affinity for the investigated ions and was coated onto magnetic nanoparticles. The morphology and characteristics of the adsorbent were studied by different techniques such as SEM imaging, FT-IR spectrometry and AGFM measurements. The most favorable conditions for Pb(II) removal were achieved at pH = 5.6 with 8 mg of sorbent in a 750 mg L-1 Pb(II) aqueous solution when the removal process was run for 8 min. Maximum removal of Cu(II) was attained under the optimal conditions of pH = 2.9, adsorbent amount 16 mg, initial ion concentration 22.1 mg L-1 and sonication time 24 min. The model predicted by the Design-Expert software indicated the significant main factors and the possible interactions between the factors. Based on the results, some interactions between the affecting factors were found for the Pb(II) and Cu(II) ions, and R-2 values of 0.99 and 0.97 were attained for these ions, respectively. Our findings clearly showed that magnetic nanozeolites can be made successfully by introducing silica-coated magnetic nanoparticles into the initial solution of zeolite synthesis. Also, coating the magnetic nanoparticles with zeolite NaA effectively improved the adsorption of the selected ions, while the nanozeolites can be conveniently separated with an external magnet. (C) 2017 Elsevier Inc. All rights reserved.

This review mainly focuses on the physiological function of 4-hydroxyphenylpyruvate dioxygenase (HPPD), as well as on the development and application of HPPD inhibitors of several structural classes. Among them, one illustrative example is represented by compounds belonging to the class of triketone compounds. They were discovered through serendipitous observations on weed growth and were developed as bleaching herbicides.
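The coded design matrix behind a central composite design like the one used in the Pb(II)/Cu(II) optimization above is easy to generate; the factor count and number of center replicates below are illustrative assumptions, not the study's settings:

```python
import itertools
import numpy as np

def central_composite_design(k, n_center=4):
    """Coded (dimensionless) runs of a rotatable CCD in k factors:
    2^k factorial corners, 2k axial (star) points at alpha = (2**k) ** 0.25,
    plus replicated center points."""
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    alpha = (2 ** k) ** 0.25
    axial = np.vstack([v * alpha * np.eye(k)[i]
                       for i in range(k) for v in (-1.0, 1.0)])
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

# two coded factors (say, pH and sorbent mass): 4 + 4 + 4 = 12 runs
design = central_composite_design(2)
```

Each coded level is then mapped linearly onto the real factor range before the experiments are run, and a quadratic response surface is fitted to the measured removals to locate the optimum.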
Informed reasoning on nitisinone (NTBC, 14), a triketone that failed to reach the final steps of the herbicidal design and development process, allowed it to become a curative agent for type I tyrosinemia (T1T) and to enter clinical trials for alkaptonuria. These results boosted the research of new compounds able to interfere with HPPD activity for the treatment of tyrosine metabolism-related diseases.

G protein-coupled receptors (GPCRs) belong to a large superfamily of membrane receptors mediating a variety of physiological functions. As such they are attractive targets for drug therapy. However, it remains a challenge to develop subtype-selective GPCR ligands due to the high conservation of orthosteric binding sites. Bitopic ligands have been employed to address the selectivity problem by combining (linking) an orthosteric ligand with an allosteric modulator, theoretically leading to high-affinity subtype-selective ligands. However, it remains a challenge to identify suitable allosteric binding sites. Computational studies on ligand binding to GPCRs have revealed transient, low-affinity binding sites, termed metastable binding sites. Metastable binding sites may provide a new source of allosteric binding sites that could be exploited in the design of bitopic ligands. Unlike the bitopic ligands that have been reported to date, this type of bitopic ligand would be composed of two identical pharmacophores. Herein, we outline the concept of bitopic ligands, review metastable binding sites, and discuss their potential as a new source of allosteric binding sites.

In this report, we disclose the design and synthesis of a series of pentafluorosulfanyl (SF5) benzopyran derivatives as novel COX-2 inhibitors with improved pharmacokinetic and pharmacodynamic properties. The pentafluorosulfanyl compounds showed both potency and selectivity for COX-2 and demonstrated efficacy in several murine models of inflammation and pain.
More interestingly, one of the compounds, R,S-3a, revealed exceptional efficacy in the adjuvant-induced arthritis (AIA) model, achieving an ED50 as low as 0.094 mg/kg. In addition, the pharmacokinetics of compound R,S-3a in rat revealed a half-life in excess of 12 h and plasma drug concentrations well above its IC90 for up to 40 h. When R,S-3a was dosed just two times a week in the AIA model, efficacy was still maintained. Overall, drug R,S-3a and other analogues are suitable candidates that merit further investigation for the treatment of inflammation and pain as well as other diseases where COX-2 and PGE(2) play a role in their etiology.

Heat shock transcription factor 1 (HSF1) has been identified as a therapeutic target for pharmacological treatment of multiple myeloma (MM). However, direct therapeutic targeting of HSF1 function seems to be difficult due to the shortage of clinically suitable pharmacological inhibitors. We utilized the Ugi multi-component reaction to create a small but smart library of alpha-acyl aminocarboxamides and evaluated their ability to suppress the heat shock response (HSR) in MM cells. Using the INA-6 cell line as the MM model and the strictly HSF1-dependent HSP72 induction as a HSR model, we identified potential HSF1 inhibitors. Mass spectrometry-based affinity capture experiments with biotin-linked derivatives of 4 revealed a number of target proteins and complexes, which exhibit an armadillo domain. Also, four members of the tumor-promoting and HSF1-associated phosphatidylinositol 3-kinase-related kinase (PIKK) family were identified. The antitumor activity was evaluated, showing that treatment with the anti-HSF1 compounds strongly induced apoptotic cell death in MM cells.

We have designed and synthesized novel piperazine compounds with low lipophilicity as sigma(1) receptor ligands.
1-(4-Fluorobenzyl)-4-[(tetrahydrofuran-2-yl)methyl]piperazine (10) possessed low nanomolar sigma(1) receptor affinity and high selectivity toward the vesicular acetylcholine transporter (>2000-fold), sigma(2) receptors (52-fold), and adenosine A(2A), adrenergic alpha(2), cannabinoid CB1, dopamine D-1, D-2L, gamma-aminobutyric acid A (GABA(A)), NMDA, melatonin MT1, MT2, and serotonin 5-HT1 receptors. The corresponding radiotracer [F-18]10 demonstrated high brain uptake and extremely high brain-to-blood ratios in biodistribution studies in mice. Pretreatment with the selective sigma(1) receptor agonist SA4503 significantly reduced the level of accumulation of the radiotracer in the brain. No radiometabolite of [F-18]10 was observed to enter the brain. Positron emission tomography and magnetic resonance imaging confirmed suitable kinetics and high, specific binding of [F-18]10 to sigma(1) receptors in rat brain. Ex vivo autoradiography showed a reduced level of binding of [F-18]10 in the cortex and hippocampus of senescence-accelerated-prone (SAMP8) mice compared to that of senescence-accelerated-resistant (SAMR1) mice, indicating potential dysfunction of sigma(1) receptors in Alzheimer's disease.

A new class of potent and highly selective SGLT2 inhibitors is disclosed. Compound 31 (HSK0935) demonstrated excellent hSGLT2 inhibition of 1.3 nM and a high hSGLT1/hSGLT2 selectivity of 843-fold. It showed robust urinary glucose excretion in Sprague Dawley (SD) rats and even greater urinary glucose excretion in Rhesus monkeys. Finally, an efficient synthetic route has been developed featuring a ring-closing cascade reaction to incorporate a double-ketal 1-methoxy-6,8-dioxabicyclo[3.2.1]octane ring system.
The role of the G-protein-coupled bile acid receptor TGR5 in various organs, tissues, and cell types, specifically in intestinal endocrine L-cells and brown adipose tissue, has made it a promising therapeutic target in several diseases, especially type-2 diabetes and metabolic syndrome. However, recent studies have shown deleterious on-target effects of systemic TGR5 agonists. To avoid these systemic effects while stimulating glucagon-like peptide-1 (GLP-1) secreting enteroendocrine L-cells, we have designed TGR5 agonists with low intestinal permeability. In this article, we describe their synthesis, characterization, and biological evaluation. Among them, compound 24 is a potent GLP-1 secretagogue, has low effect on gallbladder volume, and improves glucose homeostasis in a preclinical murine model of diet-induced obesity and insulin resistance, providing proof of concept of the potential of topical intestinal TGR5 agonists as therapeutic agents in type-2 diabetes.

Within a backup program for the clinical investigational agent pretomanid (PA-824), scaffold hopping from delamanid inspired the discovery of a novel class of potent antitubercular agents that unexpectedly possessed notable utility against the kinetoplastid disease visceral leishmaniasis (VL). Following the identification of delamanid analogue DNDI-VL-2098 as a VL preclinical candidate, this structurally related 7-substituted 2-nitro-5,6-dihydroimidazo[2,1-b][1,3]oxazine class was further explored, seeking efficacious backup compounds with improved solubility and safety. Commencing with a biphenyl lead, bioisosteres formed by replacing one phenyl by pyridine or pyrimidine showed improved solubility and potency, whereas more hydrophilic side chains reduced VL activity. In a Leishmania donovani mouse model, two racemic phenylpyridines (71 and 93) were superior, with the former providing >99% inhibition at 12.5 mg/kg (b.i.d., orally) in the Leishmania infantum hamster model.
Overall, the 7R enantiomer of 71 (79) displayed the most favorable efficacy, pharmacokinetics, and safety, leading to its selection as the preferred development candidate.

The tumor suppressor protein p53, the "guardian of the genome", is inactivated in nearly all cancer types by mutations in the TP53 gene or by overexpression of its negative regulators, the oncoproteins MDM2/MDMX. Recovery of p53 function by disrupting the p53-MDM2/MDMX interaction using small-molecule antagonists could provide an efficient nongenotoxic anticancer therapy. Here we present the syntheses, activities, and crystal structures of p53-MDM2/MDMX inhibitors based on the 1,4,5-trisubstituted imidazole scaffold, which are appended with aliphatic linkers that enable coupling to bioactive carriers. The compounds have favorable properties at both the biochemical and cellular levels. The most effective compound, 19, is a tight binder of MDM2 and activates p53 in cancer cells that express wild-type p53, leading to cell cycle arrest and growth inhibition. Crystal structures reveal that compound 19 induces MDM2 dimerization via the aliphatic linker. This unique dimerization-binding mode opens new prospects for the optimization of p53-MDM2/MDMX inhibitors and conjugation with bioactive carriers.

Protein-ligand interactions are the fundamental basis for molecular design in pharmaceutical research, biocatalysis, and agrochemical development. Hydrogen bonds in particular are known to have special geometric requirements and therefore deserve a detailed analysis. In modeling approaches, a more general description of hydrogen bond geometries, using distance and directionality, is applied. A first study of their geometries was performed on the basis of 15 protein structures in 1982. Currently there are about 95 000 protein-ligand structures available in the PDB, providing a solid foundation for a new large-scale statistical analysis.
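The two hydrogen-bond descriptors named above, distance and directionality, reduce to elementary vector geometry once donor, hydrogen, and acceptor coordinates are known; a minimal sketch with illustrative coordinates rather than PDB data:

```python
import numpy as np

def hbond_geometry(donor, hydrogen, acceptor):
    """Return the H...A distance (Angstrom) and the D-H...A angle (degrees),
    the two descriptors used in most hydrogen-bond statistics."""
    dh = donor - hydrogen
    ah = acceptor - hydrogen
    dist = np.linalg.norm(ah)
    cosang = np.dot(dh, ah) / (np.linalg.norm(dh) * np.linalg.norm(ah))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return dist, angle

# near-linear N-H...O contact (coordinates are illustrative)
d, a = hbond_geometry(np.array([0.0, 0.0, 0.0]),   # donor N
                      np.array([1.0, 0.0, 0.0]),   # hydrogen
                      np.array([2.9, 0.1, 0.0]))   # acceptor O
```

Aggregating these two quantities over many structures is what reveals the preferred, near-linear D-H...A geometries that such large-scale surveys report.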
Here, we report a comprehensive investigation of the geometric and functional properties of hydrogen bonds. Out of 22 defined functional groups, eight are fully in accordance with theoretical predictions while 14 show variations from expected values. On the basis of these results, we derived interaction geometries to improve current computational models. It is expected that these observations will be useful in designing new chemical structures for biological applications.

Granulins are a family of protein growth factors that are involved in cell proliferation. An orthologue of granulin from the human parasitic liver fluke Opisthorchis viverrini, known as Ov-GRN-1, induces angiogenesis and accelerates wound repair. Recombinant Ov-GRN-1 production is complex and poses an obstacle for clinical development. To identify the bioactive region(s) of Ov-GRN-1, four truncated N-terminal analogues were synthesized and characterized structurally using NMR spectroscopy. Peptides that contained only two native disulfide bonds lacked the characteristic granulin beta-hairpin structure. Remarkably, the introduction of a non-native disulfide bond was critical for formation of the beta-hairpin structure. Despite this structural difference, both the two- and three-disulfide-bonded peptides drove proliferation of a human cholangiocyte cell line and demonstrated potent wound healing in mice. Peptides derived from Ov-GRN-1 are leads for novel wound healing therapeutics, as they are likely less immunogenic than the full-length protein and more convenient to produce.

Design, synthesis, and evaluation of a new class of exceptionally potent HIV-1 protease inhibitors are reported. Inhibitor 5 displayed superior antiviral activity and drug-resistance profiles. In fact, this inhibitor showed several orders of magnitude improved antiviral activity over the FDA-approved drug darunavir.
This inhibitor incorporates an unprecedented 6-5-5 ring-fused crown-like tetrahydropyranofuran as the P2 ligand and an aminobenzothiazole as the P2' ligand with the (R)-hydroxyethylsulfonamide isostere. The crown-like P2 ligand for this inhibitor has been synthesized efficiently in optically active form using a chiral Diels-Alder catalyst, providing a key intermediate in high enantiomeric purity. Two high-resolution X-ray structures of inhibitor-bound HIV-1 protease revealed extensive interactions with the backbone atoms of HIV-1 protease and provided molecular insight into the binding properties of these new inhibitors.

The dCTP pyrophosphatase 1 (dCTPase) is a nucleotide pool "housekeeping" enzyme responsible for the catabolism of canonical and noncanonical nucleoside triphosphates (dNTPs) and has been associated with cancer progression and cancer cell stemness. We have identified a series of piperazin-1-ylpyridazines as a new class of potent dCTPase inhibitors. Lead compounds increase dCTPase thermal and protease stability, display outstanding selectivity over related enzymes and synergize with a cytidine analogue against leukemic cells. This new class of dCTPase inhibitors lays the first stone toward the development of drug-like probes for the dCTPase enzyme.

Dual activation of the glucagon-like peptide 1 (GLP-1) and glucagon receptors has the potential to lead to a novel therapeutic principle for the treatment of diabesity. Here, we report a series of novel peptides with dual activity on these receptors that were discovered by rational design. On the basis of sequence analysis and structure-based design, structural elements of glucagon were engineered into the selective GLP-1 receptor agonist exendin-4, resulting in hybrid peptides with potent dual GLP-1/glucagon receptor activity. Detailed structure-activity relationship data are shown.
Further modifications with unnatural and modified amino acids resulted in novel metabolically stable peptides that demonstrated a significant dose-dependent decrease in blood glucose in chronic studies in diabetic db/db mice and reduced body weight in diet-induced obese (DIO) mice. Structural analysis by NMR spectroscopy confirmed that the peptides maintain an exendin-4-like structure with its characteristic tryptophan-cage fold motif that is responsible for favorable chemical and physical stability.

IC87114 [compound 1, 2-((6-amino-9H-purin-9-yl)methyl)-5-methyl-3-(o-tolyl)quinazolin-4(3H)-one] is a potent PI3K inhibitor selective for the delta isoform. As predicted by molecular modeling calculations, rotation around the bond connecting the quinazolin-4(3H)-one nucleus to the o-tolyl group is sterically hampered, which leads to separable conformers with axial chirality (i.e., atropisomers). After verifying that the aS and aR isomers of compound 1 do not interconvert in solution, we investigated how biological activity is influenced by axial chirality and conformational equilibrium. The aS and aR atropisomers of 1 were equally active in the PI3K delta assay. Conversely, the introduction of a methyl group at the methylene hinge connecting the 6-amino-9H-purin-9-yl pendant to the quinazolin-4(3H)-one nucleus of both the aS and aR isomers of 1 had a critical effect on the inhibitory activity, indicating that modulation of the conformational space accessible to the two bonds departing from the central methylene considerably affects the binding of compound 1 analogues to the PI3K delta enzyme.

On the basis of X-ray crystallographic studies of the complex of hCA II with 4-(3,4-dihydro-1H-isoquinoline-2-carbonyl)benzenesulfonamide (3) (PDB code 4Z1J), a novel series of 4-(1-aryl-3,4-dihydro-1H-isoquinolin-2-carbonyl)benzenesulfonamides (23-33) was designed.
Specifically, our idea was to improve the selectivity toward druggable isoforms through the introduction of additional hydrophobic/hydrophilic functionalities. Among the synthesized and tested compounds, the (R,S)-4-(6,7-dihydroxy-1-phenyl-3,4-dihydro-1H-isoquinoline-2-carbonyl)benzenesulfonamide (30) exhibited remarkable inhibition of the brain-expressed hCA VII (K-i = 0.20 nM) and selectivity over the more widely distributed hCA I and hCA II isoforms. By enantioselective HPLC, we resolved the racemic mixture and ascertained that the two enantiomers (30a and 30b) are equiactive inhibitors of hCA VII. Crystallographic and docking studies revealed the main interactions of these inhibitors within the carbonic anhydrase (CA) catalytic site, thus highlighting the relevant role of nonpolar contacts for this class of hCA inhibitors.

Structural determinants of affinity of N-6-substituted-5'-C-(ethyltetrazol-2-yl)adenosine and 2-chloroadenosine derivatives at adenosine receptor (AR) subtypes were studied with binding and molecular modeling. Small N-6-cycloalkyl and 3-halobenzyl groups furnished potent dual-acting A(1)AR agonists and A(3)AR antagonists. Compound 4 was the most potent dual-acting human (h) A(1)AR agonist (K-i = 0.45 nM) and A(3)AR antagonist (K-i = 0.31 nM) and was highly selective versus A(2A); 11 and 26 were the most potent at both h and rat (r) A(3)AR. All N-6-substituted-5'-C-(ethyltetrazol-2-yl)adenosine derivatives proved to be antagonists at hA(3)AR but agonists at rA(3)AR. Analgesia of 11, 22, and 26 was evaluated in the mouse formalin test (an A(3)AR antagonist blocked and an A(3)AR agonist strongly potentiated the effect). N-6-Methyl-5'-C-(ethyltetrazol-2-yl)adenosine (22) was the most potent, inhibiting both phases, as observed when combining A(1)AR and A(3)AR agonists. This study demonstrated for the first time the advantages of a single molecule activating two AR pathways, both leading to benefit in this acute pain model.
The centrally expressed melanocortin-3 and -4 receptors (MC3R/MC4R) have been studied as possible targets for weight management therapies, with a preponderance of studies focusing on the MC4R. Herein, a novel tetrapeptide scaffold [Ac-Xaa(1)-Arg-(pI)DPhe-Xaa(4)-NH2] is reported. The scaffold was derived from results obtained from an MC3R mixture-based positional scanning campaign. From these results, a set of 48 tetrapeptides was designed and pharmacologically characterized at the mouse melanocortin-1, -3, -4, and -5 receptors. This resulted in the serendipitous discovery of nine compounds that were MC3R agonists (EC50 < 1000 nM) and MC4R antagonists (5.7 < pA(2) < 7.8). The three most potent MC3R agonists, 18 [Ac-Arg-Arg-(pI)DPhe-Tic-NH2], 1 [Ac-His-Arg-(pI)DPhe-Tic-NH2], and 41 [Ac-Arg-Arg-(pI)DPhe-DNal(2')-NH2], were more potent (EC50 < 73 nM) than the melanocortin tetrapeptide Ac-His-DPhe-Arg-Trp-NH2. This template contains a sequentially reversed "Arg-(pI)DPhe" motif with respect to the classical "Phe-Arg" melanocortin signaling motif, which results in pharmacology that is first-in-class for the central melanocortin receptors.

B-cell lymphoma 6 (BCL6) is a transcription factor that is expressed in lymphocytes and regulates their differentiation and proliferation. Therefore, BCL6 is a therapeutic target for autoimmune diseases and cancer treatment. This report presents the discovery of BCL6-corepressor interaction inhibitors by using a biophysics-driven fragment-based approach. Using surface plasmon resonance (SPR)-based fragment screening, we successfully identified fragment 1 (SPR K-D = 1200 mu M, ligand efficiency (LE) = 0.28), a competitive binder to the natural ligand BCoR peptide.
Moreover, we elaborated 1 into the more potent compound 7 (SPR K-D = 0.078 mu M, LE = 0.37, cell-free protein-protein interaction (PPI) IC50 = 0.48 mu M (ELISA), cellular PPI IC50 = 8.6 mu M (M2H)) by structure-based design and structural integration with a second high-throughput screening hit.

The hepatitis C virus (HCV) NS5B replicase is a prime target for the development of direct-acting antiviral drugs for the treatment of chronic HCV infection. Inspired by the overlay of bound structures of three structurally distinct NS5B palm-site allosteric inhibitors, the high-throughput screening hit anthranilic acid 4, the known benzofuran analogue 5, and the benzothiadiazine derivative 6, an optimization process utilizing the simple benzofuran template 7 as a starting point for a fragment-growing approach was pursued. A delicate balance of molecular properties, achieved via disciplined lipophilicity changes, was essential to achieve both high-affinity binding and a stringent targeted absorption, distribution, metabolism, and excretion profile. These efforts led to the discovery of BMS-929075 (37), which maintained ligand efficiency relative to early leads, demonstrated efficacy in a triple combination regimen in HCV replicon cells, and exhibited consistently high oral bioavailability and pharmacokinetic parameters across preclinical animal species. The human PK properties from the Phase I clinical studies of 37 were better than anticipated and suggest promising potential for QD administration.

Inhibition of the protein-protein interaction between B-cell lymphoma 6 (BCL6) and its corepressors has been implicated as a therapeutic target in diffuse large B-cell lymphoma (DLBCL) cancers, and profiling of potent and selective BCL6 inhibitors is critical to test this hypothesis. We identified a pyrazolo[1,5-a]pyrimidine series of BCL6 binders from a fragment screen in parallel with a virtual screen. Using structure-based drug design, binding affinity was increased 100000-fold.
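Ligand-efficiency figures like those quoted for fragment 1 and compound 7 follow from LE = -RT ln(K_D) / N_heavy (kcal/mol per heavy atom); in this sketch the heavy-atom counts are illustrative assumptions, not values from the paper:

```python
import math

def ligand_efficiency(kd_molar, n_heavy, temp=298.15):
    """LE = -RT ln(Kd) / N_heavy, in kcal/mol per heavy atom."""
    R = 1.98720e-3                       # gas constant, kcal/(mol K)
    dg = R * temp * math.log(kd_molar)   # binding free energy (negative)
    return -dg / n_heavy

# fragment-sized binder: Kd = 1.2 mM, 14 heavy atoms (count is illustrative)
le_fragment = ligand_efficiency(1.2e-3, 14)
# elaborated lead: Kd = 78 nM, 30 heavy atoms (count is illustrative)
le_lead = ligand_efficiency(78e-9, 30)
```

The fragment's weak absolute affinity is offset by its small size, which is why fragment-based campaigns track LE rather than raw K_D while a hit is being elaborated.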
This involved displacing crystallographic water, forming new ligand-protein interactions, and a macrocyclization to favor the bioactive conformation of the ligands. Optimization for slow off-rate kinetics was conducted, as well as improving selectivity against an off-target kinase, CK2. Potency in a cellular BCL6 assay was further optimized to afford highly selective probe molecules. Only weak antiproliferative effects were observed across a number of DLBCL lines and a multiple myeloma cell line, without a clear relationship to BCL6 potency. As a result, we conclude that the BCL6 hypothesis in DLBCL cancer remains unproven.

LOXL2 catalyzes the oxidative deamination of epsilon-amines of lysine and hydroxylysine residues within collagen and elastin, generating reactive aldehydes (allysine). Condensation with other allysines or lysines drives the formation of inter- and intramolecular cross-linkages, a process critical for the remodeling of the ECM. Dysregulation of this process can lead to fibrosis, and LOXL2 is known to be upregulated in fibrotic tissue. Small molecules that directly inhibit LOXL2 catalytic activity represent a useful option for the treatment of fibrosis. Herein, we describe optimization of an initial hit 2, resulting in identification of racemic trans-(3-((4-(aminomethyl)-6-(trifluoromethyl)pyridin-2-yl)oxy)phenyl)(3-fluoro-4-hydroxypyrrolidin-1-yl)methanone 28, a potent irreversible inhibitor of LOXL2 that is highly selective over LOX and other amine oxidases. Oral administration of 28 significantly reduced fibrosis in a 14-day mouse lung bleomycin model. The (R,R)-enantiomer 43 (PAT-1251) was selected as the clinical compound and has progressed into healthy-volunteer Phase 1 trials, making it the "first-in-class" small-molecule LOXL2 inhibitor to enter clinical development.
This work follows on from our initial discovery of a series of piperidine-substituted thiophene[3,2-d]pyrimidine HIV-1 non-nucleoside reverse transcriptase inhibitors (NNRTIs) (J. Med. Chem. 2016, 59, 7991-8007). In the present study, we designed, synthesized, and biologically tested several series of new derivatives in order to investigate previously unexplored chemical space. Some of the synthesized compounds displayed single-digit nanomolar anti-HIV potencies against wild-type (WT) virus and a panel of NNRTI-resistant mutant viruses in MT-4 cells. Compound 25a was exceptionally potent against the whole viral panel, affording 3-4-fold enhancement of in vitro antiviral potency against WT, L100I, K103N, Y181C, Y188L, E138K, and K103N+Y181C and 10-fold enhancement against F227L+V106A relative to the reference drug etravirine (ETV) in the same cellular assay. The structure-activity relationships, pharmacokinetics, acute toxicity, and cardiotoxicity were also examined. Overall, the results indicate that 25a is a promising new drug candidate for the treatment of HIV-1 infection.

Spinal muscular atrophy (SMA) is caused by mutation or deletion of the survival motor neuron 1 (SMN1) gene, resulting in low levels of functional SMN protein. We have recently reported the identification of small molecules (coumarins, iso-coumarins and pyrido-pyrimidinones) that modify the alternative splicing of SMN2, a paralogous gene to SMN1, restoring the survival motor neuron (SMN) protein level in mouse models of SMA. Herein, we report our efforts to identify a novel chemotype as one strategy to potentially circumvent safety concerns from earlier derivatives, such as the in vitro phototoxicity and in vitro mutagenicity associated with compounds 1 and 2 or the in vivo retinal findings observed in a long-term chronic toxicity study with 3 at high exposures only.
Optimized representative compounds modify the alternative splicing of SMN2, increasing the production of full-length SMN2 mRNA and therefore the levels of full-length SMN protein upon oral administration in two mouse models of SMA.

Pim kinases have been identified as promising therapeutic targets for hematologic-oncology indications, including multiple myeloma and certain leukemias. Here, we describe our continued efforts to optimize a lead series by improving bioavailability while maintaining high inhibitory potency against all three Pim kinase isoforms. The discovery of extensive intestinal metabolism and major metabolites helped refine our design strategy, and we observed that optimizing the pharmacokinetic properties first and potency second was a more successful approach than the reverse. In the resulting work, novel analogs such as 20 (GNE-955) were discovered, bearing a 5-azaindazole core with noncanonical hydrogen bonding to the hinge.

Multidrug resistance (MDR) mediated by ATP-binding cassette (ABC) transport proteins remains a major problem in the chemotherapeutic treatment of cancer and might be overcome by inhibition of the transporter. Because of the lack of understanding of the complex mechanisms involved in the transport process, in particular for the breast cancer resistance protein (BCRP/ABCG2), there is a persistent need for studies of inhibitors of ABCG2. In this study, we investigated a systematic series of 4-substituted 2-pyridylquinazolines in terms of their inhibitory potency as well as selectivity toward ABCG2. For comparison, the quinazoline scaffold was reduced to the significantly smaller 4-methylpyrimidine basic structure. Furthermore, the cytotoxicity and the ability to reverse MDR were tested with the chemotherapeutic agents SN-38 and mitoxantrone (MX). Interaction of the compounds with ABCG2 was investigated by a colorimetric ATPase assay.
Enzyme kinetic studies were carried out with Hoechst 33342 as a fluorescent dye and substrate of ABCG2 to elucidate the compounds' binding modes.

Matrix metalloproteinases (MMPs) are central to cancer development and metastasis. They are highly active in the tumor environment and absent or inactive in normal tissues; therefore they represent viable targets for cancer drug discovery. In this study we evaluated in silico docking to develop MMP-subtype-selective tumor-activated prodrugs. Proof of principle for this therapeutic approach was demonstrated in vitro against an aggressive human glioma model, with involvement of MMPs confirmed using pharmacological inhibition.

The anti and syn isomers of tolvaptan-type compounds, N-benzoyl-5-hydroxy-1-benzazepines (5a-c), were prepared in a stereocontrolled manner by biasing the conformation with a methyl group at C9 and C6, respectively, and the enantiomeric forms were separated. Examination of the affinity at the human vasopressin receptors revealed that the axial chirality (aS) plays a more important role than the central chirality at C5 in receptor recognition, and the most preferable form was shown to be (E,aS,5S).

Polymeric nanoparticles (PNPs) may efficiently deliver therapeutics to tumors in vivo when conjugated to specific targeting agents. The Gint4.T aptamer specifically recognizes platelet-derived growth factor receptor beta and can cross the blood-brain barrier (BBB). We synthesized Gint4.T-conjugated PNPs capable of high uptake into U87MG glioblastoma (GBM) cells and with an astonishing EC50 value (38 pM) when loaded with a PI3K-mTOR inhibitor. We also demonstrated in vivo BBB passage and tumor accumulation in a GBM orthotopic model.

Direct numerical simulation is performed on a 38.1% scale Hypersonic International Flight Research Experimentation Program Flight 5 forebody to study stationary crossflow instability.
Computations use the US3D Navier-Stokes solver to simulate Mach 6 flow at Reynolds numbers of 8.1 x 10(6)/m and 11.8 x 10(6)/m, which are conditions used by quiet-tunnel experiments at Purdue University. Distributed roughness with point-to-point height variation on the computational grid and maximum heights of 0.5-4.0 mu m is used with the intent to emulate smooth-body transition and excite the naturally occurring most unstable disturbance wavenumber. Cases at the low-Reynolds-number condition use three grid sizes, and hence three different roughness patterns, and demonstrate that the exact flow solution depends on the particular roughness pattern. The same roughness pattern interpolated onto each grid yields similar solutions, indicating grid convergence. A steady physical mechanism is introduced for the sharp increase in wall heat flux seen in both computations and experiment at the high-Reynolds-number condition. The evolution of the disturbance spanwise wavelength is computed and is found to be more sensitive to Reynolds number than to roughness, indicating that the disturbance wavelength is primarily the naturally occurring, flow-selected wavelength for these cases.

The aerodynamic performances of many devices are influenced by vortical flow structures. Methods to identify and track these features are necessary for characterizing the flowfield, and robust detection methods could be exploited as an aerodynamic flow control sensor. This paper details a new approach to detect and track near-wall vortices by applying crease detection methods to locate their signatures in a wall static pressure field. The method is used to locate vortex signatures in pressure-sensitive paint measurements on the upper surface of a delta wing and to track the movement of a vortex downstream of a micro vortex generator using a grid of discrete pressure taps.
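A one-dimensional version of locating a vortex signature as a crease (pressure trough) in a discrete tap distribution can be sketched as a minimum search with sub-tap parabolic refinement; both the synthetic data and the refinement step here are illustrative, not the paper's algorithm:

```python
import numpy as np

def vortex_signature_position(x, p):
    """Estimate a near-wall vortex position as the pressure trough (crease)
    in a spanwise wall-pressure distribution, refined by a parabolic fit
    through the minimum tap and its two neighbours (assumes uniform tap pitch)."""
    i = int(np.argmin(p))
    i = min(max(i, 1), len(p) - 2)       # keep the 3-point stencil in bounds
    a, b, c = p[i - 1], p[i], p[i + 1]
    denom = a - 2 * b + c
    offset = 0.5 * (a - c) / denom if denom != 0 else 0.0
    return x[i] + offset * (x[1] - x[0])

# synthetic tap data: suction trough centred at x = 0.037 m (illustrative)
x = np.linspace(0.0, 0.1, 21)                 # 21 taps, 5 mm pitch
p = -np.exp(-((x - 0.037) / 0.01) ** 2)       # pressure deficit under the vortex
x_v = vortex_signature_position(x, p)
```

The parabolic refinement recovers the trough location to well within one tap spacing, which is what makes tracking from a coarse tap grid feasible.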
The latter case demonstrates that the position of a near-wall vortex could be tracked experimentally using a coarse wall pressure distribution. Flow structure oscillations and tone generation mechanisms in an underexpanded round jet impinging normally on a flat plate have been investigated using compressible large-eddy simulations. At the exit of a pipe nozzle of diameter D, the jet is characterized by a nozzle pressure ratio of 4.03, an exit Mach number of 1, a fully expanded Mach number of 1.56, and a Reynolds number of 6 × 10^4. Four distances between the nozzle and the plate of 2.08D, 2.80D, 3.65D, and 4.66D are considered. Snapshots of vorticity, density, pressure, and mean velocity flowfields are first presented. The latter results compare well with data from the literature. In three cases, in particular, a Mach disk appears to form just upstream from the plate. The convection velocity of flow structures between the nozzle and the plate, and its dependence on the nozzle-to-plate distance, are then examined. The properties of the jet's near-field pressure are subsequently described using Fourier analysis. Tones emerge in the spectra at frequencies consistent with those expected for an aeroacoustic feedback loop between the nozzle and the plate, as well as with measurements. Their amplitudes are particularly high in the presence of a near-wall Mach disk. The axisymmetric or helical natures of the jet oscillations at the tone frequencies are determined. The motions of the Mach disk found just upstream from the plate for certain nozzle-to-plate distances are then explored. As noted for the jet oscillations, axially pulsing and helical motions are observed, in agreement with experiments. Finally, the intermittency of the tone intensities is studied. They vary significantly in time, except for the two cases where the near-wall Mach disk has a nearly periodic motion at the dominant tone frequency. 
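The tone frequencies described above follow the classical aeroacoustic feedback condition, in which one loop period is the downstream convection time of the shear-layer structures plus the upstream travel time of the acoustic waves: N / f_N = L/U_c + L/a0. A minimal sketch of this estimate follows; the nozzle diameter, convection velocity, and sound speed used below are illustrative assumptions, not values from the study.

```python
# Illustrative sketch (assumed numbers, not the paper's data): classical
# feedback-loop condition for an impinging jet,
#   N / f_N = L / U_c + L / a0
# where L is the nozzle-to-plate distance, U_c the convection velocity
# of the shear-layer structures, and a0 the ambient sound speed.

def feedback_tone_frequencies(L, U_c, a0, modes=(1, 2, 3, 4)):
    """Return the predicted tone frequency (Hz) for each feedback mode N."""
    loop_time = L / U_c + L / a0  # downstream convection + upstream acoustic travel
    return {N: N / loop_time for N in modes}

# Assumed example values: 25 mm nozzle, plate at 2.08 D,
# U_c taken as ~0.6 of a 340 m/s reference sound speed.
D = 0.025                       # nozzle diameter, m (assumption)
L = 2.08 * D                    # nozzle-to-plate distance, m
U_c, a0 = 0.6 * 340.0, 340.0    # convection velocity and sound speed, m/s

tones = feedback_tone_frequencies(L, U_c, a0)
for N, f in tones.items():
    print(f"mode N={N}: f = {f / 1e3:.2f} kHz")
```

With these assumed numbers the fundamental lands in the kilohertz range; the actual tones depend on the measured convection velocity, which the simulations show varies with nozzle-to-plate distance.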
A co-simulation strategy for modeling the unsteady dynamics of flying insects and small birds, as well as biologically inspired flapping-wing micro air vehicles, is developed in this work. In particular, the dynamic system under study is partitioned into two subsystems (the structural model and the aerodynamic model) that exchange information in a strongly coupled manner. The vehicle or insect system is modeled as a collection of rigid bodies and lifting surfaces that can undergo deformations such as spanwise twisting, in-plane bending, out-of-plane bending, and an arbitrary combination of these deformation mechanisms. To account for the loads associated with the airflow, an aerodynamic model based on an extended version of the unsteady vortex-lattice method is used. The equations of motion are integrated by using a fourth-order predictor-corrector method along with a procedure to stabilize the solution of the resulting differential-algebraic equations. The numerical results obtained for the unsteady lift and dynamics of a fruit fly in free hover flight are found to be in close agreement with prior experimental results reported in the literature. Furthermore, the inclusion of an adequate wing deformation pattern results in an increase of the lift force compared with that of a rigid wing surface, pointing to the importance of wing flexibility for aerodynamic performance. From the findings reported in this paper, it is believed that the numerical simulation framework presented here could serve as a computational tool for further studies of flying insects and micro air vehicles. Tethered lifting bodies have attracted significant attention from the surveillance, communications, and (most recently) wind energy domains. As with many aerospace systems, the costs of full-scale testing act as a bottleneck to development, especially when accurate dynamic models do not exist, as is the case with a number of lifting-body designs that deviate from the traditional aerostat shape. 
This paper demonstrates the efficacy of using a laboratory-scale, water-channel-based test platform for the dynamic characterization of tethered systems with rigid lifting bodies, focusing on a lighter-than-air wind energy system as a case study. The platform overcomes the financial burden of full-scale testing, which simultaneously alleviates the short-term need for high-fidelity dynamic models and aids in the long-term development of high-fidelity models. In this paper, it is shown how a dimensional analysis provides a qualitative correlation between full-scale and 1/100th-scale flight behavior, as well as how this laboratory-scale platform is used to evaluate different lifting-body designs. The investigation identified and characterized the changes in the near wake of a bluff body produced by trailing-edge spanwise sinusoidal perturbation, which resulted in an overall reduction in the base drag. This study investigated a flat-plate body with an elliptic leading edge and blunt trailing edge, with and without trailing-edge spanwise sinusoidal perturbation. Base-pressure measurements were used to quantify the base-drag reduction associated with spanwise sinusoidal perturbation. In addition, particle-image-velocimetry measurements were performed in the near wake with and without spanwise sinusoidal perturbation, and a proper-orthogonal-decomposition analysis was used to analyze the velocity data. The findings suggest that the spanwise sinusoidal perturbation redistributed the relative kinetic energy, which enhanced the streamwise vortices and, as a result, suppressed the von Karman-Benard vortices. A tri-electrode plasma actuator (TED-PA), which has a third electrode held at a high DC voltage, can generate a stronger jet than a conventional two-electrode plasma actuator. However, the behavior of the jet in a TED-PA is totally different from that in a conventional plasma actuator: the induced jet is deflected upward. 
To clarify the jet deflection mechanism and performance improvement mechanism of the TED-PA, the discharge plasma and flow field are numerically simulated. The results show that the jet deflection appears for both negative and positive DC voltages; a rightward jet from the AC electrode and a leftward jet from the DC electrode are generated, and the collision of the two jets generates the upward jet. The jet becomes stronger with the high DC voltage due to two factors: negative body force generation around the DC electrode and positive body force enhancement around the AC electrode. The negative body force generation results from the drift motion of the positive ions in the positive DC voltage case and of the negative ions in the negative DC voltage case. The positive force enhancement is due to the electric field intensification by the DC voltage. The ability of vortex generators to reduce the unsteady distortion at the exit plane of an S-duct is investigated. The three components of the velocity at the aerodynamic interface plane were measured using a stereoscopic particle image velocimetry system with high spatial resolution. This enabled an assessment of the synchronous swirl distortion at the duct exit. A total of nine vortex generator cases have been investigated with a systematic variation of key design variables. Overall, the vortex generators change the duct secondary flows and separation and are able to substantially restructure the flowfield at the aerodynamic interface plane. The pressure distortion could be reduced by up to 50%, and a reduction in pressure loss of 30% was achieved for the mean flowfield. The vortex generators had a substantial influence on the unsteadiness of the flowfield, with a reduction in peak swirl unsteadiness of 61% and an overall reduction of unsteady swirl distortion of 67%. 
They also suppressed the primary unsteady flow switching mechanism of the datum configuration, which is associated with the oscillation between bulk and twin swirl regimes. Consequently, extreme events that lead to high swirl intensity are suppressed, lowering the maximum swirl intensity by 45% for the vortex generator cases. The unsteady distorted flowfields generated within convoluted aeroengine intakes can compromise the engine performance and operability. Therefore, there is a need for a better understanding of the complex characteristics of the distorted flow at the exit of S-shaped intakes. This work presents a detailed analysis of the unsteady swirl distortion based on synchronous, high-spatial-resolution measurements using stereoscopic particle image velocimetry. Two S-duct configurations with different centerline offsets are investigated. The high-offset duct shows greater levels of dynamic and steady swirl distortion and a notably greater tendency toward bulk swirl patterns associated with high swirl distortion. More discrete distortion patterns with locally high swirl levels and the potential to impact the engine operability are identified. The most energetic coherent structures of the flowfield are observed using proper orthogonal decomposition. A switching mode is identified that promotes the alternating swirl switching mechanism and is mostly associated with the occurrence of potent bulk swirl events. A vertical mode that characterizes a perturbation of the vertical velocity field promotes most of the twin swirl flow distortion topologies. It is postulated that it is associated with the unsteadiness of the centerline shear layer. The dynamic flow distortion generated within convoluted aeroengine intakes can affect the performance and operability of the engine. There is a need for a better understanding of the main flow mechanisms that promote flow distortion at the exit of S-shaped intakes. 
This paper presents a detailed analysis of the main coherent structures in an S-duct flowfield based on a delayed detached-eddy simulation. The capability of this numerical approach to capture the characteristics of the highly unsteady flowfield is demonstrated against high-resolution, synchronous stereoscopic particle image velocimetry measurements at the aerodynamic interface plane. The flowfield mechanisms responsible for the main perturbations at the duct outlet are identified. Clockwise and counterclockwise streamwise vortices are alternately generated around the separation region at a frequency of St = 0.53, which promote the swirl switching at the duct outlet. Spanwise vortices are also shed from the separation region at a frequency of St = 1.06 and convect downstream along the separated centerline shear layer. This results in a vertical modulation of the main loss region and a fluctuation of the velocity gradient between the high- and low-velocity flow at the aerodynamic interface plane. This paper proposes a family of high-pressure capturing wing configurations that aim to improve the aerodynamic performance of hypersonic vehicles with large volumes. The predominant visual feature of such configurations is a thin wing, called a high-pressure capturing wing, attached to the top of an upwarped airframe. When flying in the hypersonic regime, high-pressure airflow compressed by the upper surface of the vehicle acts on the high-pressure capturing wing and significantly augments lift on the vehicle with only a small increase in drag, producing a correspondingly high increase in its lift-to-drag ratio. A series of numerical validations were carried out on the basis of both inviscid and viscous computational models in which ideal cones with different cone angles and combined cone-waverider bodies with different volumes were used as airframes. 
The results clearly demonstrate that a configuration using a high-pressure capturing wing has a significantly higher lift (with a correspondingly high value of the lift-to-drag ratio) than one without a high-pressure capturing wing, especially for vehicles with large volumes. This paper contains a preliminary, results-based report of the conditions under which high-pressure capturing wing configurations were tested. A method to estimate buffeting loads on lifting surfaces immersed in turbulent streams using steady Reynolds-averaged Navier-Stokes equation solutions is presented. A generalization of a model developed by Liepmann ("On the Application of Statistical Concepts to the Buffeting Problem," Journal of the Aeronautical Sciences, Vol. 19, No. 12, Dec. 1952, pp. 793-800), based on thin airfoil theory and statistical concepts, is employed. Mean flow and turbulence-derived quantities required by the method are supplied by steady Reynolds-averaged Navier-Stokes equation model data. The shear-stress transport turbulence model is used here. The predictive capability of the method is assessed by comparison to unsteady turbulence simulations of the stream buffeting the lifting surface. A half-step is also taken wherein turbulence simulation results are used to close the Liepmann model, allowing that model's performance to be isolated from the impact of using a Reynolds-averaged Navier-Stokes model. The E-2D Advanced Hawkeye rotodome exposed to a compressible turbulent plume is used as a test case. The half-step results show that the Liepmann model itself performs well when both the upper and lower surfaces of the rotodome are within the stream. Estimates obtained using steady Reynolds-averaged Navier-Stokes equation-based results within the Liepmann model compare less favorably due to mean flow prediction differences. However, they are reasonable and have been found to be useful in an environment where a large number of cases need to be analyzed quickly. 
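Liepmann's buffeting model treats the lifting surface as a linear filter acting on the incoming turbulence: the lift-response spectrum is the gust spectrum weighted by the squared unsteady (Sears-type) admittance, and the mean-square lift follows by integrating over frequency. A minimal numerical sketch of this idea is given below, assuming a Dryden-type vertical-gust spectrum and Liepmann's approximation |S(k)|^2 ≈ 1/(1 + 2πk); the spectrum form, flow numbers, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of a Liepmann-style buffeting estimate:
#   sigma_CL^2 = (2*pi/U)^2 * integral |S(k)|^2 * Phi_w(omega) d omega
# with reduced frequency k = omega*c/(2U), Liepmann's approximation
# |S(k)|^2 ~ 1/(1 + 2*pi*k), and an assumed Dryden-type gust spectrum.

def dryden_spectrum(omega, sigma_w, L_turb, U):
    """One-sided vertical-gust PSD Phi_w(omega) (assumed Dryden form)."""
    k1 = omega / U  # spatial frequency seen by the airfoil
    return (sigma_w**2 * L_turb / (np.pi * U)
            * (1.0 + 3.0 * (L_turb * k1)**2) / (1.0 + (L_turb * k1)**2)**2)

def sears_sq(k):
    """Liepmann's approximation to the squared Sears admittance."""
    return 1.0 / (1.0 + 2.0 * np.pi * k)

def rms_lift_coefficient(U, c, sigma_w, L_turb, n=100000, omega_max=2000.0):
    """RMS lift-coefficient fluctuation by trapezoidal quadrature."""
    omega = np.linspace(1e-4, omega_max, n)
    k = omega * c / (2.0 * U)  # reduced frequency
    y = sears_sq(k) * dryden_spectrum(omega, sigma_w, L_turb, U)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(omega))  # trapezoid rule
    return float((2.0 * np.pi / U) * np.sqrt(integral))

# Assumed example numbers, for illustration only.
print(rms_lift_coefficient(U=100.0, c=1.0, sigma_w=1.0, L_turb=5.0))
```

Because the admittance never exceeds unity, the result is bounded by the quasi-steady value 2π·sigma_w/U, which offers a quick sanity check on any implementation.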
High-fidelity computational modeling and optimization of aircraft configurations have the potential to enable engineers to create more efficient designs that require fewer unforeseen modifications late in the design process. Although aerodynamic shape optimization has the potential to produce high-performance transonic wing designs, these designs remain susceptible to buffet. To address this issue, a separation-based constraint formulation is developed that constrains buffet onset in an aerodynamic shape optimization. This separation metric is verified against a common buffet prediction method and validated against experimental wind-tunnel data. A series of optimizations based on the AIAA Aerodynamic Design Optimization Discussion Group's wing-body-tail case are presented to show that buffet-onset constraints are required and to demonstrate the effectiveness of the proposed approach. Although both single-point and multipoint optimizations without separation constraints are vulnerable to buffeting, the optimizations using the proposed approach move the buffet boundary to make the designs feasible. The numerical prediction of transition from laminar to turbulent flow has proven to be an arduous challenge for computational fluid dynamics, with few approaches providing routinely accurate results within the cost confines of engineering applications. The recently proposed γ-Re_θ transition model shows promise for predicting attached and mildly separated boundary layers in the transitional regime, but its accuracy diminishes for massively separated flows. In this effort, a new turbulence closure is proposed that combines the strengths of the local dynamic kinetic energy model and the widely adopted γ-Re_θ transition model using an additive hybrid filtering approach. This method has the potential for accurately capturing massively separated boundary layers in the transitional Reynolds number range at a reasonable computational cost. 
Comparisons are evaluated on several cases, including a transitional flat plate, the NACA 63-415 wing, and a circular cylinder in crossflow. The new closure captures the physics associated with a separated wake (circular cylinder) across a range of Reynolds numbers from 10 to 2 × 10^6 and performs significantly better in capturing the performance and flowfield features of engineering interest than existing turbulence models. The transitional hybrid approach is numerically robust and requires less than 2% extra computational work per iteration as compared with the baseline Langtry-Menter transition model. A proper-orthogonal-decomposition analysis was performed on the turbulent flow in the near-wake region of an S805 airfoil in deep stall at an angle of attack of 30 deg. The flow was measured using tomographic particle image velocimetry at a Reynolds number of 4600. Instantaneous turbulence structures, which were significant contributors to the first two proper-orthogonal-decomposition modes, were studied. These structures included the large-scale Karman vortex and small-scale shear-layer vortices that were interacting with the Karman vortex. An interesting correspondence was observed between the rotational vectors in proper-orthogonal-decomposition modes 4 and 5 and the locations of the leading-edge shear-layer vortices in the flowfields, which were major contributors to these two modes. Similarity was found in the autospectral functions computed from the velocity fields, which were significant contributors to the first five proper-orthogonal-decomposition modes. The wavelengths corresponding to the peak values of the autospectral functions could be related to the size of the flows induced by the Karman vortex and the streamwise spacing of the shear-layer vortices. The efficiency and accuracy of viscous flow simulations depend crucially on the quality of the boundary-layer mesh. 
Meshes that are too coarse can result in inaccurate predictions and in some cases lead to numerical instabilities, whereas meshes that are too fine produce accurate predictions at the expense of long simulation times. Constructing an optimal (or near-optimal) boundary-layer mesh has been recognized as an important problem in computational fluid dynamics. For a few simple flows, one may be able to construct such a mesh a priori, before simulation. For most viscous flow simulations, however, it is difficult to generate such a mesh in advance. In this paper, a boundary-layer adaptivity method is developed for the efficient computation of steady viscous flows. This method turns the problem of determining the location of the mesh nodes into a set of equations that are solved simultaneously with the flow equations. The mesh equations are designed so that the boundary-layer mesh adapts to the viscous layers as the flow solver marches toward the converged solution. Extensive numerical experiments are presented to demonstrate the performance of the method. A solution algorithm using Hamiltonian paths and strand grids is presented for turbulent flows and unsteady flow calculations around representative geometries initially consisting of a mixed-element unstructured surface mesh. Line-based reconstruction schemes and approximate factorization techniques are robustly implemented on unstructured grids. The Baldwin-Lomax and Spalart-Allmaras turbulence models are integrated into the present method for both two- and three-dimensional flows, and the predicted aerodynamic flows are validated by comparing against those obtained from established solvers and/or experiments. In addition, time-accurate methods with dual-time-stepping strategies are explored to predict flows over canonical time-dependent problems. 
It is observed that various high-order reconstruction schemes can be used on "lines" constructed from purely unstructured grids in the current framework, with good spatial and temporal accuracy. A novel procedure for the Riemann solver flux calculation is proposed in this paper. With this simple normal velocity reconstruction procedure, all of the commonly used flux solvers (such as Godunov, Roe, Harten-Lax-van Leer-Contact, Advection-Upstream-Splitting-Method, etc.) turn out to be carbuncle-free and shock-stable. The normal velocity reconstruction procedure is done by a linear reconstruction of the cell interface normal velocity with the transverse neighbor cells, in consideration of the information transport in the transverse direction, which is neglected in the conventional finite volume/difference method. Some typical cases are performed to show that, when the normal velocity reconstruction procedure is used, various schemes (e.g., Roe) become carbuncle-free and shock-stable. In addition, the normal velocity reconstruction procedure has no influence on the contact-preserving property of the original flux solvers, and it adds very little computational cost. The mechanism of the normal velocity reconstruction procedure is also analyzed by a matrix-based stability method, and the results indicate that the normal velocity reconstruction procedure effectively reduces the system's positive eigenvalues that lead to shock instability. This work investigates the self-excited shock-wave oscillations in a three-dimensional planar overexpanded nozzle turbulent flow by means of detached-eddy simulations. Time-resolved wall pressure measurements are used as primary diagnostics. The statistical analysis reveals that the shock unsteadiness has common features, in terms of the standard deviation of the pressure fluctuations, with other classical shock-wave/boundary-layer interactions, like compression ramps and incident shocks on a flat plate. 
The Fourier transform and the continuous wavelet transform are used to conduct the spectral analysis. The results of the former indicate that the pressure in the shock region is characterized by a broad low-frequency content, without any resonant tone. The wavelet analysis, which is well suited to the study of nonstationary processes, reveals that the pressure signal is characterized by an amplitude and a frequency modulation in time. Focus boom occurs when a flight vehicle accelerates or maneuvers at supersonic speed. Its overpressure is typically more than three times greater than that of a cruise boom. As a result, future supersonic transports are likely to face restrictions on their flight conditions. To investigate ways to alleviate this problem, this paper presents the effects of the focusing of several sonic boom signatures. So-called low-boom waveforms have promising characteristics, not only in cruise but also in transition flight phases, including acceleration from Mach 1. Computations of the focused waveform for several accelerations at a constant altitude reveal that the low-boom ramp and flattop waveforms show strong reductions of the peak overpressure and perceived level measured on the ground track of the aircraft. Acceleration strongly impacts the size of the noise exposure footprint of the focus boom, whereas the peak overpressure and the metric are not sensitive to it. Also shown is the importance of applying the low-boom concept not only to the front but also to the rear section of the boom signature. Thermomechanical concepts and modeling are used to describe the response of inert and reactive gases to transient, spatially resolved thermal energy deposition. 
The ultimate goal is to establish the cause-effect relationship between combustion-generated energy deposition and the mechanical disturbances responsible for operationally observed pressure oscillations in liquid-propellant rocket engine combustion chambers, as well as to identify physical processes that convert thermal energy to kinetic energy. Asymptotic formulations of the nondimensional transient conservation equations for both inert and reactive gases are used to identify nondimensional parameters that characterize the fundamental physics occurring as the gas responds to localized heating. The characteristics of the responses depend upon the magnitudes of the suite of parameters. Some are described by hyperbolic partial differential equations; others involve either nearly constant density or nearly isobaric phenomena. Thermomechanical concepts are used to explain how initially imposed small pressure, density, temperature, and velocity disturbances can be the sources of a thermal response that evolves to relatively larger thermomechanical disturbances. The competition between localized, spatially distributed chemical energy addition from a high-activation-energy, one-step Arrhenius reaction and compressibility effects associated with localized gas compression/expansion is the driver for diverse outcomes. Sufficiently robust thermal energy addition can cause a thermal explosion after an induction time period, followed by relatively large changes in the thermodynamic variables and induced velocity (instability) on a time scale exponentially short compared to the induction time. The propellant sloshing problem is of increasing concern in aerospace engineering. Computational-fluid-dynamics simulations have been carried out to predict three-dimensional large-amplitude liquid sloshing in spherical tanks. Basically, an arbitrary Lagrangian-Eulerian method is followed. 
The main challenges are tracking the motion of the contact line and free surface, defining the nodal velocities on the space-curved wall boundary, and maintaining a reasonable computational mesh at the same time. A novel mesh moving strategy is presented to update the mesh and, meanwhile, to suppress mesh distortion. The mesh nodes on the free surface and contact line are restricted to move along prescribed orbits, and the nodes on the container wall are moved by an algebraic mesh moving algorithm. The finite element method combined with the characteristic-based split algorithm is adopted. The numerical results are compared to analytical and published experimental results for validation, and good agreement is observed. One of the most important branches of applied mechanics is the theory of plates, defined as plane structural elements whose thickness is very small when compared to the two planar dimensions. There is an abundance of plate models in the literature owing to advantages such as reduced computational effort and a simpler, yet elegant, resulting mathematical formulation. Recently, there has been a steady growth of interest in modeling materials with microstructure that exhibit length-scale-dependent behavior, generally known as Cosserat elastic materials. Traditional plate models derived from classical elasticity theory, such as the Reissner-Mindlin type, are incapable of accounting for such length-scale effects, which can only be predicted when one starts from a higher-order elasticity theory. The objective of this work is the formulation of a theory of Cosserat elastic plates. The mathematical foundation of the approach used is the variational asymptotic method, a powerful tool used to construct asymptotically correct reduced-dimensional models. Unlike existing Cosserat plate models in the literature, the variational asymptotic method allows for a plate formulation that is free of ad hoc assumptions regarding the kinematics. 
The result is a systematic derivation of the two-dimensional constitutive relations and a set of geometrically exact, fully intrinsic equations governing the motion of a plate. An important consequence is the extraction of the so-called drilling stiffness associated with the drilling degree of freedom. This stiffness cannot be extracted from classical elasticity theory and is therefore only associated with higher-order elasticity theories. The present approach connects the two-dimensional Cosserat plate theory with a three-dimensional elasticity theory, in stark contrast to the usual approach of regarding the Cosserat plate theory as phenomenological and thus disconnected from the three-dimensional world. Probabilistic analyses allow the stochastic distribution of an output variable (e.g., the buckling load of a structure) to be predicted based on the stochastic distribution of input parameters (e.g., material and geometric properties). In the probabilistic analysis of composite structures, one important quantity that can be subject to scatter is the ply or fiber orientation of the layers. Laminates with a large number of plies lead to a large number of random variables, which makes the probabilistic analysis very time consuming. Lamination parameters allow any ply layup to be described by a maximum of 12 parameters. They have therefore been used for the design optimization of thick laminates. In the current paper, a two-step procedure is considered for using lamination parameters in probabilistic analyses of composite structures with many plies. In the first step, the stochastic distribution of the lamination parameters is determined. In the second step, the actual probabilistic analysis is performed. A closed-form solution for the stochastic moments of lamination parameters is presented. Furthermore, the paper discusses under which circumstances lamination parameters are approximately normally distributed, which allows the efficiency of both steps to be increased. 
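The lamination parameters discussed above are trigonometric averages of the ply angles, so both their nominal values and their statistical moments under ply-angle scatter are straightforward to compute. The following is a minimal sketch under assumed conventions (in-plane parameters only; the quasi-isotropic layup, the Gaussian misalignment model, and the 2-degree standard deviation are illustrative assumptions, not the paper's data):

```python
import math
import random

# Hedged sketch: the four in-plane lamination parameters of a laminate,
#   V1 = (1/N) sum cos(2*theta_k),  V2 = (1/N) sum sin(2*theta_k),
#   V3 = (1/N) sum cos(4*theta_k),  V4 = (1/N) sum sin(4*theta_k),
# and a small Monte Carlo estimate of their scatter when each ply angle
# carries an independent Gaussian misalignment (illustrative model).

def in_plane_lamination_parameters(angles_deg):
    """Return (V1, V2, V3, V4) for a list of ply angles in degrees."""
    t = [math.radians(a) for a in angles_deg]
    n = len(t)
    return tuple(sum(f(m * x) for x in t) / n
                 for m, f in ((2, math.cos), (2, math.sin),
                              (4, math.cos), (4, math.sin)))

def monte_carlo_moments(nominal_deg, sigma_deg=2.0, samples=5000, seed=0):
    """Mean and standard deviation of V1 under per-ply angle scatter."""
    rng = random.Random(seed)
    v1 = []
    for _ in range(samples):
        perturbed = [a + rng.gauss(0.0, sigma_deg) for a in nominal_deg]
        v1.append(in_plane_lamination_parameters(perturbed)[0])
    mean = sum(v1) / samples
    var = sum((x - mean) ** 2 for x in v1) / (samples - 1)
    return mean, math.sqrt(var)

layup = [45, -45, 0, 90, 90, 0, -45, 45]      # assumed quasi-isotropic layup
print(in_plane_lamination_parameters(layup))  # all four vanish for this layup
print(monte_carlo_moments(layup))
```

Because each parameter is a sum of bounded per-ply terms, its distribution tends toward a Gaussian as the ply count grows, which is the normality property the two-step procedure above exploits.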
In this study, we investigate the enhancement of mechanical properties that shallow-biangle, thin-ply laminates bring to fiber-reinforced polymer composites. Coupon- and structural-level tests are conducted along with numerical simulations. According to the coupon tests, we find that shallow-biangle fibers and thin plies can increase the axial stiffness of laminates significantly at the cost of a relatively small decrease in their transverse and shear moduli. For the structural tests, we fabricate composite wing structures using an out-of-autoclave vacuum-assisted resin transfer molding process. By conducting static bending tests, we show superior structural performance of the wing structure that employs shallow-biangle fibers and thin-ply fabric, compared to those that use conventional fiber angles and thick plies. Shallow-biangle and thin-ply technologies can open new routes to designing composite structures with improved stiffness and strength with fast and cost-effective fabrication processes. The natural frequencies of a hermetic capsule consisting of a circular cylinder closed with hemispheroidal caps at both ends and having variable thickness are determined by the Ritz method using a three-dimensional analysis. In traditional shell analysis, by contrast, mathematically two-dimensional thin-shell theories or higher-order thick-shell theories, which make very limiting assumptions about the displacement variation through the shell thickness, have usually been applied. Although most researchers have used three-dimensional shell coordinates that are normal and tangent to the shell midsurface, the present analysis is based upon circular cylindrical coordinates. In the Ritz method, the Legendre polynomials, which are mathematically orthonormal, are used as admissible functions instead of ordinary simple algebraic polynomials. 
The potential and kinetic energies of the hermetic capsule are formulated, and upper bound values of the frequencies are obtained by minimizing the frequencies. As the degree of the Legendre polynomials is increased, frequencies converge to the exact values. Convergence to four-digit exactitude is demonstrated for the first five frequencies. The frequencies from the present three-dimensional method are compared with those from other three-dimensional approaches and two-dimensional shell theories by previous researchers. The present three-dimensional analysis is applicable to very thick shells as well as thin shells. Pain is personal, subjective, and best treated when the patient's experience is fully understood. Hospitalization contributes to the physical and psychological complications of acute and chronic pain experienced by patients with inflammatory bowel disease (IBD). The purpose of this qualitative phenomenological study was to develop an understanding of the unique experience of pain in hospitalized patients with an admitting diagnosis of IBD and related care or surgery. Following institutional review board approval, purposeful sampling was used to recruit 16 patients (11 female, 5 male, mean age 41.8 years) from two 36-bed colorectal units of a large academic medical center in the Midwest. Individual, audio-recorded interviews were conducted by a researcher at each participant's bedside. Recordings and transcripts were systematically reviewed by the research team using Van Manen's approach to qualitative analysis. Subsequently, 5 major themes were identified among the data: feeling discredited and misunderstood, desire to dispel the stigma, frustration with constant pain, need for caregiver knowledge and understanding, and nurse as connector between patient and physician. Hospitalized patients with IBD have common issues with pain care. Nurses caring for them can provide better pain management when they understand these issues/themes. 
Further research into the themes discovered here is recommended. Bowel management is a concern in patients with spina bifida. We evaluated the status of bowel management in children with spina bifida (SB) and the effects on quality of life (QoL) of children and their caregivers. Data were collected from 173 children with SB between January and June 2011, whose bowel management status and QoL were assessed using a self-administered questionnaire. Of the 173 children, 38 (22.0%) reported normal defecation, 73 (42.2%) reported constipation only, and 62 (35.8%) reported fecal incontinence with/without constipation. For defecation, 59 children (34.1%) used digital stimulation or manual extraction, 28 (16.2%) used suppositories or enemas, 35 (20.3%) used laxatives, 4 (2.3%) used an antegrade continence enema, and 3 (1.7%) used transanal irrigation. There were significant differences in QoL, depending on defecation symptoms. Children with fecal incontinence and their caregivers had difficulties in travel and socialization (p < .0001), caregivers' emotions (p < .0001), family relationships (p < .0001), and finances (p < .0001). Constipation and fecal incontinence affect the QoL of children with SB and their caregivers. Therefore, more attention should be paid to bowel problems, and help should be provided to children and their caregivers to improve QoL. The purpose of this article was to determine whether scripted pre-procedural fall risk patient education and nurses' intention to assist patients after receiving sedation improve receptiveness to nursing assistance during recovery and decrease fall risk in an outpatient endoscopy suite. We prospectively identified high fall risk patients using the following criteria: (1) use of an assistive device, (2) a history of two or more falls within the last year, (3) a fall-related injury within the last year, (4) age greater than 85 years, or (5) nursing judgment of high fall risk. 
Using a scripted dialogue, nurses educated high-risk patients about their fall risk and the nurses' intent to assist them to and in the bathroom. Documentation of patient education, script use, and assistance was monitored. Over 24 weeks, 892 endoscopy patients were identified as high fall risk; 790 (88.5%) accepted post-procedural assistance. Documentation of assistance significantly increased from 33% to 100%. The proportion of patients receiving education and post-procedural assistance increased from 27.9% to 100% at week 24. No patient falls occurred in the 12 months following implementation among patients identified as high fall risk. Scripted pre-procedural fall risk education increases patient awareness of and receptiveness to assistance and can lead to decreased fall rates. Demand for colonoscopy exceeds capacity in the Veterans Health Administration and the private sector. A small number of innovative Veterans Affairs and private sector facilities have created colonoscopy-training fellowships for nurse practitioners and physician assistants (nonphysicians). Additionally, a gastroenterology community of practice might provide knowledge-sharing and professional networking opportunities for nonphysician colonoscopists, based on an assessment of their need for professional activities. A critical appraisal of related literature pointed out key motivational and structural elements of communities of practice. The survey draft was reviewed by content experts and piloted by four nurse practitioner colonoscopists. Using snowball sampling, the survey was sent to nonphysician endoscopists to capture training experiences, interest in membership, and preferences for the structure and delivery of a community of practice. Although the sample size was small (N = 7), results validated similar training experiences and confirmed strong interest in launching a gastroenterology community of practice. 
The purpose of this study was to assess the effectiveness of a dietary support program for patients with Crohn disease based on behavior analysis and designed to maintain remission and improve satisfaction with meals. The core of the program consisted of self-monitoring by patients and evaluation by a healthcare professional. The 32-week program consisted of a 4-week baseline period, a 20-week intervention period, and an 8-week follow-up period. Participants filled out questionnaires measuring outcomes every 4 weeks, for a total of nine questionnaires per patient. Of the 13 patients who started the program, 11 completed the study. Of these, nine showed increased frequency of testing foods during the intervention period, with seven maintaining testing during the follow-up period. No patient experienced a worsening of health conditions. Of the 11 patients who completed the program, seven reported increased satisfaction with meals. In conclusion, this program helped increase the frequency of testing foods in patients with Crohn disease, while maintaining health conditions and improving satisfaction with meals. Hepatitis B virus (HBV) antiviral therapies potentially suppress HBV viral load to an undetectable level, reducing the risk of progressive liver disease and the development of HBV-related hepatocellular carcinoma. Adherence to antiviral therapies is imperative to achieve and maintain viral suppression. To date, there has been limited research on adherence to HBV therapies. Our study aimed to explore factors influencing adherence to antiviral therapy. A total of 29 participants consented to in-depth qualitative interviews at three outpatient clinics in Sydney, New South Wales, Australia. Interviews were digitally recorded and transcribed. Transcripts were initially classified as adherent or non-adherent, and thematic analysis was used to identify dominant themes. Adherent behavior was reported by 59% (n = 17) of participants. 
Several themes influenced adherence, including routine, fear of HBV-related disease progression, clinician-patient communication, treatment knowledge, and forgetfulness. To our knowledge, this is the first qualitative study to explore adherence to HBV antiviral therapy. An interplay of several dominant themes emerged from our data, including fear of chronic HBV disease progression, clinician-patient communication, treatment knowledge, routine, and forgetfulness. Study findings have the potential to change nursing clinical practice, especially the way nurses and other clinicians target key HBV treatment messages and education while monitoring adherence. Introduction: Despite improvements in revascularization, major amputation remains a significant part of the case-mix in vascular surgical units. These patients tend to be elderly with complex pathology, resulting in poor outcomes and longer lengths of stay (LOS). Aim: This series review provides a description of the patient complexities and outcomes in an Australian cohort undergoing major lower limb amputation for peripheral arterial disease. Method: Medical records coded for major amputation between July 2012 and June 2013 in an Australian government-funded tertiary hospital were retrospectively reviewed and descriptively analyzed. Findings: Twenty-five patients had 29 major amputations, including four conversions from below-knee to above-knee. Seventeen had multiple vascular procedures before amputation. The average LOS exceeded the national target, and there was substantial morbidity and 30-day mortality. Conclusion: Major amputation continues to present challenges because of patient frailty and the high rate of complications. These issues need to be considered in a robust care planning framework that includes consideration of cognitive decline and other markers of frailty. Opportunities to optimize the physical condition of these patients and to reduce delays in proceeding to surgery require further investigation. 
It is not uncommon that patients with peripheral arterial disease (PAD) need to undergo a lower limb amputation, with or without previous revascularization attempts. Despite that, the patient's experience of the amputation has been scarcely studied. The aim of this qualitative study was to describe the patient's experience of amputation due to PAD. Thirteen interviews were conducted with vascular patients who had undergone a lower limb amputation at tibial, knee, or femoral level. Data were analyzed with content analysis. Our findings of the patients' experiences during the amputation process resulted in three themes with additional time sequences: the decision phase "From irreversible problem to amputation decision", the surgical phase "A feeling of being in a vacuum", and the rehabilitation phase "Adaptation to the new life". One main finding was that the patients felt abandoned during the surgical period. Despite that, most of the participants were satisfied with the decision; some of them even regretted that they had not undergone an amputation earlier in the process. It is important for the patient's well-being to develop a partnership with the surgeon to increase a feeling of participating in the care. Vascular patients need better information on lower limb amputation and its consequences so as to be better prepared for the whole process. To increase the patient's quality of life and reduce unnecessary suffering, amputation may be presented earlier in the process as a valuable treatment option. The aim of this retrospective study was to assess the hospitalizations of patients with or without diabetes mellitus (DM) who underwent nontraumatic lower extremity amputation (NLEA) with regard to demographic and hospitalization-related variables. In developing countries, patients with diabetes mellitus with lower extremity complications occupy a high proportion of hospital beds. 
The rate of nontraumatic lower extremity amputations is an important indicator for assessing the effectiveness of efforts to reduce chronic complications related to the diabetic foot. A total of 2,296 hospital admissions were analyzed with regard to gender, age, length of stay, type of financing, origin, diagnosis, number of hospital admissions and readmissions, and hospitalization outcome from 2001 to 2008 in a municipality of Southeast Brazil. The association between the independent variables and the number of hospitalizations of patients with or without diabetes was assessed using chi-square tests for gender, type of financing, and hospitalization outcome and using the Mann-Whitney U test for age and length of stay. A total of 58% were patients without diabetes, 62.6% were male, 74.5% were treated at a public health care service, and 7.6% died. The mean age was 62.7 years, the mean length of stay was 9.5 days, and the mean number of readmissions was 2.29. The length of stay was longer (P < .001), and the number of men was lower (P = .001), among hospitalized patients with diabetes compared with patients without diabetes. The number of hospitalizations related to NLEA increased among patients with diabetes but decreased among those without diabetes between 2001 and 2008. The prevalence of abdominal aortic aneurysm (AAA) is reported to be 2.2%-8% among men >65 years. During recent years, screening programs have been developed to detect AAA, prevent ruptures, and thereby save lives. Most men with the diagnosis are monitored conservatively with regular reviews. The objective of the study was to describe how men diagnosed with an abdominal aortic aneurysm <55 mm discovered by screening experience the process and diagnosis from invitation to 1 year after screening. A total of eleven 65-year-old men were included in three focus groups conducted at a university hospital in Sweden. 
These were analyzed qualitatively using manifest and latent content analysis. The experience of the screening process and of having an abdominal aortic aneurysm in a long-term perspective revealed three categories: "trusting the health care system", emphasizing the need for continual follow-ups to ensure feelings of security; "the importance of size", meaning that the measurement was abstract and hard to understand; and "coping with the knowledge of abdominal aortic aneurysm", denoting how everyday life was based mostly on beliefs, since a majority lacked understanding about the meaning of the condition. The men wanted regular surveillance and surrendered to the health care system, but simultaneously experienced a lack of support from it. Knowing the size of the aorta was important. The men expressed insecurity about how lifestyle might influence the abdominal aortic aneurysm and what they could do to improve their health condition. This highlights the importance of communicating knowledge about the abdominal aortic aneurysm to promote men's feelings of security and of giving space to discuss the size of the aneurysm and lifestyle changes. Cardiovascular diseases are the leading cause of death globally, and nurses have a crucial role in informing cardiovascular disease patients about their diseases. The aim of the present study was to identify the effect of activities of daily living on the self-care agency of patients in a cardiovascular surgery clinic. This descriptive study was conducted between June 2014 and January 2015 with 180 patients hospitalized in the cardiovascular surgery clinic of a university hospital in the province of Erzincan in the Eastern region of Turkey. The data of the study were gathered using a descriptive form designed by the authors, the Katz index of activities of daily living scale (ADLS), the Lawton-Brody instrumental activities of daily living scale (IADLS), and the self-care agency scale (SCAS). 
The data were processed using computer software and assessed using percentages, means, the t test for independent groups, one-way analysis of variance, and correlation analysis. It was found that 50.6% of the patients were >= 65 years, 66.1% were male, 46.1% perceived their health status as moderate, and 35.6% had previously had heart attacks. The patients' mean ADLS score was 16.39 +/- 2.30, their mean IADLS score was 19.23 +/- 4.16, and their mean SCAS score was 92.11 +/- 18.81. The patients' education level and perceived health were found to affect their mean SCAS scores. In addition, there was a positive correlation between the patients' ADLS and IADLS scores and their SCAS score: the more independent patients were in activities of daily living, the higher their self-care agency. A medical adhesive can be defined as a product used to secure a device (i.e., tape, dressing, catheter, electrode, and ostomy pouch) to the skin. Skin injury related to medical adhesive usage occurs across all care settings, with medical adhesive-related skin injuries (MARSIs) playing a significant role in patient safety. The purpose of this descriptive prospective study was to assess for MARSI all adult patients with wounds seen by the CWOCN NP in the vascular clinic over a 3-month period. One hundred twenty patients comprising a total of 207 visits were seen by the CWOCN NP over the 3-month time frame. Seven patients presented to the clinic from home with MARSI, for a frequency of 5.8%. There were four males and three females with ages ranging from 52 to 83 years and a mean age of 67.7 years. All patients had a diagnosis of peripheral vascular disease with MARSI present on the lower extremities. Six of the seven MARSI cases were related to having paper tape removed from the periwound skin at home, resulting in epidermal stripping either by the home health care professional (N = 4) or by the patients themselves (N = 2). 
The other MARSI was related to a tension blister from Steri-Strips applied with benzoin by a health care professional on a lower leg incision. Patients were unclear about when these injuries had occurred and often remarked that they thought tape injuries were unpreventable. There is a need for additional research examining MARSI frequency across care settings, such as in the vascular population, to identify those at risk and then implement measures to prevent these injuries. Programmed cell death, or apoptosis, of infected host cells is an important defense mechanism in response to viral infections. This process is regulated by proapoptotic and prosurvival members of the B-cell lymphoma 2 (Bcl-2) protein family. To counter premature death of a virus-infected cell, poxviruses use a range of different molecular strategies, including the mimicry of prosurvival Bcl-2 proteins. One such viral prosurvival protein is the fowlpox virus protein FPV039, which is a potent apoptosis inhibitor, but the precise molecular mechanism by which FPV039 inhibits apoptosis is unknown. To understand how fowlpox virus inhibits apoptosis, we examined FPV039 using isothermal titration calorimetry, small-angle X-ray scattering, and X-ray crystallography. Here, we report that the fowlpox virus prosurvival protein FPV039 promiscuously binds to cellular proapoptotic Bcl-2 proteins, engaging all major proapoptotic Bcl-2 family members. Unlike other viral Bcl-2 proteins identified to date, FPV039 engaged cellular proapoptotic Bcl-2 proteins with affinities comparable to those of their endogenous prosurvival counterparts. Structural studies revealed that FPV039 adopts the conserved Bcl-2 fold observed in cellular prosurvival Bcl-2 proteins and closely mimics the structure of the prosurvival Bcl-2 family protein Mcl-1. 
Our findings suggest that FPV039 is a pan-Bcl-2 protein inhibitor that can engage all host BH3-only proteins, as well as the Bcl-2-associated X, apoptosis regulator (Bax) and Bcl-2 antagonist/killer (Bak) proteins, to inhibit premature apoptosis of an infected host cell. This work therefore provides a mechanistic platform to better understand FPV039-mediated apoptosis inhibition. Histone modifications, including lysine methylation, are epigenetic marks that influence many biological pathways. Accordingly, many methyltransferases have critical roles in various biological processes, and their dysregulation is often associated with cancer. However, the biological functions and regulation of many methyltransferases are unclear. Here, we report that a human homolog of the methyltransferase SET (SU(var), enhancer of zeste, and trithorax) domain containing 3 (SETD3) is cell cycle-regulated; SETD3 protein levels peaked in S phase and were lowest in M phase. We found that the beta-isoform of the tumor suppressor F-box and WD repeat domain containing 7 (FBXW7 beta) specifically mediates SETD3 degradation. Aligning the SETD3 sequence with those of well-known FBXW7 substrates, we identified six potential non-canonical Cdc4 phosphodegrons (CPDs); one of them, CPD1, is primarily phosphorylated by the kinase glycogen synthase kinase 3 (GSK3 beta), which is required for FBXW7 beta-mediated recognition and degradation. Moreover, depletion or inhibition of GSK3 beta or FBXW7 beta resulted in elevated SETD3 levels. Mutations of the phosphorylated residues in CPD1 of SETD3 abolished the interaction between FBXW7 beta and SETD3 and prevented SETD3 degradation. Our data further indicated that SETD3 levels positively correlated with cell proliferation of liver cancer cells and liver tumorigenesis in a xenograft mouse model, and that overexpression of FBXW7 beta counteracts SETD3's tumorigenic role. 
We also show that SETD3 levels correlate with cancer malignancy: SETD3 levels in 54 liver tumors were 2-fold higher than those in the adjacent tissues. Collectively, these data indicate that a GSK3 beta-FBXW7 beta-dependent mechanism controls SETD3 protein levels during the cell cycle and attenuates its oncogenic role in liver tumorigenesis. The accumulation of alpha-synuclein (alpha-syn) fibrils in neuronal inclusions is the defining pathological process in Parkinson's disease (PD). A pathogenic role for alpha-syn fibril accumulation is supported by the identification of dominantly inherited alpha-syn (SNCA) gene mutations in rare cases of familial PD. Fibril formation involves a spontaneous nucleation event in which soluble alpha-syn monomers associate to form seeds, followed by fibril growth during which monomeric alpha-syn molecules sequentially associate with existing seeds. To better investigate this process, we developed sensitive assays that use the fluorescein arsenical dye FlAsH (fluorescein arsenical hairpin binder) to detect soluble oligomers and mature fibrils formed from recombinant alpha-syn protein containing an N-terminal bicysteine tag (C2-alpha-syn). Using seed growth by monomer association (SeGMA) assays to measure fibril growth over 3 h in the presence of C2-alpha-syn monomer, we observed that some familial PD-associated alpha-syn mutations (i.e., H50Q and A53T) greatly increased growth rates, whereas others (E46K, A30P, and G51D) decreased growth rates. Experiments with wild-type seeds extended by mutant monomer, and vice versa, revealed that single-amino acid differences between seed and monomer proteins consistently decreased growth rates. 
These results demonstrate that alpha-syn monomer association during fibril growth is a highly ordered process that can be disrupted by misalignment of individual amino acids and that only a subset of familial-PD mutations causes fibril accumulation through increased fibril growth rates. The SeGMA assays reported herein can be utilized to further elucidate structural requirements of alpha-syn fibril growth and to identify growth inhibitors as a potential therapeutic approach in PD. Obesity and its associated complications such as insulin resistance and non-alcoholic fatty liver disease are reaching epidemic proportions. In mice, the TGF-beta superfamily is implicated in the regulation of white and brown adipose tissue differentiation. The kielin/chordin-like protein (KCP) is a secreted regulator of the TGF-beta superfamily pathways that can inhibit both TGF-beta and activin signals while enhancing bone morphogenetic protein (BMP) signaling. However, KCP's effects on metabolism and obesity have not been studied in animal models. Therefore, we examined the effects of KCP loss or gain of function in mice that were maintained on either a regular or a high-fat diet. KCP loss sensitized the mice to obesity and associated complications such as glucose intolerance and adipose tissue inflammation and fibrosis. In contrast, transgenic mice that expressed KCP in the kidney, liver, and adipose tissues were resistant to developing high-fat diet-induced obesity and had significantly reduced white adipose tissue. Moreover, KCP overexpression shifted the pattern of SMAD signaling in vivo, increasing the levels of phospho (P)-SMAD1 and decreasing P-SMAD3. Adipocytes in culture showed a cell-autonomous effect in response to added TGF-beta 1 or BMP7. Metabolic profiling indicated increased energy expenditure in KCP-overexpressing mice and reduced expenditure in the KCP mutants with no effect on food intake or activity. 
These findings demonstrate that shifting the TGF-beta superfamily signaling with a secreted protein can alter the physiology and thermogenic properties of adipose tissue to reduce obesity even when mice are fed a high-fat diet. Thiol isomerases such as protein-disulfide isomerase (PDI) direct disulfide rearrangements required for proper folding of nascent proteins synthesized in the endoplasmic reticulum. Identifying PDI substrates is challenging because PDI catalyzes conformational changes that cannot be easily monitored (e.g., compared with proteolytic cleavage or amino acid phosphorylation); PDI has multiple substrates; and it can catalyze either oxidation, reduction, or isomerization of substrates. Kinetic-based substrate trapping, wherein the active site motif CGHC is modified to CGHA to stabilize a PDI-substrate intermediate, is effective in identifying some substrates. A limitation of this approach, however, is that it captures only substrates that are reduced by PDI, whereas many substrates are oxidized by PDI. By manipulating the highly conserved -GH- residues in the CGHC active site of PDI, we created PDI variants with a slowed reaction rate toward substrates. The prolonged intermediate state allowed us to identify protein substrates that have biased affinities for either oxidation or reduction by PDI. Because extracellular PDI is critical for thrombus formation but its extracellular substrates are not known, we evaluated the ability of these bidirectional trapping PDI variants to trap proteins released from platelets and on the platelet surface. Trapped proteins were identified by mass spectrometry. Of the trapped substrate proteins identified, five (cathepsin G, glutaredoxin-1, thioredoxin, GPIb, and fibrinogen) showed a bias for oxidation, whereas annexin V, heparanase, ERp57, kallikrein-14, serpin B6, tetranectin, and collagen VI showed a bias for reduction. 
These bidirectional trapping variants will enable more comprehensive identification of thiol isomerase substrates and better elucidation of their cellular functions. Pathogenic Acinetobacter species, including Acinetobacter baumannii and Acinetobacter nosocomialis, are opportunistic human pathogens of increasing relevance worldwide. Although their mechanisms of drug resistance are well studied, the virulence factors that govern Acinetobacter pathogenesis are incompletely characterized. Here we define the complete secretome of A. nosocomialis strain M2 in minimal medium and demonstrate that pathogenic Acinetobacter species produce both a functional type I secretion system (T1SS) and a contact-dependent inhibition (CDI) system. Using bioinformatics, quantitative proteomics, and mutational analyses, we show that Acinetobacter uses its T1SS to export two putative effectors: a repeats-in-toxin (RTX) serralysin-like toxin and the biofilm-associated protein (Bap). Moreover, we found that mutation of any component of the T1SS abrogated type VI secretion activity under nutrient-limited conditions, indicating a previously unrecognized cross-talk between these two systems. We also demonstrate that the Acinetobacter T1SS is required for biofilm formation. Last, we show that both A. nosocomialis and A. baumannii produce functioning CDI systems that mediate growth inhibition of sister cells lacking the cognate immunity protein. The Acinetobacter CDI systems are widely distributed across pathogenic Acinetobacter species, with many A. baumannii isolates harboring two distinct CDI systems. Collectively, these data demonstrate the power of differential, quantitative proteomics approaches to study secreted proteins, define the role of previously uncharacterized protein export systems, and observe cross-talk between secretion systems in the pathobiology of medically relevant Acinetobacter species. 
Hydroxyurea (HU) has a long history of clinical and scientific use as an antiviral, antibacterial, and antitumor agent. It inhibits ribonucleotide reductase and reversibly arrests cells in S phase. However, high concentrations or prolonged treatment with low doses of HU can cause cell lethality. Although the cytotoxicity of HU may significantly contribute to its therapeutic effects, the underlying mechanisms remain poorly understood. We have previously shown that HU can induce cytokinesis arrest in the erg11-1 mutant of fission yeast, which has a partial defect in the biosynthesis of the fungal membrane sterol ergosterol. Here, we report the identification of a new mutant in heme biosynthesis, hem13-1, that is hypersensitive to HU. We found that the HU hypersensitivity of the hem13-1 mutant is caused by oxidative stress and not by replication stress or a defect in the cellular response to replication stress. The mutation is hypomorphic and causes heme deficiency, which likely sensitizes the cells to HU-induced oxidative stress. Because the heme biosynthesis pathway is highly conserved in eukaryotes, this finding, as we show in our separate report, may help to expand the therapeutic spectrum of HU to additional pathological conditions. Hrd1 is the core structural component of a large endoplasmic reticulum membrane-embedded protein complex that coordinates the destruction of folding-defective proteins in the early secretory pathway. Defining the composition, dynamics, and, ultimately, the structure of the Hrd1 complex is a crucial step in understanding the molecular basis of glycoprotein quality control but has been hampered by the lack of suitable techniques to interrogate this complex under native conditions. In this study we used genome editing to generate clonal HEK293 (Hrd1.KI) cells harboring a homozygous insertion of a small tandem affinity tag knocked into the endogenous Hrd1 locus. 
We found that steady-state levels of tagged Hrd1 in these cells are indistinguishable from those of Hrd1 in unmodified cells and that the tagged variant is functional in supporting the degradation of well-characterized luminal and membrane substrates. Analysis of detergent-solubilized Hrd1.KI cells indicates that the composition and stoichiometry of Hrd1 complexes are strongly influenced by Hrd1 expression levels. Analysis of affinity-captured Hrd1 complexes from these cells by size-exclusion chromatography, immunodepletion, and absolute quantification mass spectrometry identified two major high-molecular-mass complexes with distinct sets of interacting proteins and variable stoichiometries, suggesting a hitherto unrecognized heterogeneity in the functional units of Hrd1-mediated protein degradation. 2-Alkylquinolone (2AQ) alkaloids are pharmaceutically and biologically important natural products produced by both bacteria and plants, with a wide range of biological effects, including antibacterial, cytotoxic, anticholinesterase, and quorum-sensing signaling activities. These diverse activities and the occurrence of 2AQs in vastly different phyla have raised much interest in the biosynthesis pathways leading to their production. Previous studies in plants have suggested that type III polyketide synthases (PKSs) might be involved in 2AQ biosynthesis, but this hypothesis is untested. To this end, we cloned two novel type III PKSs, alkyldiketide-CoA synthase (ADS) and alkylquinolone synthase (AQS), from the 2AQ-producing medicinal plant Evodia rutaecarpa (Rutaceae). Functional analyses revealed that the collaboration of ADS and AQS produces 2AQ via condensations of N-methylanthraniloyl-CoA and a fatty acyl-CoA with malonyl-CoA. 
We show that ADS efficiently catalyzes the decarboxylative condensation of malonyl-CoA with a fatty acyl-CoA to produce an alkyldiketide-CoA, whereas AQS specifically catalyzes the decarboxylative condensation of an alkyldiketide acid with N-methylanthraniloyl-CoA to generate the 2AQ scaffold via C-C/C-N bond formations. Remarkably, the ADS and AQS crystal structures at 1.80 and 2.20 angstrom resolutions, respectively, indicated that the unique active-site architecture with Trp-332 and Cys-191 and the novel CoA-binding tunnel with Tyr-215 principally control the substrate and product specificities of ADS and AQS, respectively. These results provide additional insights into the catalytic versatility of the type III PKSs and their functional and evolutionary implications for 2AQ biosynthesis in plants and bacteria. Ribonucleotide reductase (RR) is the rate-limiting enzyme in DNA synthesis, catalyzing the reduction of ribonucleotides to deoxyribonucleotides. During each enzymatic turnover, reduction of the active site disulfide in the catalytic large subunit is performed by a pair of shuttle cysteine residues in its C-terminal tail. Thioredoxin (Trx) and glutaredoxin (Grx) are ubiquitous redox proteins, catalyzing thiol-disulfide exchange reactions. Here, immunohistochemical examination of clinical colorectal cancer (CRC) specimens revealed that human thioredoxin 1 (hTrx1), but not human glutaredoxin 1 (hGrx1), was up-regulated along with the human RR large subunit (RRM1) in cancer tissues, and the expression levels of both proteins were correlated with cancer malignancy stage. Ectopically expressed hTrx1 significantly increased RR activity, DNA synthesis, and cell proliferation and migration. Importantly, inhibition of both hTrx1 and RRM1 produced a synergistic anticancer effect in CRC cells and xenograft mice. Furthermore, hTrx1 rather than hGrx1 was the efficient reductase for RRM1 regeneration. 
We also observed a direct protein-protein interaction between RRM1 and hTrx1 in CRC cells. Interestingly, besides the two known conserved cysteines, a third cysteine (Cys-779) in the RRM1 C terminus was essential for RRM1 regeneration and binding to hTrx1, whereas both Cys-32 and Cys-35 in hTrx1 played a counterpart role. Our findings suggest that the up-regulated RRM1 and hTrx1 in CRC directly interact with each other and promote RR activity, resulting in enhanced DNA synthesis and cancer malignancy. We propose that the RRM1-hTrx1 interaction might be a novel potential therapeutic target for cancer treatment. O-GlcNAcylation is the covalent addition of an O-linked beta-N-acetylglucosamine (O-GlcNAc) sugar moiety to hydroxyl groups of serine/threonine residues of cytosolic and nuclear proteins. O-GlcNAcylation, analogous to phosphorylation, plays critical roles in gene expression through direct modification of transcription factors, such as NF-kappa B. Aberrantly increased NF-kappa B O-GlcNAcylation has been linked to NF-kappa B constitutive activation and cancer development. Therefore, it is of great biological and clinical significance to dissect the molecular mechanisms that tune NF-kappa B activity. Recently, we and others have shown that O-GlcNAcylation affects the phosphorylation and acetylation of the NF-kappa B subunit p65/RelA. However, the mechanism of how O-GlcNAcylation activates NF-kappa B signaling through phosphorylation and acetylation is not fully understood. In this study, we mapped O-GlcNAcylation sites of p65 at Thr-305, Ser-319, Ser-337, Thr-352, and Ser-374. O-GlcNAcylation of p65 at Thr-305 and Ser-319 increased CREB-binding protein (CBP)/p300-dependent activating acetylation of p65 at Lys-310, contributing to NF-kappa B transcriptional activation. 
Moreover, elevation of O-GlcNAcylation by overexpression of OGT increased the expression of p300, IKK alpha, and IKK beta and promoted IKK-mediated activating phosphorylation of p65 at Ser-536, contributing to NF-kappa B activation. In addition, we also identified phosphorylation of p65 at Thr-308, which might impair the O-GlcNAcylation of p65 at Thr-305. These results indicate mechanisms through which both non-pathological and oncogenic O-GlcNAcylation regulate NF-kappa B signaling through interplay with phosphorylation and acetylation. Calcium-activated chloride channels (CaCCs) are key players in transepithelial ion transport and fluid secretion, smooth muscle constriction, neuronal excitability, and cell proliferation. The CaCC regulator 1 (CLCA1) modulates the activity of the CaCC TMEM16A/Anoctamin 1 (ANO1) by directly engaging the channel at the cell surface, but the exact mechanism is unknown. Here we demonstrate that the von Willebrand factor type A (VWA) domain within the cleaved CLCA1 N-terminal fragment is necessary and sufficient for this interaction. TMEM16A protein levels on the cell surface were increased in HEK293T cells transfected with CLCA1 constructs containing the VWA domain, and TMEM16A-like currents were activated. Similar currents were evoked in cells exposed to the secreted VWA domain alone, and these currents were significantly knocked down by TMEM16A siRNA. VWA-dependent TMEM16A modulation was not modified by the S357N mutation, a VWA domain polymorphism associated with more severe meconium ileus in cystic fibrosis patients. VWA-activated currents were significantly reduced in the absence of extracellular Mg2+, and mutation of residues within the conserved metal ion-dependent adhesion site motif impaired the ability of VWA to potentiate TMEM16A activity, suggesting that CLCA1-TMEM16A interactions are Mg2+- and metal ion-dependent adhesion site-dependent. 
The increase in TMEM16A activity occurred within minutes of exposure to CLCA1 or after a short treatment with nocodazole, consistent with the hypothesis that CLCA1 stabilizes TMEM16A at the cell surface by preventing its internalization. Our study hints at the therapeutic potential of selective activation of TMEM16A by the CLCA1 VWA domain in loss-of-function chloride channelopathies such as cystic fibrosis. Obesity causes excess fat accumulation in white adipose tissue (WAT) and also in other insulin-responsive organs such as skeletal muscle, increasing the risk for insulin resistance, which can lead to obesity-related metabolic disorders. Peroxisome proliferator-activated receptor-alpha (PPAR alpha) is a master regulator of fatty acid oxidation whose activators are known to improve hyperlipidemia. However, the molecular mechanisms underlying the PPAR alpha activator-mediated reduction in adiposity and improvement of metabolic disorders are largely unknown. In this study we investigated the effects of a PPAR alpha agonist (fenofibrate) on glucose metabolism dysfunction in obese mice. Fenofibrate treatment reduced adiposity and attenuated obesity-induced dysfunctions of glucose metabolism in obese mice fed a high-fat diet. However, fenofibrate treatment did not improve glucose metabolism in lipodystrophic A-Zip/F1 mice, suggesting that adipose tissue is important for the fenofibrate-mediated amelioration of glucose metabolism, although skeletal muscle actions could not be completely excluded. Moreover, we investigated the role of the hepatokine fibroblast growth factor 21 (FGF21), which regulates energy metabolism in adipose tissue. In WAT of WT mice, but not of FGF21-deficient mice, fenofibrate enhanced the expression of genes related to brown adipocyte functions, such as Ucp1, Pgc1a, and Cpt1b. 
Fenofibrate increased energy expenditure and attenuated obesity, whole-body insulin resistance, and adipocyte dysfunctions in WAT in high-fat-diet-fed WT mice but not in FGF21-deficient mice, suggesting that FGF21 is required for the fenofibrate-mediated improvement of whole-body glucose metabolism in obese mice via the amelioration of WAT dysfunctions. Sequential metabolic enzymes in glucose metabolism have long been hypothesized to form multienzyme complexes that regulate glucose flux in living cells. However, it has been challenging to directly observe these complexes and their functional roles in living systems. In this work, we have used wide-field and confocal fluorescence microscopy to investigate the spatial organization of metabolic enzymes participating in glucose metabolism in human cells. We provide compelling evidence that human liver-type phosphofructokinase 1 (PFKL), which catalyzes a bottleneck step of glycolysis, forms various sizes of cytoplasmic clusters in human cancer cells, independent of protein expression levels and of the choice of fluorescent tags. We also report that these PFKL clusters colocalize with other rate-limiting enzymes in both glycolysis and gluconeogenesis, supporting the formation of multienzyme complexes. Subsequent biophysical characterizations with fluorescence recovery after photobleaching and FRET corroborate the formation of multienzyme metabolic complexes in living cells, which appears to be controlled by post-translational acetylation on PFKL. Importantly, quantitative high-content imaging assays indicated that the direction of glucose flux between glycolysis, the pentose phosphate pathway, and serine biosynthesis seems to be spatially regulated by the multienzyme complexes in a cluster-size-dependent manner. Collectively, our results reveal a functionally relevant, multienzyme metabolic complex for glucose metabolism in living human cells. 
The role of mechanosensitive (MS) Ca2+-permeable ion channels in platelets is unclear, despite the importance of shear stress in platelet function and life-threatening thrombus formation. We therefore sought to investigate the expression and functional relevance of MS channels in human platelets. The effect of shear stress on Ca2+ entry in human platelets and Meg-01 megakaryocytic cells loaded with Fluo-3 was examined by confocal microscopy. Cells were attached to glass coverslips within flow chambers that allowed application of physiological and pathological shear stress. Arterial shear (1002.6 s(-1)) induced a sustained increase in [Ca2+](i) in Meg-01 cells and enhanced the frequency of repetitive Ca2+ transients by 80% in platelets. These Ca2+ increases were abrogated by the MS channel inhibitor Grammostola spatulata mechanotoxin 4 (GsMTx-4) or by chelation of extracellular Ca2+. Thrombus formation was studied on collagen-coated surfaces using DiOC(6)-stained platelets. In addition, [Ca2+](i) and functional responses of washed platelet suspensions were studied with Fura-2 and light transmission aggregometry, respectively. Thrombus size was reduced 50% by GsMTx-4, independently of P2X1 receptors. In contrast, GsMTx-4 had no effect on collagen-induced aggregation or on Ca2+ influx via TRPC6 or Orai1 channels and caused only a minor inhibition of P2X1-dependent Ca2+ entry. The Piezo1 agonist Yoda1 potentiated shear-dependent platelet Ca2+ transients by 170%. Piezo1 mRNA transcripts and protein were detected with quantitative RT-PCR and Western blotting, respectively, in both platelets and Meg-01 cells. We conclude that platelets and Meg-01 cells express the MS cation channel Piezo1, which may contribute to Ca2+ entry and thrombus formation under arterial shear. Human leukocyte antigen (HLA)-DQ2.5 (DQA1*05/DQB1*02) is a class-II major histocompatibility complex protein associated with both type 1 diabetes and celiac disease. 
One unusual feature of DQ2.5 is its high class-II-associated invariant chain peptide (CLIP) content. Moreover, HLA-DQ2.5 preferentially binds the non-canonical CLIP2 over the canonical CLIP1. To better understand the structural basis of HLA-DQ2.5's unusual CLIP association characteristics, better insight into the HLA-DQ2.5.CLIP complex structures is required. To this end, we determined the X-ray crystal structures of the HLA-DQ2.5.CLIP1 and HLA-DQ2.5.CLIP2 complexes at 2.73 and 2.20 angstrom, respectively. We found that HLA-DQ2.5 has an unusually large P4 pocket and a positively charged peptide-binding groove that together promote preferential binding of CLIP2 over CLIP1. An alpha 9-alpha 22-alpha 24-alpha 31-beta 86-beta 90 hydrogen bond network located at the bottom of the peptide-binding groove, spanning from the P1 to P4 pockets, renders the residues in this region relatively immobile. This hydrogen bond network, along with a deletion mutation at alpha 53, may lead to HLA-DM insensitivity in HLA-DQ2.5. A molecular dynamics simulation experiment reported here and recent biochemical studies by others support this hypothesis. The diminished HLA-DM sensitivity is the likely reason for the CLIP-rich phenotype of HLA-DQ2.5. Ribonucleotide reductases (RNRs) catalyze the conversion of nucleoside diphosphate substrates (S) to deoxynucleotides, with allosteric effectors (e) controlling their relative ratios and amounts, crucial for the fidelity of DNA replication and repair. Escherichia coli class Ia RNR is composed of alpha and beta subunits that form a transient, active alpha 2 beta 2 complex. The E. coli RNR is rate-limited by S/e-dependent conformational change(s) that trigger the radical initiation step through a pathway spanning 35 angstrom across the subunit (alpha/beta) interface. The weak subunit affinity and complex nucleotide-dependent quaternary structures have precluded a molecular understanding of the kinetic gating mechanism(s) of the RNR machinery. 
Using a docking model of alpha 2 beta 2 created from X-ray structures of alpha and beta and conserved residues from a new subclassification of the E. coli Ia RNR (Iag), we identified and investigated four residues at the alpha/beta interface (Glu(350) and Glu(52) in beta 2 and Arg(329) and Arg(639) in alpha 2) of potential interest in kinetic gating. Mutation of each residue resulted in loss of activity and, with the exception of E52Q-beta 2, weakened subunit affinity. An RNR mutant with the 2,3,5-trifluorotyrosine radical (F3Y122•) replacing the stable Tyr122• in WT-beta 2, a mutation that partly overcomes conformational gating, was placed in the E52Q background. Incubation of this double mutant with His(6)-alpha 2/S/e resulted in an RNR capable of catalyzing pathway-radical formation (Tyr(356)•-beta 2), 0.5 eq of dCDP/F3Y122•, and formation of an alpha 2 beta 2 complex that is isolable in pulldown assays over 2 h. Negative-stain EM images with S/e (GDP/TTP) revealed the uniformity of the alpha 2 beta 2 complex formed. Legionnaires' disease is a severe form of pneumonia caused by the bacterium Legionella pneumophila. L. pneumophila pathogenicity relies on the secretion of more than 300 effector proteins by a type IVb secretion system. Among these Legionella effectors, WipA has been primarily studied because of its dependence on a chaperone complex, IcmSW, for translocation through the secretion system, but its role in pathogenicity has remained unknown. In this study, we present the crystal structure of a large fragment of WipA, WipA435. Surprisingly, this structure revealed a serine/threonine phosphatase fold that unexpectedly targets tyrosine-phosphorylated peptides. The structure also revealed a sequence insertion that folds into an alpha-helical hairpin, the tip of which adopts a canonical coiled-coil structure. 
The purified protein was a dimer whose dimer interface involves interactions between the coiled coil of one WipA molecule and the phosphatase domain of another. Given the ubiquity of protein-protein interactions mediated by coiled-coils, we hypothesize that WipA can thereby transition from a homodimeric state to a heterodimeric state in which the coiled-coil region of WipA is engaged in a protein-protein interaction with a tyrosine-phosphorylated host target. In conclusion, these findings help advance our understanding of the molecular mechanisms of an effector involved in Legionella virulence and may inform approaches to elucidate the function of other effectors. The insulin-like growth factors IGF1 and IGF2 are closely related proteins that are essential for normal growth and development in humans and other species and play critical roles in many physiological and pathophysiological processes. IGF actions are mediated by transmembrane receptors and modulated by IGF-binding proteins. The importance of IGF actions in human physiology is underscored by the rarity of inactivating mutations in their genes and by the devastating impact caused by such mutations on normal development and somatic growth. Large-scale genome sequencing has the potential to provide new insights into human variation and disease susceptibility. Toward this end, the availability of DNA sequence data from 60,706 people through the Exome Aggregation Consortium has prompted the analyses presented here. Results reveal a broad range of potential missense and other alterations in the coding regions of every IGF family gene, but the vast majority of predicted changes were uncommon. The total number of different alleles detected per gene in the population varied over an ~15-fold range, from 57 for IGF1 to 872 for IGF2R, although when corrected for protein length the rate ranged from 0.22 to 0.59 changes/codon among the 11 genes evaluated. 
Previously characterized disease-causing mutations in IGF2, IGF1R, IGF2R, or IGFALS all were found in the general population but with allele frequencies of <1:30,000. A few new highly prevalent amino acid polymorphisms were also identified. Collectively, these data provide a wealth of opportunities to understand the intricacies of IGF signaling and action in both physiological and pathological contexts. Dominant mutations in the voltage-gated sodium channel Na(V)1.7 cause inherited erythromelalgia, a debilitating pain disorder characterized by severe burning pain and redness of the distal extremities. Na(V)1.7 is preferentially expressed within peripheral sensory and sympathetic neurons. Here, we describe a novel Na(V)1.7 mutation in an 11-year-old male with underdevelopment of the limbs, recurrent attacks of burning pain with erythema, and swelling in his feet and hands. Frequency and duration of the episodes gradually increased with age, and relief by cooling became less effective. The patient's sister had short stature and reported similar complaints of erythema and burning pain, but with less intensity. Genetic analysis revealed a novel missense mutation in Na(V)1.7 (2567G>C; p.Gly856Arg) in both siblings. The G856R mutation, located within the DII/S4-S5 linker of the channel, replaces a highly conserved non-polar glycine with a positively charged arginine. Voltage-clamp analysis of G856R currents revealed that the mutation hyperpolarized (-11.2 mV) the voltage dependence of activation and slowed deactivation but did not affect fast inactivation, compared with wild-type channels. A mutation of Gly-856 to aspartic acid was previously found in a family with limb pain and limb underdevelopment, and its functional assessment showed hyperpolarized activation, depolarized fast inactivation, and increased ramp current. 
Structural modeling using the Rosetta computational modeling suite provided structural clues to the divergent effects of the substitution of Gly-856 by arginine and aspartic acid. Although the proexcitatory changes in gating properties of G856R contribute to the pathophysiology of inherited erythromelalgia, the link to limb underdevelopment is not well understood. Extracellular matrix proteins are biosynthesized in the rough endoplasmic reticulum (rER), and the triple-helical protein collagen is the most abundant extracellular matrix component in the human body. Many enzymes, molecular chaperones, and post-translational modifiers facilitate collagen biosynthesis. Collagen contains a large number of proline residues, so the cis/trans isomerization of proline peptide bonds is the rate-limiting step during triple-helix formation. Accordingly, the rER-resident peptidyl prolyl cis/trans isomerases (PPIases) play an important role in the zipper-like triple-helix formation in collagen. We previously described this process as "Ziploc-ing the structure" and now provide additional information on the activity of individual rER PPIases. We investigated the substrate preferences of these PPIases in vitro using type III collagen, the unhydroxylated quarter fragment of type III collagen, and synthetic peptides as substrates. We observed changes in the activity of six rER-resident PPIases, cyclophilin B (encoded by the PPIB gene), FKBP13 (FKBP2), FKBP19 (FKBP11), FKBP22 (FKBP14), FKBP23 (FKBP7), and FKBP65 (FKBP10), due to post-translational modifications of proline residues in the substrate. Cyclophilin B and FKBP13 exhibited much lower activity toward post-translationally modified substrates. In contrast, FKBP19, FKBP22, and FKBP65 showed increased activity toward hydroxyproline-containing peptide substrates. 
Moreover, FKBP22 showed a hydroxyproline-dependent effect by increasing the amount of refolded type III collagen in vitro, and FKBP19 seems to interact with triple-helical type I collagen. Therefore, we propose that hydroxyproline modulates the rate of Ziploc-ing of the triple helix of collagen in the rER. Fluid-phase pinocytosis of LDL by macrophages is regarded as a novel promising target to reduce macrophage cholesterol accumulation in atherosclerotic lesions. The mechanisms of regulation of fluid-phase pinocytosis in macrophages and, specifically, the role of Akt kinases are poorly understood. We have found previously that increased lipoprotein uptake via the receptor-independent process in Akt3 kinase-deficient macrophages contributes to increased atherosclerosis in Akt3(-/-) mice. The mechanism by which Akt3 deficiency promotes lipoprotein uptake in macrophages is unknown. We now report that Akt3 constitutively suppresses macropinocytosis in macrophages through a novel WNK1/SGK1/Cdc42 pathway. Mechanistic studies have demonstrated that the lack of Akt3 expression in murine and human macrophages results in increased expression of with-no-lysine kinase 1 (WNK1), which, in turn, leads to increased activity of serum- and glucocorticoid-inducible kinase 1 (SGK1). SGK1 promotes expression of the Rho family GTPase Cdc42, a positive regulator of actin assembly, cell polarization, and pinocytosis. Suppression of WNK1 expression or of SGK1 or Cdc42 activity in Akt3-deficient macrophages rescued the phenotype. These results demonstrate that Akt3 is a specific negative regulator of macropinocytosis in macrophages. Voltage-dependent anion channel-1 (VDAC1) is a highly regulated beta-barrel membrane protein that mediates transport of ions and metabolites between the mitochondria and cytosol of the cell. VDAC1 co-purifies with cholesterol and is functionally regulated by cholesterol, among other endogenous lipids. 
Molecular modeling studies based on NMR observations have suggested five cholesterol-binding sites in VDAC1, but direct experimental evidence for these sites is lacking. Here, to determine the sites of cholesterol binding, we photolabeled purified mouse VDAC1 (mVDAC1) with photoactivatable cholesterol analogues and analyzed the photolabeled sites with both top-down mass spectrometry (MS) and bottom-up MS paired with a clickable, stable isotope-labeled tag, FLI-tag. Using cholesterol analogues with a diazirine in either the 7 position of the steroid ring (LKM38) or the aliphatic tail (KK174), we mapped a binding pocket in mVDAC1 localized to Thr(83) and Glu(73), respectively. When Glu(73) was mutated to a glutamine, KK174 no longer photolabeled this residue, but instead labeled the nearby Tyr(62) within this same binding pocket. The combination of analytical strategies employed in this work permits detailed molecular mapping of a cholesterol-binding site in a protein, including an orientation of the sterol within the site. Our work raises the interesting possibility that cholesterol-mediated regulation of VDAC1 may be facilitated through a specific binding site at the functionally important Glu(73) residue. GTPases of immunity-associated proteins (GIMAPs) are expressed in lymphocytes and regulate survival/death signaling and cell development within the immune system. We found that human GIMAP6 is expressed primarily in T cell lines. Sorting of human peripheral blood mononuclear cells followed by quantitative RT-PCR showed that GIMAP6 is expressed in CD3+ cells. In Jurkat cells that had been knocked down for GIMAP6, treatment with hydrogen peroxide, FasL, or okadaic acid significantly increased cell death/apoptosis. Exogenous expression of GIMAP6 protected Huh-7 cells from apoptosis, suggesting that GIMAP6 is an anti-apoptotic protein. 
Furthermore, knockdown of GIMAP6 not only rendered Jurkat cells sensitive to apoptosis but also accelerated T cell activation under phorbol 12-myristate 13-acetate/ionomycin treatment conditions. Using this experimental system, we also observed a down-regulation of p65 phosphorylation (Ser-536) in GIMAP6 knockdown cells, indicating that GIMAP6 might display its anti-apoptotic function through NF-kappa B activation. The conclusion from the study on cultured T cells was corroborated by the analysis of primary CD3+ T cells, showing that specific knockdown of GIMAP6 led to enhancement of phorbol 12-myristate 13-acetate/ionomycin-mediated activation signals. To characterize the biochemical properties of GIMAP6, we purified the recombinant GIMAP6 to homogeneity and revealed that GIMAP6 had ATPase as well as GTPase activity. We further demonstrated that the hydrolysis activity of GIMAP6 was not essential for its anti-apoptotic function in Huh-7 cells. Combining the expression data, biochemical properties, and cellular features, we conclude that GIMAP6 plays a role in modulating immune function and that it does this by controlling cell death and the activation of T cells. The steroid hormone-activated glucocorticoid receptor (GR) regulates cellular stress pathways by binding to genomic regulatory elements of target genes and recruiting coregulator proteins to remodel chromatin and regulate transcription complex assembly. The coregulator hydrogen peroxide-inducible clone 5 (Hic-5) is required for glucocorticoid (GC) regulation of some genes but not others and blocks the regulation of a third gene set by inhibiting GR binding. How Hic-5 exerts these gene-specific effects, and specifically how it blocks GR binding to some genes but not others, is unclear. Here we show that site-specific blocking of GR binding is due to gene-specific requirements for ATP-dependent chromatin remodeling enzymes. 
By depletion of 11 different chromatin remodelers, we found that the ATPases chromodomain helicase DNA-binding protein 9 (CHD9) and Brahma homologue (BRM, a product of the SMARCA2 gene) are required for GC-regulated expression of the blocked genes but not for other GC-regulated genes. Furthermore, CHD9 and BRM were required for GR occupancy and chromatin remodeling at GR-binding regions associated with blocked genes but not at GR-binding regions associated with other GC-regulated genes. Hic-5 selectively inhibits GR interaction with CHD9 and BRM, thereby blocking chromatin remodeling and robust GR binding at GR-binding sites associated with blocked genes. Thus, Hic-5 regulates GR binding site selection by a novel mechanism, exploiting gene-specific requirements for chromatin remodeling enzymes to selectively influence DNA occupancy and gene regulation by a transcription factor. Tissue factor pathway inhibitor (TFPI), the main inhibitor of the initiation of coagulation, exerts an important anticoagulant role through the factor Xa (FXa)-dependent inhibition of tissue factor/factor VIIa. Protein S is a TFPI cofactor, enhancing the efficiency of FXa inhibition. TFPI can also inhibit prothrombinase assembly by directly interacting with coagulation factor V (FV), which has been activated by FXa. Because full-length TFPI associates with FV in plasma, we hypothesized that FV may influence TFPI inhibitory function. Using pure-component FXa inhibition assays, we found that although FV alone did not influence TFPI-mediated FXa inhibition, it further enhanced TFPI in the presence of protein S, resulting in an ~8-fold reduction in K-i compared with TFPI alone. An FV variant (R709Q/R1018Q/R1545Q, FV Delta IIa) that cannot be cleaved/activated by thrombin or FXa also enhanced TFPI-mediated inhibition of FXa ~12-fold in the presence of protein S. 
In contrast, neither activated FV nor recombinant B-domain-deleted FV could enhance TFPI-mediated inhibition of FXa in the presence of protein S, suggesting a functional contribution of the B domain. Using TFPI and protein S variants, we show further that the enhancement of TFPI-mediated FXa inhibition by protein S and FV depends on a direct protein S/TFPI interaction and that the TFPI C-terminal tail is not essential for this enhancement. In FXa-catalyzed prothrombin activation assays, both FV and FV Delta IIa (but not activated FV) enhanced TFPI function in the presence of protein S. These results demonstrate a new anticoagulant (cofactor) function of FV that targets the early phase of coagulation before prothrombinase assembly. Inactivation of the tumor suppressor protein p53 by mutagenesis, chemical modification, protein-protein interaction, or aggregation has been associated with different human cancers. Although DNA is the typical substrate of p53, numerous studies have reported p53 interactions with RNA. Here, we have examined the effects of RNA of varied sequence, length, and origin on the mechanism of aggregation of the core domain of p53 (p53C) using light scattering, intrinsic fluorescence, transmission electron microscopy, thioflavin-T binding, seeding, and immunoblot assays. Our results are the first to demonstrate that RNA can modulate the aggregation of p53C and full-length p53. We found bimodal behavior of RNA in p53C aggregation. A low RNA:protein ratio (~1:50) facilitates the accumulation of large amorphous aggregates of p53C. By contrast, at a high RNA:protein ratio (>=1:8), the amorphous aggregation of p53C is clearly suppressed. Instead, amyloid p53C oligomers are formed that can act as seeds nucleating de novo aggregation of p53C. We propose that structured RNAs prevent p53C aggregation through surface interaction and play a significant role in the regulation of the tumor suppressor protein. 
A positive electrostatic field emanating from the center of the aquaporin (AQP) water and solute channel is responsible for the repulsion of cations. At the same time, however, a positive field will attract anions. In this regard, L-lactate/lactic acid permeability has been shown for various isoforms of the otherwise highly water- and neutral-substrate-selective AQP family. The structural requirements rendering certain AQPs permeable to weak monoacids and the mechanism of conduction have remained unclear. Here, we show by profiling pH-dependent substrate permeability, measurements of media alkalization, and proton decoupling that AQP9 acts as a channel for the protonated, neutral monocarboxylic acid species. Intriguingly, the obtained permeability rates indicate an up to 10 times higher probability of passage via AQP9 than given by the fraction of the protonated acid substrate at a certain pH. We generated AQP9 point mutants showing that this effect is independent of properties of the channel interior but is caused by the protein surface electrostatics. Monocarboxylic acid-conducting AQPs thus employ a mechanism similar to that of the family of formate-nitrite transporters for weak monoacids. On a more general basis, our data illustrate semiquantitatively the contribution of surface electrostatics to the interaction of charged molecule substrates or ligands with target proteins, such as channels, transporters, enzymes, or receptors. Voltage-dependent Ca2+ channels (VDCCs) mediate neurotransmitter release controlled by presynaptic proteins such as the scaffolding proteins Rab3-interacting molecules (RIMs). RIMs confer sustained activity and anchoring of synaptic vesicles to the VDCCs. Multiple sites on the VDCC alpha(1) and beta subunits have been reported to mediate the RIMs-VDCC interaction, but their significance is unclear. 
Because alternative splicing of exons 44 and 47 in the P/Q-type VDCC alpha(1) subunit Ca(V)2.1 gene generates major variants of the Ca(V)2.1 C-terminal region, known for associating with presynaptic proteins, we focused here on the protein regions encoded by these two exons. Co-immunoprecipitation experiments indicated that the C-terminal domain (CTD) encoded by Ca(V)2.1 exons 40-47 interacts with the alpha-RIMs, RIM1 alpha and RIM2 alpha, and this interaction was abolished by alternative splicing that deletes the protein regions encoded by exons 44 and 47. Electrophysiological characterization of VDCC currents revealed that the suppressive effect of RIM2 alpha on voltage-dependent inactivation (VDI) was stronger than that of RIM1 alpha for the Ca(V)2.1 variant containing the region encoded by exons 44 and 47. Importantly, in the Ca(V)2.1 variant in which exons 44 and 47 were deleted, the strong RIM2 alpha-mediated VDI suppression was attenuated to a level comparable with that of RIM1 alpha-mediated VDI suppression, which was unaffected by the exclusion of exons 44 and 47. Studies of deletion mutants of the exon 47 region identified 17 amino acid residues on the C-terminal side of a polyglutamine stretch as being essential for the potentiated VDI suppression characteristic of RIM2 alpha. These results suggest that the interactions of the Ca(V)2.1 CTD with RIMs enable Ca(V)2.1 proteins to distinguish alpha-RIM isoforms in VDI suppression of P/Q-type VDCC currents. Cholesterol synthesis is a highly oxygen-consuming process. As such, oxygen deprivation (hypoxia) limits cholesterol synthesis through incompletely understood mechanisms mediated by the oxygen-sensitive transcription factor hypoxia-inducible factor 1 alpha (HIF-1 alpha). We show here that HIF-1 alpha links pathways for oxygen sensing and feedback control of cholesterol synthesis in human fibroblasts by directly activating transcription of the INSIG-2 gene. 
Insig-2 is one of two endoplasmic reticulum membrane proteins that inhibit cholesterol synthesis by mediating sterol-induced ubiquitination and subsequent endoplasmic reticulum-associated degradation of the rate-limiting enzyme in the pathway, HMG-CoA reductase (HMGCR). Consistent with the results in cultured cells, hepatic levels of Insig-2 mRNA were enhanced in mouse models of hypoxia. Moreover, pharmacologic stabilization of HIF-1 alpha in the liver stimulated HMGCR degradation via a reaction that requires the protein's prior ubiquitination and the presence of the Insig-2 protein. In summary, our results show that HIF-1 alpha activates INSIG-2 transcription, leading to accumulation of Insig-2 protein, which binds to HMGCR and triggers its accelerated ubiquitination and degradation. These results indicate that HIF-mediated induction of Insig-2 and degradation of HMGCR are physiologically relevant events that guard against wasteful oxygen consumption and inappropriate cell growth during hypoxia. In malaria, CD36 plays several roles, including mediating parasite sequestration to host organs, phagocytic clearance of parasites, and regulation of immunity. Although the functions of CD36 in parasite sequestration and phagocytosis have been clearly defined, less is known about its role in malaria immunity. Here, to understand the function of CD36 in malaria immunity, we studied parasite growth, innate and adaptive immune responses, and host survival in WT and Cd36(-/-) mice infected with a non-lethal strain of Plasmodium yoelii. Compared with Cd36(-/-) mice, WT mice had lower parasitemias and were resistant to death. At early but not at later stages of infection, WT mice had higher circulatory proinflammatory cytokines and lower anti-inflammatory cytokines than Cd36(-/-) mice. 
WT mice showed higher frequencies of proinflammatory cytokine-producing and lower frequencies of anti-inflammatory cytokine-producing dendritic cells (DCs) and natural killer cells than Cd36(-/-) mice. Cytokines produced by co-cultures of DCs from infected mice and ovalbumin-specific, MHC class II-restricted alpha/beta (OT-II) T cells reflected CD36-dependent DC function. WT mice also showed increased Th1 and reduced Th2 responses compared with Cd36(-/-) mice, mainly at early stages of infection. Furthermore, in infected WT mice, macrophages and neutrophils expressed higher levels of phagocytic receptors and showed enhanced phagocytosis of parasite-infected erythrocytes compared with those in Cd36(-/-) mice in an IFN-gamma-dependent manner. However, there were no differences in malaria-induced humoral responses between WT and Cd36(-/-) mice. Overall, the results show that CD36 plays a significant role in controlling parasite burden by contributing to proinflammatory cytokine responses by DCs and natural killer cells, Th1 development, phagocytic receptor expression, and phagocytic activity. The tongue is one of the major structures involved in human food intake and speech. Tongue malformations such as aglossia, microglossia, and ankyloglossia are congenital birth defects, greatly affecting individuals' quality of life. However, the molecular basis of the tissue-tissue interactions that ensure tissue morphogenesis to form a functional tongue remains largely unknown. Here we show that ShhCre-mediated epithelial deletion of Wntless (Wls), the key regulator for intracellular Wnt trafficking, leads to lingual hypoplasia in mice. Disruption of epithelial Wnt production by Wls deletion in epithelial cells led to a failure in lingual epidermal stratification and loss of the lamina propria and the underlying superior longitudinal muscle in developing mouse tongues. 
These defective phenotypes resulted from a reduction in epithelial basal cells positive for the basal epidermal marker protein p63 and from impaired proliferation and differentiation in connective tissue and in paired box 3 (Pax3)- and Pax7-positive muscle progenitor cells. We also found that epithelial Wnt production is required for activation of the Notch signaling pathway, which promotes proliferation of myogenic progenitor cells. Notch signaling in turn negatively regulated Wnt signaling during tongue morphogenesis. We further show that Pax7 is a direct Notch target gene in the embryonic tongue. In summary, our findings demonstrate a key role for lingual epithelial signals in supporting the integrity of the lamina propria and muscular tissue during tongue development and show that a Wnt/Notch/Pax7 genetic hierarchy is involved in this development. Prostate cancer is a very common malignant disease and a leading cause of death for men in the Western world. Tumorigenesis and progression of prostate cancer involve multiple signaling pathways, including the Hippo pathway. Yes-associated protein (YAP), the downstream transcriptional co-activator of the Hippo pathway, is overexpressed in prostate cancer and plays a vital role in the tumorigenesis and progression of prostate cancer. However, the role of the YAP paralog and another downstream effector of the Hippo pathway, transcriptional coactivator with PDZ-binding motif (TAZ), in prostate cancer has not been fully elucidated. Here, we show that TAZ is a basal cell marker for the prostate epithelium. We found that overexpression of TAZ promotes the epithelial-mesenchymal transition (EMT), cell migration, and anchorage-independent growth in RWPE1 prostate epithelial cells. Of note, knockdown of TAZ in DU145 prostate cancer cells inhibited cell migration and metastasis. 
We also found that SH3 domain-binding protein 1 (SH3BP1), a RhoGAP protein that drives cell motility, is a direct target gene of TAZ in prostate cancer cells, mediating TAZ function in enhancing cell migration. Moreover, the prostate cancer-related oncogenic E26 transformation-specific (ETS) transcription factors, ETV1, ETV4, and ETV5, were required for TAZ gene transcription in PC3 prostate cancer cells. Treatment with the MAPK inhibitor U0126 decreased TAZ expression in RWPE1 cells, and ETV4 overexpression rescued TAZ expression in U0126-treated RWPE1 cells. Our results show a regulatory mechanism of TAZ transcription and suggest a significant role for TAZ in the progression of prostate cancer. Nitric oxide (NO) is an intercellular messenger involved in multiple bodily functions. Prolonged NO exposure irreversibly inhibits respiration by covalent modification of mitochondrial cytochrome oxidase, a phenomenon of pathological relevance. However, the speed and potency of NO's metabolic effects at physiological concentrations are incompletely characterized. To this end, we set out to investigate the metabolic effects of NO in cultured astrocytes from mice by taking advantage of the high spatiotemporal resolution afforded by genetically encoded Förster resonance energy transfer (FRET) nanosensors. NO exposure resulted in immediate and reversible intracellular glucose depletion and lactate accumulation. Consistent with cytochrome oxidase involvement, the glycolytic effect was enhanced at a low oxygen level and became irreversible at a high NO concentration or after prolonged exposure. Measurements of both glycolytic rate and mitochondrial pyruvate consumption revealed significant effects even at nanomolar NO concentrations. We conclude that NO can modulate astrocytic energy metabolism in the short term, reversibly, and at concentrations known to be released by endothelial cells under physiological conditions. 
These findings suggest that NO modulates the size of the astrocytic lactate reservoir involved in neuronal fueling and signaling. Aim: Skeletal muscle nitric oxide-cyclic guanosine monophosphate (NO-cGMP) pathways are impaired in Duchenne and Becker muscular dystrophy partly because of reduced nNOS mu and soluble guanylate cyclase (GC) activity. However, GC function and the consequences of reduced GC activity in skeletal muscle are unknown. In this study, we explore the functions of GC and NO-cGMP signaling in skeletal muscle. Results: GC1, but not GC2, expression was higher in oxidative than glycolytic muscles. GC1 was found in a complex with nNOS mu and targeted to nNOS compartments at the Golgi complex and neuromuscular junction. Baseline GC activity and GC agonist responsiveness were reduced in the absence of nNOS. Structural analyses revealed aberrant microtubule directionality in GC1(-/-) muscle. Functional analyses of GC1(-/-) muscles revealed reduced fatigue resistance and postexercise force recovery that were not due to shifts in type IIA-IIX fiber balance. Force deficits in GC1(-/-) muscles were also not driven by defects in resting mitochondrial adenosine triphosphate (ATP) synthesis. However, increasing muscle cGMP with sildenafil decreased ATP synthesis efficiency and capacity, without impacting mitochondrial content or ultrastructure. Innovation: GC may represent a new target for alleviating muscle fatigue, and NO-cGMP signaling may play important roles in muscle structure, contractility, and bioenergetics. Conclusions: These findings suggest that GC activity is nNOS dependent and that muscle-specific control of GC expression and differential GC targeting may facilitate NO-cGMP signaling diversity. They suggest that nNOS regulates muscle fiber type, microtubule organization, fatigability, and postexercise force recovery partly through GC1, and that NO-cGMP pathways may modulate mitochondrial ATP synthesis efficiency. 
Aims: Hemangiomas are endothelial cell tumors and the most common soft tissue tumors in infants. They frequently cause deformity and can cause death. Current pharmacologic therapies have high-risk side-effect profiles, which limit the number of children who receive treatment. The objectives of this work were to identify the mechanisms through which standardized berry extracts can inhibit endothelial cell tumor growth and to test these findings in vivo. Results: EOMA cells are a validated model that generates endothelial cell tumors when injected subcutaneously into syngeneic (129P/3) mice. Treatment of EOMA cells with a blend of powdered natural berry extracts (NBE) significantly inhibited the activity of multidrug resistance protein-1 (MRP-1) compared to vehicle controls. This resulted in nuclear accumulation of oxidized glutathione (GSSG) and apoptotic EOMA cell death. When NBE-treated EOMA cells were injected into mice, they generated smaller tumors and had a higher incidence of apoptotic cell death compared to vehicle-treated EOMA cells, as demonstrated by immunocytochemistry. Kaplan-Meier survival curves for tumor-bearing mice showed that NBE treatment significantly prolonged survival compared to vehicle-treated controls. Innovation: These are the first reported results to show that berry extracts can inhibit MRP-1 function, causing apoptotic tumor cell death through accumulation of GSSG in the nucleus of EOMA cells, where NADPH oxidase is hyperactive and causes pathological angiogenesis. Conclusions: These findings indicate that berry extract inhibition of MRP-1 merits consideration and further investigation as a therapeutic intervention and may have application for other cancers with elevated MRP-1 activity. 
A thorough analysis of clinical trial data in the Ionis integrated safety database (ISDB) was performed to determine if there is a class effect on platelet numbers and function in subjects treated with 2'-O-methoxyethyl (2' MOE)-modified antisense oligonucleotides (ASOs). The Ionis ISDB includes over 2,600 human subjects treated with 16 different 2' MOE ASOs in placebo-controlled and open-label clinical trials over a range of doses up to 624 mg/week and treatment durations as long as 4.6 years. This analysis showed that there is no generic class effect on platelet numbers and no incidence of confirmed platelet levels below 50 K/mL in subjects treated with 2' MOE ASOs. Only 7 of 2,638 (0.3%) subjects treated with a 2' MOE ASO experienced a confirmed postbaseline (BSLN) platelet count between 100 and 50 K/mL. Three of sixteen 2' MOE ASOs had > 10% incidence of platelet decreases > 30% from BSLN, suggesting that certain sequences may be associated with clinically insignificant platelet declines. Further to these results, we found no evidence that 2' MOE ASOs alter platelet function, as measured by the lack of clinically relevant bleeding in the presence or absence of other drugs that alter platelet function and/or number and by the results from trials conducted with the factor XI (FXI) ASO. Splice-switching antisense oligonucleotides are emerging treatments for neuromuscular diseases, with several splice-switching oligonucleotides (SSOs) currently undergoing clinical trials, such as for Duchenne muscular dystrophy (DMD) and spinal muscular atrophy (SMA). However, the development of systemically delivered antisense therapeutics has been hampered by poor tissue penetration and cellular uptake, including crossing of the blood-brain barrier (BBB) to reach targets in the central nervous system (CNS). 
For SMA application, we have investigated the ability of various BBB-crossing peptides for CNS delivery of a splice-switching phosphorodiamidate morpholino oligonucleotide (PMO) targeting survival motor neuron 2 (SMN2) exon 7 inclusion. We identified a branched derivative of the well-known ApoE (141-150) peptide, which as a PMO conjugate was capable of exon inclusion in the CNS following systemic administration, leading to an increase in the level of full-length SMN2 transcript. Treatment of newborn SMA mice with this peptide-PMO (P-PMO) conjugate resulted in a significant increase in the average lifespan and gains in weight, muscle strength, and righting reflexes. Systemic treatment of adult SMA mice with this newly identified P-PMO also resulted in small but significant increases in the levels of SMN2 pre-messenger RNA (mRNA) exon inclusion in the CNS and peripheral tissues. This work provides proof of principle for the ability to select new peptide paradigms to enhance CNS delivery and activity of a PMO SSO through use of a peptide-based delivery platform for the treatment of SMA, potentially extending to other neuromuscular and neurodegenerative diseases. Clinical efficacy of antisense oligonucleotides (AONs) for the treatment of neuromuscular disorders depends on efficient cellular uptake and proper intracellular routing to the target. Selection of AONs with the highest in vitro efficiencies is usually based on chemical or physical methods for forced cellular delivery. Since these methods largely bypass existing natural mechanisms for membrane passage and intracellular trafficking, spontaneous uptake and distribution of AONs in cells are still poorly understood. Here, we report on the unassisted uptake of naked AONs, so-called gymnosis, in muscle cells in culture. We found that gymnosis works as well for proliferating myoblasts as for terminally differentiated myotubes. 
Cell biological analyses combined with microscopy imaging showed that a phosphorothioate backbone promotes efficient gymnosis and that uptake is clathrin mediated and mainly results in endosomal-lysosomal accumulation. Nuclear localization occurred at a low level, but the gymnotically delivered AONs effectively modulated the expression of their nuclear RNA targets. Chloroquine treatment after gymnotic delivery helped increase nuclear AON levels. In sum, we demonstrate that gymnosis is feasible in proliferating and non-proliferating muscle cells, and we confirm the relevance of AON chemistry for uptake and intracellular trafficking with this method, which provides a useful means for bio-activity screening of AONs in vitro. RNA has enormous potential as a therapeutic, yet successful application depends on efficient delivery strategies. In this study, we demonstrate that a designed artificial viral coat protein, which self-assembles with DNA to form rod-shaped virus-like particles (VLPs), also encapsulates and protects mRNA encoding enhanced green fluorescent protein (EGFP) and luciferase, and yields cellular expression of these proteins. The artificial viral coat protein consists of an oligolysine (K-12) for binding to the oligonucleotide, a silk protein-like midblock S-10 = (GAGAGAGQ)(10) that self-assembles into stiff rods, and a long hydrophilic random coil block C that shields the nucleic acid cargo from its environment. With mRNA, the C-S-10-K-12 protein coassembles to form rod-shaped VLPs, each encapsulating about one to five mRNA molecules. Inside the rod-shaped VLPs, the mRNAs are protected against degradation by RNases, and the VLPs also maintain their shape following incubation with serum. Despite the lack of cationic surface charge, the mRNA VLPs transfect cells with both EGFP and luciferase, although with a much lower efficiency than obtained with a lipoplex transfection reagent. The VLPs have negligible toxicity and minimal hemolytic activity. 
Our results demonstrate that VLPs yield efficient packaging and shielding of mRNA and create the basis for implementation of additional virus-like functionalities, such as targeting, to improve transfection and cell specificity. Herein we describe the synthesis of siRNA-NES (nuclear export signal) peptide conjugates by solid-phase fragment coupling and their application to silencing of the bcr/abl chimeric gene in the human chronic myelogenous leukemia cell line K562. Two types of siRNA-NES conjugates were prepared; in both, the sense strand was covalently linked at its 5' end to a NES peptide derived from TFIIIA or HIV-1 REV, respectively. Significant enhancement of silencing efficiency was observed for both. The siRNA-TFIIIA NES conjugate suppressed the expression of the BCR/ABL gene to 8.3% at 200 nM and 11.6% at 50 nM, and the siRNA-HIV-1 REV NES conjugate suppressed it to 4.0% at 200 nM and 6.3% at 50 nM, whereas native siRNA suppressed it to 36.3% at 200 nM and 30.2% at 50 nM. We also showed that the complex of the siRNA-NES conjugate and the designed amphiphilic peptide beta 7 could be taken up into cells with no cytotoxicity and showed excellent silencing efficiency. We believe that the complex of the siRNA-NES conjugate and peptide beta 7 is a promising candidate for in vivo use and therapeutic applications. The bacterial cell wall presents a barrier to the uptake of unmodified synthetic antisense oligonucleotides, such as peptide nucleic acids, and so is one of the greatest obstacles to the development of their use as therapeutic anti-bacterial agents. Cell-penetrating peptides have been covalently attached to antisense agents to facilitate penetration of the bacterial cell wall and deliver their cargo into the cytoplasm. Although they are an effective vector for antisense oligonucleotides, they are not specific for bacterial cells and can exhibit growth inhibitory properties at higher doses. 
Using a bacterial cell growth assay in the presence of cefotaxime (CTX, 16 mg/L), we have developed and evaluated a self-assembling, non-toxic DNA tetrahedron nanoparticle vector incorporating a targeted anti-bla(CTX-M-group-1) antisense peptide nucleic acid (PNA4) in its structure for penetration of the bacterial cell wall. A dose-dependent CTX-potentiating effect was observed when PNA4 (0-40 mu M) was incorporated into the structure of the DNA tetrahedron vector. The minimum inhibitory concentration (MIC, to CTX) of an Escherichia coli field isolate harboring a plasmid carrying bla(CTX-M-3) was reduced from 35 to 16 mg/L in the presence of PNA4 carried by the DNA tetrahedron vector (40 mu M), contrasting with no reduction in MIC in the presence of PNA4 alone. No growth inhibitory effects of the DNA tetrahedron vector alone were observed. This paper describes the conditions related to the representation of 63 navigational paths recorded on the Moodle platform. Our hypothesis is that learning styles determine the mode in which users browse through websites. The post-observational notes and the analysis of logs (the register of users' activity) yielded two main outcomes: on the one hand, the development of a tool able to convert log files into graphs that can be visualized as courses of educational browsing; on the other hand, confirmation of the relation between ways of learning and styles of browsing, giving rise to a method that anticipates the choices users make on the digital platform. Assimilative and convergent learners solved the assigned task by consulting, preferably, the content modules on communicative interaction. For their part, accommodating learners followed precise instructions and relied on their peers' activity. Finally, divergent learners tended to prefer collaborative activities and helped each other. 
The practical application of the results concerns their usefulness in the university education context, where they can inform the elaboration of quality assessments and the identification of educational mediation needs. Research about the interrelation between anxiety and corrective feedback has mostly been conducted in face-to-face environments. The present study examines the relationship between students' anxiety when speaking a foreign language (FL) and feedback as a potential anxiety inhibitor in an online oral synchronous communication task. Two questionnaires, the Foreign Language Anxiety Scale (FLAS) and the Corrective Feedback Belief Scale (CFBS), were administered to 50 students from the School of Languages in an online learning environment. T-test analysis showed a significant difference between the learners' preferences for two methods of CF: recast and metalinguistic feedback were better rated by the students who reported higher levels of anxiety in oral communication classes. Additionally, the high-level anxiety group rated teacher feedback as more effective than the other sources of feedback when compared to the low-level anxiety group. These results indicate the need to take into account individual differences in anxiety in foreign language learning and students' beliefs about CF in order to help them achieve their learning goals in an interactive online environment. OBJECT: The object of this study is to determine the roles of psychiatry nurses within the therapeutic environment of psychiatry clinics in Turkey. METHODS: This study was performed with a cross-sectional and descriptive design in 195 institutes comprising psychiatry clinics in Turkey. RESULTS: When nurses were asked about their responsibilities for clinical activities, the most frequent answer was playing games or painting with patients, at a rate of 54.4%. It was determined that in the majority of psychiatry clinics, there were educational activities conducted by nurses. 
CONCLUSION: The researchers propose that the increase in the roles and responsibilities of nurses in such activities be supported. (C) 2017 Elsevier Inc. All rights reserved. This study examined socio-demographic and psychological correlates of posttraumatic growth (PTG) among Korean Americans (KAs) with traumatic life experiences. A total of 286 KAs were included. Being a woman or having a lower annual household income had positive associations with PTG, while having no religion had a negative association with it. In addition, praying and visiting a mental health professional for coping with stress or for psychological problems were positively associated with PTG. Higher resilience scores increased PTG, while depressive symptoms decreased it. We suggest reinforcing help-seeking behaviors and accessibility to care facilities, and gender-specific strengthening programs for enhancing PTG among KAs. (C) 2017 Elsevier Inc. All rights reserved. OBJECTIVES: The objective of this study was to answer the question: to what extent are the anger of the caregivers of patients diagnosed with schizophrenia and their perceived level of burden related? METHOD: The study is a descriptive and correlational study. The information form prepared by the researchers, which collects the socio-demographic information of the individuals, was used along with the "Caregiving Burden Inventory," which examines the burden of the caregiver, and the "Trait Anger and Anger Expression Style Scale (TAAES)," which determines the anger levels of the caregivers. RESULTS: The caregiving burdens of the caregivers according to the score averages were determined as 11.88 +/- 9.78 for time and dependency burden, 11.93 +/- 8.46 for developmental burden, 8.47 +/- 6.63 for physical burden, 5.61 +/- 5.26 for social burden, and 6.29 +/- 5.25 for emotional burden, and the total burden score was determined as 44.19 +/- 26.75. 
According to the trait anger and anger expression style scale score averages, trait anger was determined as 15.12 +/- 5.95, anger expression as 9.70 +/- 3.43, anger-in as 15.22 +/- 4.02, anger control as 28.05 +/- 5.57, and the anger total score average as 68.11 +/- 9.97. CONCLUSION: According to the results obtained from this study, caregivers of schizophrenia patients experience developmental, physical, social, and emotional burdens in addition to trait anger. The caregivers of schizophrenia patients need knowledge and support in order to control the burden and the anger they experience during the caregiving process. (C) 2017 Elsevier Inc. All rights reserved. INTRODUCTION: Care of patients with Alzheimer's disease is one of the most difficult types of care and exposes the caregiver to a high level of care strain. The present research aimed at determining the effect of spiritual care on the caregiver strain of the elderly with Alzheimer's disease. METHODS: An experimental study was carried out on 100 caregivers who were selected by convenience sampling and randomly divided into an intervention group and two control groups. Group spiritual therapy was performed on the intervention group for five weeks; control group one participated in the group sessions without any particular interventions, and control group two received no interventions. Data were collected through a demographic questionnaire and Robinson's (1983) Caregiver Strain Index, and analyzed using the Chi-square test, Fisher's exact test, one-way analysis of variance, and the paired t-test. The statistical significance level was set at 0.05. RESULTS: In the intervention group, the mean posttest care strain score of 32.43 +/- 2.73 was significantly lower than the pretest score of 37.16 +/- 1.26 (P < 0.001). The mean posttest score of care strain was significantly lower in the intervention group compared to the two other groups (P < 0.001). 
CONCLUSION: Spiritual care can reduce care strain in home caregivers of the elderly with Alzheimer's disease. (C) 2017 Elsevier Inc. All rights reserved. INTRODUCTION: People with alcohol dependency have lower self-esteem than controls, and when their alcohol use increases, their self-esteem decreases. Coping skills in alcohol-related issues are predicted to reduce vulnerability to relapse. It is important to adapt care to individual needs so as to prevent a return to the cycle of alcohol use. The Tidal Model focuses on providing support and services to people who need to live a constructive life. AIM: The aim of this randomized study was to determine the effect of the psychiatric nursing approach based on the Tidal Model on coping and self-esteem in people with alcohol dependency. METHOD: The study was semi-experimental in design with a control group, and was conducted on 36 individuals (18 experimental, 18 control). An experimental and a control group were formed by assigning persons to each group using the stratified randomization technique in the order in which they were admitted to hospital. The Coping Inventory (COPE) and the Coopersmith Self-Esteem Inventory (CSEI) were used as measurement instruments. They were administered before the intervention and three months after it. In addition to routine treatment and follow-up, the psychiatric nursing approach based on the Tidal Model was applied to the experimental group in one-to-one sessions. RESULTS: The psychiatric nursing approach based on the Tidal Model was effective in increasing the scores of people with alcohol dependency in positive reinterpretation and growth, active coping, restraint, emotional social support, and planning, and in reducing their scores in behavioral disengagement. Self-esteem rose, but the difference from the control group did not reach significance. 
DISCUSSION: The psychiatric nursing approach based on the Tidal Model has an effect on people with alcohol dependency in maintaining their abstinence. (C) 2017 Elsevier Inc. All rights reserved. This study identified risk factors for suicide ideation among adolescents through a secondary analysis using data collected over five years from the 5th-9th Korea Youth Risk Behavior Survey. We analyzed 370,568 students' responses to questions about suicidality. The risk factors for suicide ideation included demographic characteristics, such as gender (girls), low grades, low economic status, and not living with one or both parents. Behavioral and mental health risk factors affecting suicide ideation were depression, low sleep satisfaction, high stress, alcohol consumption, smoking, and sexual activity. Health care providers should particularly target adolescents manifesting the above risk factors when developing suicide prevention programs for them. (C) 2017 Elsevier Inc. All rights reserved. BACKGROUND: There are no data about the frequency of major depression in patients with liver disease related to Hepatitis B virus (HBV) in China. This study examined the prevalence of major depression and its clinical correlates and association with quality of life (QOL) in patients with HBV-related liver diseases. METHOD: Altogether 634 patients with HBV-related liver diseases met study entry criteria and completed the survey. The diagnosis of major depression was established with the Mini International Neuropsychiatric Interview (MINI). Socio-demographic and clinical characteristics, Global Assessment of Functioning (GAF) and QOL were measured. RESULTS: The prevalence of major depression was 6.4%. Multivariable logistic regression analyses revealed that insomnia (P = 0.01, OR = 5.5, 95%CI = 1.4-21.6) and global functioning (P < 0.001, OR = 0.6, 95% CI = 0.5-0.7) were independently associated with major depression. 
Major depression was associated with both poor physical (F(1, 634) = 4.0, P = 0.04) and mental QOL (F(1, 634) = 262, P < 0.001). CONCLUSIONS: Given the negative impact of depression on patients' QOL, more attempts should be made to identify and treat it in HBV-related diseases. (C) 2017 Published by Elsevier Inc. The Obsessive Compulsive Inventory-Child Version (OCI-CV) assesses six dimensions of OCD symptoms in childhood and adolescence. The current study used confirmatory methods to assess the factor structure and reliability of the Italian OCI-CV in community children and adolescents. A total of 1,408 community children and adolescents completed the OCI-CV, and a subgroup (n = 855) completed measures of other anxiety and depression symptoms. A model of six correlated factors showed good fit. Reliability was excellent for the OCI-CV total score and ranged from good to acceptable for the other scales. The OCI-CV thus confirmed good properties in terms of factor structure and reliability. (C) 2017 Elsevier Inc. All rights reserved. This article describes how the Internet Intervention Model (IIM) was used as an organizing framework to design a theoretically based Internet intervention for emerging adults who experience troubled intimate partner relationships. In the design process, the team addressed six fundamental questions related to the several components of the IIM. Decisions made regarding the design of the intervention based on the six questions are described. We focus in particular on how the intervention is based on the Theory of Emerging Adulthood and the Theory of Narrative Identity. (C) 2017 Elsevier Inc. All rights reserved. More than 3.5 million people in the US are diagnosed with autism spectrum disorder (ASD), and caregivers experience stress that adversely affects their well-being. A positive thinking training (PTT) intervention can minimize that stress. However, before testing the effectiveness of PTT, its fidelity must be established. 
This pilot intervention trial examined the fidelity of an online PTT intervention for ASD caregivers, with random assignment of 73 caregivers to either the online PTT intervention or a control group. Quantitative data [Positive Thinking Skills Scale (PTSS)] and qualitative data (online weekly homework) were collected. The mean scores for the PTSS improved for the intervention group and decreased for the control group post-intervention. Evidence for use of PTT was found in caregivers' online weekly homework. The findings provide evidence of the implementation fidelity of the PTT intervention and support moving forward to test PTT effectiveness in promoting caregivers' well-being. (C) 2017 Elsevier Inc. All rights reserved. Major depressive disorder (MDD) is a common and disabling disorder. Research has shown that most people with MDD receive either no treatment or inadequate treatment. Computer and mobile technologies may offer solutions for the delivery of therapies to untreated or inadequately treated individuals with MDD. The authors review currently available technologies and research aimed at relieving symptoms of MDD. These technologies include computer-assisted cognitive-behavior therapy (CCBT), web-based self-help, Internet self-help support groups, mobile psychotherapeutic interventions (i.e., mobile applications or apps), technology-enhanced exercise, and biosensing technology. (C) 2017 Elsevier Inc. All rights reserved. Enuresis constitutes a frequently encountered problem area for children that may adversely affect social and emotional adjustment. This type of incontinence has been of concern to the human family for centuries. A brief history of enuresis is presented, followed by current conceptualizations, diagnostic criteria, prevalence rates and psychiatric comorbidities.
Historic notions of causation together with ineffective, sometimes barbaric treatments are then discussed, ending with a presentation of evidence-based treatment modalities, with the urine alarm being an essential element of effective treatment. An intervention termed dry bed training combines the urine alarm with a series of procedures designed in part to reduce relapse potential and should be a primary consideration for implementation by treatment professionals. Finally, a brief case study is presented illustrating special etiological and treatment considerations with juvenile psychiatric patients. (C) 2017 Elsevier Inc. All rights reserved. Borderline personality disorder is highly associated with suicidal behaviors. The authors of the current case study present an original Crisis Intervention Procedure for Borderline Patients (CIP-BP), a method focused on restoring emotional balance, reducing the severity of symptoms and the risk of suicidal behavior, and developing optimum solutions for further action. Its aim is to enable patients to regain control of their emotional memory, increase autonomy and restore important interpersonal relations by drawing on their own resources. The procedure aims at providing nursing personnel with a practical tool to effectively avert the crisis and prevent further decompensation of BPD patients. A further pre-post study is required to determine the effectiveness of the procedure. (C) 2017 Elsevier Inc. All rights reserved. It has been reported that MLN4924 can inhibit cell growth and metastasis in various kinds of cancer. We have reported that MLN4924 is able to inhibit angiogenesis through the induction of cell apoptosis in both in vitro and in vivo models. Moreover, neddylation inhibition using MLN4924 triggered the accumulation of the pro-apoptotic protein NOXA in human umbilical vein endothelial cells (HUVECs).
However, the mechanism of MLN4924-induced NOXA up-regulation in HUVECs has not yet been addressed. In this study, we investigated how MLN4924 induced NOXA expression and cellular apoptosis in HUVECs treated with MLN4924 at the indicated concentrations. MLN4924-induced apoptosis was evaluated by Annexin V-FITC/PI analysis, and the expression of genes associated with apoptosis was assessed by quantitative RT-PCR and Western blotting. As a result, MLN4924 triggered NOXA-dependent apoptosis in a dose-dependent manner in HUVECs. Mechanistically, inactivation of the neddylation pathway caused up-regulation of activating transcription factor 4 (ATF-4), a substrate of Cullin-RING E3 ubiquitin ligases (CRLs). NOXA was subsequently transactivated by ATF-4 and further induced apoptosis. More importantly, knockdown of ATF-4 by siRNA significantly decreased NOXA expression and apoptotic induction in HUVECs. In summary, our study reveals a new mechanism underlying MLN4924-induced NOXA accumulation in HUVECs, which may help extend further study of MLN4924 for angiogenesis inhibition treatment. (C) 2017 Elsevier Inc. All rights reserved. MicroRNAs (miRNAs) play an important role in regulating immune system function by mRNA destabilisation or inhibition of translation. Recently, miR-155 was found to be significantly up-regulated in colonic tissues of patients with active UC. However, it is unknown whether miR-155 is involved in the pathogenesis of UC and how it influences the immune response in mice with dextran sulfate sodium (DSS)-induced colitis. Here, we investigated the role of miR-155 in UC.
Firstly, through bioinformatics analysis and a luciferase reporter assay, we found that Jarid2 is a direct target of miR-155. We then carried out in situ hybridization, immunofluorescence and flow cytometry, and revealed that miR-155 levels were increased, Jarid2 levels were decreased and the frequency of Th17 cells was elevated in mice with DSS-induced colitis. We also used a lentiviral vector to deliver miR-155 inhibition sequences to silence miR-155; the vector was effectively taken up by epithelial cells. MiR-155 inhibition attenuated DSS-induced colonic damage and inhibited Th17 cell differentiation. This study suggests that miR-155 plays a host-damaging role in mice with DSS-induced colitis and induces Th17 differentiation by targeting Jarid2. (C) 2017 Elsevier Inc. All rights reserved. Aesculin (AES), a coumarin compound derived from Aesculus hippocastanum L., is reported to exert a protective role against inflammatory diseases, gastric disease and cancer. However, evidence for a direct effect of AES on bone metabolism is lacking. In this study, we examined the effects of AES on osteoclast (OC) differentiation in receptor activator of NF-kappa B ligand (RANKL)-induced RAW264.7 cells. AES inhibits OC differentiation in both a dose- and time-dependent manner within non-toxic concentrations, as analyzed by tartrate-resistant acid phosphatase (TRAP) staining. The actin ring formation manifesting OC function is also decreased by AES. Moreover, expression of the osteoclastogenesis-related genes Trap, Atp6v0d2, Cathepsin K and Mmp-9 is decreased upon AES treatment. Mechanistically, AES attenuates the activation of MAPKs and NF-kappa B activity upon RANKL induction, thus leading to a reduction of Nfatc1 mRNA expression. Moreover, AES inhibits Rank expression, and RANK overexpression markedly decreases AES's effect on OC differentiation and NF-kappa B activity. Consistently, AES protects against bone mass loss in the ovariectomized and dexamethasone-treated rat osteoporosis model.
Taken together, our data demonstrate that AES can modulate bone metabolism by suppressing osteoclastogenesis and the related transduction signals. AES therefore could be a promising agent for the treatment of osteoporosis. (C) 2017 Elsevier Inc. All rights reserved. Our previous study had suggested that Tribbles homolog 3 (TRB3) might be involved in metabolic syndrome via adipose tissue. Given prior studies, we sought to determine whether TRB3 plays a major role in adipocytes and adipose tissue, and whether its silencing has beneficial metabolic effects in obese and diabetic rats. Fully differentiated 3T3-L1 adipocytes were incubated to induce insulin-resistant adipocytes. Forty male Sprague-Dawley rats were all fed a high-fat (HF) diet. A type 2 diabetic rat model was induced by high-fat diet and low-dose streptozotocin (STZ). Compared with the control group, in insulin-resistant adipocytes, protein levels of insulin receptor substrate-1 (IRS-1), glucose transporter 4 (GLUT4) and phosphorylated AMP-activated protein kinase (p-AMPK) were reduced, TRB3 protein and triglyceride levels were significantly increased, and glucose uptake was markedly decreased. TRB3 silencing alleviated adipocyte insulin resistance. With TRB3 gene silencing, protein levels of IRS-1, GLUT4 and p-AMPK were significantly increased in adipocytes. TRB3 gene silencing decreased blood glucose and ameliorated insulin sensitivity and adipose tissue remodeling in diabetic rats. TRB3 silencing decreased triglyceride and simultaneously increased glycogen in diabetic epididymal and brown adipose tissues (BAT). Consistently, p-AMPK levels were increased in diabetic epididymal adipose tissue and BAT after TRB3-siRNA treatment. TRB3 silencing also increased phosphorylation of Akt in the liver and improved liver insulin resistance. (C) 2017 Published by Elsevier Inc. Context: An extensive body of literature indicates a relationship between insulin resistance and up-regulation of the kynurenine pathway, i.e.
the preferential conversion of tryptophan to kynurenine, with subsequent overproduction of diabetogenic downstream metabolites such as kynurenic acid. Case description: We measured the concentration of kynurenine pathway metabolites (kynurenines) in the brain and pancreas of two young (27 and 28 yrs) insulin-resistant, normoglycemic subjects (M-values 2 and 4 mg/kg/min, respectively) using quantitative C-11-alpha-methyl-tryptophan PET/CT imaging. Both subjects underwent a preventive 12-week metformin treatment regimen (500 mg daily) prior to the PET/CT study. Whereas treatment was successful in one of the subjects (M-value increased from 2 to 12 mg/kg/min), the response was poor in the other subject (M-value changed from 4 to 5 mg/kg/min). Brain and pancreas concentrations of kynurenines observed in the responder were similar to those in a healthy control subject, whereas kynurenines determined in the non-responder were about 25% higher and similar to those found in a severely insulin-resistant patient. Consistent with this outcome, M-values were negatively correlated with both kynurenic acid levels (R^2 = 0.68, p = 0.09) and the kynurenine-to-tryptophan ratio (R^2 = 0.63, p = 0.11). Conclusion: The data indicate that kynurenine pathway metabolites are increased in subjects with insulin resistance prior to the overt manifestation of hyperglycemia. Moreover, successful metformin treatment leads to a normalization of tryptophan metabolism, most likely as a result of a decreased contribution from the kynurenine metabolic pathway. Published by Elsevier Inc. Glioblastoma multiforme (GBM) is a highly aggressive brain tumor with a median survival time of only 14 months after treatment. It is urgent to find new therapeutic drugs that increase the survival time of GBM patients.
To achieve this goal, we screened differentially expressed genes between long-term and short-term surviving GBM patients from the Gene Expression Omnibus database and found a gene expression signature for the long-term surviving GBM patients. The signaling networks of all those differentially expressed genes converged on protein binding, extracellular matrix and tissue development, as revealed in BiNGO and Cytoscape. Drug repositioning in the Connectivity Map using this gene expression signature identified repaglinide, a first-line drug for diabetes mellitus, as the most promising novel drug for GBM. In vitro experiments demonstrated that repaglinide significantly inhibited the proliferation and migration of human GBM cells. In vivo experiments demonstrated that repaglinide prominently prolonged the median survival time of mice bearing orthotopic glioma. Mechanistically, repaglinide significantly reduced Bcl-2, Beclin-1 and PD-L1 expression in glioma tissues, indicating that repaglinide may exert its anti-cancer effects via apoptotic, autophagic and immune checkpoint signaling. Taken together, repaglinide is likely to be an effective drug to prolong the life span of GBM patients. (C) 2017 Published by Elsevier Inc. In the treatment of type 2 diabetes, improvements in glucose control are often linked to side effects such as weight gain and altered lipid metabolism, increasing the risk of cardiovascular disease. It is therefore important to develop antidiabetic drugs that exert beneficial effects on insulin sensitivity and lipid metabolism at the same time. Here we demonstrate that syringin, a naturally occurring glucoside, improves glucose tolerance without increased weight gain in high-fat diet-induced obese mice. Syringin augmented insulin-stimulated Akt phosphorylation in skeletal muscle, epididymal adipose tissue (EAT), and the liver, showing an insulin-sensitizing activity.
Syringin-treated mice also showed markedly elevated adiponectin production in EAT and suppressed expression of pro-inflammatory cytokines in peripheral tissues, indicating a significant reduction in low-grade chronic inflammation. Additionally, syringin enhanced AMP-activated protein kinase activity and decreased the expression of lipogenic genes in skeletal muscle, which was associated with reduced endoplasmic reticulum (ER) stress. Taken together, our data suggest that syringin attenuates HFD-induced insulin resistance through the suppressive effect of adiponectin on low-grade inflammation, lipotoxicity, and ER stress, and show syringin as a potential therapeutic agent for the prevention and treatment of type 2 diabetes with a low risk of adverse effects such as weight gain and dysregulated lipid metabolism. (C) 2017 Elsevier Inc. All rights reserved. Osteoporosis is one of the most prevalent age-related diseases worldwide, and vertebral fracture is by far the most common osteoporotic fracture. Reduced bone formation caused by senescence is a main cause of senile osteoporosis; however, how to improve the osteogenic differentiation of osteoporotic bone marrow mesenchymal stem cells (BMSCs) remains a challenge. This study aimed to investigate changes in autophagic levels in osteoporotic BMSCs derived from human vertebral bodies, and then to influence osteogenesis through the regulation of autophagy. We found that hBMSCs from osteoporotic patients displayed senescence-associated phenotypes and a significantly reduced autophagic level compared to those derived from healthy donors. Meanwhile, the osteogenic potential was remarkably decreased in osteoporotic hBMSCs, suggesting an inherent relationship between autophagy and osteogenic differentiation. Furthermore, rapamycin (RAP) significantly improved osteogenic differentiation through autophagy activation. However, the osteogenesis of hBMSCs was reversed by the autophagy inhibitor 3-methyladenine (3-MA).
To provide more solid evidence, hBMSCs pretreated with osteogenesis induction medium in the presence of 3-MA or RAP were implanted into nude mice. In vivo analysis showed that RAP treatment induced larger ectopic bone mass and more osteoid tissue; however, this restored osteogenic potential was significantly inhibited by 3-MA pretreatment. In conclusion, our study indicates the pivotal role of autophagy in the osteogenic differentiation of hBMSCs and offers a novel therapeutic target for osteoporosis treatment. (C) 2017 Elsevier Inc. All rights reserved. Soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) proteins mediate intracellular membrane fusion by forming a ternary SNARE complex. A minimalist approach utilizing proteoliposomes with reconstituted SNARE proteins has yielded a wealth of information pinpointing the molecular mechanism of SNARE-mediated fusion and its regulation by accessory proteins. Two important attributes of membrane fusion are lipid mixing and the formation of an aqueous passage between apposing membranes. These two attributes are typically observed by using various fluorescent dyes. Currently available in vitro assay systems for observing fusion pore opening have several weaknesses, such as cargo bleeding, incomplete removal of unencapsulated dyes, and inadequate information regarding the size of the fusion pore, limiting measurements of the final stage of membrane fusion. In the present study, we used a biotinylated green fluorescent protein and streptavidin conjugated with DyLight 594 (DyStrp) as a Förster resonance energy transfer (FRET) donor and acceptor, respectively. This FRET pair, encapsulated in each v-vesicle containing synaptobrevin and t-vesicle containing a binary acceptor complex of syntaxin 1a and synaptosomal-associated protein 25, revealed the opening of a large fusion pore of more than 5 nm, without unwanted signals from unencapsulated dyes or leakage.
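As general background to the ratiometric FRET readout described above (a generic sketch of the standard arithmetic, not the study's analysis pipeline; all numbers below are assumed illustrative values), apparent FRET efficiency can be estimated from donor fluorescence measured with and without the acceptor, and the ensemble value scales with the fraction of donors that end up paired with an acceptor, which is set by the acceptor:donor molar ratio after vesicle contents mix:

```python
def fret_efficiency(f_da: float, f_d: float) -> float:
    """Apparent FRET efficiency from donor intensity with the
    acceptor present (f_da) and without it (f_d):
    E = 1 - F_DA / F_D."""
    return 1.0 - f_da / f_d

def ensemble_efficiency(e_pair: float, paired_fraction: float) -> float:
    """With a fixed per-pair efficiency e_pair, the measured
    ensemble efficiency scales with the fraction of donors
    that acquired a nearby acceptor (hypothetical linear
    mixing model for illustration)."""
    return e_pair * paired_fraction

# Assumed illustrative intensities (arbitrary units, not measured data):
e = fret_efficiency(f_da=30.0, f_d=100.0)            # approximately 0.7
mixed = ensemble_efficiency(e_pair=e, paired_fraction=0.5)
print(e, mixed)
```

Under such a model, reading the ensemble efficiency back against the known per-pair efficiency gives the paired fraction, which is how a molar-ratio dependence can report the stoichiometry of merged vesicles.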
This system enabled determination of the stoichiometry of the merging vesicles because the FRET efficiency of the FRET pair depended on the molar ratio between the dyes. Here, we report a robust and informative assay for SNARE-mediated fusion pore opening. (C) 2017 Elsevier Inc. All rights reserved. MicroRNA-125b has been reported to play a novel biological function in the progression and development of several kinds of leukemia. However, the detailed role of miR-125b in acute myeloid leukemia (AML) remains largely unknown. The present study aimed to investigate the biological role of miR-125b in AML and the potential molecular mechanism involved in this process. Our results showed that overexpression of miR-125b suppressed AML cell proliferation and invasion and promoted apoptosis in a dose-dependent manner, while miR-NC did not show the same effect. In addition, miR-125b induced G2/M cell cycle arrest in AML cells in vitro. Overexpression of miR-125b resulted in a significant decrease in the expression of p-IkappaB-alpha and inhibition of IkappaB-alpha degradation, and the nuclear translocation of the NF-kappa B subunit p65 was simultaneously abrogated by miR-125b. To further verify that miR-125b targeted the NF-kappa B signaling pathway, the NF-kappa B-regulated downstream genes associated with cell cycle arrest and apoptosis were also determined. The results showed that miR-125b also affected the expression of NF-kappa B-regulated genes involved in cell cycle arrest and apoptosis. In conclusion, the present work demonstrates that miR-125b can significantly inhibit human AML cell invasion and proliferation and promote apoptosis by targeting the NF-kappa B signaling pathway, and thus it can be viewed as a promising therapeutic target for AML. (C) 2017 Published by Elsevier Inc. Long non-coding RNAs (lncRNAs) have emerged as critical regulators of the progression of human cancers, including colorectal cancer (CRC).
The study of genome-wide lncRNA expression patterns in metastatic CRC could provide novel mechanisms underlying CRC carcinogenesis. Here, we determined the lncRNA expression profiles of CRC with or without lymph node metastasis (LNM) based on microarray analysis. We found that 2439 lncRNAs and 1654 mRNAs were differentially expressed in metastatic CRC relative to primary CRC. Among these dysregulated lncRNAs, FBXL19-AS1 was the most significantly upregulated lncRNA in metastatic tumors. Functionally, knockdown of FBXL19-AS1 exerted tumor-suppressive effects by inhibiting cell proliferation, migration and invasion in vitro and tumor growth and metastasis in vivo. Overexpression of FBXL19-AS1 was markedly correlated with TNM stage and LNM in CRC. Bioinformatics analysis predicted that miR-203 was potentially targeted by FBXL19-AS1, which was confirmed by dual-luciferase reporter assay. Pearson's correlation analysis showed that miR-203 expression was negatively related to FBXL19-AS1 in tumor tissues. Finally, miR-203 inhibition abrogated the effect of FBXL19-AS1 knockdown on the proliferation and invasion of LoVo cells. Our results reveal the cancer-promoting effect of FBXL19-AS1, acting as a molecular sponge that negatively modulates miR-203, which might provide new insight into the understanding of CRC development. (C) 2017 Elsevier Inc. All rights reserved. The Ca2+ sensor proteins STIM1 and STIM2 are crucial elements of store-operated calcium entry (SOCE) in breast cancer cells. Increased SOCE activity may contribute to the epithelial-mesenchymal transition (EMT) and increase cell migration and invasion. However, the roles of STIM1 and STIM2 in TGF-beta-induced EMT are still unclear. In this study, we demonstrate roles of STIMs in TGF-beta-induced EMT in breast cancer cells. In particular, STIM1 and STIM2 expression affected TGF-beta-induced EMT by mediating SOCE in MDA-MB-231 and MCF-7 breast cancer cells.
The specific SOCE inhibitor YM58483 blocked TGF-beta-induced EMT, and the differing effects of STIM1 and STIM2 on TGF-beta-induced EMT correlated with their differing roles in SOCE. Finally, we showed that STIM2 is associated with non-store-operated calcium entry (non-SOCE) during TGF-beta-induced EMT, whereas STIM1 is not. Moreover, this non-SOCE is likely to be receptor-operated calcium entry (ROCE). In conclusion, the STIM1 and STIM2 proteins play important roles in TGF-beta-induced EMT, and these effects are related to both SOCE and non-SOCE. (C) 2017 Elsevier Inc. All rights reserved. STAP-2 is an adaptor molecule regulating several signaling pathways, including TLRs and cytokine/chemokine receptors, in immune cells. We previously reported that STAP-2 enhances SDF-1 alpha-induced Vav1/Rac1-mediated T-cell chemotaxis. However, the detailed mechanisms of STAP-2 involvement in enhancing T-cell chemotaxis remain unknown. In the present study, we demonstrate that STAP-2 directly interacts with Pyk2, a key molecule in the regulation of SDF-1 alpha/CXCR4-mediated T-cell chemotaxis, and increases phosphorylation of Pyk2. Pyk2 itself can induce STAP-2 Y250 phosphorylation, and this phosphorylation is critical for maximal interaction between STAP-2 and Pyk2. Finally, SDF-1 alpha-induced T-cell chemotaxis is inhibited by treatment with Pyk2 siRNA or AG17, an inhibitor of Pyk2, in Jurkat cells overexpressing STAP-2. Taken together, the Pyk2/STAP-2 interaction is a novel mechanism regulating SDF-1 alpha-dependent T-cell chemotaxis. (C) 2017 Elsevier Inc. All rights reserved. Nitric oxide (NO) plays an essential role in a myriad of physiological and pathological processes, but the molecular mechanism of its action and the corresponding direct targets have remained largely unknown. We used cellular, biochemical, and genetic approaches to decipher the potential role of NO in root growth in Arabidopsis thaliana.
We specifically demonstrate that exogenous application of NO mimics the phenotype of the NO-overproducing mutant (nox1), displaying reduced root growth and meristem size. Using root-specific cell marker lines, we show that the cells in the cortex layer are more sensitive to NO, as they show enhanced size. Examination of total S-nitrosylated proteins showed higher levels in the nox1 mutant than in the wild type. Using an in vitro assay, we demonstrate that plastidial glyceraldehyde-3-phosphate dehydrogenase (GAPDH) is one of NO's direct targets. The function of GAPDH in glycolysis provides a rationale for S-nitrosylation of this enzyme and its subsequently reduced activity, and ultimately reduced growth in roots. Indeed, the rescue of the root growth phenotype in nox1 by exogenous application of glycine and serine, the downstream products of plastidial GAPDH, provides unequivocal evidence for a mechanism of NO action through S-nitrosylation of key proteins, thereby delicately balancing growth and stress responses. (C) 2017 Elsevier Inc. All rights reserved. In P. falciparum, antioxidant proteins of the glutathione and thioredoxin systems are compartmentalized. Some subcellular compartments have only a partial complement of these proteins. This lack of key antioxidant proteins in certain subcellular compartments might be compensated by functional complementation between these systems. By assessing the cross-talk between these systems, we show for the first time that the glutathione system can reduce thioredoxins that are poor substrates for thioredoxin reductase (Thioredoxin-like protein 1 and Thioredoxin 2) and thioredoxins that lack access to thioredoxin reductase (Thioredoxin 2). Our data suggest that crosstalk between the glutathione and thioredoxin systems does exist; this could compensate for the absence of certain antioxidant proteins from key subcellular compartments. (C) 2017 Elsevier Inc. All rights reserved.
CSN5 (also known as COPS5) is a newly characterized oncogene involved in various types of cancer. However, its expression pattern and biological functions in renal cell carcinoma (RCC) are unknown. Here, we found that CSN5 expression was higher in RCC tissues than in paired normal renal tissues. Additionally, we demonstrated that a high CSN5 level was closely correlated with tumor progression and poor survival in RCC patients. Our results showed that increased expression of CSN5 was observed in RCC cell lines and that knockdown of CSN5 significantly suppressed the migration and invasion of RCC cells in vitro and in vivo. Additionally, CSN5 contributes to the metastasis and epithelial-mesenchymal transition (EMT) of RCC cells. Further investigation revealed that CSN5 led to the metastasis and EMT activation of RCC cells by increasing ZEB1 expression. Mechanistically, we found that CSN5 directly bound ZEB1 and decreased its ubiquitination to enhance the protein stability of ZEB1 in RCC cells. Taken together, our data identify CSN5 as a critical oncoprotein involved in the migration and invasion of RCC cells, which could serve as a potential therapeutic target in RCC patients. (C) 2017 Elsevier Inc. All rights reserved. We investigated the effects of PI3K inhibitors on the differentiation of insulin-producing cells derived from human embryonic stem cells. Here, we report that human embryonic stem cells induced with phosphatidylinositol-3-kinase (PI3K) p110 beta inhibitors could produce more mature islet-like cells. Findings were validated by immunofluorescence analysis, quantitative real-time PCR, insulin secretion in vitro, and cell transplantation into diabetic SCID mice. Immunofluorescence analysis revealed that unihormonal insulin-positive cells were predominant in cultures, with rare polyhormonal cells. Real-time PCR data showed that the islet-like cells expressed key markers of pancreatic endocrine hormones and mature pancreatic beta cells, including MAFA.
Furthermore, this study showed that the expression of most pancreatic endocrine hormones was similar between groups treated with LY294002 (a nonselective PI3K inhibitor) and TGX-221 (a p110 beta isoform-selective PI3K inhibitor). However, the level of insulin mRNA in TGX-221-treated cells was significantly higher than that in LY294002-treated cells. In addition, islet-like cells displayed glucose-stimulated insulin secretion in vitro. After transplantation, islet-like cells improved glycaemic control and improved survival in diabetic mice. This study demonstrated an important role for PI3K p110 beta in regulating the differentiation and maturation of islet-like cells derived from human embryonic stem cells. (C) 2017 Elsevier Inc. All rights reserved. Hypothalamic insulin receptor signaling regulates energy balance and glucose homeostasis via agouti-related protein (AgRP). While protein tyrosine phosphatase 1B (PTP1B) is classically known to be a negative regulator of peripheral insulin signaling that dephosphorylates both insulin receptor beta (IR beta) and insulin receptor substrate, the role of PTP1B in hypothalamic insulin signaling remains to be fully elucidated. In the present study, we investigated the role of PTP1B in hypothalamic insulin signaling using PTP1B-deficient (KO) mice in vivo and ex vivo. In the in vivo study, hypothalamic insulin resistance induced by a high-fat diet (HFD) improved in KO mice compared to wild-type (WT) mice. Hypothalamic AgRP mRNA expression levels were also significantly decreased in KO mice, independent of body weight changes. In an ex vivo study using hypothalamic organotypic cultures, insulin treatment significantly increased the phosphorylation of both IR beta and Akt in the hypothalamus of KO mice compared to WT mice, and also significantly decreased AgRP mRNA expression levels in KO mice.
While incubation with inhibitors of phosphatidylinositol 3-kinase (PI3K) had no effect on basal levels of Akt phosphorylation, they suppressed the insulin induction of Akt phosphorylation to almost basal levels in WT and KO mice. Inhibition of the PI3K-Akt pathway blocked the downregulation of AgRP mRNA expression in KO mice treated with insulin. These data suggest that PTP1B acts on hypothalamic insulin signaling via the PI3K-Akt pathway. Together, our results suggest that PTP1B deficiency improves hypothalamic insulin sensitivity, resulting in the attenuation of AgRP mRNA expression under HFD conditions. (C) 2017 Elsevier Inc. All rights reserved. The Up-frameshift (Upf) complex facilitates the degradation of aberrant mRNAs containing a premature termination codon (PTC), and of their products, in yeast. Here we report that Sse1, a member of the Hsp110 family, and Hsp70 play a crucial role in the Upf-dependent degradation of the truncated FLAG-Pgk1-300 protein derived from PGK1 mRNA harboring a PTC at codon position 300. Sse1 was required for Upf-dependent rapid degradation of FLAG-Pgk1-300. FLAG-Pgk1-300 was significantly stabilized in ATP hydrolysis-defective sse1-1 mutant cells compared with wild-type cells. Consistently, Sse1 and Hsp70 reduced the level of an insoluble form of FLAG-Pgk1-300. We propose that the Sse1/Hsp70 complex maintains the solubility of FLAG-Pgk1-300, thereby stimulating its Upf-dependent degradation by the proteasome. (C) 2017 Elsevier Inc. All rights reserved. Membrane contact sites between organelles serve as molecular hubs for the exchange of metabolites and signals. In yeast, the Endoplasmic Reticulum - Mitochondrion Encounter Structure (ERMES) tethers these two organelles, likely to facilitate the non-vesicular exchange of essential phospholipids. Present in fungi and amoebae but not in metazoans, ERMES is composed of five distinct subunits; among those, Mdm12, Mmm1 and Mdm34 each contain an SMP domain functioning as a lipid transfer module.
We previously showed that the SMP domains of Mdm12 and Mmm1 form a hetero-tetramer. Here we describe our strategy to diversify the number of Mdm12/Mmm1 complexes suited for structural studies. We use sequence analysis of orthologues, combined with protein engineering of disordered regions, to guide the design of protein constructs and expand the repertoire of Mdm12/Mmm1 complexes more likely to crystallize. Using this combinatorial approach, we report crystals of Mdm12/Mmm1 ERMES complexes currently diffracting to 4.5 angstrom resolution and a new structure of Mdm12 solved at 4.1 angstrom resolution. Our structure reveals a monomeric form of Mdm12 with a conformationally dynamic N-terminal beta-strand; it differs from a previously reported homodimeric structure in which the N-terminal beta-strands were swapped to promote dimerization. Based on our electron microscopy data, we propose a refined pseudo-atomic model of the Mdm12/Mmm1 complex that agrees with our crystallographic and small-angle X-ray scattering (SAXS) solution data. (C) 2017 Elsevier Inc. All rights reserved. Sterol regulatory element-binding protein 1 (SREBP1) is a key regulatory factor that controls lipid homeostasis. Overactivation of SREBP1 and elevated lipid biogenesis are considered major characteristics of malignancies such as prostate cancer, endometrial cancer, and glioblastoma. However, the impact of SREBP1 activation on the progression of pancreatic cancer has not been explored. The present study examines the effect of suppressing SREBP1 activation with its inhibitors fatostatin and PF429242, and analyzes the impact of this inhibition on the SREBP1 downstream signaling cascade, including fatty acid synthase (FAS), hydroxymethylglutaryl-CoA reductase (HMGCoAR), stearoyl-CoA desaturase-1 (SCD-1), and the tumor suppressor protein p53, in MIA PaCa-2 pancreatic cancer cells.
Both fatostatin and PF429242 inhibited the growth of MIA PaCa-2 cells in a time- and concentration-dependent manner, with maximal inhibition attained at 72 h and IC50 values of 14.5 mu M and 24.5 mu M, respectively. Detailed Western blot analysis of cells treated with fatostatin or PF429242 for 72 h showed a significant decrease in the levels of the active form of SREBP1, its downstream signaling proteins FAS, SCD-1, and HMGCoAR, and the mutant form of the tumor suppressor protein p53, in comparison to the levels observed in the vehicle-treated control group of MIA PaCa-2 pancreatic cells over the same period. Our in vitro data suggest that SREBP1 may contribute to pancreatic tumor growth, and its inhibition could be considered a potential strategy in the management of pancreatic cancer cell proliferation. (C) 2017 Elsevier Inc. All rights reserved. Phosphatidylcholine (PtdCho) is a common and abundant phospholipid in most eukaryotic organisms. Although the model green alga Chlamydomonas reinhardtii has been known to lack PtdCho, we recently detected PtdCho in four Chlamydomonas species. A homology search of draft genomic sequences of the four PtdCho-containing algae suggested the existence of phosphoethanolamine-N-methyltransferase (PEAMT), the key enzyme in PtdCho biosynthesis in land plants, in C. applanata and C. asymmetrica. Here we analyzed the putative genes encoding PEAMT in C. applanata and C. asymmetrica, named CapPEAMT and CasPEAMT, respectively. In vitro assays with recombinant CapPEAMT and CasPEAMT indicated that they have methylation activity toward phosphoethanolamine, but not toward phosphomonomethylethanolamine, in contrast with land plant PEAMTs, which possess all three successive methylation activities. (C) 2017 Elsevier Inc. All rights reserved. As important components of the glucosinolate-myrosinase system, specifier proteins mediate plant defense against herbivory and pathogen attacks.
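As an aside on the IC50 values quoted above: an IC50 is conventionally read off a fitted concentration-response curve. The sketch below is purely illustrative (it is not the authors' analysis pipeline, and the concentrations and viabilities are invented); it estimates an IC50 by log-linear interpolation between the two tested concentrations that bracket 50% viability.

```python
import math

def ic50_by_interpolation(concs, viabilities):
    """Estimate IC50 by log-linear interpolation between the two
    concentrations bracketing 50% viability.
    concs: ascending drug concentrations (e.g., micromolar).
    viabilities: fraction of viable cells at each concentration."""
    pairs = list(zip(concs, viabilities))
    for (c_lo, v_lo), (c_hi, v_hi) in zip(pairs, pairs[1:]):
        if v_lo >= 0.5 >= v_hi:  # 50% viability crossed in this interval
            frac = (v_lo - 0.5) / (v_lo - v_hi)
            log_ic50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_ic50
    raise ValueError("viability never crosses 50%")

# Hypothetical 72 h dose-response data (micromolar, fraction viable)
concs = [1, 3, 10, 30, 100]
viab = [0.95, 0.85, 0.60, 0.35, 0.10]
print(round(ic50_by_interpolation(concs, viab), 1))  # -> 15.5 (micromolar)
```

In practice a four-parameter logistic fit over all points is preferred; interpolation is shown only because it makes the definition of IC50 (the concentration giving half-maximal inhibition) explicit.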
After tissue disruption, glucosinolates are hydrolyzed by myrosinases to unstable aglucones, which rearrange to form defensive isothiocyanates. However, specifier proteins can redirect this reaction to form other products. Identified specifier proteins to date include epithiospecifier proteins (ESPs), thiocyanate-forming proteins (TFPs), and nitrile-specifier proteins (NSPs). Recently, the structures of ESP and TFP have been reported. However, both the structure and the catalytic mechanism of NSPs remain enigmatic. Here, we solved the crystal structure of the NSP1 protein from Arabidopsis thaliana (AtNSP1). Structural comparisons with ESP and TFP proteins revealed several structural features of AtNSP1 that differ from those of the two proteins. Subsequent molecular docking studies showed that the R292 residue in AtNSP1 displayed a conformation different from those of the corresponding residues in ESP and TFP proteins, which might account for the product specificity and catalytic mechanism of AtNSP1. Taken together, the present study provides important insights into the molecular mechanisms underlying the different product spectra of NSPs and the other two types of specifier proteins, and sheds light on future studies of the detailed mechanisms of other specifier proteins. (C) 2017 Elsevier Inc. All rights reserved. The opportunistic pathogen Candida albicans forms invasive filaments that grow into host tissues during disease. The glycosylated, integral plasma membrane protein Dfi1 is important for invasive filamentation in a laboratory model, and for lethality in murine disseminated candidiasis. However, Dfi1 topology and the domains essential for Dfi1 biogenesis were undefined. Sequence analysis predicted that Dfi1 contains two transmembrane regions, located near the N- and C-termini.
In this communication, we show that Dfi1 remains an integral membrane protein despite deletion of either predicted transmembrane region, whereas deletion of both regions results in a soluble protein. Additionally, Dfi1 that was properly oriented in the membrane, as indicated by N-linked glycosylation, was observed when either transmembrane region was deleted, but was absent when both transmembrane regions were deleted. Interestingly, deletion of the N-terminal transmembrane region resulted in production of two forms of Dfi1: most of the protein molecules acquired normal N-linked glycosylation, whereas a smaller population failed to become normally N-linked glycosylated. This defect was reversed by replacement of the N-terminal hydrophobic sequence with one synthetic transmembrane sequence but not another. Finally, microscopy studies revealed that Dfi1 lacking the N-terminal transmembrane region was observed at the cell periphery, where full-length Dfi1 normally localizes, whereas the double-truncation mutant was diffusely intracellular. Therefore, mature Dfi1 protein contains two transmembrane domains that contribute to its biogenesis. (C) 2017 Elsevier Inc. All rights reserved. Lipin-1 has dual functions in the regulation of lipid and energy metabolism according to its subcellular localization, which is tightly controlled. However, it is unclear how Lipin-1 degradation is regulated. Here, we demonstrate that Lipin-1 is degraded through its DSGXXS motif. We show that Lipin-1 interacts with either of two E3 ubiquitin ligases, BTRC or FBXW11, and that this interaction is DSGXXS-dependent and mediates the attachment of polyubiquitin chains. Further, we demonstrate that degradation of Lipin-1 is regulated by BTRC in the cytoplasm and on membranes. These novel insights into the regulation of human Lipin-1 stability will be useful in planning further studies to elucidate its metabolic processes. (C) 2017 Elsevier Inc. All rights reserved.
We investigated the role of FAD2, which was predicted to encode a fatty acid desaturase of the n-alkane-assimilating yeast Yarrowia lipolytica. Northern blot analysis suggested that FAD2 transcription was upregulated at low temperature or in the presence of n-alkanes or oleic acid. The FAD2 deletion mutant grew as well as the wild-type strain on glucose, n-alkanes, or oleic acid at 30 degrees C, but grew at a slower rate than the wild-type strain at 12 degrees C. The growth of the FAD2 deletion mutant at 12 degrees C was restored by the addition of 18:2, but not 18:1, fatty acids. The amount of 18:2 fatty acid in the wild-type strain was increased by incubation at 12 degrees C and in the presence of n-octadecane. In contrast, 18:2 fatty acid was not detected in the FAD2 deletion mutant, confirming that FAD2 encodes the Delta 12-fatty acid desaturase. These results suggest that the Delta 12-fatty acid desaturase is involved in the growth of Y. lipolytica at low temperature. (C) 2017 Elsevier Inc. All rights reserved. We investigated the effects of essential amino acids on intestinal stem cell proliferation and differentiation using murine small intestinal organoids (enteroids) from the jejunum. By selectively removing individual essential amino acids from the culture medium, we found that 24 h of methionine (Met) deprivation markedly suppressed cell proliferation in enteroids. This effect was rescued when enteroids cultured in Met-deprivation medium for 12 h were transferred to complete medium, suggesting that Met plays an important role in enteroid cell proliferation. In addition, mRNA levels of the stem cell marker leucine-rich repeat-containing G protein-coupled receptor 5 (Lgr5) decreased in enteroids grown under Met-deprivation conditions. Consistent with this observation, Met deprivation also attenuated Lgr5-EGFP fluorescence intensity in enteroids.
In contrast, Met deprivation enhanced mRNA levels of the enteroendocrine cell marker chromogranin A (ChgA) and markers of K cells, enterochromaffin cells, goblet cells, and Paneth cells. Immunofluorescence experiments demonstrated that Met deprivation led to an increase in the number of ChgA-positive cells. These results suggest that Met deprivation suppresses stem cell proliferation, thereby promoting differentiation. In conclusion, Met is an important nutrient in the maintenance of intestinal stem cells, and Met deprivation potentially affects cell differentiation. (C) 2017 Elsevier Inc. All rights reserved. Zinc is an essential element for biological systems. However, excessive exogenous Zn2+ can disrupt cellular Zn2+ homeostasis and cause toxicity. In particular, exposure to zinc salts or ZnO nanoparticles can induce respiratory injury. Although previous studies have indicated that organelle damage (including to mitochondria or lysosomes) and reactive oxygen species (ROS) production are involved in Zn2+-induced toxicity, the interplay between mitochondrial/lysosomal damage and ROS production remains obscure. Here, we demonstrated that Zn2+ could induce deglycosylation of lysosome-associated membrane proteins 1 and 2 (LAMP-1 and LAMP-2), which primarily localize to late endosomes/lysosomes, in A549 lung epithelium cells. Intriguingly, LAMP-2 knockdown further aggravated Zn2+-mediated ROS production and cell death, indicating that LAMP-2, but not LAMP-1, was involved in Zn2+-induced toxicity. Our results provide the new insight that LAMP-2 contributes to ROS clearance and protection against cell death induced by Zn2+ treatment, which improves our understanding of Zn2+-induced toxicity in the respiratory system. (C) 2017 Elsevier Inc. All rights reserved. Burkitt lymphoma (BL) is a highly aggressive B-cell neoplasm. Although BL is relatively sensitive to chemotherapy, some patients do not respond to initial therapy or relapse after standard therapy, which leads to poor prognosis.
The mechanisms underlying BL chemoresistance remain poorly defined. Here, we report a mechanism linking the phosphorylation of STAT3 on Tyr705 to BL chemoresistance. In chemoresistant BL cells, STAT3 was activated and phosphorylated on Tyr705 in response to the generation of reactive oxygen species (ROS), which induced Src Tyr416 phosphorylation after treatment with multiple chemotherapeutics. As a transcription factor, STAT3 with elevated Tyr705 phosphorylation increased the expression of GPx1 and SOD2, both of which protect cells against oxidative damage. Our findings revealed that the ROS-Src-STAT3-antioxidation pathway mediates negative feedback inhibition of apoptosis induced by chemotherapy. Thus, the phosphorylation of STAT3 on Tyr705 might be a target for the chemo-sensitization of BL. (C) 2017 Elsevier Inc. All rights reserved. Persistent or excess activation of NF-kappa B leads to cancer, autoimmune diseases, and inflammatory diseases. Therefore, activated NF-kappa B needs to be terminated after induction, which highlights the physiological significance of negative regulators of NF-kappa B. However, the molecular mechanisms that negatively regulate NF-kappa B are not well understood. Here, we report that Ring Finger Protein 8 (RNF8), an E3 ubiquitin ligase, inhibits TNF alpha-mediated NF-kappa B activation by targeting I kappa B kinase (IKK). Upon TNF alpha stimulation, RNF8 binds to the catalytic subunits of the IKK complex, resulting in inhibition of IKK alpha/beta phosphorylation and subsequent NF-kappa B activation. RNF8 targets the IKK complex in a manner independent of its RING domain. We further provide evidence that silencing of RNF8 results in enhanced TNF alpha-induced IKK activation and increased expression of the NF-kappa B-induced inflammatory cytokine IL-8. Our study identifies a previously unrecognized role for RNF8 in the negative regulation of NF-kappa B activation by targeting and deactivating the IKK complex.
(C) 2017 Elsevier Inc. All rights reserved. C-C chemokine receptor type 4 (CCR4) has been reported to correlate with lung cancer. However, the role of CCR4 in human non-small cell lung cancer (NSCLC) patients is not well defined. Here, we demonstrated that increased expression of CCR4 was associated with clinical stage and that CCR4 was an independent risk factor for overall survival in NSCLC patients. Moreover, tumor-infiltrating regulatory T (Treg) cells were more abundant in tumors than in matched adjacent tissues in CCR4-positive NSCLC. Higher expression of the chemokines CCL17 and CCL22 could recruit Treg cells to tumor sites in NSCLC. Treg cells among tumor-infiltrating lymphocytes exhibited a higher level of suppressive activity on effector T cells than those from matched adjacent tissues in NSCLC patients. A significant NK cell reduction was observed in tumor regions compared to non-tumor regions. NK cells from tumors showed reduced killing capacity against target cells and reduced CD69 expression in vitro. The addition of Treg cells from NSCLC patients efficiently inhibited the anti-tumor ability of autologous NK cells. Treatment with anti-TGF-beta antibody restored the impaired cytotoxic activity of T cells and NK cells from tumor tissues. Our results indicate that TGF-beta plays an important role in impairing effector T (Teff) cells and NK cells. It will therefore be valuable to develop therapeutic strategies against the CCR4 and TGF-beta pathways for therapy of NSCLC. (C) 2017 Elsevier Inc. All rights reserved. Background: The mechanisms underlying chronic and persistent pain associated with chronic pancreatitis (CP) are not completely understood. The cholinergic system is one of the major neural pathways of the pancreas; this system also plays an important role in chronic pain. We hypothesized that the high-affinity choline transporter CHT1, a main determinant of cholinergic signaling capacity, is involved in regulating pain associated with CP.
Methods: CP was induced by intraductal injection of 2% trinitrobenzene sulfonic acid (TNBS) in Sprague-Dawley rats. Pathological examination was used to evaluate inflammation of the pancreas, and hyperalgesia was assessed by measuring the number of withdrawal events evoked by application of von Frey filaments. CHT1 expression in pancreas-specific dorsal root ganglia (DRGs) was assessed through immunohistochemistry and western blotting. We also intraperitoneally injected rats with hemicholinium-3 (HC-3, a specific inhibitor of CHT1) and observed its effects on the visceral hyperalgesia induced by CP and on acetylcholine (ACh) levels in the DRGs, using an acetylcholine/acetylcholinesterase assay kit. Results: Signs of CP were observed 21 days after TNBS injection. Rats subjected to TNBS infusions had increased sensitivity to mechanical stimulation of the abdomen. CHT1-immunoreactive cells were increased in the DRGs from rats with CP compared to naive or sham rats. Western blots indicated that CHT1 expression was significantly up-regulated in TNBS-treated rats compared to naive or sham-operated rats at all time points following surgery. In the TNBS group, CHT1 expression was higher on day 28 than on day 7 or day 14, but there was no statistical difference in CHT1 expression on day 28 vs. day 21. Treatment with HC-3 (60 mu g/kg, 80 mu g/kg, or 100 mu g/kg) markedly enhanced the mechanical hyperalgesia and reduced ACh levels in a dose-dependent manner in rats with CP. Conclusion: We report for the first time that CHT1 may be involved in pain modulation in CP, as it plays an important role in pain inhibition. Increased CHT1 activity or up-regulation of its expression may be used to treat pain in patients with CP. (C) 2017 Elsevier Inc. All rights reserved. Poly(ADP-ribose) polymerase 1 (PARP1) is an ADP-ribosylation enzyme that plays important roles in a variety of cellular processes, including the DNA damage response and tumor development.
However, the post-transcriptional regulation of PARP1 remains largely unknown. In this study, using an RNA immunoprecipitation sequencing (RIP-seq) assay, we identified that PARP1 mRNA is associated with nuclear factor 90 (NF90). The mRNA and protein levels of PARP1 are dramatically decreased in NF90-depleted cells, and NF90 stabilizes PARP1 mRNA through its 3'UTR. Moreover, the expression levels of PARP1 and NF90 are positively correlated in hepatocellular carcinoma (HCC). Finally, we demonstrated that NF90-depleted cells are sensitive to the PARP inhibitor olaparib (AZD2281) and to DNA damage agents. Taken together, these results suggest that NF90 regulates PARP1 mRNA stability in hepatocellular carcinoma cells, and NF90 is a potential target to inhibit PARP1 activity. (C) 2017 Elsevier Inc. All rights reserved. In eukaryotes, numerous genetic factors contribute to lifespan, including metabolic enzymes, signal transducers, and transcription factors. As previously reported, the forkhead-like transcription factor (FHL1) gene is required for yeast replicative lifespan and cell proliferation. To determine how Fhl1p regulates lifespan, we performed a DNA microarray analysis of a heterozygous diploid strain deleted for FHL1. We discovered numerous Fhl1p target genes, which were then screened for lifespan-regulating activity. We identified the ribonucleotide reductase (RNR) 1 gene (RNR1) as a regulator of replicative lifespan. RNR1 encodes a large subunit of the RNR complex, which consists of two large (Rnr1p/Rnr3p) and two small (Rnr2p/Rnr4p) subunits. Heterozygous deletion of FHL1 reduced transcription of RNR1 and RNR3, but not RNR2 and RNR4. Chromatin immunoprecipitation showed that Fhl1p binds to the promoter regions of RNR1 and RNR3. Cells harboring an RNR1 deletion or an rnr1-C428A mutation, which abolishes RNR catalytic activity, exhibited a short lifespan. In contrast, cells with a deletion of the other RNR genes had a normal lifespan.
Overexpression of RNR1, but not RNR3, restored the lifespan of the heterozygous FHL1 mutant to the wild-type (WT) level. The Delta fhl1/FHL1 mutant exhibited a decrease in dNTP levels and an increase in hydroxyurea (HU) sensitivity. These findings reveal that Fhl1p regulates RNR1 gene transcription to maintain dNTP levels, thus modulating longevity through protection against replication stress. (C) 2017 Elsevier Inc. All rights reserved. Clinical evidence has indicated increased myocardial infarction (MI) morbidity and mortality after exposure to air pollution (particulate matter < 2.5 mu m; PM2.5). However, the mechanisms by which PM2.5 aggravates MI remain unknown. The present study explored the adverse effect of PM2.5 on the myocardium after MI and the potential mechanisms. Male mice subjected to MI surgery were treated with PM2.5 by intranasal instillation. Neonatal mouse ventricular myocytes (NMVMs) subjected to hypoxia were also incubated with PM2.5 to determine the role of PM2.5 in vitro. Exposure to PM2.5 significantly impaired cardiac function and increased the infarct size in MI mice. TUNEL assay, flow cytometry, and western blotting of Caspase 3, Bax, and Bcl-2 indicated that PM2.5 exposure could cause cellular apoptosis in vivo and in vitro. In addition, PM2.5 activated the NF kappa B pathway and increased gene expression of IL-1 beta and IL-6 in NMVMs under hypoxia, which could be effectively reversed by SN-50-induced blockade of NF kappa B translocation to the nucleus. In summary, air pollution induces myocardial apoptosis and thereby impairs cardiac function and aggravates MI via NF kappa B activation. (C) 2017 Elsevier Inc. All rights reserved. We previously reported that transplantation of brain microvascular endothelial cells (MVECs) into a cerebral white matter infarction model improved the animals' behavioral outcome by increasing the number of oligodendrocyte precursor cells (OPCs).
We also revealed that extracellular vesicles (EVs) derived from MVECs promote survival and proliferation of OPCs in vitro. In this study, we investigated the mechanism by which EVs derived from MVECs contribute to OPC survival and proliferation. Protein mass spectrometry and enzyme-linked immunosorbent assays revealed that fibronectin was abundant on the surface of EVs from MVECs. As fibronectin has been reported to promote OPC survival and proliferation via the integrin signaling pathway, we blocked the binding between fibronectin and integrins using RGD sequence mimics. Blocking this binding, however, did not attenuate the survival- and proliferation-promoting effect of EVs on OPCs. Flow cytometric and imaging analyses revealed that fibronectin on EVs mediates their internalization into OPCs through its binding to heparan sulfate proteoglycan on OPCs. OPC survival and proliferation promoted by EVs were attenuated by blocking the internalization of EVs into OPCs. These lines of evidence suggest that fibronectin on EVs mediates their internalization into OPCs, and that the cargo of EVs promotes survival and proliferation of OPCs independently of the integrin signaling pathway. (C) 2017 Elsevier Inc. All rights reserved. Oxidative stress and inflammation play important roles in the pathogenesis of ischemia/reperfusion (I/R) injury. The administration of antioxidants and anti-inflammatory agents has been applied to prevent I/R injury for several decades. Of the numerous compounds administered therapeutically against oxidative stress, nitronyl nitroxide has gained increasing attention because of its sustained ability to scavenge active oxygen radicals. However, its effect in clinical therapy is not ideal. In a previous study, we linked the anti-inflammatory amino acid glycine to nitronyl nitroxide and developed a novel glycine-nitronyl nitroxide (GNN) conjugate, which showed synergistic protection against renal ischemia/reperfusion injury. However, the underlying mechanism remains unclear.
In this study, a hypoxia/reoxygenation (H/R) injury model was established in human umbilical vein endothelial cells (HUVECs), and we found that the GNN conjugate significantly elevated cell viability by reducing the apoptosis rate in H/R-treated HUVECs. Meanwhile, the GNN conjugate attenuated H/R-induced mitochondrial fragmentation, mitochondrial membrane potential reduction, cytochrome c release, and autophagy. To determine the broader applicability of the GNN conjugate in different I/R models and its effect on remote organs, an in vivo hind limb I/R model was established. As expected, the GNN conjugate ameliorated damage to muscle and remote organs. These results demonstrate that the GNN conjugate may be an effective agent against ischemia/reperfusion injury in clinical therapy. (C) 2017 Elsevier Inc. All rights reserved. The rise of social media as a marketing channel, and the research that has supported it, has left open questions as to its impact on actual brand performance. The current authors sought to fill the gap in knowledge on the relationship between social media and real-world conversations and outcomes for brands. Building on a decade's worth of research, they used four key metrics (volume, sentiment, sharing, and influence) to study the potential for correlations between online and offline conversations about brands. There were, in fact, almost no correlations, which suggests the need for marketers to develop separate digital and offline social influence strategies. This study drew on existing decision-process theory to empirically examine the effect of word of mouth (WOM) generated by social media. Prospect theory was adopted to illustrate how both the volume and valence of WOM influence a person's decision to watch a movie through the movie-quality evaluation stage. Using U.S. movie industry data and online post data from Twitter, the authors found that the effect of online WOM is well explained by prospect theory.
In particular, findings suggest that intensively advertising a movie before its release to attract moviegoers could backfire by raising moviegoers' expectations. Filmmakers increasingly depend on trailers as advertising and to generate word of mouth (WOM). This study investigates the extent to which trailers influence WOM in the prerelease context by testing a conceptual model separately on the three most popular movie genres. When viewers perceive greater understanding of the movie from the trailer, the prospect of their liking it is significantly increased. This leads to a substantial increase in viewers' intent to generate WOM and, ultimately, their willingness to pay to see the movie. These novel findings lead to practical implications for studios hoping to stimulate consumer interest, with wider contributions to advertising theory. To encourage viral spread, companies attempt to create captivating and compelling online games. The current research examines the effects of requiring players to use skills when playing games versus rewarding players who recommend games to others. The authors used two methods: an experiment and a field study. In the initial study, calling on players' skills during the game experience positively affected the intention to invite friends to join the game. When marketers added a system of incentives, players were no longer motivated to invite friends to join the game. Using an existing viral promotional game database, the authors replicated the study and confirmed the results. Product innovation, the fragmentation of media, and the surge in digital and social media platforms have forced marketers to increase their focus on short-term sales tactics. The challenge is an ever-broadening landscape of short-term incentives, often with limited insight into their relative benefits and effectiveness. The current research reveals that consumers organize this vast array of incentives into 11 basic offering types, each delivering specific benefits.
This consumer structure does not entirely align with how manufacturers and retailers typically organize these incentives. Although none of the 11 offering categories was perceived to be significantly more effective than any other, there was a wide range of perceived effectiveness among the specific tactics within each. Further research, however, is needed to validate this in market. Drawing on media-dilution effects, this article investigates the role of media planning, such as media use and scheduling, in copromotion efficiency, calculated with super-efficiency data envelopment analysis. The results show that concentrated media use was more effective than extensive media use in improving copromotion efficiency in retail sales. In short-term media scheduling, pulsing scheduling led to higher copromotion efficiency than did continuous scheduling. Two-way interaction effects between the number of media uses and scheduling on copromotion efficiency were also found: the positive effect of concentrated media use was stronger for pulsing scheduling than for continuous scheduling. Finally, long-term, multiyear scheduling contributed to copromotion-efficiency improvement. Kantar Millward Brown regularly analyzes attitudes and reactions toward advertising. Duncan Southgate, global brand director of media and digital, led the firm's 2016 "AdReaction" generational focus on people ages 16 to 45 years, exploring their attitudes and behaviors toward traditional and digital advertising formats. The younger participants (16-19-year-olds) belong to the emerging Generation Z cohort, for whom marketers increasingly will have to adapt short-term media-planning and creative-development tactics. But the implications and importance of Gen Z go well beyond their own spending power (whether personal wealth or via pester power) and should be used to shape long-term strategic marketing decisions, Southgate believes.
Marketers, therefore, should be careful not to underinvest in media spend, creative experimentation, and research to understand what works and why when targeting Gen Z consumers. The aim of the study was to assess the accuracy of the memory of experimentally induced pain and of the affect that accompanies it. Sixty-two healthy female volunteers participated in the study. In the first phase of the study, the participants received three pain stimuli and rated pain intensity, pain unpleasantness, state anxiety, and their positive and negative affect. About a month later, in the second phase of the study, the participants were asked to rate the pain intensity, pain unpleasantness, state anxiety, and the emotions they had felt during the first phase of the study. Both recalled pain intensity and recalled pain unpleasantness were found to be underestimated. Although the positive affect that accompanied pain was remembered accurately, recalled negative affect was overestimated and recalled state anxiety was underestimated. Experienced pain, recalled state anxiety, and recalled positive affect accounted for 44% of the total variance in predicting recalled pain intensity and 61% of the total variance in predicting recalled pain unpleasantness. Together with recent research findings on the memory of other types of pain, the present study supports the idea that pain is accompanied by positive as well as negative emotions, and that positive affect influences the memory of pain. (C) 2017 by the American Society for Pain Management Nursing Hospitalized patients with persistent pain are among the most challenging populations to manage effectively because persistent pain often coexists with acute pain. Nurses play a vital role in pain management; however, gaps in knowledge and detrimental attitudes exist.
The purpose of this study was to evaluate the effectiveness of a targeted, evidence-based pain education program in increasing nurses' knowledge of and attitudes toward pain management. A one-group, paired, pretest/posttest educational intervention design was used. A convenience sample of nurses from three medical and surgical inpatient units was recruited. Participants completed a pretest, the Knowledge and Attitudes Survey Regarding Pain Scale, to assess education needs. Identified gaps were targeted during program design. The program consisted of two 30-minute interactive educational sessions approximately 1 month apart. The first session, delivered by a pharmacist, covered pharmacology and pathophysiology content. The second session, delivered by trained registered nurses, used case studies paired with video scenarios. A total of 51 nurses completed the pretest. The final sample consisted of 24 nurses who completed both the pretest and posttest. The mean age was 30 years; 88% were female, and 92% were baccalaureate prepared. Paired t tests indicated higher posttest total scores (p < .001) after the education program compared with pretest scores. Overall program satisfaction was positive. This study found improvement in persistent pain management knowledge and attitudes among direct care nurses caring for hospitalized patients. A targeted educational program may be an effective and efficient delivery method. (C) 2017 by the American Society for Pain Management Nursing The aim of this cross-sectional study was to evaluate the primary determinants of knowledge and attitudes regarding pain among nurses in a hospital setting. All registered nurses employed at participating units of a university hospital were invited to participate. Information on work experience, education, and hospital unit was collected using a questionnaire. The Knowledge and Attitude Survey Regarding Pain instrument was used to assess knowledge of pain management.
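The paired t test reported above compares each nurse's posttest score with her own pretest score, so the statistic is computed on within-subject differences. As a hedged illustration only (the scores below are invented, not the study's data), a minimal sketch:

```python
import math

def paired_t(pre, post):
    """Paired t statistic: mean of within-subject differences divided by
    its standard error, with df = n - 1."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical pretest/posttest totals for 8 nurses
pre = [25, 27, 30, 24, 28, 26, 29, 25]
post = [27, 30, 31, 28, 30, 29, 31, 28]
print(round(paired_t(pre, post), 2))  # -> 7.64
```

In practice a library routine (e.g., SciPy's paired t test) would also return the p value; the point here is only that the test operates on each participant's pretest-to-posttest change.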
The difference in knowledge between nurses with different levels of education was assessed with analysis of variance. The discriminatory ability of each question was determined with item response theory, and the association between correct answers to individual items and the total score was calculated using linear regression. Participants were 235 nurses, 51% of the 459 invited. The overall pain knowledge score was 26.1 (standard deviation 5.3, range 8-38) out of a possible 40. Those with an advanced degree in nursing scored on average 2.9 points higher than those without one (95% confidence interval: 0.9-4.7). Responses to clinical vignette questions discriminated between nurses with different levels of knowledge of pain management better than the other questions did. Participants with the correct response to the most discriminatory item had a total score 5.35 points higher (95% confidence interval: 4.08-6.61) than those with an incorrect answer. Higher education is associated with better knowledge of pain management. In assessing pain knowledge, the ability to interpret and solve a clinical vignette leads to better results than answering direct questions. (C) 2017 by the American Society for Pain Management Nursing Little theory-based research has been performed to better understand nurses' perceptions of pain management.
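The item-level association described above (regressing total score on whether an item was answered correctly) reduces to a simple linear regression with a binary predictor, in which the slope equals the difference in mean total score between correct and incorrect responders. A toy sketch with invented data, not the study's dataset:

```python
def ols_slope(x, y):
    """Ordinary least squares slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# x = 1 if a given item was answered correctly, else 0;
# y = total knowledge score (hypothetical values)
correct = [1, 1, 1, 0, 0]
total = [30, 32, 28, 24, 26]
print(ols_slope(correct, total))  # -> 5.0 (correct responders average 5 points higher)
```

Because the predictor is binary, the fitted slope is exactly mean(y | x = 1) - mean(y | x = 0), which is why the study can report the item's association as a point difference in total score.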
Framed by the theory of planned behavior, the aims of the study were to describe nurses' beliefs (behavioral, normative, and control) about pain management for hospitalized elderly patients with postoperative pain; to present an item analysis for beliefs, attitudes, perceived norms, perceived behavioral control, intentions, and behaviors (measured in case study vignettes) for nurses (a) with different durations of nursing experience, (b) working in university, public health, and military hospitals, and (c) who either had or had not received pain management training in the past six months; and to compare differences in the constructs across these three groups. A comparative descriptive cross-sectional design was used with a convenience sample of 140 Thai nurses working in three Bangkok hospitals. Participants responded to pain assessment and management questionnaires. Most nurses expressed fairly strong beliefs about pain assessment and pro re nata (PRN) opioid analgesic administration. Nurses with more than 10 years of experience had the highest scores for attitudes toward pain assessment and perceptions of others' expectations about PRN opioid analgesic administration. Responses of nurses working in different types of hospitals indicated significantly different pain assessment and PRN opioid analgesic administration behaviors. No significant differences were found for nurses who did and did not receive pain management training. The study highlighted the need for improved pain management education for nurses to enhance the quality of patient care. (C) 2017 by the American Society for Pain Management Nursing Assessing and managing chronic pain in women with histories of interpersonal trauma, mood disorders and co-morbid addiction is complex. The aim of this paper is to report on the findings from a quality improvement project exploring women's experiences who have co-occurring mental health issues, addiction and chronic pain. 
Exploring perceptions was an initial step in implementing the Registered Nurses' Association of Ontario (RNAO) Best Practice Guideline (BPG) on the Assessment and Management of Pain. Focus group discussions were conducted using an exploratory design with 10 women who were hospitalized in an acute psychiatric unit. Our findings suggest that these women view their pain as complex and often feel powerless within an acute psychiatric setting, resorting to self-management to cope. The women expressed the importance of therapeutic relationships with clinicians in assessing and managing their pain. The implications of this study suggest that patients have a key role in informing the implementation and applicability of best practice guidelines. Validating the patient's personal pain management experience and particular psychological and physical therapies were suggested as strategies to enhance the patient's quality of life. Many clinicians working in mental health are knowledgeable about these therapies but may not be aware of their application to managing physical pain. (C) 2017 by the American Society for Pain Management Nursing When considering barriers to chronic pain treatment, there is a need to deliver nonpharmacological therapies in a way that is accessible to all individuals who may benefit. To conduct feasibility testing of a guided, Internet-based intervention for individuals with chronic pain, a novel, Internet-based, chronic pain intervention (ICPI) was developed, using concepts proven effective in face-to-face interventions. This study was designed to assess usability of the ICPI and feasibility of conducting larger-scale research, and to collect preliminary data on effectiveness of the intervention. Data were collected at baseline, after each of the six intervention modules, and 12 weeks after intervention completion. Forty-one participants completed baseline questionnaires, and 15 completed the 12-week postintervention questionnaires. 
At baseline, all participants reported satisfaction with the structure of the intervention and ease of use. Internet-based platforms such as Facebook aided in the accrual of participants, making further large-scale study of the ICPI feasible. There is preliminary evidence suggesting that the ICPI improves emotional function but not physical function, with a small but significant decrease in pain intensity and pain interference. Most participants felt they benefited at least minimally as a result of using the ICPI. The ICPI was well received by participants and demonstrated positive outcomes in this preliminary study. Further research with more participants is feasible and necessary to fully assess the effect of this intervention. (C) 2016 by the American Society for Pain Management Nursing The development of the energy sector is an essential precondition for sustainable growth of the national economy; therefore, the national energy policy is directed not only to the promotion of competition and efficient use of energy resources, but also to an increased security of power supply. Because faulty operation of electrical equipment and the construction or maintenance of low-quality, dangerous electrical equipment may pose serious risks, a professional qualification certificate stating a person's competence in the field is needed. The aim of the study is to analyse the certification process of energy constructors and evaluate the compliance of competence assessment and supervision of the constructors' independent practice with the requirements laid down in the sphere of electric power, in order to develop possible solutions for the improvement of the certification process of energy constructors. The authors' study assesses the certification process of energy constructors from the customers' point of view, i.e. 
finding out what is important to the certified construction specialists, not to the certification bodies or supervisory authorities. The results of the evaluation show that the construction specialists' requirements for the performance of the certification service are not fully met and that several challenges must be solved to ensure that the professional competence of certified persons complies with the requirements laid down in the particular industry. We explore the role of institutional investors as a source of market discipline in mitigating earnings management (EMGT) by bank holding companies (BHCs). We propose that ownership by monitoring institutions (institutional investors with large and long-term stakes and independence from managers) is associated with less EMGT because they have greater incentives and skills for monitoring their investees than other shareholders. We find that EMGT by BHCs is negatively associated with ownership by monitoring institutions for larger and riskier banks and for the post-SunTrust decision period. Nonmonitoring institutional ownership is unassociated with EMGT or, in some cases, weakly or even positively associated with it. Our findings suggest that regulators should facilitate ownership by monitoring institutions as a complement to regulation. Although constrained by rules and regulations, informed short selling (tipping) is present before negative credit watch and certain types of rating downgrade announcements. Using entity credit rating and daily short sale data from April 2004 to December 2009, we find that preannouncement abnormal short selling significantly increases toward the announcement dates and is negatively related to postannouncement stock returns. Furthermore, short selling driven by tipping is more pronounced before more severe and more surprising rating downgrades. This study provides evidence favoring the private information hypothesis (tipping) in the ongoing debate on the informational advantage of short sellers. 
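The preannouncement abnormal short-selling pattern described in the credit-rating study above can be sketched with a toy calculation. This is a minimal illustration on simulated data, not the paper's actual measure: the short ratios, window lengths, and scaling below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def abnormal_short_selling(short_ratio, benchmark_window=40, event_window=5):
    """Simplified abnormal short-selling score: mean short ratio in the
    pre-announcement event window minus the mean over an earlier benchmark
    window, scaled by the benchmark-period standard deviation."""
    benchmark = short_ratio[:benchmark_window]
    event = short_ratio[-event_window:]
    return (event.mean() - benchmark.mean()) / benchmark.std(ddof=1)

# Simulated daily short ratios for one downgraded firm: a stable baseline
# followed by elevated shorting in the five days before the announcement,
# the "tipping" pattern the study documents.
baseline = rng.normal(0.15, 0.02, size=40)
pre_event = rng.normal(0.25, 0.02, size=5)
series = np.concatenate([baseline, pre_event])

score = abnormal_short_selling(series)
print(round(score, 2))  # clearly positive: shorting rises toward the announcement
```

A full replication would also relate this score to postannouncement returns across many events; the point here is only the shape of the preannouncement measure.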
We systematically study the value of the information contained in closed-end fund (CEF) premiums. We parametrically estimate CEF expected returns as a function of the history of CEF premiums, in addition to the current premium, and buy the quintile of funds with the highest expected returns and sell the quintile of funds with the lowest expected returns. The return on this portfolio suggests that previous studies, which examine the information in current premiums only, have understated the value of the information in premiums. Our strategy values the information in the history of CEF premiums at an annualized return of 18.2%. We investigate the degree to which orders are aggressively priced, paying particular attention to odd lot orders, and examine whether odd lot orders are being successfully used in stealth trading strategies. We find that odd lot orders execute at higher frequencies than larger orders, which is due to odd lot orders being aggressively priced. We find that microstructure changes have not altered the nature of price aggressiveness, but its determinants hold different explanatory power for odd lot orders. We find evidence that informed traders are shredding their orders into odd lot orders and stealth trading is permeating odd lot denominations. This study investigated second language fluency development over a nearly 2-year period which included an academic year abroad and the year immediately following the participants' return to their home university. Data from 24 L1 English learners of Spanish were collected 6 times: once before, 3 times during, and 2 times after a 9-month stay abroad. Participants were recorded orally retelling a picture-based narrative, and data were coded for 9 measures of utterance fluency. 
Results indicated different developmental trends: Gains in speed fluency appeared quickly and were maintained after the return from study abroad, whereas gains in breakdown fluency often took longer and were more sensitive to attrition after returning home. There were no changes over time in repair fluency. These results appear to indicate that some fluency improvements are more robust and less likely to be affected by the change in context (study abroad vs. home country). The findings fill a gap in our understanding of the relationship between oral fluency development and second language speech production processes, and have implications for study abroad researchers as well as post-study abroad instruction. This study investigates the role of prosodic cues and explicit information (EI) in the acquisition of German accusative case markers. We compared 4 groups of 3rd-semester learners (low intermediate level) who completed 1 of 4 Processing Instruction (PI) treatments that manipulated the presence or absence of EI and focused prosody. The results showed that, when training included EI or prosodic cues, the groups improved on comprehension and production tasks in an immediate posttest. Four weeks after training, the groups sustained gains on the comprehension task, but not on the production task. Participants who did not receive EI or prosody only showed improvement on the comprehension task in the immediate posttest and did not sustain these gains. These findings replicate previous findings on the role of EI in PI, showing an advantage for EI with the target form (e.g., Henry, Culman, & VanPatten, 2009). Moreover, the results suggest that prosodic cues help learners process morphosyntactic forms, and that they can enhance grammar instruction. Syntactic and linguistic complexity have been studied extensively in applied linguistics as indicators of linguistic performance, development, and proficiency. 
Recent publications have equally highlighted the reductionist approach taken to syntactic complexity measurement, which often focuses on one or two measures representing complexity at the level of clause-linking or the sentence, but eschews complexity measurement at other syntactic levels, such as the phrase or the clause. Previous approaches have also rarely incorporated measures representing the diversity of syntactic structures in learner productions. Finally, complexity development has rarely been considered from a cross-linguistic perspective, so that many questions pertaining to the cross-linguistic validity of complexity measurement remain. This article reports on an empirical study on syntactic complexity development and introduces a range of syntactic diversity measures alongside frequently used measures of syntactic elaboration. The study analyzed 100 English and 100 French second language oral narratives from adolescent native speakers of Dutch, situated at 4 proficiency levels (beginner-advanced), as well as native speaker benchmark data from each language. The results reveal a gradual process of syntactic elaboration and syntactic diversification in both learner groups, while, especially in French, considerable differences between learners and native speakers reside in the distribution of specific clause types. The present study explores the independent and interactive effects of task complexity and task modality on linguistic dimensions of second language (L2) performance and investigates how these effects are modulated by individual differences in working memory capacity. Thirty-two intermediate learners of L2 Spanish completed less and more complex versions of the same type of argumentative task in the speaking and writing modalities. Perceived complexity questionnaires were administered as measures of cognitive load to both L2 learners and native speakers to independently validate task complexity manipulations. 
Task performance was analyzed in terms of general (complexity and accuracy) as well as task-relevant (conjunctions) linguistic measures. Quantitative analyses revealed that task modality played a larger role than task complexity in inducing improved linguistic performance during task-based work: Speaking tasks brought about more syntactically complex output while writing tasks favored more lexically complex and more accurate language. In addition, relationships of working memory capacity with various linguistic measures were attested, but only when the cognitive complexity of tasks was enhanced. This study investigated the effects of extensive versus intensive recasts. The focus was on the effect of feedback on learning English articles, which, as nonsalient target structures, have been shown to be difficult for many second language learners. Intensive recasts were operationalized as recasts provided on article errors only, while extensive recasts were provided on any errors including article errors. Forty-eight adult intermediate learners of English as a second language (ESL) were divided into 3 groups: an intensive recast group, an extensive recast group, and a control group. Learners conducted 2 communicative tasks with a native-speaker instructor and received feedback on their errors. They were pretested and posttested (immediately and after 2 weeks) using 3 different outcome measures: an oral picture description task, a written grammaticality judgment task, and a written storytelling task. The results revealed that the extensive recast group significantly outperformed the control group on the oral picture description and the grammaticality judgment tasks, whereas the intensive recast group did not. On the written storytelling task, both recast groups outperformed the control group, but the difference was not statistically significant. 
These findings point to the advantage of extensive recasts and challenge the assumption that recasts on single errors are necessarily more effective than recasts on a wide range of errors. This study examined the similarities and differences in the functioning of component processes underlying first language (L1) and second language (L2) word reading in Chinese. Fourth-grade Chinese children in Singapore were divided into L1 and L2 reader groups based on whether they used Mandarin or English as their home language. Both groups were administered a battery of tasks that assessed their orthographic processing skill (OP), phonological awareness (PA), morphological awareness (MA), oral vocabulary knowledge, as well as the ability to decode characters and multi-character compound words. Separate Structural Equation Modeling (SEM) analyses showed that in the L1 group, over and above all other variables, both OP and MA, as opposed to PA, were significant predictors of word reading, whereas in the L2 group, OP and PA, as opposed to MA, predicted word reading. Multiple-group SEM analysis further revealed that the effects of OP and MA were significantly larger in the L1 than in the L2 group, whereas the converse pattern was found for PA. These findings are discussed in light of the linguistic and language-to-print mapping properties of Chinese as well as the influence of L1 and L2 learners' differential experience on how they learn to read in Chinese. There is a growing body of literature about the qualities of professional teacher educators (TEs) and their impact on preparing professional teachers. However, English Language Teaching (ELT) research has fallen behind in this regard, even though programs worldwide face limitations tied to aspects of TEs' qualities. 
This qualitative study investigates the qualities of the professional ELT TE and what implications such qualities have for achieving accountable, high-quality Second Language Teacher Education (SLTE). A form comprising a set of closed and open-ended questions was distributed to 63 participants representing 23 countries. The findings reveal that professional ELT TEs are empowering educators with strong disciplinary knowledge who pursue socially just ELT education. The findings can be considered a milestone and have important implications for preparing professional English language teachers and achieving quality and accountability in SLTE. Proper scale development and validation provide the necessary foundation to facilitate future quantitative research in the organizational sciences. Using the framework provided by the Researcher's Notebook, the purpose of this study is twofold. First, we present a modern summary of best practice procedures for scale development, reliability analysis, and validity analysis. Second, we explain and illustrate these best practice procedures by describing each procedure in the context of developing and psychometrically analyzing a new Character Strength Inventory (CSI). Copyright (c) 2017 John Wiley & Sons, Ltd. We combined Bakker and Demerouti's spillover-crossover model with Taylor's biobehavioral perspective, tested the comprehensive model, and pursued a set of gender-related research questions. Negative work interactions were expected to entail two strain responses, high- and low-arousal negative affect. Both should be related to cortisol secretion but transmitted via different social pathways, a positive and a negative one. During a 7-day ambulatory assessment with 56 couples, we assessed daily variations in the severity of negative social interactions at work and at home along with participants' affect and cortisol levels. 
Using multilevel structural equation modeling, we found evidence for the three hypothesized processes: strain at work as a consequence of social stress, spillover of strain into the home, and crossover to the partner. On socially more stressful days, participants showed increased high- and low-arousal negative affect at work. Low-arousal negative affect spilled over into the home. Only for men did high-arousal negative affect spill over, and only women showed a tendency for slowed decline of cortisol levels on more socially stressful days (i.e., slower recovery). Surprisingly, high-arousal negative affect at work tended to be negatively related to partners' high-arousal negative affect. Commonalities predominated over differences between men and women. Copyright (c) 2016 John Wiley & Sons, Ltd. We theorized and examined a Pygmalion perspective on the relationship between transformational leadership and employee voice behavior that goes beyond those proposed in past studies. Specifically, we proposed that transformational leadership influences employee voice through leaders' voice expectation and employees' voice role perception (i.e., the Pygmalion mechanism). We also theorized that personal identification with transformational leaders influences the extent to which employees internalize leaders' external voice expectation as their own voice role perception. In a time-lagged field study, we found that leaders' voice expectation and employees' voice role perception (i.e., the Pygmalion process) mediate the relationship between transformational leadership and voice behavior. In addition, we found that transformational leadership strengthens employees' personal identification with the leader, which in turn, as a moderator, amplifies the proposed Pygmalion process. Theoretical and practical implications are discussed. Copyright (c) 2016 John Wiley & Sons, Ltd. 
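The mediation logic in the voice study above (transformational leadership working through voice expectation and role perception) can be illustrated with a generic indirect-effect test. This is a simplified single-mediator sketch on simulated data, not the study's time-lagged model; all coefficients and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Simulated standardized scores: predictor x (e.g., transformational
# leadership), mediator m (e.g., voice role perception), outcome y (voice).
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # true a-path = 0.5
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # true b-path = 0.4

def coef(X, out):
    """OLS coefficients via least squares (first column is the intercept)."""
    return np.linalg.lstsq(X, out, rcond=None)[0]

ones = np.ones(n)
a = coef(np.column_stack([ones, x]), m)[1]     # x -> m
b = coef(np.column_stack([ones, x, m]), y)[2]  # m -> y, controlling for x
indirect = a * b

# Percentile bootstrap confidence interval for the indirect effect a*b.
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    a_i = coef(np.column_stack([ones, x[idx]]), m[idx])[1]
    b_i = coef(np.column_stack([ones, x[idx], m[idx]]), y[idx])[2]
    boot[i] = a_i * b_i
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

A bootstrap interval excluding zero is the usual evidence for mediation; the actual studies summarized here would use full SEM machinery rather than this two-regression shortcut.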
Drawing on gestalt characteristics theory, we advance the literature on the effect of job complexity on employee well-being by considering intra-individual variability of job complexity over time. Specifically, we examine how the trend, or trajectory, of job complexity over time can explain unique variance in employee job strain. Across two longitudinal data sets, we consistently find that, with the average level of job complexity during a given period held constant, a positive job complexity trajectory (i.e., an increasing trend in complexity) is associated with higher employee job strain. Based on job demand-control theory and the exposure-reactivity model, we further establish that job autonomy and employee emotional stability jointly moderate the relationship between job complexity trajectory and employee job strain. Specifically, for employees with high emotional stability, job autonomy mitigates the job strain brought on by a positive job complexity trajectory, whereas for employees with low emotional stability, job autonomy does not help to reduce the adverse effect of the increasing trend. These findings not only extend the understanding of the job complexity-strain relationship but also suggest a promising, dynamic avenue for studying the effects of work characteristics on employee well-being as well as other outcomes. Copyright (c) 2016 John Wiley & Sons, Ltd. A well-known downside of organizational mergers is that employees fail to identify with the newly formed organization. We argue that developing an understanding of factors that affect post-merger identification requires taking the pre-merger status of the merger partners relative to each other into account. This is because relative pre-merger status determines employees' susceptibility to different aspects of the merger process. 
Specifically, for employees of a high status pre-merger organization, we expected post-merger identification to be strongly influenced by (i) pre-merger identification and (ii) the perceived change in the status. In contrast, we expected post-merger identification of employees of a low status pre-merger organization to be strongly affected by the perceived justice of the merger event. Longitudinal data were obtained from a merger of two public sector organizations and the results supported our hypotheses. Our study shows that the extent to which pre-merger identification, status change, and justice are important determinants of post-merger identification depends on the relative pre-merger status of the merger partners. Our discussion focuses on theoretical implications and managerial ramifications of these findings. Copyright (c) 2016 John Wiley & Sons, Ltd. What affects the way that trust develops in negotiations? In two studies, we used an actor-partner interdependence model to investigate how both negotiators' trust propensity prior to the negotiation and two types of behavior during the negotiation affect negotiators' trust development. Study 1 demonstrated that both focal negotiators' (actors') and their counterparts' (partners') trust propensity were positively associated with negotiators' trust development. Study 2 showed that actors' and partners' trust propensity had an indirect effect on trust development via both actors' and partners' negotiation behaviors. Negotiators' trust propensity positively affected their use of Q&A (questions and answers about interests) and negatively affected their use of S&O (substantiation about positions and single-issue offers). Actors and partners' negotiation behaviors in turn affected their own and their partners' trust development. Our studies propose and test a model to understand how negotiators' trust propensity and negotiation behaviors affect the development of trust in negotiation. 
Copyright (c) 2016 John Wiley & Sons, Ltd. Drawing upon the concept of match, this two-wave study of 206 employees investigated job control (facets of autonomy) and personal control beliefs (locus of control, LOC) as moderators of time pressure-work engagement (WE) and the time pressure-general subjective well-being (SWB) relationships. It was hypothesized that autonomy would accentuate the positive relationship of time pressure with WE and attenuate the negative relationship with SWB and that these moderations would occur only for employees with an internal LOC. Additionally, it was expected that a facet of job control (timing autonomy) that matched the specific demand (time pressure) would be more likely to act as a moderator than less-matching facets (decision making or method autonomy). The results revealed that the interaction between time pressure, autonomy, and LOC for WE was strongest and for SWB only significant when the timing facet of autonomy served as the moderator (thus, when the autonomy facet matched the demand). However, the pattern of moderation was contrary to that expected. When time pressure increased, high autonomy became beneficial for the WE of employees with an external LOC but detrimental for the WE and SWB of employees with an internal LOC. Explanations for the unexpected findings are provided. Copyright (c) 2016 John Wiley & Sons, Ltd. To advance the understanding of socio-relational sources of employee creativity, we investigate the effect of a good marriage on workplace creativity. Drawing on family-work enrichment theory, we propose and test the idea that a satisfying marriage boosts a spillover of psychological resources from family to work that enhances employees' workplace creativity. Using survey data collected from 548 spouse-employee-supervisor triads, we find an indirect positive relationship between employees' marital satisfaction and workplace creativity through a spillover of psychological resources from family to work. 
We also find that this spillover is most pronounced when both employees and their spouses are satisfied with their marriage. The results further demonstrate that the indirect effect of marital satisfaction on workplace creativity through the spillover of psychological resources is significant for employees with a low creative personality, but not for those with a high creative personality. Copyright (c) 2017 John Wiley & Sons, Ltd. The feathers of owls possess three adaptations that are held responsible for their quiet flight. These are a comb-like structure at the leading edge of the wing, fringes at the trailing edge, and a soft and porous upper surface of the wing. To investigate the effect of the first adaptation, the leading edge comb, on the aerodynamic performance and the noise generation during gliding flight, wind tunnel measurements were performed on prepared wings of a Barn owl (Tyto alba) with and without the comb. In agreement with existing studies it was found that the leading edge comb causes a small increase in lift. Additionally, at high angles of attack the results from the acoustic measurements indicate that the presence of the comb leads to a reduction in gliding flight noise. Although this reduction is relatively small, it further helps the owl to approach its prey during the final stages of the landing phase. Local fluctuations in a Mach 1.3 cold jet are tracked to understand the genesis of nearfield directivity and intermittency. A newly developed approach leveraging two synchronized large-eddy simulations is employed to solve the forced Navier-Stokes equations, linearized about the evolving unsteady base flow. The results are summarized by exposing the effect of two acoustically significant turbulent regions: the lip-line and core collapse location. The near-acoustic field displays the clear signature of the two regions. 
However, for both regions, the nearfield evolution of the perturbation field is characterized by generation of intermittent wavepackets, which propagate into the near-acoustic field and gradually acquire their expected broadband and narrowband characteristics at sideline and downstream angles respectively. The simulations elucidate how higher frequencies are obtained in the sideline directions as lower frequencies are filtered out of the forcing fluctuations. Likewise, shallow-angle acoustic signals arise through filtering of high frequency content in that direction. The directivity and intermittency are connected to the filtering of scales by jet turbulence with empirical mode decomposition. The observations highlight the gradual evolution of seemingly random core turbulence into well-defined intermittent wavepackets in the nearfield of the jet. The manner in which centerline fluctuations are segregated into upstream, sideline, and downstream components is examined through narrowband correlations. A similar analysis for the lipline contribution shows primarily upstream and downstream patterns because of the larger structures in the shear layer. This paper investigates different methodologies for the evaluation of the acoustic disturbance emitted by helicopter's main rotors during unsteady maneuvers. Nowadays, the simulation of noise emitted by helicopters is of great interest to designers, both for the assessment of the acoustic impact of helicopter flight on communities and for the identification of optimal-noise trajectories. Typically, the numerical predictions consist of the atmospheric propagation of a near-field noise model, extracted from an appropriate database determined through steady-state flight simulations/measurements (quasi-steady approach). In this work, three techniques for maneuvering helicopter noise predictions are compared: one considers a fully unsteady solution process, whereas the others are based on quasi-steady approaches. 
These methods are based on a three-step solution procedure: first, the main rotor aeroelastic response is evaluated by a nonlinear beam-like rotor blade model coupled with a boundary element method for potential flow aerodynamics; then, the aeroacoustic near field is evaluated through the Farassat 1A formulation; finally, the noise is propagated to the ground by a ray tracing model. Only the main rotor component is examined, although the tail rotor contribution could be included as well. The numerical investigation examines the differences among the noise predictions provided by the three techniques, focusing on assessing the reliability of the results obtained through the two quasi-steady approaches as compared with those from the fully unsteady aeroacoustic solver. The aerodynamic noise of NACA 0012 and NACA 0021 aerofoils is measured and compared to determine whether their noise signatures differ, with a focus on the onset of stall. The self-noise of each aerofoil is measured in an open-jet anechoic wind tunnel at Reynolds numbers of 64,000 and 96,000, at geometric angles of attack from -5 degrees through 40 degrees at a resolution of 1 degree. Further measurements are taken at Re=96,000 at geometric angles of attack from -5 through 16 degrees at a resolution of 0.5 degrees. Results show that while the noise generated far into the stall regime is quite similar for both aerofoils, the change in noise level at the onset of stall differs significantly between the two, with the NACA 0021 exhibiting a much sharper increase in noise levels below a chord-based Strouhal number of St(c)=1.1. This behaviour is consistent with the changes in lift of these aerofoils as well as the rate of collapse of the suction peak of a NACA 0012 aerofoil under these flow conditions. 
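The chord-based Strouhal and Reynolds numbers in the aerofoil-noise study above follow from the standard definitions Re = U·c/ν and St_c = f·c/U. The quick check below uses an assumed chord length (the abstract does not give one), chosen so the implied flow speed reproduces the reported Re = 96,000 in air.

```python
# Standard non-dimensional numbers for aerofoil self-noise. The chord c and
# the resulting flow speed U are illustrative assumptions, not values from
# the study; only Re = 96,000 and St_c = 1.1 come from the abstract.

nu = 1.5e-5          # kinematic viscosity of air at room temperature, m^2/s
c = 0.07             # assumed chord length, m (hypothetical)
U = 96_000 * nu / c  # flow speed implied by Re = U*c/nu = 96,000

def strouhal_chord(f, c, U):
    """Chord-based Strouhal number St_c = f*c/U."""
    return f * c / U

Re = U * c / nu
f_cut = 1.1 * U / c  # frequency corresponding to the reported St_c = 1.1
print(f"U = {U:.1f} m/s, Re = {Re:.0f}, f at St_c=1.1: {f_cut:.0f} Hz")
```

Scaling either c or U rescales f_cut in proportion, which is exactly why the study reports the stall-noise boundary as a Strouhal number rather than a frequency.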
This study examined whether a three-month equine-assisted learning program improved measures of character skills in two independent cohorts of Year 1 youths in a specialized secondary school for youths with difficulties coping with the mainstream curriculum. In 2013, 75 students underwent the intervention while 82 students did not. In 2014, 58 students underwent the intervention and 59 students were waitlisted in semester 1; cross-over was performed in semester 2. The students were rated a week before, mid-way through and a week post-intervention. Results from multi-level modeling indicated that the intervention led to progressive improvements in character skills over the school semester in the majority of the constructs measured in both the 2013 and 2014 cohorts. The rate of change in measures of character skills over the semester correlated with the grade point average of the students at semester-end. Implications and limitations of the findings are discussed. This paper aims to examine the influence of corporate social responsibility (CSR) on the financial performance of small- and medium-sized enterprises (SMEs) in Ghana by using stakeholder engagement as a mediating variable. Primary data were collected from 423 SMEs within the Accra Metropolis. The partial least squares estimation technique was used to analyze the data. The study documented evidence for a mechanism through which CSR results in financial performance of firms: SMEs with improved CSR practices are better positioned to engage more with their stakeholders, which translates into improved financial performance. It was recommended that for SMEs to improve upon their CSR practices, which will eventually result in enhanced financial performance, stakeholder engagement should be a major part of their operations. The paper contributes to our knowledge on how CSR practices lead to financial performance of SMEs in developing countries. 
In addition, this is the first study of its kind to establish the relationship between CSR practices and financial performance of SMEs in Ghana by using stakeholder engagement as a mediating factor. Background: Evaluating the quality of postgraduate medical education (PGME) programs through accreditation is common practice worldwide. Accreditation is shaped by educational quality and quality management. An appropriate accreditation design is important, as it may drive improvements in training. Moreover, accreditors determine whether a PGME program passes the assessment, which may have major consequences, such as starting, continuing or discontinuing PGME. However, there is limited evidence for the benefits of different choices in accreditation design. Therefore, this study aims to explain how changing views on educational quality and quality management have impacted the design of the PGME accreditation system in the Netherlands. Methods: To determine the historical development of the Dutch PGME accreditation system, we conducted a document analysis of accreditation documents spanning the past 50 years and a vision document outlining the future system. A template analysis technique was used to identify the main elements of the system. Results: Four themes in the Dutch PGME accreditation system were identified: (1) objectives of accreditation, (2) PGME quality domains, (3) quality management approaches and (4) actors' responsibilities. Major shifts have taken place regarding decentralization, residency performance and physician practice outcomes, and quality improvement. Decentralization of the responsibilities of the accreditor was absent in 1966, but this has been slowly changing since 1999. In the future system, decentralization will be nearly complete. A focus on outcomes and quality improvement has been introduced in the current system. 
The number of formal documents striving for quality assurance has increased enormously over the past 50 years, which has led to increased bureaucracy. The future system needs to decrease the number of standards to focus on measurable outcomes and to strive for quality improvement. Conclusion: The challenge for accreditors is to find the right balance between trusting and controlling medical professionals. Their choices will be reflected in the accreditation design. The four themes could enhance international comparisons and encourage better choices in the design of accreditation systems. Background: Offshore medical schools are for-profit, private enterprises located in the Caribbean that provide undergraduate medical education to students who must leave the region for postgraduate training and also typically to practice. This growing industry attracts many medical students from the US and Canada who wish to return home to practice medicine. After graduation, international medical graduates can encounter challenges obtaining residency placements and can face other barriers related to practice. Methods: We conducted a qualitative thematic analysis to discern the dominant messages found on offshore medical school websites. Dominant messages included frequent references to push and pull factors intended to encourage potential applicants to consider attending an offshore medical school. We reviewed 38 English-language Caribbean offshore medical school websites in order to extract and record content pertaining to push and pull factors. Results: We found two push and four pull factors present across most offshore medical school websites. Push factors include: shortages of physicians in the US and Canada that require new medical trainees; and low acceptance rates at medical schools in prospective students' home countries. 
Pull factors include: the financial benefits of attending an offshore medical school; the geographic location and environment of training in the Caribbean; training quality and effectiveness; and the potential to practice medicine in one's home country. Conclusions: This analysis contributes to our understanding of some of the factors behind students' decisions to attend an offshore medical school. Importantly, push and pull factors do not address the barriers faced by offshore medical school graduates in finding postgraduate residency placements and ultimately practicing elsewhere. It is clear from push and pull factors that these medical schools heavily focus messaging and marketing towards students from the US and Canada, which raises questions about who benefits from this offshoring practice. Background: Like many other cancer patients, most pancreatic carcinoma patients suffer from severe weight loss. As shown in numerous studies with fish oil (FO) supplementation, a minimum daily intake of 1.5 g n-3-fatty acids (n-3-FA) contributes to weight stabilization and improvement of quality of life (QoL) of cancer patients. When n-3-FA are given not as triglycerides (FO) but mainly bound to marine phospholipids (MPL), weight stabilization and improvement of QoL have already been seen at much lower doses of n-3-FA (0.3 g), and MPL were much better tolerated. The objective of this double-blind randomized controlled trial was to compare low dose MPL and FO formulations, which had the same n-3-FA amount and composition, on weight and appetite stabilization, global health enhancement (QoL), and plasma FA-profiles in patients suffering from pancreatic cancer. Methods: Sixty pancreatic cancer patients were included into the study and randomized to take either FO or MPL supplementation. Patients were treated with 0.3 g of n-3-fatty acids per day over six weeks. 
Since the n-3-FA content of FO is usually higher than that of MPL, FO was diluted with 40% of medium chain triglycerides (MCT) to achieve the same capsule size in both intervention groups and therefore assure blinding. Routine blood parameters, lipid profiles, body weight, and appetite were measured before and after intervention. Patient compliance was assessed through a patient diary. Quality of life and nutritional habits were assessed with validated questionnaires (EORTC-QLQ-C30, PAN26). Thirty-one patients completed the study protocol and were analyzed (per-protocol analysis). Results: Intervention with low dose n-3-FAs, either as FO or MPL supplementation, resulted in similar and promising weight and appetite stabilization in pancreatic cancer patients. MPL capsules were slightly better tolerated and showed fewer side effects when compared to FO supplementation. Conclusion: The similar effects between both interventions were unexpected but reliable, since the MPL and FO formulations caused identical increases of n-3-FAs in plasma lipids of included patients after supplementation. The effects of FO with very low n-3-FA content might be explained by the addition of MCT. The results of this study suggest the need for further investigations of marine phospholipids for the improvement of QoL of cancer patients, optionally in combination with MCT. Background: Abnormal lipid metabolism may contribute to an increase in endoplasmic reticulum (ER) stress, resulting in the pathogenesis of non-alcoholic steatohepatitis. Apolipoprotein A-I (apoA-I) accepts cellular free cholesterol and phospholipids transported by ATP-binding cassette transporter A1 to generate nascent high density lipoprotein particles. Previous studies have revealed that the overexpression of apoA-I alleviated hepatic lipid levels by modifying lipid transport. Here, we examined the effects of apoA-I overexpression on ER stress and genes involved in lipogenesis in both HepG2 cells and mouse hepatocytes. 
Methods: Human apoA-I was overexpressed in HepG2 hepatocytes, which were then treated with 2 μg/mL tunicamycin or 500 μM palmitic acid. Eight-week-old male apoA-I transgenic or C57BL/6 wild-type mice were intraperitoneally injected with 1 mg/kg body weight tunicamycin or with saline. At 48 h after injection, blood and liver samples were collected. Results: The overexpression of apoA-I in the models above resulted in decreased protein levels of ER stress markers and lipogenic gene products, including sterol regulatory element binding protein 1, fatty acid synthase, and acetyl-CoA carboxylase 1. In addition, the cellular levels of triglycerides and free cholesterol also decreased. Some gene products related to ER stress-associated apoptosis were also affected by apoA-I overexpression. These results suggested that apoA-I overexpression could reduce steatosis by decreasing lipid levels and by suppressing ER stress and lipogenesis in hepatocytes. Conclusion: ApoA-I expression could significantly reduce hepatic ER stress and lipogenesis in hepatocytes. Background: Nonalcoholic fatty liver disease (NAFLD) is increasing worldwide as one of the leading causes of chronic liver disease. Sake lees (SL) are secondary products of sake manufacturing and are considered to have beneficial effects on human health. To investigate these effects, we used high fat diet (HFD)-fed mice treated with or without the SL extract. Method: Mice were fed the HFD ad libitum for 8 weeks and were administered 500 μL of distilled water with or without the SL extract (350 mg/mL) by a feeding needle daily for the last 4 weeks. Food intake, body weight, and liver weight were measured. Triacylglycerol content and the mRNA and protein expression levels of various lipid and glucose metabolism-related genes were determined in liver tissues. The levels of triglyceride, free fatty acids, glucose, insulin, and liver cell damage markers were determined in serum. 
Fatty acid-induced lipid accumulation in HepG2 cells was assessed in the presence or absence of the SL extract. Results: Mice fed a HFD and treated with the SL extract demonstrated a significant reduction in hepatic lipid accumulation and mRNA and protein levels of peroxisome proliferator-activated receptor gamma (PPAR gamma), PPAR alpha, CD36, and phosphoenolpyruvate carboxykinase 1 in the liver, while the SL extract did not affect body weight or food intake. Moreover, insulin resistance and hepatic inflammation in HFD-fed mice improved after administration of the SL extract. In HepG2 cells, the SL extract suppressed fatty acid-induced intracellular lipid accumulation. Conclusions: These findings suggest that treatment with the SL extract could potentially reduce the risk of NAFLD development, and that the SL extract may be clinically useful for the treatment of NAFLD. Capital assets are held in a variety of ownership structures that can be characterised by how they are taxed, whether or not their equity is publicly traded, and by the relationship between the ownership of the assets and the management of the assets. When taxes and regulations change, the popularity of the different ownership structures changes. These changes in ownership structure can affect how the assets are managed, which can in turn influence innovation. We investigate the governance role of conservative accounting in mitigating the creditor-stockholder conflict by affecting firms' dividend policies, and how the convergence to International Financial Reporting Standards (IFRS) affects the governance role of conservative accounting as it relates to dividend policy. We analyze data on Chinese listed firms from 2000 through 2011. The use of conservative accounting reduced cash dividend payouts, thereby playing a governance role by mitigating the firm's creditor-stockholder agency conflict. 
However, China's convergence to IFRS reduced the governance role of conservative accounting on dividend policy by reducing the accounting conservatism of listed firms in China. Using a sample of 1,369 cross-border acquisitions announced by Standard & Poor's 1500 firms between 2000 and 2014, we find strong evidence that derivatives users experience higher announcement returns than non-users, which translates into a US$ 193.7 million shareholder gain for an average-sized acquirer. In addition, we find that acquirers with hedging programmes have higher deal completion probabilities, longer deal completion times, and better long-term post-deal performance. We confirm our findings after employing an extensive array of models to address potential endogeneity. Overall, our results provide new insights into a link between corporate financial hedging and firm performance. Using international country-level data, this paper shows that demographic ageing is likely to significantly expand the insurance industry. This expansion is driven by the increased need to secure earnings for post-retirement consumption, the desire to hedge against risks associated with increasing age, and the older generations' risk aversion, which increases the demand for safer assets such as insurance and pension products. Moreover, such an expansion of the insurance industry is particularly apparent in financially liberalised countries. This is because risk and asset management associated with insurance and pension products can be facilitated and made more effective in liberalised financial markets. We examine a novel determinant of corporate diversification and its valuation effect: corporate innovations. We find consistent evidence that corporate innovations increase the extent of diversification. To establish causality, we estimate the firm fixed effects, 2SLS and GMM models. The 2SLS model uses the US state-level R&D tax credits as an instrumental variable for corporate innovations. 
We also find that a firm is more likely to diversify into an industry where it has more applicable innovations. Further, such innovation-related diversification is associated with significantly higher firm value. Our results are robust to various measures of corporate innovations. Using data on Foreign Portfolio Investment (FPI), we find a positive relationship between higher tax burden and OECD residents' tax evasion, especially via tax havens. Contrary to established investor preference for certain country characteristics, we find they are less important to tax evaders who value privacy and want to remain undetected by their home tax authorities. We find very limited evidence that OECD Tax Information Exchange Agreements (TIEAs) reduce tax evasion, controlling for other determinants of overall OECD FPI. Without the US in the OECD sample, tax havens play a lesser role and OECD policies appear to make a marginal impact. Background and aim: Low rates of bystander-initiated CPR are a major obstacle to improved survival rates, and the aim of this study is to elucidate the factors associated with university students' attitudes toward performing bystander CPR. Methods: Questionnaires were distributed to 18 universities across three metropolises in China. One question asking for respondents' attitudes toward performing bystander CPR was set as the dependent variable, and logistic regression models were used to extract independent factors for respondents' attitudes toward performing bystander CPR. Results: 2934 questionnaires were completed, with a response rate of 81.5%. 
Results suggested that predictors of willingness to perform bystander CPR were: previous experience of performing bystander CPR, higher self-perceived ability to perform bystander CPR properly after instruction, medicine and law discipline, male gender, not being the single child of their parents, higher participation in university societies, being used to taking decisive action immediately, less self-perceived life stress and higher self-perceived knowledge level of CPR. Conclusions: Previous experience of performing bystander CPR and self-perceived ability to perform it properly are the factors most strongly associated with willingness to perform bystander CPR. Psychological and cultural factors need further study. (C) 2016 Elsevier Ltd. All rights reserved. Background: Transfer of older people from Residential Aged Care Facilities to Emergency Departments requires multiple comprehensive handovers across different services. Significant information gaps exist in transferred information despite calls for standards. Aim: To investigate: (1) presence of minimum standard elements in the transfer text written by RACF nurses, paramedics and ED triage nurses, and (2) the transfer documentation used by services. Methods: We analysed retrospective cross-sectional transfer narratives from the digital medical record system of an Australian tertiary referral hospital using the mnemonic SBAR (Situation, Background, Assessment, Recommendation) as the measure of comprehensiveness. Transfer documents from 3 groups were also reviewed. Findings: Inclusion of elements from SBAR was inconsistent across transfers. Rather, the written narratives focused on concerns relevant to the immediate priority, the type of information imposed by the document(s) in use, and the clinical role of the author. 
Conclusion: Transfer documentation from Residential Aged Care nurses, paramedics and ED triage nurses does not contain comprehensive information on older persons' complex conditions. Better communication between non-affiliated organisations is needed to improve timely appropriate care for RACF residents. (C) 2016 Elsevier Ltd. All rights reserved. Background: Single responder (SR) systems have been implemented in several countries. When the very first SR system in Sweden was planned, it was criticised because of concerns about sending single emergency nurses out on alerts. In the present study, the first Swedish SR unit was studied in order to register waiting times and assess the working environment. Method: Quantitative data were collected from the ambulance dispatch register. Data on the working environment were collected using a questionnaire sent to the SR staff. Results: The SR system reduced the average patient waiting time from 26 to 13 min. It also reduced the number of ambulance transports by 35% following SR triage of patient priority. The staff perceived the working environment to be adequate. Conclusion: The SR unit was successful in that it reduced waiting times to prehospital health care. Contrary to expectations, it proved to be an adequate working environment. There is good reason to believe that SR systems will spread throughout the country. To enable more in-depth statistical analysis, additional data should be collected over a longer time period and from more than one SR unit. (C) 2016 Elsevier Ltd. All rights reserved. Background: The Swedish ambulance health care services are changing and developing, with the ambulance nurse playing a central role in the development of practice. The competence required by ambulance nurses in the profession remains undefined and provides a challenge. 
A clear and updated description of ambulance nurses' competence, including the perspective of professional experiences, therefore seems essential. Aim: The aim of this study was to elucidate ambulance nurses' professional experiences and to describe aspects affecting their competence. Methods: For data collection, the study used the Critical Incident Technique, interviewing 32 ambulance nurses. A qualitative content analysis was applied. Results and conclusion: This study elucidates essential parts of the development, usage and perceptions of the competence of ambulance nurses and how, in various ways, this is affected by professional experiences. The development of competence is strongly affected by the ability and possibility to reflect on practice on a professional and personal level, particularly in cooperation with colleagues. Experiences and communication skills are regarded as decisive in challenging clinical situations. The way ambulance nurses perceive their own competence is closely linked to patient outcome. The results of this study can be used in professional and curriculum development. (C) 2016 Elsevier Ltd. All rights reserved. Background: Ambulance nurses display stress symptoms resulting from their work with patients in an emergency service. Certain individuals seem, however, to handle longstanding stress better than others and remain in exposed occupations such as ambulance services for many years. This paper examines stress-inducing and stress-defusing factors among ambulance nurses. Methods: A qualitative descriptive design using the critical incident technique was used. A total of 123 critical incidents were identified, and a total of 61 strategies for dealing with stress were confirmed. In all, 13 sub-categories (seven stress factors and five stress-reducing factors) were merged into four categories (two stress categories and two stress-reducing categories). 
Results and conclusion: The study shows that ambulance nurses in general experience emergency calls as being stressful. Unclear circumstances increase the stress level, with cases involving children and childbirth being especially stressful. Accurate information and assistance from the dispatch centre reduced the stress. Having discussions with colleagues directly after the assignment was particularly stress reducing. Advanced team collaboration with teammates was viewed as an effective means to decrease stress, in addition to simple rituals to defuse stress such as taking short breaks during the workday. The study confirmed earlier studies that suggest the benefits of defusing immediately after stress reactions. (C) 2016 Elsevier Ltd. All rights reserved. Introduction: When emergency medical services (EMS) are needed, the choice of transport depends on several factors. These may include the patient's medical condition, transport accessibility to the accident site and the receiving hospital's resources. Emergency care research is advancing, but little is known about the patient's perspective of helicopter emergency medical services (HEMS). Aim: The aim of this study was to describe trauma patients' experiences of HEMS. Method: Thirteen persons (ages 21-76) were interviewed using an interview guide. Data were analyzed using qualitative content analysis. Findings: The analysis resulted in three themes: Being distraught and dazed by the event - patients experienced shock and tension, as well as feelings of curiosity and excitement. Being comforted by the caregivers - as the caregivers were present and attentive, they had no need for relatives in the helicopter. Being safe in a restricted environment - the participants' injuries were taken seriously and the caregivers displayed effective teamwork. Conclusion: Being taken seriously and treated as 'worst cases' enables trauma patients to trust their caregivers and 'hand themselves over' to their care. 
HEMS provide additional advantageous circumstances, such as being the sole patient and having proximity to a small, professional team. (C) 2016 Elsevier Ltd. All rights reserved. Objective: This study aimed to evaluate the impact of an Emergency Department Ambulance Offload Nurse (EDAOLN) role on patient and health services outcomes in one Queensland Emergency Department (ED). Methods: A retrospective study of all ED presentations (n = 21,454) made to a tertiary hospital ED in Queensland, Australia, during July 9, 2012 - November 2, 2012; 39 days before (T1), during (T2) and after (T3) the introduction of the trial of an EDAOLN role. The primary outcome of interest was time to be seen by a clinician. Results: Demographic and clinical profiles of ED presentations made during each of the time periods were relatively similar. Time to be seen improved marginally during the trial period of the EDAOLN (T1: 34 min vs. T2: 31 min, p = 0.002). The proportion of hospital admissions and those who did not wait differed between T1 and T2 (lower during T2 vs. T3). Most outcomes were not sustained when the role was removed (i.e. T2 vs. T3), and most returned close to baseline (i.e. T1 vs. T3). Conclusions: As part of a health services framework designed to improve timely access to emergency care, an EDAOLN may be one of several options to consider. (C) 2017 Elsevier Ltd. All rights reserved. Introduction: The Ambulance Organization of Sweden provides qualified medical assessment and treatment by ambulance nurses based on patient needs regarding appropriate levels of care. A new model for patients with non-urgent medical conditions has been introduced. The main objective of this study was to examine early prehospital assessment of non-urgent patients, and its impact on the choice of the appropriate level of care. Methods: The study design was a 1-year, prospective study, involving an ambulance district in southwestern Sweden with a population of 78,000. 
Eligible patients were from 18 years of age, assessed as priority GREEN by the Rapid Emergency Triage and Treatment System (RETTS). Ambulance nurses contacted primary care physicians for decisions on whether a patient should be transported to a primary healthcare unit or an A&E. Data were collected from electronic health records from April 2014 to July 2015. A comparison was made with a retrospective control group for which no physician was consulted concerning the appropriate level of care. Results: 394 patients were included, 184 in the intervention group, and 210 in the control group. There were statistically significant differences in favor of the study group (p < 0.001) regarding no transport, or transport and admission to an A&E. The groups did not differ significantly regarding transport to a primary care unit. Conclusion: This prehospital assessment model indicates a decrease in ambulance transports to an A&E and admissions to a hospital ward. Collaboration between ambulance nurses and primary physicians affects the decision for the appropriate level of care for patients with a non-urgent condition. (C) 2017 Elsevier Ltd. All rights reserved. The aim of the study was to investigate whether interprofessional education (IPE) and interprofessional collaboration (IPC) during the educational program had an impact on prehospital emergency care nurses' (PECN) self-reported competence towards the end of the study program. A cross-sectional study using the Nurse Professional Competence (NPC) Scale was conducted. A comparison was made between PECN students from Finland who experienced IPE and IPC in the clinical setting, and PECN students from Sweden with no IPE and a low level of IPC. Forty-one students participated (Finnish n = 19, Swedish n = 22). The self-reported competence was higher among the Swedish students. A statistically significant difference was found in one competence area: legislation in nursing and safety planning (p < 0.01). 
The Finnish students scored significantly higher on items related to interprofessional teamwork. Both the Swedish and Finnish students' self-reported professional competence was relatively low according to the NPC Scale. Increasing IPC and IPE in combination with offering a higher academic degree may be an option when developing the ambulance service and the study program for PECNs. (C) 2017 Elsevier Ltd. All rights reserved. Video scenarios have been used to explore clinical reasoning during interviews in Think Aloud studies. This study used the nominal group technique with experts to create video scenarios to explore the ways paramedics think and reason when caring for children who are sick or injured. At present there is little research regarding paramedics' clinical reasoning with respect to performing non-urgent procedures on children. A core expert panel identified the central structure of a prehospital clinical interaction and the range of contextual factors that may influence a paramedic's clinical reasoning [the way in which information is gathered, interpreted and analysed by clinicians]. The structure and contextual factors were then incorporated into two filmed scenarios. A second panel of clinical practice experts then critiqued the body language, spoken word and age-appropriate behaviours of those acting in the video scenarios and compared them against their own experience of clinical practice to confirm authenticity. This paper reports and reflects on the use of the nominal group technique to create authentic video scenarios for use in prehospital research. (C) 2017 Elsevier Ltd. All rights reserved. Despite its widespread prevalence, teacher moonlighting remains under-researched. Using data from 313 public primary school teachers in Ilala District, this study examined the determinants of teacher moonlighting. The findings show that 39.4% of teachers had a secondary income-generating activity. 
Sex and age of the teacher were significant predictors of the decision to moonlight. Further, the study findings show that the older the teacher is, the more likely the teacher is to moonlight. The results confirm the proposition that moonlighting in Tanzania is used by formal sector workers as a transition into self-employment after retirement. Aims: The aims of this study were to determine whether yoga and hydrotherapy training had an equal effect on the health-related quality of life in patients with heart failure and to compare the effects on exercise capacity, clinical outcomes, and symptoms of anxiety and depression between and within the two groups. Methods: The design was a randomized controlled non-inferiority study. A total of 40 patients, 30% women (mean±SD age 64.9±8.9 years) with heart failure were randomized to a 12-week intervention, either performing yoga or training with hydrotherapy for 45-60 minutes twice a week. Evaluation at baseline and after 12 weeks included self-reported health-related quality of life, a six-minute walk test, a sit-to-stand test, clinical variables, and symptoms of anxiety and depression. Results: Yoga and hydrotherapy had an equal impact on quality of life, exercise capacity, clinical outcomes, and symptoms of anxiety and depression. Within both groups, exercise capacity significantly improved (hydrotherapy p=0.02; yoga p=0.008) and symptoms of anxiety decreased (hydrotherapy p=0.03; yoga p=0.01). Patients in the yoga group significantly improved their health as rated by EQ-VAS (p=0.004) and disease-specific quality of life in the domains symptom frequency (p=0.03), self-efficacy (p=0.01), clinical summary as a combined measure of symptoms and social factors (p=0.05), and overall summary score (p=0.04). Symptoms of depression were decreased in this group (p=0.005). In the hydrotherapy group, lower limb muscle strength improved significantly (p=0.01). 
Conclusions: Yoga may be an alternative or complementary option to established forms of exercise training such as hydrotherapy for improvement in health-related quality of life and may decrease depressive symptoms in patients with heart failure. Aim: The aim of the study was to assess the effectiveness of exercise training on depression, anxiety, physical capacity and sympatho-vagal balance in patients after myocardial infarction and compare differences between men and women. Methods: Thirty-two men aged 56.3±7.6 years and 30 women aged 59.2±8.1 years following myocardial infarction underwent an 8-week training programme consisting of 24 interval training sessions on a cycloergometer, three times a week. Before and after completing the training programme, patients underwent: depression intensity assessment with the Beck depression inventory; anxiety assessment with the state-trait anxiety inventory; and a symptom-limited exercise test during which maximal workload, duration and double product were analysed. Results: In women the initial depression intensity was higher than in men, and decreased significantly after the training programme (14.8±8.7 vs. 10.5±8.8; P<0.01). The state anxiety manifestation in women was higher than in men and decreased significantly after the training programme (45.7±9.7 vs. 40.8±0.3; P<0.01). Of note, no depression or anxiety manifestation was found in men. Physical capacity improved significantly after the training programme in all groups, and separately in men and in women. Moreover, the 8-week training programme favourably modified the parasympathetic tone. Conclusions: Participating in the exercise training programme contributed beneficially to a decrease in depression and anxiety manifestations in women post-myocardial infarction. Neither depression nor anxiety changed significantly in men. The impact of exercise training on physical capacity and autonomic balance was beneficial and comparable between men and women.
Background: Exercise interventions can reduce the risk of and help prevent coronary artery disease (CAD). Developing an exercise intervention for patients with CAD is a rapidly expanding focus worldwide. The results of previous studies are inconsistent and difficult to interpret across various types of exercise programme. Aim: This study aimed to update prior systematic reviews and meta-analyses in order to determine the overall effects of endurance exercise training on patients with CAD. Methods: The databases (PubMed, Medline, CINAHL, EMBASE and Cochrane Library) were searched for interventions published between January 1, 2000, and May 31, 2015. Comprehensive meta-analysis software was used to evaluate the heterogeneity of the selected studies and to calculate mean differences (MDs) while considering effect size. Results: A total of 18 studies with 1286 participants were included. Endurance exercise interventions at a moderate to high training intensity significantly reduced resting systolic blood pressure (MD: -3.8 mmHg, p=0.01) and low-density lipoprotein cholesterol (MD: -5.5 mg/dL, p=0.02), and increased high-density lipoprotein cholesterol (MD: 3.8 mg/dL, p<0.001). There were also significant positive changes in peak oxygen consumption (MD: 3.47 mL/kg/min, p<0.001) and left ventricular ejection fraction (MD: 2.6%, p=0.03) after the interventions. Subgroup analysis results revealed that exercise interventions of 60-90 minutes per week with a programme duration of >12 weeks had beneficial effects on functional capacity, cardiac function and a number of cardiovascular risk factors. Conclusions: Endurance exercise training has a positive effect on major modifiable cardiovascular risk factors and functional capacity. Nurses can develop endurance exercise recommendations for incorporation into care plans of clinically stable CAD patients following an acute cardiac event or revascularisation procedure.
Background: Vascular complications are still common in the catheterization laboratory setting. However, no risk scores for their prediction have been described. With a view to bridging this gap, the present study sought to develop and validate a score for prediction of vascular complications associated with arterial access in patients undergoing interventional cardiology procedures. Methods: This prospective multicenter cohort study included adult patients who underwent cardiac catheterization via the femoral or radial route. The outcomes of interest were: access site hematoma; major and minor bleeding; and retroperitoneal hemorrhage, pseudoaneurysm, or arteriovenous fistula requiring surgical repair. Past medical history as well as pre-procedural, intra-procedural, and post-procedural variables were collected. Patients were randomly allocated to the derivation or validation cohorts at a 2:1 ratio. The following equation constituted the score: (introducer sheath >6F × 4.0) + (percutaneous coronary intervention × 2.5) + (history of vascular complication after prior interventional cardiology procedure × 2.0) + (prior use of warfarin or phenprocoumon × 2.0) + (female sex × 1.5) + (age ≥60 years × 1.5). The maximum score is 13.5 points. Results: A score dichotomized at 3 (best cutoff for balancing sensitivity and specificity) was moderately accurate (sensitivity=0.66 (95% confidence interval: 0.59-0.73); specificity=0.59 (95% confidence interval: 0.56-0.61)). Patients with a score ≥3 were at increased risk of complications (odds ratio: 2.95; 95% confidence interval: 2.22-3.91). Conclusions: This study yielded a score that is capable of predicting vascular complications and easily applied in daily practice by providers working in the catheterization laboratory setting. Background: Several studies have examined various parameters and experiences when patients suffer their first myocardial infarction (MI), but knowledge about when they suffer their second MI is limited.
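The weighted score described in the vascular-complication study above lends itself to direct calculation. Below is a minimal, hypothetical sketch (function and variable names are my own, not the study's) of how the score and its cutoff of 3 could be applied; the patient example is invented:

```python
# Hypothetical sketch of the vascular-complication risk score described above.
# All predictors are binary (1 if present, 0 if absent); weights are the
# published coefficients. Names and the example patient are illustrative.

def vascular_risk_score(sheath_gt_6f, pci, prior_complication,
                        warfarin_or_phenprocoumon, female, age_ge_60):
    """Sum the weighted binary predictors; the maximum score is 13.5."""
    return (4.0 * sheath_gt_6f +
            2.5 * pci +
            2.0 * prior_complication +
            2.0 * warfarin_or_phenprocoumon +
            1.5 * female +
            1.5 * age_ge_60)

# Example: a 65-year-old woman undergoing PCI with a 7F introducer sheath.
score = vascular_risk_score(sheath_gt_6f=1, pci=1, prior_complication=0,
                            warfarin_or_phenprocoumon=0, female=1, age_ge_60=1)
print(score)       # 4.0 + 2.5 + 1.5 + 1.5 = 9.5
print(score >= 3)  # above the cutoff, so in the higher-risk group
```

Because the predictors are binary, the maximum of 4.0 + 2.5 + 2.0 + 2.0 + 1.5 + 1.5 matches the stated maximum score of 13.5 points.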
Aim: To compare risk factors for MI, that is, diabetes, hypertension and smoking, for the first and second MI events in men and women affected by two MIs and to analyse the time intervals between the first and second MIs. Methods: A retrospective cohort study of 1017 patients aged 25-74 years with first and second MIs from 1990 through 2009 registered in the Northern Sweden MONICA registry. Results: More women than men have diabetes and hypertension and are smokers at the first MI. Similar differences between the genders remain at the time of the second MI for diabetes and hypertension, although both risk factors have increased. Smoking decreased at the second MI without any remaining difference between genders. Women suffer their second MI within a shorter time interval than men do. Within 16 months of their first MI, 50% of women had a second MI. The corresponding time interval for men was 33 months. Conclusion: Patients affected by an MI should be made aware of their risk of recurrent MI and that the risk of recurrence is highest during the first few years after an MI. In patients affected by two MIs, women have a higher risk factor burden and suffer their second MI earlier than men do and thus may need more aggressive and more prompt secondary prevention. Background: Rapid and accurate interpretation of cardiac arrhythmias by nurses has been linked with safe practice and positive patient outcomes. Although training in electrocardiogram rhythm recognition is part of most undergraduate nursing programmes, research continues to suggest that nurses and nursing students lack competence in recognising cardiac rhythms. In order to promote patient safety, nursing educators must develop valid and reliable assessment tools that allow the rigorous assessment of this competence before nursing students are allowed to practise without supervision. 
Aim: The aim of this study was to develop and psychometrically evaluate a toolkit to holistically assess competence in electrocardiogram rhythm recognition. Methods: Using convenience sampling, 293 nursing students from a nursing faculty in a Spanish university were recruited for the study. The following three instruments were developed and psychometrically tested: an electrocardiogram knowledge assessment tool (ECG-KAT), an electrocardiogram skills assessment tool (ECG-SAT) and an electrocardiogram self-efficacy assessment tool (ECG-SES). Reliability and validity (content, criterion and construct) of these tools were meticulously examined. Results: A high Cronbach's alpha coefficient demonstrated the excellent reliability of the instruments (ECG-KAT=0.89; ECG-SAT=0.93; ECG-SES=0.98). An excellent content validity index (scales' average content validity index >0.94) and very good criterion validity were evidenced for all the tools. Regarding construct validity, principal component analysis revealed that all items comprising the instruments contributed to measuring knowledge, skills or self-efficacy in electrocardiogram rhythm recognition. Moreover, known-groups analysis showed the tools' ability to detect expected differences in competence between groups with different training experiences. Conclusion: The three-instrument toolkit developed showed excellent psychometric properties for measuring competence in electrocardiogram rhythm recognition. Background: Despite the recognition of the negative effects of depressive symptoms on self-care confidence and self-care maintenance in patients with heart failure, little is known about the moderating role of resilience underlying these relations. Aims: To explore whether depressive symptoms affect self-care maintenance through self-care confidence and whether this mediating process was moderated by resilience.
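Cronbach's alpha, the reliability coefficient reported above for the ECG-KAT, ECG-SAT and ECG-SES, is computed from item-level scores as k/(k-1) × (1 - Σ item variances / variance of the total score). The sketch below uses made-up data purely for illustration; a real evaluation would use the actual item responses:

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency coefficient
# reported for the ECG toolkit above. The example scores are invented.

def cronbach_alpha(items):
    """items: one inner list of scores per item, all of equal length."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):            # population variance (consistent denominators
        m = sum(xs) / len(xs)  # cancel in the ratio, so either convention works)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three items answered by four (hypothetical) students on a 1-5 scale.
scores = [[4, 5, 3, 5],
          [4, 4, 3, 5],
          [5, 5, 2, 5]]
print(round(cronbach_alpha(scores), 2))  # → 0.91
```

Alpha approaches 1 when items covary strongly relative to their individual variances, which is why values of 0.89-0.98 indicate excellent internal consistency.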
Methods: The sample comprised 201 community-dwelling and medically stable patients with echocardiographically documented heart failure. A moderated mediation analysis was conducted to test whether self-care confidence mediated the association between depressive symptoms and self-care maintenance, and whether resilience moderated the direct and indirect effects of depressive symptoms after adjustment for covariates. Results: Depressive symptoms reduced self-care maintenance indirectly by decreasing self-care confidence (indirect effect: -0.22, 95% confidence interval: -0.36, -0.11), and this pathway was significant only for patients with moderate or high levels of resilience, not for those with low resilience. Resilience also moderated the direct effects of depressive symptoms on self-care maintenance such that the negative association between depressive symptoms and self-care maintenance was reversed by the existence of high resilience. Conclusions: Resilience moderated the direct and indirect effects of depressive symptoms through self-care confidence on self-care maintenance in heart failure patients. Efforts to improve self-care maintenance by targeting depressive symptoms may be more effective when considering self-care confidence in patients with moderate to high levels of resilience. Background: Although patients may experience a quick recovery followed by rapid discharge after percutaneous coronary interventions (PCIs), continuity of care from hospital to home can be particularly challenging. Despite this fact, little is known about the experiences of care across the interface between secondary and primary healthcare systems in patients undergoing PCI. Aim: To explore how patients undergoing PCI experience continuity of care between secondary and primary care settings after early discharge. Methods: The study used an inductive exploratory design by performing in-depth interviews of 22 patients at 6-8 weeks after PCI.
Nine were women and 13 were men; 13 were older than 67 years of age. Eight lived remotely from the PCI centre. Patients were purposively recruited from the Norwegian Registry for Invasive Cardiology. Interviews were analysed by qualitative content analysis. Findings: Patients undergoing PCI were satisfied with the technical treatment. However, patients experienced an unplanned patient journey across care boundaries. They were not receiving adequate instruction and information on how to integrate health information. Patients also needed help to facilitate connections to community-based resources and to schedule clear follow-up appointments. Conclusions and implications: As high-technology treatment dramatically expands, healthcare organisations need to be concerned about all dimensions of continuity. Patients are witnessing their own processes of healthcare delivery and therefore their voices should be taken into greater account when discussing continuity of care. Nurse-led initiatives to improve continuity of care involve a range of interventions at different levels of the healthcare system. Introduction: A high quality of chest compressions, e.g. sufficient depth (5-6 cm) and rate (100-120 per min), has been associated with survival. The patient's underlay affects chest compression depth. Depth and rate can be assessed by feedback systems to guide rescuers during cardiopulmonary resuscitation. Aim: The purpose of this study was to describe the quality of chest compressions by healthcare professionals using real-time audiovisual feedback during in-hospital cardiopulmonary resuscitation. Method: An observational descriptive study was performed including 63 cardiac arrest events with a resuscitation attempt. Data files were recorded by Zoll AED Pro, and reviewed by RescueNet Code Review software. The events were analysed according to depth, rate, quality of chest compressions and underlay. 
Results: Across events, 12.7% (median) of the compressions had a depth of 5-6 cm. Compression depth of >6 cm was measured in 70.1% (median). The underlay could be identified from the electronic patient records in 54 events. The median compression depth was 4.5 cm (floor) and 6.7 cm (mattress). Across events, 57.5% (median) of the compressions were performed at a rate of 100-120 compressions/min, and the most common problem was a compression rate of <100 compressions/min (median=22.3%). Conclusions: Chest compression quality was poor according to the feedback system. However, the distribution of compression depth with regard to underlay points towards overestimation of depth when treating patients on a mattress. Audiovisual feedback devices ought to be further developed. Healthcare professionals need to be aware of the strengths and weaknesses of their devices. This article is a part of the Special Issue on Intelligent Systems for Space Exploration. The Intelligent Payload Experiment (IPEX) is a CubeSat that flew from December 2013 through January 2015 and validated autonomous operations for onboard instrument processing and product generation for the Intelligent Payload Module of the Hyperspectral Infrared Imager (HyspIRI) mission concept. IPEX used several artificial intelligence technologies. First, IPEX used machine learning and computer vision in its onboard processing. IPEX used machine-learned random decision forests to classify images onboard (to downlink classification maps) and computer vision visual salience software to extract interesting regions for downlink in acquired imagery. Second, IPEX flew the Continuous Activity Scheduler Planner Execution and Re-planner AI planner/scheduler onboard to enable IPEX operations to replan to best use spacecraft resources such as file storage, CPU, power, and downlink bandwidth.
First, the ground and flight operations concept for proposed HyspIRI IPM operations is described, followed by a description of the ground and flight operations concept used for the IPEX mission to validate key elements of automation for the proposed HyspIRI IPM operations concept. The use of machine learning, computer vision, and automated planning onboard IPEX is also described. The results from the more than one-year flight of the IPEX mission are reported. Optimal aircraft ground scheduling is a well-known nondeterministic polynomial-time-hard problem; hence, many heuristics are used to generate schedules within realistic runtimes. These heuristics are designed to run fast, but they typically offer no guarantee of solution quality. Inspired by two existing algorithms for scheduling of railway operations, this paper introduces a branch-and-bound-based aircraft routing and scheduling approach with guaranteed global optimality as a real-time decision support tool for air traffic controllers. The performance of the algorithm is benchmarked against two previous approaches: a combinatorial approach using mixed-integer linear programming, and a heuristic approach based on bacterial foraging. The configuration-agnostic design of the algorithm makes it suitable for application even to unconventional airport layouts. The globally optimal solutions exhibit a distinct improvement over those of the benchmark approaches while maintaining minimal runtimes. The algorithm is tested for flight operations at the cross-runway configuration of the Mumbai International Airport and delivers better and quicker solutions compared to previous studies.
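As an illustration of the branch-and-bound idea behind the scheduling approach above (a toy sketch, not the paper's actual model), the code below sequences aircraft on a single runway to minimize the last landing time, pruning partial sequences with a simple lower bound. The separation times and fleet are invented for the example:

```python
# Toy branch-and-bound sketch: order aircraft on one runway so the final
# landing time (makespan) is minimal, given pairwise wake-turbulence
# separations. Illustrative only; not the paper's model or bound.

SEP = {  # required seconds between consecutive landings (leader, follower)
    ('H', 'H'): 90, ('H', 'S'): 120,
    ('S', 'H'): 60, ('S', 'S'): 60,
}

def branch_and_bound(aircraft):
    best = {'cost': float('inf'), 'order': None}

    def expand(order, remaining, t):
        if not remaining:
            if t < best['cost']:          # complete sequence improves incumbent
                best.update(cost=t, order=order)
            return
        # Lower bound: every remaining landing adds at least the smallest
        # separation; prune if even that cannot beat the incumbent.
        if order and t + len(remaining) * min(SEP.values()) >= best['cost']:
            return
        for i, (name, wake) in enumerate(remaining):   # branch on next aircraft
            gap = SEP[(order[-1][1], wake)] if order else 0
            expand(order + [(name, wake)],
                   remaining[:i] + remaining[i + 1:], t + gap)

    expand([], list(aircraft), 0)
    return best['order'], best['cost']

# Two heavy (H) and two small (S) aircraft.
fleet = [('A1', 'H'), ('A2', 'H'), ('A3', 'S'), ('A4', 'S')]
order, makespan = branch_and_bound(fleet)
print([n for n, _ in order], makespan)
```

A real implementation, as in the paper, would also handle taxiway routing, release times, and much tighter bounds, but the prune-on-lower-bound mechanism that preserves global optimality is the same.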
This paper presents a cooperative unmanned aerial vehicle navigation algorithm that allows a chief vehicle (equipped with inertial and magnetic sensors, a Global Positioning System receiver, and a vision system) to improve its navigation performance (in real time or in a post-processing phase), exploiting line-of-sight measurements from formation-flying deputies equipped with Global Positioning System receivers. The key concept is to integrate differential Global Positioning System and visual tracking information within a sensor fusion algorithm based on the extended Kalman filter. The developed concept and processing architecture are described, with a focus on the filtering algorithm. Then, flight-testing strategy and experimental results are presented. In particular, cooperative navigation output is compared with the estimates provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility of exploiting accurate, magnetic- and inertial-independent information. We study intraday market intermediation in an electronic market before and during a period of large and temporary selling pressure. On May 6, 2010, U.S. financial markets experienced a systemic intraday event, the Flash Crash, in which a large automated selling program was rapidly executed in the E-mini S&P 500 stock index futures market. Using audit trail transaction-level data for the E-mini on May 6 and the previous three days, we find that the trading pattern of the most active nondesignated intraday intermediaries (classified as High-Frequency Traders) did not change when prices fell during the Flash Crash. New firms are an important source of job creation, but the underlying economic mechanisms for why this is so are not well understood.
Using an identification strategy that links shocks to local income to job creation in the nontradable sector, we ask whether job creation arises more through new firm creation or through the expansion of existing firms. We find that new firms account for the bulk of net employment creation in response to local investment opportunities. We also find significant gross job creation and destruction by existing firms, suggesting that positive local shocks accelerate churn. We analyze linked databases on all SBA loans and lenders and on all U.S. employers to estimate the effects of financial access on employment growth. Estimation exploits the long panels and variation in local availability of SBA-intensive lenders. The results imply an increase of 3-3.5 jobs for each million dollars of loans, suggesting real effects of credit constraints. Estimated impacts are stronger for younger and larger firms and when local credit conditions are weak, but we find no clear evidence of cyclical variation. We estimate taxpayer costs per job created in the range of $21,000-$25,000. We provide evidence that lenders differ in their ex post incentives to internalize price-default externalities associated with the liquidation of collateralized debt. Using the mortgage market as a laboratory, we conjecture that lenders with a large share of outstanding mortgages on their balance sheets internalize the negative spillovers associated with the liquidation of defaulting mortgages and thus are less inclined to foreclose. We provide evidence consistent with our conjecture. Arguably as a consequence, zip codes with a higher concentration of outstanding mortgages experience smaller house prices declines. These results are not driven by unobservable zip code or lender characteristics. Agency mortgage-backed securities (MBS) trade simultaneously in a market for specified pools (SPs) and in the to-be-announced (TBA) forward market. 
TBA trading creates liquidity by allowing thousands of different MBS to be traded in a handful of TBA contracts. SPs that are eligible to be traded as TBAs have significantly lower trading costs than other SPs. We present evidence that TBA eligibility, in addition to characteristics of TBA-eligible SPs, lowers trading costs. We show that dealers hedge SP inventory with TBA trades, and they are more likely to prearrange trades in SPs that are difficult to hedge. We show that characterizing the effects of housing on portfolios requires distinguishing between the effects of home equity and mortgage debt. We isolate exogenous variation in home equity and mortgages by using differences across housing markets in house prices and housing supply elasticities as instruments. Increases in property value (holding home equity constant) reduce stockholdings, while increases in home equity wealth (holding property value constant) raise stockholdings. The stock share of liquid wealth would rise by 1 percentage point (6% of the mean stock share) if a household were to spend 10% less on its house, holding wealth fixed. We document that a trading strategy that is short the U.S. dollar and long other currencies exhibits significantly larger excess returns on days with scheduled Federal Open Market Committee (FOMC) announcements. We show that these excess returns (i) are higher for currencies with higher interest rate differentials vis-a-vis the United States, (ii) increase with uncertainty about monetary policy, and (iii) increase further when the Federal Reserve adopts a policy of monetary easing. We interpret these excess returns as compensation for monetary policy uncertainty within a parsimonious model of constrained financiers who intermediate global demand for currencies. Financial intermediation naturally arises when knowing how loan payoffs are correlated is valuable for managing investments but lenders cannot easily observe that relationship.
I show this result using a costly enforcement model in which lenders need ex post incentives to enforce payments from defaulted loans and borrowers' payoffs are correlated. When projects have correlated outcomes, learning the state of one project (via enforcement) provides information about the states of other projects. A large correlated portfolio provides ex post incentives for enforcement. Thus, intermediation dominates direct lending, and intermediaries are financed with risk-free deposits, earn positive profits, and hold systemic default risk. We examine changes in the scope of the sell-side analyst industry and whether these changes impact information dissemination and the quality of analysts' reports. Our findings suggest that changes in the number of analysts covering an industry impact analyst competition and have significant spillover effects on other analysts' forecast accuracy, bias, report informativeness, and effort. These spillover industry effects are incremental to the effects of firm level changes in analyst coverage. Overall, a more significant sell-side analyst industry presence has positive externalities that can result in better functioning capital markets. We study short-maturity (weekly) S&P 500 index options, which provide a direct way to analyze volatility and jump risks. Unlike longer-dated options, they are largely insensitive to the risk of intertemporal shifts in the economic environment. Adopting a novel seminonparametric approach, we uncover variation in the negative jump tail risk, which is not spanned by market volatility and helps predict future equity returns. As such, our approach allows for easy identification of periods of heightened concerns about negative tail events that are not always signaled by the level of market volatility and elude standard asset pricing models. 
Research summary: This study develops a framework that links a typology of spinouts with distinct product/market strategies and the characteristics of localization economies to study location choice. Specifically, we examine focal spinouts' and user-industry spinouts' entry into generic and market-specific product categories and localization economies related to the focal industry and to downstream, user industries. We test our hypotheses on a sample of 413 spinouts in the U.S. semiconductor industry from 1997 to 2007. Our findings show that focal spinouts make different location choices than user-industry spinouts and that such choices are mediated by product strategy at entry. Our results contribute to the literatures on location choice and strategic entrepreneurship. Managerial summary: This study concerns the location choices of different types of spinouts at the moment of entry. Our study suggests that location may be a key decision for entrepreneurs to increase their exposure to potential knowledge spillovers. Location, therefore, should not be taken as a given in either studies or business plans related to entrepreneurship. Rather, location needs to be treated as a strategic choice to be exercised by new firms as they enter an industry. Our research also underscores the importance of downstream industries both as a source of entrepreneurship and as a source of knowledge. In some cases, localization economies related to downstream industries might offer greater potential to entrants than localization economies related to the focal industry. Copyright (c) 2016 Strategic Management Society. Research summary: This study builds a grounded model of how careers shape entrepreneurs' preferences for causal and effectual decision logics when starting new ventures. Using both verbal protocol analysis and interviews, we adopt a qualitative research approach to induct career management practices germane to entrepreneurial decision making.
Based on our empirical findings, we develop a model conceptualizing how configurations of career management practices, reflecting different emphases on career planning and career investment, are linked to entrepreneurial decision making through the imprint that they leave on one's view of the future, generating a tendency toward predictive and/or creative control. These findings extend effectuation theorizing by reformulating one of its most pervasive assumptions and showing how careers produce distinct pathways to entrepreneurial thinking, even prior to entrepreneurial entry. Managerial summary: Treating your own career as a start-up impacts how you make decisions when actually becoming an entrepreneur. Based on empirical findings, we explain why and how sets of career management practices are distinctively linked to the use of different logics when making entrepreneurial decisions. Individuals who throughout their careers have emphasized investments in skills and networks over efforts to forecast and plan develop a general view of the future in which creative control dominates predictive control. The opposite is true for those who rely on managing their careers through planning but remain passive in their career investments. Upon entry to entrepreneurship, these differences become relevant such that some entrepreneurs rely on attempts to predict the future while others actively try to create it. (c) 2016 The Authors. Strategic Entrepreneurship Journal published by Strategic Management Society. Research summary: In this article, we analyze the impact of academic incubators on the quality of innovations produced by U.S. research-intensive academic institutions. We show that establishing a university-affiliated incubator is followed by a reduction in the quality of university innovations. The conclusion holds when we control for the endogeneity of the decision to establish an incubator using the presence of incubators at peer institutions as an instrument.
We also document a reduction in licensing income following the establishment of an incubator. The results suggest that university incubators compete for resources with technology transfer offices and other campus programs and activities, such that the useful outputs they generate can be partially offset by reductions in innovation elsewhere. Managerial summary: Do university incubators drain resources from other university efforts to generate innovations with commercial relevance? Our analysis suggests that they do: after research-intensive U.S. universities establish incubators, the quality of university innovations, which we measure with patents, drops. This finding has immediate implications for practice, as it suggests that the benefits and costs of incubation should not be analyzed in isolation. Rather, the effects of incubators extend to the overall innovation performance of the university. It follows that measuring the net economic effect of incubators is challenging because, besides the effects on innovation efforts, the presence of an incubator may attract particular kinds of faculty and students, enhance the prestige of the university, generate economic multiplier effects, and benefit the community as a whole. Copyright (c) 2016 Strategic Management Society. Research summary: Adding to the literature that optimists are attracted to entrepreneurship, this article finds that prior financial optimism has detrimental consequences for entrepreneurial pay satisfaction. Optimists overestimate the likelihood of positive events and will, therefore, tend to overestimate their prospects in entrepreneurship. It follows that conditional on realized entrepreneurial performance, optimists' subsequent pay satisfaction is lower through disappointment. These findings are consistent with theories of self-discrepancy from social psychology.
Evidence is also provided that optimism reduces employee pay satisfaction, but since self-employment widens the scope for prospects to be exaggerated, the effects are stronger in self-employment. While selection on optimism implies that entry into entrepreneurship is likely to be excessive, optimism, by reducing entrepreneurial pay satisfaction, may increase entrepreneurial exits. Managerial summary: This article examines how prior financial optimism affects entrepreneurial pay satisfaction. Optimists have a generalized tendency to overestimate the likelihood of positive events and will, therefore, tend to overestimate their prospects in self-employment. The results suggest that prior financial optimism reduces entrepreneurial pay satisfaction through disappointment. The same is true for employee pay satisfaction, but since entrepreneurship is typically a more uncertain and turbulent environment, making prospects harder to evaluate, the effects are found to be stronger in self-employment. Copyright (c) 2016 Strategic Management Society. This article studies how workforce composition is related to a firm's success in introducing radical innovations. Previous studies have argued that teams composed of individuals with diverse backgrounds are able to perform more information processing and make deeper use of the information, which is important to accomplish complex tasks. We suggest that this argument can be extended to the level of the aggregate workforce of high-technology firms. In particular, we argue that ethnic and higher education diversity within the workforce is associated with superior performance in radical innovation. Using a sample of 3,888 Swedish firms, this article demonstrates that having greater workforce diversity in terms of both ethnic background and educational disciplinary background is positively correlated with the share of a firm's turnover generated by radical innovation.
Having more external collaborations does, however, seem to reduce the importance of educational background diversity. The impact of ethnic diversity is not affected by external collaboration. These findings hold after using alternative measures of dependent and independent variables, alternative sample sizes, and alternative estimation techniques. The research findings presented in this article would seem to have immediate and important practical implications. They would suggest that companies may pursue recruitment policies inspired by greater ethnic and disciplinary diversity as a way to boost the innovativeness of the organization. From a managerial perspective, it may be concluded that workforce disciplinary diversity could be potentially replaced by more external links, while ethnic diversity could not. This study extends the knowledge of the human resource management (HRM)-innovation relationship and examines how innovation-facilitating bundles of HRM practices are applied to facilitate radical pharmaceutical front-end innovation (FEI). The empirical investigation is an explorative case study of science-driven FEI and HRM practices across one in-depth case study and seven validation studies among international pharmaceutical and biotech companies. The findings provide a theoretical overview of key HRM practices in support of radical pharmaceutical FEI as well as an empirical mapping of how innovation-facilitating bundles of HRM practices are applied to actively develop radical, science-driven pharmaceutical FEI, including the identification of the key innovation challenges and opportunities involving innovation-facilitating HRM practices in pharmaceutical FEI. The article contributes to the existing innovation literature in terms of identifying how radical FEI may be facilitated through the application of innovation-facilitating bundles of HRM practices. 
The empirical contribution and managerial implications provide nine specific suggestions for how pharmaceutical management groups can better support radical pharmaceutical FEI through targeted HRM practices. The derived results of the study also underline inherent challenges of the pharmaceutical industry and its regulations (FDA) that may not stimulate radical innovation, which cannot be resolved by HRM but require the attention of policy makers. The value added lies in the specificity of the empirical, pharmaceutical context in which the issue of supporting radical, science-driven FEI is investigated. Radical innovation in professional settings faces an institutional challenge. Professionals enjoy autonomy predicated on jurisdictional knowledge and can resist radical innovation if their interests are threatened. Our study examines if and how managers mediate professional resistance and ensure that radical innovation can take hold. A comparative case study of 12 Italian hospitals introducing integrated service configurations shows that managers may hold back from introducing radical innovation where they judge professional resistance to be insurmountable. Executives reinforce, rather than challenge, the status quo, and discourage middle managers from further actions. Where the professional context is more receptive because of micro-institutional affordances, managers enact different tactics. Managers may centralize decision-making through political work, which, however, increases professional resistance and hinders radical innovation. Managers may adopt project management approaches, which facilitate local experiments but struggle to scale up the radical innovation. The most successful cases are characterized by executive and middle managers enacting a two-step institutional work, which reconfigures the regulative, normative, and cognitive foundations of professional boundaries and practice. 
The comparative study shows how managers can support radical innovation in collaboration with professionals. In this institutional work, executive and middle managers first develop stable alliances with local professional groups to provide cognitive/normative foundations for radical innovation; second, they allow professionals to inhabit nascent institutional arrangements to make sense of how these fit with their prevailing interests, norms, and beliefs; third, they co-develop new structures/rules that encourage professionals to pursue radical innovation; finally, they perform maintenance work to preserve professionals' attachment to the new institutions. As high resource consumption and high uncertainty are two of the most critical challenges to radical innovation, it is imperative to adopt resource structuring for active management of resource portfolios, and to adopt strategic flexibility for active management of contextual uncertainties, especially for firms in emerging economies characterized by serious resource deficiency and high contextual uncertainty. Though firms engaging in resource structuring and strategic flexibility separately could foster radical innovation, the interaction effect of resource structuring and strategic flexibility could be complementary or substitutive, and the effective utilization of these two organizational dimensions as a joint force should be well aligned to achieve scientific breakthroughs. Specifically, this study explores how two different types of strategic flexibility (i.e., resource flexibility and coordination flexibility) as special capabilities interact with two different types of resource structuring (i.e., resource acquisition and resource accumulation) as special mechanisms to shape radical innovation under high uncertainty. 
With a sample of 508 Chinese firms, our results show that the specific effects of resource acquisition and resource accumulation on radical innovation are contingent upon resource flexibility and coordination flexibility in two contrasting patterns. Specifically, a firm with high resource flexibility tends to foster radical innovation under high uncertainty by interacting with resource accumulation rather than with resource acquisition; in contrast, a firm with high coordination flexibility is likely to foster radical innovation under high uncertainty together with resource acquisition rather than with resource accumulation. The theoretical and practical implications of these two contrasting patterns are also discussed. Compared to large established firms, technology-based new firms (TBNFs) seem well placed to produce breakthrough innovations, although questions remain as to their adeptness at subsequent exploitation. Building on the innovation and strategy literatures, the study identifies two different knowledge-development approaches or modes (business models) in TBNFs (internal versus external) and examines their relation to breakthrough innovation and the subsequent progression of the product to market. The internal mode assembles knowledge inside the firm to generate its innovations, whereas the external mode relies heavily on alliances to develop and assemble knowledge among firms embedded in a creative network. The study uses a unique panel dataset of 69 UK new biotechnology firms over an 11-year period to explore this issue empirically. The findings show that the external knowledge-development mode is associated with more breakthrough innovations and a faster movement of innovations to market. The externally focused mode is not impeded by its relative lack of internal knowledge; it uses partners to access, assemble, and develop a wide scope of knowledge in a flexible manner. 
In addition, partners provide deep domain expertise to undertake the requisite deep dives. In contrast, the internal mode faces the huge challenge of assembling knowledge resources internally and suffers from a quicker onset of path dependence that impedes the generation of breakthroughs. This study provides a choice of business models (internal or external) associated with different breakthrough and speed-to-market performance outcomes. Going forward, policy makers and managers seeking breakthrough innovations, and speedy progression of those innovations to market, should consider the potential resource efficiency of the external mode and the vital role played by collaborations: small firm versus large firm, and private versus public entities. Despite the legacy of experience, some established firms are able to avoid a mindset, behaviors, and routines that can be expected to lead them down paths of local search and incremental product innovations of ever-declining value. Indeed, established firms are often adept at introducing successful path-breaking innovations. To explain this apparent paradox, this article draws on the organizational identity literature to present a model that ascribes breakthrough innovations by established firms to managerial identity-dissemination discourse (MIDD). MIDD is argued to provide a sense-giving framework, which fosters an understanding of the firm as a nexus of values around which the firm can be continuously rediscovered and reconstituted in new ways. By exposing the firm as an idea that can assume fresh forms in terms of product-market combinations, MIDD stimulates and coordinates creative endeavor, thus increasing the disposition to produce breakthrough innovations. The model also suggests that the impact of MIDD is likely to depend on transformational leadership and the level of centralization and formalization in the company. The results of a cross-sectional empirical study provide support for the model. 
In contrast to the focus of earlier research on behavioral and structural explanations for breakthroughs by established firms, this article advances understanding by offering a cognitive explanation. In doing so, the article highlights that creativity and innovation in firms are mentally located in an interpretive schema of the firm's identity, which has important implications in relation to organizing for breakthroughs. The article discusses these implications with particular reference to the use of multifunctional teams and advanced information and communication technologies for facilitating breakthroughs. Firms increasingly look to collaboration with alliance partners in their quest for breakthrough innovation. But how does the position of a firm in its alliance network, weighted by the centrality of its partners (a concept we term partner-weighted alliance centrality), and the heterogeneity in the types of partners it cooperates with (in terms of its private-public collaboration) influence this quest? Using longitudinal data from the U.S. pharmaceutical industry, we build alliance networks in the period 1985-2001 to investigate these questions. We show that, for breakthrough innovation, collaborating with more partners that are more central in alliance networks is better, but only to a point. Beyond that point, we find that the likelihood of achieving breakthrough innovation drops. Furthermore, looking at the kinds of knowledge provided by the partners in each firm's alliances, we report that firms with a greater share of private partners, relative to public partners, suffer less from the diminishing benefits of collaboration with central partners when developing breakthrough innovation. Taken together, we make novel contributions about how to organize for breakthrough innovation, and provide actionable managerial advice in terms of selecting collaborative partners in alliance networks. 
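The abstract above does not spell out the formula behind partner-weighted alliance centrality; one standard way to operationalize "a firm's position weighted by the centrality of its partners" is eigenvector centrality, where each firm's score is proportional to the sum of its partners' scores. A minimal sketch on an invented toy alliance network (the firm names, ties, and power-iteration operationalization are illustrative assumptions, not the article's actual measure):

```python
# Illustrative sketch only: eigenvector-style centrality by power iteration.
# The alliance graph below is invented for demonstration purposes.

def eigenvector_centrality(adj, iterations=100):
    """adj: dict mapping each firm to the set of its alliance partners.
    Each firm's score is repeatedly replaced by the sum of its
    partners' scores, then normalized by the maximum score."""
    firms = list(adj)
    score = {f: 1.0 for f in firms}
    for _ in range(iterations):
        new = {f: sum(score[p] for p in adj[f]) for f in firms}
        norm = max(new.values()) or 1.0
        score = {f: v / norm for f, v in new.items()}
    return score

# Toy alliance network: firm A allies with B, C, D; E is peripheral.
alliances = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
    "E": {"D"},
}
# Alliances are undirected ties, so make the graph symmetric.
for firm, partners in list(alliances.items()):
    for p in partners:
        alliances[p].add(firm)

scores = eigenvector_centrality(alliances)
```

A firm tied to well-connected partners (here, A) scores higher than a firm with the same number of ties to peripheral partners, which is exactly the intuition the centrality concept captures.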
We report a carbothermal ammonia reduction (CAR) strategy for the one-pot preparation of a highly efficient and durable hydrogen evolution reaction (HER) electrocatalyst composed of earth-abundant early transition metals. In this strategy, resin is applied both as the pre-binder and carbon source, and Pluronic F127 as the structure-directing agent. Upon annealing in NH3, the Mo-containing precursor on Ni foam undergoes the CAR, and simultaneously Ni from the substrate diffuses out, resulting in the formation of a quaternary complex structure of molybdenum-nickel bimetallic carbonitride (MoNiNC). The as-prepared electrode showed outstanding HER performance with an overpotential of 150 mV at 50 mA cm(-2) and maintained steady hydrogen bubble evolution for 48 h of continuous operation. Our strategy offers a quick and simple way to fabricate an earth-abundant HER electrode with highly efficient and durable electrocatalytic performance. Lithium-sulfur batteries are attracting increasing attention because of their high energy density. However, sulfur cathodes suffer from several scientific and technical issues related to polysulfide ion migration, low conductivity, and volume changes. Many strategies, such as porous hosts, polysulfide adsorbents, catalysts, and conductive fillers, have been proposed to address these issues separately. In this study, novel Co3S4 nanotubes are developed to efficiently host sulfur, adsorb polysulfides, and catalyze their conversion. Because of these multifunctional advantages in one structure, the resulting Co3S4@S nanotube electrodes demonstrate superior electrochemical properties for high-performance lithium-sulfur batteries. Pseudo-capacitive transition metal chalcogenides have recently received considerable attention as a promising class of materials for high-performance supercapacitors (SCs) due to their superior intrinsic conductivity, which circumvents the limitations of the corresponding transition metal oxides with relatively poor conductivity. 
However, the important challenge associated with the utilization of such high-capacitive electrode materials is the development of desirably structured electrode materials, enabling efficient and rapid Faradaic redox reactions and ultra long-term cycling. Here, we propose a hierarchically integrated hybrid transition metal (Cu-Ni) chalcogenide shell-core-shell (HTMC-SCS) tubular heterostructure using a facile bottom-up synthetic approach. The resultant HTMC-SCS electrode exhibits a high volumetric capacitance of 25.9 F cm(-3) at a current density of 2 mA cm(-2). Furthermore, asymmetric SCs based on an HTMC-SCS heterostructured electrode demonstrate a high power density (770 mW cm(-3)) and an energy density (2.63 mW h cm(-3)) as well as an ultrahigh reversible capacity with a capacitance retention of 84% and a long-term cycling stability of over 10,000 cycles. Based on experimental results and density functional theory calculations, these remarkably improved electrochemical features are discussed and explained in terms of the unique combination of the conductive CuS core and active NiS shell materials, hierarchical tubular open geometry with nanoscale inner/outer shell structure, and mechanical stress-mitigating interlayer on shell-core-shell interface, allowing highly reversible and efficient electrochemical redox processes coupled with fast charge transfer kinetics and an electrochemically stable structure. By introducing a non-fullerene small molecule acceptor as a third component to typical polymer donor: fullerene acceptor binary solar cells, we demonstrate that the short circuit current density (J(sc)), open circuit voltage (V-oc), power conversion efficiency (PCE) and thermal stability can be enhanced simultaneously. 
The different surface energy of each component causes most of the non-fullerene acceptor molecules to self-organize at the polymer/fullerene interface, while the appropriately selected oxidation/reduction potential of the non-fullerene acceptor enables the resulting ternary junction to work through a cascade mechanism. The cascade ternary junction enhances charge generation through complementary absorption between the non-fullerene and fullerene acceptors and aids efficient charge extraction from fullerene domains. Bimolecular recombination in the ternary blend layer is reduced as the ternary cascade junction increases the separation of holes and electrons during charge transport, and trap-assisted recombination induced by the integer charge transfer (ICT) state is potentially reduced due to the smaller pinning energy of the inserted non-fullerene acceptor, leading to an unprecedented increase in the open circuit voltage beyond the binary reference values. Although the family of donor copolymers based on difluoro-2,1,3-benzothiadiazole with 2-octyldodecyl alkyl chains has reached over 10% power conversion efficiency (PCE) in the past three years, several limitations are holding back their further commercial application. For instance, those polymers have to be processed at a high temperature (~110 degrees C) due to their strong aggregation in solution. Here we report the achievement of low-temperature-processed polymers for highly efficient polymer solar cells (PSCs) via random polymerization. The introduction of 2,2'-(perfluoro-1,4-phenylene)dithiophene (2TPF4) via random polymerization can weaken the strong self-aggregation of the polymers, enabling them to be processed by spin-coating at room temperature, and favors the formation of a near-ideal active layer morphology involving highly crystalline yet reasonably small polymer domains. 
All three polymers exhibit preferable face-on orientation, and the domain purity can be significantly changed by the introduction of the 2TPF4 block. A superior PCE of 9.4% was obtained for the photovoltaic device based on PffBT-2TPF4-9/1, which is one of the best values for room-temperature-processed solar cells. These findings indicate that low-temperature-processed, highly efficient PSCs can be achieved by rational conjugated-backbone engineering, which presents distinctive advantages for large-scale production in the near future. The exceptional optical and electronic properties of metal halide perovskites make them ideal emerging optoelectronic materials. Compared to the record efficiency achieved in organic-inorganic hybrid perovskite LEDs, however, thin-film CsPbBr3 perovskite LEDs are rarely reported. The inferior performance of thin-film CsPbBr3 perovskite LEDs can be ascribed to their delocalized nature, weak exciton binding energy, low quantum efficiency, and high leakage current. After comprehensive consideration of these factors, in this work we first put forward a new method for the fabrication of highly crystalline CsPbBr3 thin films through powder synthesis, and further control their interfacial properties to obtain a continuous and uniform emission layer. The developed CsPbBr3 thin-film perovskite LED exhibits greatly improved performance, with luminance as high as 10,700 cd m(-2) and a current efficiency of 2.9 cd A(-1). Therefore, the facile powder method and subsequent interfacial modulation open up a new and promising avenue for future thin-film perovskite LED development. Quinones, with their structural diversity and electrochemical reversibility, are among the most promising organic electrode materials. One distinct feature of quinones is their cross-conjugated structure, the importance of which in the design of organic electrode materials has so far been overlooked. 
Here we report the design, synthesis, and characterization of two cross-conjugated quinone oligomers (PBDTD and PBDTDS) and their nanocomposites with carbon nanotubes as potential low-cost organic electrode materials for Li-ion batteries. We investigate the effect of conjugation structure and molecular conformation (planar vs. helical) on electrochemical properties such as electronic conductivity, ionic conductivity, and electrode kinetics. Both quinones deliver a similar specific capacity of over 200 mA h g(-1) at 2.5 V versus Li/Li+ with excellent stability over 250 cycles. In particular, the difference in their rate performance is mainly determined by two aspects. First, the cross-conjugation of PBDTD becomes electron-transport-favorable through-conjugation after reduction, while PBDTDS is always cross-conjugated. Second, the planar conformation of PBDTD facilitates electron transfer compared with the helical PBDTDS. This work provides insights into the popular yet less understood cross-conjugated quinone-based electrode materials and will stimulate the design of better quinone materials to achieve high-performance organic batteries. The ability to use infrared imaging systems with multicolor capabilities, high photoresponsivity, and polarization sensitivity is central to practical photodetectors and has been demonstrated with conventional devices based on III-V or II-VI semiconductors. However, photodetectors working at room temperature with high responsivity for polarized infrared light detection remain elusive. Here, we first demonstrate a broadband photodetector using a vertical photogate heterostructure of BP-on-WSe2 (black phosphorus-on-tungsten diselenide), in which BP serves as the photogate and WSe2 as the conductive channel. 
Ultrahigh visible and infrared photoresponsivity at room temperature can reach up to ~10(3) A/W and ~5x10(-1) A/W, respectively, and ultrasensitive visible and infrared specific detectivity is obtained, up to ~10(14) and ~10(10) Jones, respectively, at room temperature. Moreover, the sensitivity to infrared polarization is about 40 mA/W with incident light polarized along the horizontal axis (defined as 0 degrees polarization). This performance is due to the strong intrinsic linear dichroism of BP and the device design, which can sufficiently collect the photoinduced carriers isotropically, as well as the influence of the orientation of the edge of the BP-on-WSe2 overlapped area, which is the same for all polarizations. The high responsivity, good detectivity, and highly polarization-sensitive infrared photoresponse suggest that photodetectors based on the photogate structure afford new opportunities for infrared detection or imaging at room temperature using two-dimensional materials. A broadband perfect absorber based on loading-effect-induced single-layer/trench-like thin metallic (LISTTM) structures is demonstrated. These LISTTM structures take advantage of both surface plasmon resonance and three-dimensional cavity effects to provide efficient, tunable, polarization-insensitive absorption from the ultraviolet (UV) to the infrared (IR) regime. The optimized hole width of the LISTTM arrays was approximately one half of the designed wavelength. Therefore, even when the designed absorption band was in the visible regime, the feature sizes of the LISTTM structure could remain on the order of several hundred nanometers. Moreover, the loading effects, which were generated during the etching and deposition processes, further improved the maximum absorption to greater than 95% and widened the absorption bandwidth of the structures significantly. 
These LISTTM structures exhibited superior photothermal performance; they also displayed very low emissivity, thereby decreasing heat dispersion through thermal radiation. Therefore, the LISTTM arrays could efficiently absorb light of higher photon energy in the UV, visible, and near-IR regimes, effectively conduct the generated heat through the continuous metal films, and barely disperse any heat through thermal radiation. Accordingly, these attractive properties suggest that such LISTTM absorbers might have promising applications in many fields related to energy harvesting. Earth-abundant, noble-metal-free catalysts with outstanding electrochemical hydrogen evolution reaction catalytic activity in alkaline media play a key role in the sustainable production of H-2 fuel. Herein, a novel three-dimensional Ni(OH)(2)/MoS2 hybrid catalyst with a synergistic effect has been synthesized by a facile approach for efficient alkaline hydrogen evolution. Benefiting from abundant active interfaces, this hybrid catalyst shows high hydrogen evolution catalytic activity in 1 M KOH aqueous solution, with an onset overpotential of 20 mV, an overpotential of 80 mV at 10 mA cm(-2), and a Tafel slope of 60 mV dec(-1). Further theoretical calculations offer deeper insight into the synergistic effect of the Ni(OH)(2)/MoS2 interface: Ni(OH)(2) provides the active sites for hydroxyl adsorption, and MoS2 facilitates the adsorption of hydrogen intermediates and H-2 generation. This interfacial cooperation leads to favorable hydrogen and hydroxyl species energetics and reduces the energy barrier of the initial water dissociation step, which is the rate-limiting step of the MoS2 catalyst in alkaline media. The combination of experimental and theoretical investigations demonstrates that the sluggish alkaline hydrogen evolution process can be circumvented by rational catalyst interface engineering. 
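Figures of merit like those reported above (an overpotential of 80 mV at 10 mA cm(-2) and a Tafel slope of 60 mV dec(-1)) allow a simple back-of-envelope extrapolation: under the standard Tafel relation, each decade of current density costs one Tafel slope of extra overpotential. A minimal sketch, assuming ideal Tafel behavior over the extrapolated range (real electrodes deviate at high current densities due to mass transport):

```python
import math

def tafel_overpotential(j, j_ref=10.0, eta_ref=80.0, slope=60.0):
    """Extrapolate overpotential (mV) at current density j (mA cm^-2)
    from a reference point, assuming ideal Tafel kinetics:
        eta = eta_ref + slope * log10(j / j_ref)
    Defaults use the abstract's reported 80 mV at 10 mA cm^-2
    and 60 mV/dec slope; the ideal-Tafel assumption is ours."""
    return eta_ref + slope * math.log10(j / j_ref)

# One decade higher current density costs one Tafel slope (60 mV) more:
eta_100 = tafel_overpotential(100.0)  # 80 + 60 = 140 mV
```

This kind of extrapolation is why a small Tafel slope is prized: it means the catalyst can deliver large current increases for modest additional overpotential.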
As a potential alternative to the prevailing lithium-ion batteries, the application of sodium (Na) ion batteries (NIBs) in renewable energy and smart grids has revitalized research interest in large-scale energy storage. One of the roadblocks hindering their future commercialization is the development of suitable anode materials. Herein, we present the large-scale preparation of highly uniform iron sulfide (Fe1-xS) nanostructures by a cost-effective and versatile one-step sulfurization strategy. Impressively, as a high-rate and viable sodium-ion anode, the as-prepared Fe1-xS nanostructure manifests appealing electrochemical performance (a high discharge capacity of 563 mA h g(-1) over 200 cycles at a current density of 100 mA g(-1) and outstanding cycling stability even at a high rate of 10 A g(-1) for up to 2000 cycles). Moreover, the proven pseudocapacitance contribution explains the unprecedented rate capability. Meanwhile, the sodium storage mechanism in the as-prepared samples has also been investigated using in-situ X-ray diffraction techniques. Remarkably, a full cell based on a Na0.6Co0.1Mn0.9O2 cathode and an Fe1-xS anode delivers a high discharge capacity (393 mA h g(-1)) and superior cycling stability. In this work, an in-situ experimental mass-electrochemical investigation of the LiFePO4 (LFP) and NaFePO4 (NFP) electrolyte interfacial chemical reactions and surface redox potential is achieved by adopting an electrochemical quartz crystal microbalance (EQCM) to monitor the mass change trend. In organic electrolyte, the LFP (NFP) cathode's mass decreases/increases during the charge/discharge process because of deintercalation/intercalation of Li (Na) ions, which is a well-known phenomenon. However, the mass-potential curve for LFP nanocrystals in aqueous electrolyte shows an anomalous mass change interval (AMCI) around 3.42 V (vs. 
Li/Li+), where the cathode's mass increases during charging and decreases during discharging, contrary to the normal law of mass change. Through density functional theory (DFT) calculations, we gain a microscopic picture of the solid-liquid interface structure with reconstructed LFP (010)/H2O and NFP (010)/H2O interfaces. Taken together, it is concluded that the surface redox potential of LFP is around 3.31 V, which is lower than the bulk potential (3.42 V), and that the desolvation/solvation rate of surficial Li ions is lower than the bulk Li-ion diffusion rate. For NFP, by contrast, the surface redox potential is almost the same as the bulk one. The exploration of highly efficient, low-cost bifunctional oxygen electrocatalysts for both the oxygen reduction reaction (ORR) and the oxygen evolution reaction (OER) is critical for renewable energy storage and conversion technologies (e.g., fuel cells and metal-air batteries). Here we report the design and fabrication of free-standing nitrogen-doped carbon nanotube (NCNT) arrays via a directed growth approach as a high-performance bifunctional oxygen electrocatalyst. By virtue of the unique hierarchical nanoarray structure, uniform N-doping, and decreased charge-transfer resistance, the as-prepared NCNT array exhibits rather high activity and stability in both the ORR and the OER, even superior to the mono-functional commercial Pt/C (for the ORR) and IrO2 (for the OER). A flexible, rechargeable all-solid-state zinc-air battery is successfully fabricated by using this self-supporting NCNT electrode as the air cathode, which gives excellent discharge-charge performance and mechanical stability. Graphene has been extensively investigated as an anode material for Li- and Na-ion batteries due to its excellent physical and chemical properties. Herein, we report a new member of the 'graphene family', a reduced graphene nanowire on three-dimensional graphene foam (3DGNW). 
The novel graphene nanowires were synthesized via a template strategy involving the reduction and assembly of nanosized graphene oxides (nGO), pyrolysis of a polystyrene sphere (PS) template, and a catalytic reaction between GO and the PS decomposition products. When evaluated as an anode material for Li- and Na-ion batteries, the 3DGNW exhibits a relatively low discharge-voltage plateau, excellent reversible capacity, rate capability, and durable tolerance. As an anode for Na-ion batteries, a reversible capacity of more than 301 mAh g(-1) without capacity fading after 1000 cycles at a rate of 1 C was achieved. Even at a rate of 20 C, a high reversible capacity of 200 mAh g(-1) can be retained. The superior electrochemical performance is ascribed to the hierarchical multidimensional graphene architecture, high graphene crystallinity, expanded graphene interlayer distance, and extensively exposed lateral edges/pores, which can promote electron and ion transport. The realization of assembling reduced graphene sheets into graphene nanowires offers new opportunities for future energy storage applications of graphene-based assemblies. A solar heat shield coating featuring the combination of plasmonic nanostructures in silica-based insulating materials has been developed and tested under conditions resembling natural sunlight exposure. Our results when implementing this coating on standard glazing reveal a blocking efficiency higher than 40%, compatible with notable preservation of visible light transmittance above 75%. This strategy is (i) cost-effective, as it only requires minute amounts of absorbent material to obtain the desired effect; (ii) straightforward, because no particular ordering of the plasmon resonators is needed on the glass substrate; (iii) eco-friendly, as no metal leaching is observed once the gold is encapsulated; and (iv) retrofit-capable, given that these nanostructures can be easily incorporated onto pre-installed glazing. 
All of these features emphasize the great potential of this approach in the search for more sustainable technologies for the fenestration industry. Formamidinium lead halide perovskite nanoparticles (FAPbBr(3) NPs), because of their prominent piezoelectric properties, have drawn increasing attention in the field of piezoelectric applications. A remarkable enhancement of piezoelectric power output is achieved from a piezoelectric composite nanogenerator by combining the FAPbBr(3) NPs with poly(vinylidene fluoride) (PVDF) polymer. The FAPbBr(3) NPs @ PVDF composite-based piezoelectric nanogenerators show outstanding outputs, with a voltage of 30 V and a current density of 6.2 mu A cm(-2), the highest reported for organic-inorganic lead halide perovskite material-based devices. The improved performance can be ascribed to the use of the piezoelectric polymer PVDF as the matrix, as well as the homogeneous distribution of FAPbBr(3) NPs with enhanced stress on the piezoelectric nanoparticles. The alternating energy generated from the nanogenerator can be utilized to charge a capacitor through a bridge rectifier and light up a commercial light-emitting diode (LED). These results demonstrate the feasibility of organic-inorganic lead halide perovskite materials for the design and development of high-efficiency energy harvesting applications. InGaN-based nanostructures have recently been recognized as promising materials for efficient solar hydrogen generation. This is due to their chemical stability, adjustable optoelectronic properties, suitable band edge alignment, and large surface-to-volume ratio. The inherent high density of surface trapping states and the lack of compatible conductive substrates, however, have hindered their use as stable photocatalysts. We have designed, synthesized, and tested an efficient photocatalytic system using stable In0.33Ga0.67N-based nanorods (NRs) grown on an all-metal stack substrate (Ti-Mo) for a better electron transfer process. 
In addition, we applied a bifunctional ultrathin thiol-based organic surface treatment using 1,2-ethanedithiol (EDT), in which sulfur atoms protect the surface from oxidation. This treatment has dual functions: it passivates the surface (by removing dangling bonds) and creates ligands for linking Ir metal ions as oxygen-evolution centers on top of the semiconductor. When applied to In0.33Ga0.67N NRs, this treatment resulted in a photocatalyst that achieved 3.5% solar-to-hydrogen (STH) efficiency in pure water (pH ~7, buffer solution) under simulated one-sun (AM1.5G) illumination and without electrical bias. Over the tested period, a steady increase of the gas-evolution rate was observed, from which a turnover frequency of 0.23 s⁻¹ was calculated. The novel growth of InGaN-based NRs on a metal, together with the versatile surface-functionalization techniques (EDT-Ir), has high potential for making stable photocatalysts with adjustable band gaps and band edges to harvest sunlight. Effective power management has always been the difficulty and bottleneck for the practicality of the triboelectric nanogenerator (TENG). Here we propose a universal power-management strategy for TENGs based on maximized energy transfer, direct-current (DC) buck conversion, and a self-management mechanism. With the implemented power-management module (PMM), about 85% of the energy can be autonomously released from the TENG and output as a steady, continuous DC voltage on the load resistance. The DC component and ripple have been systematically investigated for different circuit parameters. At a low frequency of 1 Hz with the PMM, the matched impedance of the TENG is converted from 35 MΩ to 1 MΩ at 80% efficiency, and the stored energy is dramatically improved when charging a capacitor. The universality of this strategy is demonstrated by various TENGs with the PMM harvesting human kinetic and environmental mechanical energy. 
The universal power-management strategy for TENGs is promising as a complete micro-energy solution for powering wearable electronics and industrial wireless networks. With the new coupling mode of triboelectricity and semiconductors in the PMM, tribotronics has been extended, and a new branch, power-tribotronics, is proposed for managing triboelectric power with electronics. Lithium metal is an attractive anode material widely used in advanced energy-storage technologies such as lithium-sulfur and lithium-air batteries. However, owing to uncontrollable deposition, growth of lithium dendrites, and serious volume change during cycling, the commercial application of lithium anodes is impeded by safety hazards and limited lifespan. Here, we demonstrate a bamboo-derived 3D hierarchical porous carbon decorated with ZnO quantum dots (ZnO@HPC) that serves as a lithiophilic scaffold for a dendrite-free Li metal anode. This carbon scaffold is stable against the serious volumetric change during cycling. In addition, the 3D porous scaffold reduces the effective local current density. Most importantly, the lithiophilic ZnO quantum dots within the carbon induce lithium deposition. Notably, lithium metal up to 131 mAh cm⁻² can be confined within ZnO@HPC, achieving acceptable volume expansion, a considerable reduction in overpotential, and effective dendrite suppression. Thus, 3D Li within the ZnO@HPC scaffold exhibits better capability and much lower voltage hysteresis than Li foil in cells paired with LiCoO2. The function of the ZnO-decorated 3D hierarchical porous carbon scaffold may provide innovative insights into design principles for metallic lithium anodes. In this work, CuCl2 was added as a promoter to a mixture of polythiophene (PTh), FeCl3, and melamine to prepare a Fe-Cu-N/C catalyst. 
The catalyst features one-dimensional bamboo-like carbon nanotubes with a few metal oxide nanoparticles encapsulated in the tubes. The catalyst exhibits excellent activity toward the oxygen reduction reaction (ORR), with a half-wave potential 50 mV more positive than that of commercial Pt/C in 0.1 M KOH. It also shows comparable ORR activity in 0.1 M HClO4 solution. Moreover, it exhibits superior long-term stability and excellent methanol tolerance in both alkaline and acidic solutions. The outstanding catalytic performance of the Fe-Cu-N/C catalyst can be ascribed to the doping of Cu into the Fe-N-C architecture, which promotes the formation of the bamboo-like nanotube structure and the interaction between Cu and Fe-N-C. This synthetic strategy, in which adding an inactive metal markedly promotes catalytic performance, may open new avenues for constructing highly efficient electrocatalysts. Functionalities at heterostructured oxide material interfaces are an emerging subject, yielding extraordinary material properties such as a great enhancement of ionic conductivity in a heterostructure between the semiconductor SrTiO3 and the ionic conductor YSZ (yttria-stabilized zirconia), which can be expected to have a profound effect on oxygen-ion conductors and solid oxide fuel cells [1-4]. Here we report a semiconductor-ionic heterostructure of La0.6Sr0.4Co0.2Fe0.8O3−δ (LSCF) and Sm-Ca co-doped ceria (SCDC) possessing unique properties for new-generation fuel cells based on semiconductor-ionic heterostructure composite materials. The LSCF-SCDC system exhibits both ionic and electronic conductivity, above 0.1 S/cm, yet used as the electrolyte of a fuel cell it displays promising performance in terms of OCV (above 1.0 V) and enhanced power density (ca. 1000 mW/cm² at 550 °C). Such high electronic conduction in the electrolyte membrane does not cause any short-circuiting problem in the device; instead, it delivers enhanced power output. 
Thus, studying the charge separation/transport and electron-blocking mechanism is crucial and can play a vital role in understanding the resulting physical properties and the physics of the materials and device. With an atomic-resolution ARM 200CF microscope equipped with electron energy-loss spectroscopy (EELS), we can characterize more accurately the buried interface between LSCF and SCDC and further reveal the properties and distribution of charge carriers in the heterostructures. This phenomenon constrains the carrier mobility and determines the charge separation and the device's fundamental working mechanism; continued exploration of this frontier can enable a next-generation fuel cell based on the new concept of semiconductor-ionic fuel cells (SIFCs). The strong interdependence between the Seebeck coefficient and the electrical and thermal conductivities makes it difficult to obtain a high thermoelectric figure of merit, ZT. It is therefore of critical significance to design a novel structure that decouples these parameters. Here, we combine a liquid-state manipulation method for solidified Bi0.5Sb1.5Te3 alloy with subsequent melt spinning, ball milling, and spark plasma sintering to construct dedicated microstructures containing plenty of 60° twin boundaries. These twin boundaries first scatter the very low-energy carriers, leading to an enhancement of the Seebeck coefficient. Second, they provide considerably high carrier mobility, compensating for the negative effect of the reduced hole concentration on the electrical conductivity. Third, both experimental and calculated results demonstrate that twin-boundary scattering dominates the conspicuous decrease of the lattice thermal conductivity. Consequently, the highest ZT value of 1.42 is achieved at 348 K, which is 27% higher than that of a sample with fewer twin boundaries prepared without liquid-state manipulation. The average ZT value from 300 K to 400 K reaches 1.34. 
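The figure of merit quoted above is conventionally defined as ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the total thermal conductivity, and T the absolute temperature. A minimal sketch of the arithmetic follows; the parameter values are illustrative placeholders, not the measured properties of the Bi0.5Sb1.5Te3 samples discussed here.

```python
# Thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa.
# All values below are illustrative assumptions for a p-type material.
S = 230e-6       # Seebeck coefficient, V/K
sigma = 8.0e4    # electrical conductivity, S/m
kappa = 0.9      # total thermal conductivity, W/(m K)
T = 348.0        # absolute temperature, K

ZT = S**2 * sigma * T / kappa   # power factor S^2*sigma weighed against kappa
print(f"ZT = {ZT:.2f}")
```

The power factor S²σ sits in the numerator and the thermal conductivity in the denominator, which is why the twin boundaries' simultaneous effects on carrier filtering, mobility, and lattice thermal conductivity all feed into one number.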
Our particular sample-processing methods, enabling the twin-dominant microstructure, are an efficient avenue to simultaneously optimize the thermoelectric parameters. Luminescent solar concentrators (LSCs) are considered a promising technology to reduce the cost of electricity by decreasing the use of expensive photovoltaic materials, such as single-crystal silicon. In addition, LSCs are suitable for applications in building-integrated photovoltaics. Inorganic perovskite quantum dots (QDs) are promising candidates as absorbers/emitters in LSCs owing to their excellent optical properties, including size/chemical-composition-dependent absorption/emission spectra, high absorption coefficient, high quantum yield, and good stability. However, because of the large overlap between their absorption and emission spectra, it is still very challenging to fabricate large-area, high-efficiency LSCs using perovskite QDs. Here we report the synthesis of mixed-halide perovskite CsPb(BrxI1-x)3 QDs with a small overlap between absorption and emission spectra, a high quantum yield (over 60%), and an absorption spectrum ranging from 300 to 650 nm. We use these QDs to build semi-transparent, large-area LSCs that exhibit an external optical efficiency of 2% with a geometric gain factor of 45 (9 cm in length). To date, these represent the brightest and most efficient solution-processed perovskite-QD-based LSCs, compared with LSCs based on perovskite thin films. The LSCs exhibit long-term air stability without any noticeable variation in photoluminescence or lifetime under 4 W UV illumination for over four hours. The microstructure and chemical composition of the discharge products generated in sodium-oxygen batteries are analyzed by transmission X-ray microscopy (TXM), revealing a complex architecture. Unexpectedly, sodium peroxide is detected in the cubic-shaped discharge deposits, together with the often-reported sodium superoxide. 
Analyses of the product distribution show a surface layer rich in decomposition products in contact with an O-deficient region enclosing the bulk of these cubes. In addition, we show that the oxygen evolution reaction (OER) occurs by a process involving oxidation and re-dissolution of the bulk fraction, leading to a contraction of the discharge particles and a composition similar to that found in the intermediate phase of the deposits observed at the end of discharge. All these solid compounds are eventually removed at the end of recharge, but the formation of these structures evidences possible irreversible pathways that need to be considered for cycle-life improvement. Cermet-based solar selective absorbing coatings are widely used; however, long-term thermal instability and rather high infrared emissivity at high temperatures (>550 °C) are still challenging issues to be addressed, which essentially lie in suppressing the growth and agglomeration of metal nanoparticles (NPs) and maintaining interface integrity in the multilayer stacked structure. Herein, we develop and explore WTi-Al2O3 cermet-based absorbing coatings, demonstrating a solar absorptance of ~93% and a very low thermal emissivity of 10.3% at 500 °C even after annealing at 600 °C for 840 h in vacuum. It is revealed that the surface segregation of solute Ti atoms from the parent alloyed NPs and their partial oxidation to form a protective layer restrain the outward diffusion of W, the agglomeration of NPs, and interface-structure degradation, thereby enhancing the thermal tolerance of the coatings. These results suggest that the WTi-Al2O3-based absorbing coating is a good candidate for high-temperature solar thermal conversion. This paper introduces a new class of carbon-supported nickel-based electrocatalysts (Ni3Ag/C, Ni3Pd/C, and Ni3Co/C) for the direct electrooxidation of ammonia borane (AB) in alkaline medium. 
The enabled anodic process opens the opportunity to build a novel class of carbon-free-fuel energy-conversion devices. Promising performance is reported for every catalyst studied. A trade-off is observed between the activity for the direct electrooxidation of AB and its decomposition (with the related hydrogen release), as well as the hydrogen oxidation reaction (HOR). The onset of the direct AB oxidation reaction (ABOR) is lower for the least noble material (Ni3Co/C), owing to its smaller catalytic activity for AB decomposition and hydrogen evolution, combined with a non-negligible activity for AB oxidation. On the contrary, on Ni3Pd/C, the noblest material, the ABOR onset is higher, because this material decomposes AB into hydrogen, evolves the hydrogen, and then valorizes it. Moreover, Ni3Co/C exhibits better durability than Ni3Ag/C and Ni3Pd/C, and than what is reported for Pt/C and Pd/C nanocatalysts in the literature under the same experimental conditions: identical-location transmission electron microscopy (ILTEM) investigations revealed no significant morphological degradation after accelerated stress tests (ASTs) in alkaline medium. This study therefore demonstrates that a carbon-supported nanostructured noble-metal-free catalyst (Ni3Co/C) is both active and durable for direct AB electrooxidation in alkaline medium. This result opens the way to direct liquid alkaline fuel cells fed with boron-based fuels, a technology that could be both economically and industrially viable if noble-metal-free catalysts are used. Piezoelectric energy harvesting is a promising technique for scavenging ambient mechanical motion to drive compact, low-power, multifunctional electronic devices. To adapt to various ambient surroundings, geometric configurations and sizes that can be varied over wide ranges while maintaining high operational reliability and piezoelectric performance have been regarded as key for piezoelectric harvester design. 
Herein, we report an innovative structure for harvesting electric energy in which applying a normal force bends the obliquely aligned GaN piezoelectric nanorods (NRs) integrated in a vertically integrated nanogenerator (VING). The single-crystalline GaN NRs used here were successfully synthesized with oblique alignment on a pyramid-textured Si substrate by plasma-assisted molecular beam epitaxy (PA-MBE). Using conductive atomic force microscopy (c-AFM), a remarkable change in the Schottky barrier height (SBH) between the tip and a GaN NR is observed upon bending an obliquely aligned GaN NR. This demonstrates that remarkably enhanced piezoelectric performance of GaN NRs can be achieved by coupling in a lateral force. We anticipate that this work provides an efficient approach for coupling lateral loading to enhance the electric potential in piezoelectric-NR-embedded VINGs, and thus opens a new path for efficiently generating electric energy. Stable and repeatable operation is paramount for practical and extensive applications of all energy harvesters. Herein, we develop a new type of flexible piezoelectret generator, which converts mechanical energy into electricity consistently even under harsh environments. Specifically, the generator, with a piezoelectric coefficient (d33) reaching ~6300 pC/N, worked stably for ~90,000 continuous cycles, and when pressed by a human hand it produced a peak load current and power of up to ~29.6 μA and ~0.444 mW, respectively. Moreover, the capability to steadily produce electrical power under extreme moisture and at temperatures up to 70 °C was achieved, enabling possible applications in wearable devices and flexible electronics. Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain's code for facial identity. 
Experiments in macaques demonstrate an extraordinarily simple transformation between faces and the responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell's firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face-cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems. We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric-field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain can follow the electric-field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice. KCNQ1 is the pore-forming subunit of cardiac slow delayed rectifier potassium (IKs) channels. Mutations in the KCNQ1 gene are the leading cause of congenital long QT syndrome (LQTS). 
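The temporal interference principle described above rests on a textbook identity: the sum of two equal-amplitude sinusoids at nearby frequencies equals a fast carrier at the mean frequency modulated by a slow envelope at the difference frequency. The numpy sketch below demonstrates only this identity; the kHz frequencies are illustrative assumptions, not the study's stimulation parameters.

```python
import numpy as np

f1, f2 = 2000.0, 2010.0            # two carrier frequencies (Hz); illustrative
df = f2 - f1                       # difference frequency: 10 Hz envelope
fs = 100_000                       # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)

# Superposition of the two fields, each too fast for neurons to follow
field = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Product-to-sum identity: field = [2*cos(pi*df*t)] * sin(2*pi*(f1+f2)/2*t),
# i.e. a carrier at the mean frequency with a slow modulation at df.
modulation = 2 * np.cos(np.pi * df * t)
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
assert np.allclose(field, modulation * carrier, atol=1e-8)

envelope = np.abs(modulation)      # slow component a neuron could follow
```

The envelope repeats at the 10 Hz difference frequency even though neither field contains any low-frequency power on its own, which is what allows deep, steerable stimulation without recruiting tissue that sees only one fast field.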
Here, we present the cryoelectron microscopy (cryo-EM) structure of a KCNQ1/calmodulin (CaM) complex. The conformation corresponds to an "uncoupled," PIP2-free state of KCNQ1, with activated voltage sensors and a closed pore. Unique structural features within the S4-S5 linker permit uncoupling of the voltage sensor from the pore in the absence of PIP2. CaM contacts the KCNQ1 voltage sensor through a specific interface involving a residue on CaM that is mutated in a form of inherited LQTS. Using an electrophysiological assay, we find that this mutation on CaM shifts the KCNQ1 voltage-activation curve. This study describes one physiological form of KCNQ1 (depolarized voltage sensors with a closed pore in the absence of PIP2) and reveals a regulatory interaction between CaM and KCNQ1 that may explain CaM-mediated LQTS. During eukaryotic evolution, ribosomes have considerably increased in size, forming a surface-exposed ribosomal RNA (rRNA) shell of unknown function, which may create an interface for yet uncharacterized interacting proteins. To investigate such protein interactions, we establish a ribosome affinity purification method that unexpectedly identifies hundreds of ribosome-associated proteins (RAPs) from categories including metabolism and cell cycle, as well as RNA- and protein-modifying enzymes that functionally diversify mammalian ribosomes. By further characterizing RAPs, we discover the presence of ufmylation, a metazoan-specific post-translational modification (PTM), on ribosomes and define its direct substrates. Moreover, we show that the metabolic enzyme pyruvate kinase muscle (PKM) interacts with sub-pools of endoplasmic reticulum (ER)-associated ribosomes, exerting a non-canonical function as an RNA-binding protein in the translation of ER-destined mRNAs. Therefore, RAPs interconnect one of life's most ancient molecular machines with diverse cellular processes, providing an additional layer of regulatory potential to protein expression. 
Centrosomes are non-membrane-bound compartments that nucleate microtubule arrays. They consist of nanometer-scale centrioles surrounded by a micron-scale, dynamic assembly of proteins called the pericentriolar material (PCM). To study how PCM forms a spherical compartment that nucleates microtubules, we reconstituted PCM-dependent microtubule nucleation in vitro using recombinant C. elegans proteins. We found that macromolecular crowding drives assembly of the key PCM scaffold protein SPD-5 into spherical condensates that morphologically and dynamically resemble in vivo PCM. These SPD-5 condensates recruited the microtubule polymerase ZYG-9 (XMAP215 homolog) and the microtubule-stabilizing protein TPXL-1 (TPX2 homolog). Together, these three proteins concentrated tubulin ~4-fold over background, which was sufficient to reconstitute nucleation of microtubule asters in vitro. Our results suggest that in vivo PCM is a selective phase that organizes microtubule arrays through localized concentration of tubulin by microtubule effector proteins. In flies, Centrosomin (Cnn) forms a phosphorylation-dependent scaffold that recruits proteins to the mitotic centrosome, but how Cnn assembles into a scaffold is unclear. We show that scaffold assembly requires conserved leucine zipper (LZ) and Cnn-motif 2 (CM2) domains that co-assemble into a 2:2 complex in vitro. We solve the crystal structure of the LZ:CM2 complex, revealing that both proteins form helical dimers that assemble into an unusual tetramer. A slightly longer version of the LZ can form micron-scale structures with CM2, whose assembly is stimulated by Plk1 phosphorylation in vitro. Mutating individual residues that perturb LZ:CM2 tetramer assembly perturbs the formation of these micron-scale assemblies in vitro and Cnn-scaffold assembly in vivo. Thus, Cnn molecules have an intrinsic ability to form large, LZ:CM2-interaction-dependent assemblies that are critical for mitotic centrosome assembly. 
These studies provide the first atomic insight into a molecular interaction required for mitotic centrosome assembly. Genetic studies have elucidated critical roles of Piwi proteins in germline development in animals, but whether Piwi is an actual disease gene in human infertility remains unknown. We report germline mutations in human Piwi (Hiwi) in patients with azoospermia; these mutations prevent Hiwi's ubiquitination and degradation. By modeling such mutations in Piwi (Miwi) knockin mice, we demonstrate that the genetic defects are directly responsible for male infertility. Mechanistically, we show that MIWI binds the histone ubiquitin ligase RNF8 in a Piwi-interacting RNA (piRNA)-independent manner, and MIWI stabilization sequesters RNF8 in the cytoplasm of late spermatids. The resulting aberrant sperm show histone retention, abnormal morphology, and severely compromised activity, which can be functionally rescued via blocking the RNF8-MIWI interaction in spermatids with an RNF8-N peptide. Collectively, our findings identify Piwi as a factor in human infertility and reveal its role in regulating the histone-to-protamine exchange during spermiogenesis. Mutations truncating a single copy of the tumor suppressor BRCA2 cause cancer susceptibility. In cells bearing such heterozygous mutations, we find that a cellular metabolite and ubiquitous environmental toxin, formaldehyde, stalls and destabilizes DNA replication forks, engendering structural chromosomal aberrations. Formaldehyde selectively depletes BRCA2 via proteasomal degradation, a mechanism of toxicity that affects very few additional cellular proteins. Heterozygous BRCA2 truncations, by lowering pre-existing BRCA2 expression, sensitize to BRCA2 haploinsufficiency induced by transient exposure to natural concentrations of formaldehyde. Acetaldehyde, an alcohol catabolite detoxified by ALDH2, precipitates similar effects. 
Ribonuclease H1 ameliorates replication fork instability and chromosomal aberrations provoked by aldehyde-induced BRCA2 haploinsufficiency, suggesting that BRCA2 inactivation triggers spontaneous mutagenesis during DNA replication via aberrant RNA-DNA hybrids (R-loops). These findings suggest a model wherein carcinogenesis in BRCA2 mutation carriers can be incited by compounds found pervasively in the environment and generated endogenously in certain tissues, with implications for public health. The maintenance of tissue homeostasis is critically dependent on the function of tissue-resident immune cells and the differentiation capacity of tissue-resident stem cells (SCs). How immune cells influence the function of SCs is largely unknown. Regulatory T cells (Tregs) in skin preferentially localize to hair follicles (HFs), which house a major subset of skin SCs (HFSCs). Here, we mechanistically dissect the role of Tregs in HF and HFSC biology. Lineage-specific cell depletion revealed that Tregs promote HF regeneration by augmenting HFSC proliferation and differentiation. Transcriptional and phenotypic profiling of Tregs and HFSCs revealed that skin-resident Tregs preferentially express high levels of the Notch ligand family member Jagged 1 (Jag1). Expression of Jag1 on Tregs facilitated HFSC function and efficient HF regeneration. Taken together, our work demonstrates that Tregs in skin play a major role in HF biology by promoting the function of HFSCs. Regulatory T cells (T-regs) are a barrier to anti-tumor immunity. Neuropilin-1 (Nrp1) is required to maintain intratumoral T-reg stability and function but is dispensable for peripheral immune tolerance. T-reg-restricted Nrp1 deletion results in profound tumor resistance due to T-reg functional fragility. Thus, identifying the basis for Nrp1 dependency and the key drivers of T-reg fragility could help to improve immunotherapy for human cancer. 
We show that a high percentage of intratumoral NRP1+ T-regs correlates with poor prognosis in melanoma and head and neck squamous cell carcinoma. Using a mouse model of melanoma where Nrp1-deficient (Nrp1−/−) and wild-type (Nrp1+/+) T-regs can be assessed in a competitive environment, we find that a high proportion of intratumoral Nrp1−/− T-regs produce interferon-γ (IFNγ), which drives the fragility of surrounding wild-type T-regs, boosts anti-tumor immunity, and facilitates tumor clearance. We also show that IFNγ-induced T-reg fragility is required for response to anti-PD1, suggesting that cancer therapies promoting T-reg fragility may be efficacious. Selection for inflorescence architecture with improved flower production and yield is common to many domesticated crops. However, tomato inflorescences resemble those of their wild ancestors, and breeders avoided excessive branching because of low fertility. We found that branched variants carry mutations in two related transcription factors that were selected independently. One founder mutation enlarged the leaf-like organs on fruits and was selected as fruit size increased during domestication. The other mutation eliminated the flower abscission zone, providing "jointless" fruit stems that reduced fruit dropping and facilitated mechanical harvesting. Stacking both beneficial traits caused undesirable branching and sterility due to epistasis, which breeders overcame with suppressors. However, this suppression restricted the opportunity for productivity gains from weak branching. Exploiting natural and engineered alleles for multiple family members, we achieved a continuum of inflorescence complexity that allowed breeding of higher-yielding hybrids. Characterizing and neutralizing similar cases of negative epistasis could improve productivity in many agricultural organisms. In this paper, we present a robust global approach for point cloud registration from uniformly sampled points. 
Based on eigenvalues and normals computed at multiple scales, we design fast descriptors to extract the local structures of these points. The eigenvalue-based descriptor is effective at finding seed matches with low precision using nearest-neighbor search. Generally, recovering the transformation from matches with low precision is rather challenging. Therefore, we introduce a mechanism named correspondence propagation to aggregate each seed match into a set of numerous matches. With these sets of matches, multiple transformations between the point clouds are computed. A quality function formulated from distance errors is used to identify the best transformation and achieve a coarse alignment of the point clouds. Finally, we refine the alignment result with the trimmed iterative closest point algorithm. The proposed approach can be applied to register point clouds with significant or limited overlaps and small or large transformations. More encouragingly, it is rather efficient and very robust to noise. A comparison to traditional descriptor-based methods and other global algorithms demonstrates the strong performance of the proposed approach. We also show its promising application in large-scale reconstruction with the scans of two real scenes. In addition, the proposed approach can be used to register low-resolution point clouds captured by Kinect as well. In this paper, we propose a novel scheme for scalable image coding based on the concept of epitome. An epitome can be seen as a factorized representation of an image. Focusing on spatial scalability, the enhancement layer of the proposed scheme contains only the epitome of the input image. The pixels of the enhancement layer not contained in the epitome are then restored using two approaches inspired by local learning-based super-resolution methods. 
In the first method, a locally linear embedding model is learned on base-layer patches and then applied to the corresponding epitome patches to reconstruct the enhancement layer. The second approach learns linear mappings between pairs of co-located base-layer and epitome patches. Experiments show that significant improvements in rate-distortion performance can be achieved compared with the scalable extension of HEVC (SHVC). In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as the local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as the compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher-order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors. 
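The common core of the learned binary descriptors discussed above is a project-then-binarize step: a local patch is mapped through a learned projection matrix and thresholded by sign to give a compact binary code, and codes are compared by Hamming distance. The numpy sketch below illustrates only this generic step; the random matrix is a placeholder for the learned projection, and none of the rotation-normalization (RBP) machinery is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flattened 8x8 grayscale patches; purely synthetic stand-ins.
patches = rng.standard_normal((5, 64))

# In a learned descriptor this matrix is optimized during training;
# here a random projection is an assumption-laden placeholder.
W = rng.standard_normal((32, 64))

def binary_descriptor(x, W):
    """Project a patch and binarize by sign -> 32-bit binary code."""
    return (W @ x > 0).astype(np.uint8)

codes = np.array([binary_descriptor(p, W) for p in patches])

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

d01 = hamming(codes[0], codes[1])
```

Matching then reduces to nearest-neighbor search under Hamming distance, and clustering such codes into a codebook yields the per-image histogram representation the abstract describes.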
Mathematicians commonly distinguish two modes of work in the discipline: problem solving and theory building. Mathematics education offers many opportunities to learn problem solving. This paper explores the possibility, and value, of designing instructional activities that provide supported opportunities for students to learn mathematics theory-building practices. It begins by defining these theory-building practices, on the basis of which principles for the design of such instructional activities are formulated. The paper offers theoretical arguments that theory-building practices not only serve the synthesizing role they play in disciplinary mathematics but also have the potential to enrich learners' reasoning powers and enhance their problem-solving skills. Examples of problem sets designed for this purpose are provided and analyzed. This article concerns student sense making in the context of algebraic activities. We present a case in which a pair of middle-school students attempts to make sense of a position formula for a particular numerical sequence that they had previously obtained. The exploration of the sequence occurred in the context of a two-month-long student research project. The data were collected from the students' drafts, audiotaped meetings of the students with the teacher, and a follow-up interview. The data analysis was aimed at identifying and characterizing the algebraic activities in which the students were engaged and the processes involved in the students' sense-making quest. We found that the sense-making process consisted of a sequence of generational and transformational algebraic activities within the overarching context of a global, meta-level activity: long-term problem solving. 
In this sense-making process, the students: (1) formulated and justified claims; (2) made generalizations; (3) found the mechanisms behind the algebraic objects (i.e., answered why-questions); and (4) established coherence among the explored objects. The findings are summarized as a suggestion for a four-component decomposition of algebraic sense making. In this article, we use multimodality to examine how bilingual students interact with an area task from the National Assessment of Educational Progress in task-based interviews. Using vignettes, we demonstrate how some of these students manipulate the concrete materials and use gestures as a primary form of structuring their explanations and making mathematical meaning. We use our results as a basis to challenge the possible deficit perspective of bilingual students' mathematical knowledge in current assessment practices. Choosing tasks that afford multiple modes of engagement and recognizing multimodal explanations in assessment practices have the potential to move us towards a better understanding of what bilingual students know and can do mathematically. Curricular implementations are unlikely to deliver the anticipated benefits for mathematics learners if written guidance to teachers is interpreted and enacted differently from the ways that policymakers and curriculum designers intend. One way in which this could happen is in relation to the mathematics tasks that teachers deploy in the classroom. Teachers and curriculum designers have developed an extensive vocabulary for describing tasks, using adjectives such as 'rich', 'open', 'real-life', 'engaging' and so on. But do teachers have a shared understanding of what these adjectives mean when they are applied to mathematics tasks? 
In study 1, we investigated teachers' appraisals of adjectives used to describe mathematics tasks, finding that task appraisals vary on seven dimensions, which we termed engagement, demand, routineness, strangeness, inquiry, context and interactivity. In study 2, focusing on the five most prominent dimensions, we investigated whether teachers have a shared understanding of the meaning of adjectives when applied to mathematics tasks. We found that there was some agreement about inquiry and context, some disagreement about routineness and clear disagreement about engagement and demand. We conclude that at least some adjectives commonly used to describe tasks are interpreted very differently by different teachers. Implications for how tasks might be discussed meaningfully by teachers, teacher educators and curriculum designers are highlighted. While the general planning advice offered to mathematics teachers seems to be to start with simple examples and build complexity progressively, the research reported in this article is a contribution to the body of literature that argues the reverse. That is, posing of appropriately complex tasks may actually prompt the use of more sophisticated strategies. Results are presented from a detailed study of young children working on tasks that prompt multiplicative thinking. It was found that the tasks involving more complex number triples prompted the use of more sophisticated multiplicative thinking. Preparing students for their lives beyond schooling appears to be a universal goal of formal education. Much has been done to make mathematics education more "realistic," but such activities nevertheless generally remain within the institutional norms of education. In this article, we assume that pedagogic relations are also an integral part of working life and draw on Bernstein's work to address their significant features in this context. 
However, unlike participation in formal mathematics education, where the discipline is central, workers are likely to be confronted by, and need to reconcile, a range of other valued workplace discourses, both epistemic and social/cultural in nature. How might mathematics education work towards overcoming the hiatus between these two very different institutional settings? This article will argue that the skills of recontextualisation, central to teachers' work, should be integral to the mathematics education of all future workers. It will consider theoretical perspectives on pedagogic discourse and the consequences of diverse knowledge structures at work, with implications for general and vocational mathematics education. The RELEDscientific payload of the Vernov satellite launched on July 8, 2014 includes the DRGE spectrometer of gamma-rays and electrons. This instrument comprises a set of scintillator phoswich-detectors, including four identical X-ray and gamma-ray detector with an energy range of 10 kev to 3 MeV with a total area of 500 cm(2) directed to the atmosphere, as well as an electron spectrometer containing three mutually orthogonal detector units with a geometric factor of 2 cm(2) sr. The aim of a space experiment with the DRGE instrument is the study of fast phenomena, in particular Terrestrial gamma-ray flashes (TGF) and magnetospheric electron precipitation. In this regard, the instrument provides the transmission of both monitoring data with a time resolution of 1 s, and data in the event-by-event mode, with a recording of the time of detection of each gamma quantum or electron to an accuracy of 15 mu s. 
This makes it possible not only to conduct a detailed analysis of the variability in the gamma-ray range, but also to compare the time profiles with the results of measurements from other RELEC instruments (the detector of optical and ultraviolet flares and the radio-frequency and low-frequency analyzers of electromagnetic field parameters), as well as with data from ground-based facilities monitoring thunderstorm activity. This paper presents the first catalog of terrestrial gamma-ray flashes. The criterion for selecting flashes required the detection of no fewer than 5 hard quanta within 1 ms by at least two independent detectors. The TGFs included in the catalog have a typical duration of 400 μs, during which 10-40 gamma-ray quanta were detected. The time profiles, spectral parameters, and geographic position, as well as the results of a comparison with the output data of other Vernov instruments, are presented for each of the candidates. A TGF candidate detected in the near-polar region over Antarctica is discussed. The direction of the twilight sky background polarization on the celestial sphere far from the solar vertical depends on the ratio of single and multiple scattering contributions. Variations in the polarization direction during twilight reflect the evolution of the properties of the background emission components and can be used to control the procedure of selecting single scattering. This makes it possible to refine the temperature measurements based on the molecular scattering of solar emission and the contribution of dust scattering in the upper mesosphere. The results of the temperature measurements during the observations in 2011-2015 are presented. This paper discusses the errors in analyzing solar-terrestrial relationships, which result either from disregarding the types of interplanetary drivers when studying the magnetosphere's response to them or from the incorrect identification of the type of these drivers. 
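The TGF selection criterion described above (no fewer than 5 hard quanta within 1 ms in at least two independent detectors) can be sketched as a sliding-window coincidence search over event-mode timestamps. The event format and detector identifiers below are hypothetical, not the mission's actual data format.

```python
import bisect

def tgf_candidates(events, window=1e-3, min_counts=5, min_detectors=2):
    """events: (time_s, detector_id) pairs from event-by-event mode.
    Returns start times of windows in which at least `min_detectors`
    detectors each register >= `min_counts` quanta within `window` seconds."""
    events = sorted(events)
    times = [t for t, _ in events]
    candidates = []
    for i, (t0, _) in enumerate(events):
        j = bisect.bisect_right(times, t0 + window)  # end of the coincidence window
        counts = {}
        for _, det in events[i:j]:
            counts[det] = counts.get(det, 0) + 1
        if sum(c >= min_counts for c in counts.values()) >= min_detectors:
            candidates.append(t0)
    return candidates
```

With microsecond-accurate timestamps, as provided by the DRGE event mode, such a window search is the natural way to realise the stated two-detector coincidence requirement.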
In particular, it has been shown that the absence of selection between the Sheath and the ICME (the study of so-called CME-induced storms, i.e., magnetic storms generated by CMEs) leads to errors in studies of the interplanetary conditions of magnetic storm generation, because statistical analysis has shown that, in Sheath + ICME sequences, the largest number of storm onsets fell on the Sheath, and the largest number of storm maxima fell at the end of the Sheath and the beginning of the ICME. That is, the situation most frequently observed is one in which at least the larger part of the main phase of storm generation falls on the Sheath and, in reality, Sheath-induced storms are observed. In addition, we consider several cases in which magnetic storms were generated by corotating interaction regions, whereas the authors attributed them to CMEs. Results of laboratory measurements of the dielectric characteristics of lunar soil samples returned to the Earth by the Luna and Apollo missions have been analyzed. The feasibility of determining the density of the upper cover of the Moon from the permittivity, which is restored as a result of solving the inverse problem of radiolocation, has been discussed. A formula has been proposed for approximating the frequency dependence of the loss tangent for the regolith and bedrock. Relationships have been deduced for estimating the percentage of metal oxides in the lunar soil. The problem of a spacecraft's optimal insertion from the Earth into a high circular polar orbit of a Moon artificial satellite (MAS) with a radius of 4000-8000 km has been investigated. A comparison of single- and three-impulse insertion schemes has been performed. The analysis was made taking into account the disturbances from the lunar gravity field harmonics and the gravity fields of the Earth and the Sun, as well as the engine's limited thrust. 
It has been shown that the three-impulse transfer from the initial selenocentric hyperbola of approach into the considered final high MAS orbit is noticeably better with respect to the final mass than the ordinary single-impulse deceleration. The control parameters that implement this maneuver and provide nearly the same energy expenses as in the Keplerian case have been presented. It was found that, in contrast to the Keplerian case, in the considered case of the real gravity field there is an optimal maximum distance of the maneuver. Recently, the problem of lunar exploration has become topical again. A satellite equipped with a magnetic attitude control system and a pitch flywheel has been considered. The system performance in the transient mode has been investigated. The characteristic exponents of the system have been approximated for a satellite on a circumpolar orbit. In the steady-state mode of gravitational attitude, small motions are considered in the vicinity of equilibrium. The attitude accuracy has been analyzed. An algorithm for maintaining an arbitrary but given attitude of the satellite in the orbital plane has been investigated. A numerical simulation has been performed. The possibility of the spacecraft's insertion into a system of operational heliocentric orbits has been analyzed. It has been proposed to use a system of several operational heliocentric orbits. On each orbit, the spacecraft makes one or more revolutions around the Sun. These orbits are characterized by a relatively small perihelion radius and relatively high inclination, which allows one to investigate the polar regions of the Sun. The transition of the spacecraft from one orbit to another is performed using an unpowered gravity assist maneuver near Venus and does not require the cruise propulsion to operate. Each maneuver transfers the spacecraft to the next orbit in the sequence of operational heliocentric orbits. 
We have analyzed several systems of operational heliocentric orbits into which the spacecraft can be inserted by means of the considered transportation system with electric propulsion (EP). The mass of the spacecraft delivered to these systems of operational orbits has been estimated. Control of an orbital tether system that consists of two small spacecraft has been considered. The proposed control laws are based on the modification of well-known programs for the deployment of tether systems under the assumption that the masses of the spacecraft and the tether are comparable in magnitude. To construct nominal deployment programs, we have developed a mathematical model of the motion of the given system in an orbital moving coordinate system, taking into account the specific features of this problem. The performance of the proposed deployment programs is assessed by a mathematical model of the orbital tether system with distributed parameters written in the geocentric coordinate system. The test calculations involve a linear regulator that implements feedback on the tether length and velocity. This article provides a historical contextualization of Corporate Social Responsibility (CSR) and its political role. CSR, we propose, is one form of business-society interactions reflecting a unique ideological framing. To make that argument, we compare contemporary CSR with two historical ideal-types. We explore in turn paternalism in nineteenth-century Europe and managerial trusteeship in the early twentieth-century US. We outline how the political responsibilities of business were constructed, negotiated, and practiced in both cases. This historical contextualization shows that the frontier between economy and polity has always been blurry and shifting and that firms have played a political role for a very long time. 
It also allows us to show how the nature, extent, and impact of that political role changed through history and co-evolved in particular with shifts in dominant ideologies. Globalization, in that context, is not the driver of the political role of the firm but a moderating phenomenon contributing significantly to the dynamics of this shift. The comparison between paternalism, trusteeship, and contemporary CSR points to what can be seen as functional equivalents: alternative patterns of business-society interactions that each correspond, historically, to unique and distinct ideological frames. We conclude by drawing implications for future theorizing on (political) CSR and stakeholder democracy. In this paper we seek to uncover and analyse unitarist ideology within the field of HRM, with particular emphasis on the manner in which what we call 'new unitarism' is ideologically performative in HRM scholarship. Originally conceived of as a way of understanding employer ideology with regard to the employment relationship, unitarist frames of reference conceive of a workplace that is characterised by shared interests and a single source of authority. This frame has continuously evolved and persistently shaped thinking about HRM; however, this influence has been largely covert and unexamined. Using an epistemic analysis informed by theories of knowledge, we examine new unitarism against three types of validity claims (descriptive, normative, and instrumental) in order to understand how it has been ideologically constitutive of HRM scholarship. We consider the implications of this analysis for HRM research and practice and contend that an alternative frame, namely 'new pluralism', has potential to offer a more valid account of the employment relationship, to provide a framework for assessing how power affects the pursuit of employee interests, and to allow space for taking up deeply ethical questions related to employment. 
Cultural diversity is an increasingly important phenomenon that affects not only social and political harmony but also the cohesion and efficiency of organisations. The problems that firms have with regard to managing cultural diversity have been abundantly studied in recent decades from the perspectives of management theory and moral philosophy, but there are still open questions that require deeper reflection and broader empirical analysis. Managing cultural diversity in organisations is of prime importance because it involves harmonising different values, beliefs, credos and customs, and, in essence, human identity. Taking these cultural differences into consideration and harmonising them is a human rights issue (UNDP, Cultural liberty in today's diverse world, 2004) and a central dimension of corporate social responsibility. Here we focus on theoretical reflection about the ideas that lie behind corporate policies and organisational initiatives that deal with cultural diversity. The aim of our paper is twofold: to present a critical reflection on the ideology of tolerance, and to propose an ideology of respect for dealing with cultural diversity. We start by presenting the plurality of interpretations of the concept of ideology and justify its applicability to the field of cultural diversity. We then reflect on the differences between "tolerance" and "respect" and identify the practical implications for managing cultural diversity. Finally, we propose a culture of respect that goes beyond tolerance and complements and legitimizes the "business case" perspective for managing cultural diversity in companies. The ideology of respect is based on the Kantian tradition and on the discursive approach, where rational dialogue and argumentation are considered the legitimate process for creating a culture of intercultural respect. 
From this theoretical discussion of the key philosophical concepts we can suggest some general principles for managing cultural diversity in organisations. This paper explores the role of ideology in attempts to influence public policy and in business representation in the EU-China solar panel anti-dumping dispute. It exposes the dynamics of international activity by emerging-economy multinationals, in this case from China, and their interactions in a developed-country context (the EU). Theoretically, the study also sheds light on the recent notion of 'liability of origin', in addition to the traditional concept of 'liability of foreignness' explored in international business research, in relation to firms' market and political strategies and their institutional embeddedness in home and host countries. Through a qualitative analysis of primary and secondary materials and interview data with key protagonists, we provide a detailed evolution of the case, the key actors involved and their positions, arguments and strategies. This illustrates the complexities involved in the interaction between markets and ideologies in the midst of debates regarding different forms of subsidy regimes for renewable energy, free trade versus protectionist tendencies by governments, and the economic and sustainability objectives of firms and societies. The case shows how relative newcomers to the EU market responded to overcome a direct threat to their business and became, with support from their home government, active participants in the public debate through interactions with local commercial partners and non-governmental organisations. Firms adopted relatively sophisticated strategies to reduce their liabilities vis-à-vis host-country institutions and local stakeholders, including collective action, to increase their legitimacy and reputation, and counter ideologically based attacks. We also discuss implications and limitations. 
The purpose of this paper is to examine the outcomes arising from ideologically driven health reforms, which confronted an enduring socialized model of public health care in New Zealand. The primary focus is on the narratives arising from the unprecedented strike action of junior doctors, symbolic of industrial unrest in the public health sector. Analysis revealed the way in which moral obligations ingrained in the professional identities of junior doctors can be both enacted and persistently challenged by ongoing and extensive ideologically embedded reform. A socialized public healthcare system privileges cooperation and relies on a public service ethos, espousing commitment and goodwill of health professionals. The inverse tenets of a pursuit of efficiency through New Public Management validate an ideology of market principles which legitimate competitive and self-interested behavior. The value-based disconnect that occurred not only affected the goodwill and trust of junior doctors, but also destabilized their commitment to their work in a way that threatened the ongoing sustainability of the public health service. This paper suggests four areas in which research might help solve the ethical conundrums currently undermining a public health service. The suggested direction moves the emphasis on systems and activities toward the cognitive and emotional needs of healthcare professionals. This article offers a first step toward a multi-level theory linking social movements to corporate social initiatives. In particular, building on the premise that social movements reflect ideologies that direct behavior inside and outside organizations, this essay identifies mechanisms by which social movements induce firms to engage with social issues. First, social movements are able to influence the expectations that key stakeholders have about firms' social responsibility, making corporate social initiatives more attractive. 
Second, through conflict or collaboration, they shape firms' reputation and legitimacy. And third, social movements' ideologies manifest inside corporations by triggering organizational members' values and affecting managerial cognition. The essay contributes to the literatures on social movements and CSR, extends the understanding of how ideologies are manifested in movement-business interactions, and generates rich opportunities for future research. Research on social entrepreneurship has taken an increasing interest in issues pertaining to ideology. In contrast to existing research, which tends to couch 'ideology' in pejorative terms (i.e., something which needs to be overcome), this paper conceives ideology as a key mechanism for rendering social entrepreneurship an object with which people can identify. Specifically, drawing on qualitative research of arguably one of the most prolific social entrepreneurship intermediaries, the global Impact Hub network, we investigate how social entrepreneurship is narrated as an 'ideal subject,' which signals toward others what it takes to lead a meaningful (working) life. Taking its theoretical cues from the theory of justification advanced by Boltanski, Chiapello and Thévenot, and from recent affect-based theorizing on ideology, our findings indicate that becoming a social entrepreneur is considered not so much a matter of struggle, hardship, and perseverance but rather of 'having fun.' We caution that the promise of enjoyment which pervades portrayals of the social entrepreneur might cultivate a passive attitude of empty 'pleasure' which effectively deprives social entrepreneurship of its more radical possibilities. The paper concludes by discussing the broader implications of this hedonistic rendition of social entrepreneurship and suggests a re-politicization of social entrepreneurship through a confrontation with what Slavoj Žižek calls the 'impossible'. 
In a society where the ideology of shareholder value maximization (SVM) prevails, how do evaluators make appraisal and bonus decisions when corporate social responsibility (CSR) measures and financial measures in the balanced scorecard (BSC) point in different directions? To explore this question, we conducted two studies to develop and test a conceptual framework. Participants were asked to evaluate the performance of two managers, using a case we wrote about a commercial bank. We found that (1) evaluators are more willing to drop CSR performance measures than financial measures from the evaluations; (2) perceived CSR relevance is influenced by where evaluators stand in regard to CSR ("stakeholder view" in the "Perceptions of the Role of Ethics and Social Responsibility" or PRESOR scale) and also by where evaluators believe shareholders stand (shareholder support); and (3) there is a financial bias in appraisal and bonus decisions when CSR measures are used in the BSC, consistent with SVM ideology. We conclude by discussing the implications of the influence of SVM ideology on the use of CSR measures in terms of business research, practice, and education. We propose, in this article, a pluralistic theory of ethics programs orientations, empirically derived from the statistical analysis of responses to an ad hoc questionnaire on organizational ethics practices. The results of our research identify six different orientations to ethics programs, corresponding to as many types of organizational ethics practices. This model goes beyond the traditional opposition between a compliance orientation, focused on the regulation of behavior and the detection of deviance, and a values-based orientation, which is said to be more reflective and enabling, and allows for a more sophisticated understanding of the composition of ethics programs. 
Drawing from the theory of requisite variety, we suggest that these six orientations are not to be considered in opposition but rather as complementary and even synergistic. In doing so, this pluralism has the potential to counter the deleterious effects of dominant logics and ideologies in organizations. Our model thus allows for a more empirical analysis of organizational ethics practices and programs and provides a new analytical framework for research and practice in considering the principle of requisite variety in ethics management. This represents, in our opinion, a contribution to the advancement of knowledge and practice in organizational ethics. When analyzing the productivity and efficiency of firms, stochastic frontier models are very attractive because they allow one, as in typical regression models, to introduce some noise in the Data Generating Process. Most of the approaches so far have used very restrictive, fully parametric specifications, both for the frontier function and for the components of the stochastic terms. Recently, local MLE approaches were introduced to relax these parametric hypotheses. In this work we show that most of the benefits of the local MLE approach can be obtained with fewer assumptions and much easier, faster and numerically more robust computations, by using nonparametric least-squares methods. Our approach can also be viewed as a semi-parametric generalization of the so-called "modified OLS" that was introduced in the parametric setup. While the final evaluation of individual efficiencies requires, as in the local MLE approach, the local specification of the distributions of noise and inefficiency, it is shown that a lot can be learned about the production process without such specifications. Even elasticities of the mean inefficiency can be analyzed with an unspecified noise distribution and a general class of local one-parameter scale families for inefficiency. 
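The parametric "modified OLS" idea referred to above can be sketched as follows, under one common (here assumed) specification with normal noise and half-normal inefficiency: fit OLS, use the third moment of the residuals to estimate the inefficiency scale, and shift the intercept up to the frontier by the mean inefficiency. This is a moment-based sketch, not the semi-parametric method of the paper.

```python
import numpy as np

def mols_frontier(y, x):
    """Modified OLS (MOLS) for a production frontier y = b0 + b1*x + v - u,
    assuming normal noise v and half-normal inefficiency u (an assumption
    of this sketch, not a requirement of the semi-parametric approach)."""
    X = np.column_stack([np.ones(len(y)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b                              # OLS residuals (mean ~ 0)
    m3 = np.mean(e**3)                         # negatively skewed if u is present
    k = np.sqrt(2 / np.pi) * (4 / np.pi - 1)   # third central moment of u is k*sigma_u**3
    sigma_u = max(-m3 / k, 0.0) ** (1 / 3)
    b = b.copy()
    b[0] += sigma_u * np.sqrt(2 / np.pi)       # shift intercept up by E[u]
    return b, sigma_u
```

OLS recovers the slopes consistently because the composed error v - u only shifts the intercept; the moment correction then recovers the frontier level.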
This allows one to discuss the variation in inefficiency levels with respect to explanatory variables with minimal assumptions on the Data Generating Process. We consider the benchmark stochastic frontier model where inefficiency is directly influenced by observable determinants. In this setting, we estimate the stochastic frontier and the conditional mean of inefficiency without imposing any distributional assumptions. To do so we cast this model in the partly linear regression framework for the conditional mean. We provide a test of correct parametric specification of the scaling function. An empirical example is also provided to illustrate the practical value of the methods described here. This study incorporates carbon dioxide emissions in productivity measurement in the airline industry and examines the determinants of productivity change. For this purpose a two-stage analysis under joint production of good and bad outputs is employed to compare the operational performance of airlines. In the first stage, productivity indices are derived using the Luenberger productivity indicator. In the second stage, the productivity change scores derived therefrom are regressed using random-effects Generalized Least Squares to quantify the determinants of productivity change. The paper finds that low-cost carriers and the average number of hours flown per aircraft have a positive impact on productivity under the joint production model, while the demand variable negatively impacts productivity under the market model. Empirical studies have often shown wide differences in productivity among firms. Although several studies have sought to identify factors causing such differences, only a few studies have examined the effects of risk and risk aversion on productivity. In this study, using Norwegian dairy farming data for 2009, we examined the effects of different aspects of risk on productivity. We used a range of variables to construct indices of risk taking, risk perception and risk management. 
These indices were then included as arguments in an input distance function which represents the production technology. Our results show that these risk indices did affect productivity. Regional differences in productivity, though small, were also found to exist, suggesting that unobserved edaphic factors that differ between regions also affected productivity. In recent years, England and Wales have suffered droughts. This unusual situation defies the common belief that the British climate provides abundant water resources and has prompted the regulatory authorities to impose bans on superfluous uses of water. Furthermore, a large percentage of households in England consume unmetered water, which is detrimental to water saving efforts. Given this context, we estimate the shadow price of water using panel data from reports published by the Office of Water Services (Ofwat) for the period 1996 to 2010 (three regulatory periods). These shadow prices are derived from a parametric multi-output, multi-input input distance function characterized by a translog technology. Following O'Donnell and Coelli (2005), we use a Bayesian econometric framework in order to impose regularity (monotonicity and curvature) conditions on a highly flexible technology. Consequently, our results can be interpreted at the firm level without the need to base the analysis on averages. Our estimations offer guidance for regulation purposes and provide an assessment of how the water supply companies deal with water losses under each regulatory period. The relevance of the study is quite general, as water scarcity is a problem that will become more important with population growth and the impact of climate change. This paper aims to explore the impact of Information and Communication Technologies (ICT) on labor productivity growth in Turkish manufacturing. This is the first attempt at exploring the impact of ICT on productivity in Turkish manufacturing at the firm level. 
The analysis is based on firm-level data obtained from the Turkish Statistical Institute (TURKSTAT) and covers the period from 2003 to 2012. The data used in the analysis include all firms employing 19+ workers in the Turkish manufacturing industry. Growth accounting results show that the contributions of conventional and ICT capital to value added growth are not significantly different from each other. On the other hand, results based both on static (fixed-effects) and dynamic panel data analysis highlight the positive influence on firms' productivity exerted by ICT capital. The findings show that the impact of ICT capital on productivity is larger by about 25 to 50% than that of conventional capital. This contribution of ICT capital is higher than that of non-ICT capital for small-sized and low-tech firms. Our findings imply that investing in ICT capital increases firm productivity by increasing the productivity of labor, and also that conventional growth accounting approaches may not be adequate to identify such linkages. This paper examines the impact of investment in research and innovation on Australian market sector productivity. While previous studies have largely focused on a narrow class of private sector intangible assets as a source of productivity gains, this paper shows that there is a broad range of other business sector intangible assets that can significantly affect productivity. Moreover, the paper pays special attention to the role played by public funding for research and innovation. The empirical results suggest that there are significant spillovers to productivity from public sector R&D spending on research agencies and higher education. No evidence is found for productivity spillovers from indirect public funding for the business enterprise sector, civil sector or defence R&D. These findings have implications for government innovation policy as they provide insights into possible productivity gains from government funding reallocations. 
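The growth-accounting decomposition used in ICT studies of the kind discussed above attributes value-added growth to share-weighted input growth rates, with total factor productivity (TFP) as the residual. The sketch below illustrates the arithmetic only; the input names, shares, and growth rates are made-up numbers, not figures from the study.

```python
def growth_contributions(dln_y, growth, shares):
    """Growth accounting: value-added growth dln_y is decomposed into
    share-weighted input growth rates plus a TFP residual.
    `growth` and `shares` map input names to growth rates / income shares
    (shares assumed to sum to one)."""
    contrib = {k: shares[k] * growth[k] for k in growth}
    contrib["tfp"] = dln_y - sum(contrib.values())
    return contrib
```

Separating an "ict_capital" entry from conventional capital is exactly what lets such studies compare the two contributions to value-added growth.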
In this article we investigate how secondary level students' mathematical competence develops within one school year. Mathematical competence was tested in grade 9 and again in grade 10, both in terms of mathematical literacy in the sense of PISA (n = 4610 students) and in terms of the German national educational standards (subsample of n = 3351 students). The development of mathematical competence in terms of both operationalizations within one year is reported for the overall population. Additionally, we investigate differences related to students' gender, immigrant background and school type. Results show virtually no increase of mathematical literacy in the sense of PISA from grade 9 to grade 10. By contrast, the competence according to the German educational standards, which is more closely aligned with school curricula, increases by about one quarter of a standard deviation. Results are discussed in the light of similarities and differences of the two operationalizations of mathematical competence. This study examines a linkage between the international mathematics scale of the Programme for International Student Assessment (PISA 2012) and the mathematics assessment taken from the German National Assessment 2012 (NA 2012). Previous comparisons of the studies' theoretical frameworks confirmed a high conceptual similarity. Although the frameworks distinguish four (PISA 2012) and five (NA 2012) content areas, respectively, a systematic comparison shows a high conceptual overlap between both studies. There is also a high correlation between both mathematics tests (r = 0.82). This study aims to examine further how far both reporting scales can be linked to each other and to what extent the proficiency levels in PISA and in the NA are comparable. The results show that the score distributions of both studies are approximately normally distributed. 
The results of an equipercentile equating lead to a distribution of PISA-equivalent scores that is close to the distribution of the original PISA scores. For the total population, the two distributions differ neither in mean, standard deviation, skewness, and kurtosis, nor in the distribution across the PISA proficiency levels. There was evidence for a high quality of scale linking. As expected, the cut score for mathematics competency at which learners are counted among the PISA risk group is lower than that of the national education standards of mathematics for the middle school track and higher than that for the lower school track. The article closes with a discussion of the results and the study's limitations. Public availability, high data quality, and sample characteristics make data from international large-scale assessments (ILSA) attractive for secondary analyses, and more and more researchers use them to analyze teaching and learning in students' classrooms. However, for several reasons ILSA data are limited in scope, and conclusions on educational effectiveness are restricted. This study specifically intends to evaluate the potential danger of overestimating the effectiveness of indicators of teaching and learning based on ILSA data. We take advantage of two extensions to the German sample for the Programme for International Student Assessment (PISA) 2012 study: the sample was enhanced a) by adding students from entire 9th grade classrooms and b) by a repeated measurement, re-testing all participants one year later in 10th grade. This study analyzed eight indicators of students' reports on teaching and learning in their mathematics classrooms. 
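The equipercentile equating used in the linking study above can be sketched in a few lines: a score on one scale is mapped to the score on the other scale that has the same percentile rank. The two score samples below are simulated normal draws, not the NA or PISA data.

```python
import numpy as np

# Hedged sketch of equipercentile equating between two score scales.
# Both samples are simulated; means/SDs are illustrative only.
rng = np.random.default_rng(0)
na_scores = rng.normal(500, 100, 5000)     # stand-in for the national assessment scale
pisa_scores = rng.normal(500, 90, 5000)    # stand-in for the PISA scale

def equate(x, source, target):
    """Return the target-scale equivalent of score x via percentile matching."""
    p = (source < x).mean()                # percentile rank of x in the source sample
    return np.quantile(target, p)          # target score at the same percentile

equated = np.array([equate(x, na_scores, pisa_scores) for x in na_scores[:100]])
```

By construction the mapping is monotone, so the ordering of students is preserved while the score distribution is adjusted toward the target scale.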
Regression analyses in multi-level structural equation models (doubly-latent models) compared results of cross-sectional analyses (controlling for individual background and classroom level characteristics) with a longitudinal extension (controlling for prior achievement). Results only partially confirm the common practice of using ILSA data to study the educational effectiveness of classroom level processes. Cross-sectional analyses overestimate effectiveness results, but three out of four indicators that are cross-sectionally related to student achievement are also related to students' achievement development in grade 10. Assuming that previous studies modeled and interpreted the data correctly, the importance of their results in debates on educational effectiveness is thus underlined. The study analyzes the effects of grade retention on the development of mathematics competence and on motivation and beliefs in the field of mathematics using data from a representative sample of 9th graders in Germany (PISA 2012 and a repeated measurement one year later). Same-age comparisons were applied between a group of retained students (N = 89) and a matched comparison group of promoted students (N = 89), applying the method of Propensity Score Matching (PSM). The results show no differential effects in the development of mathematics competence between retained students and promoted students. Concerning motivation and beliefs in the field of mathematics, the retained students significantly increased their mathematics work ethics compared to the promoted group. However, no group-specific effects could be found for mathematics interest, self-efficacy, instrumental motivation, and behavior. In this study we examine the relationship between students' mathematical competencies and family background variables. The backdrop of this investigation is a follow-up measurement to the 2012 Programme for International Student Assessment (PISA) in 2013. 
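The matching step behind the grade-retention comparison above (1:1 propensity score matching without replacement) can be sketched as follows. The propensity scores are simulated, not estimated from the PISA data, and the group sizes are illustrative.

```python
import numpy as np

# Hedged sketch of 1:1 nearest-neighbour propensity score matching
# without replacement; all scores below are simulated stand-ins.
rng = np.random.default_rng(1)
ps_retained = rng.uniform(0.2, 0.8, 10)    # propensity scores of treated (retained) students
ps_promoted = rng.uniform(0.0, 1.0, 200)   # scores of the promoted comparison pool

matches = []
available = ps_promoted.copy()
for p in ps_retained:
    j = np.argmin(np.abs(available - p))   # closest promoted student still available
    matches.append(available[j])
    available = np.delete(available, j)    # without replacement: remove the match

matches = np.array(matches)
```

In practice the propensity scores would first be estimated (e.g. by logistic regression of retention on background covariates), and covariate balance between the matched groups would be checked before comparing outcomes.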
The large influence of family background variables on student achievement in Germany has been documented in numerous studies. Yet, more recent trend analyses in the context of PISA show improvement in this area. This circumstance underscores the need for a longitudinal investigation of family background variables and their relationship with students' mathematical competence. Multilevel analyses stress the great importance of prior competencies and support prior research indicating that much of the achievement differences due to migration status come down to differences in financial resources. Controlling for class composition had only limited effects, but decreased the role of migration status even further. Results are discussed in light of recent research. Thus far, only little evidence exists on the development of students' science competencies in Germany. The present paper examines, in the context of the PISA 2012/2013 longitudinal study, how science competencies develop from 9th to 10th grade and whether there are differences between students depending on school track, gender, and class membership. Besides PISA science competencies, curriculum-oriented competencies in content knowledge and scientific inquiry for the subjects biology, physics, and chemistry were examined. For PISA, results showed no changes on average. For all subjects, content knowledge increased; scientific inquiry only increased in physics. Differences occurred between the highest school track and the other school tracks. Furthermore, girls developed more favorably in PISA and biology as compared to boys, given their initial background. Composition effects could only be found for the average achievement level of a class in PISA, but not for the curriculum-oriented science competencies. For the average composition of the social background, no effects were found. This paper examines the development of proficiency in reading from grades 9 to 10. 
In addition to the proficiency development in the overall population, relationships of proficiency gains with institutional (school type), familial (immigration background and socioeconomic family status) and individual characteristics (gender) were examined. In accordance with current results, according to which the development of reading proficiency slows down in later stages of schooling, we found no meaningful proficiency increase in the overall sample. Furthermore, the explanatory variables showed no reliable relationships with proficiency growth. However, the analyses provided strong evidence that students' persistence in working thoroughly on the test, indicated by position effects, changed across measurement occasions. Furthermore, we found changes in position effects to be related to the explanatory variables considered in this paper. Stronger decreases in persistence were detected in male students, students with less favorable socio-economic background, and students from nonacademic tracks. Non-consideration of test persistence led to biased estimates of the effects of the explanatory variables school type and gender, which upon closer inspection were not related to real proficiency growth. This study investigates relations between reading comprehension, basic processes of reading at the word and sentence level, and working memory in 15-year-old adolescents. We considered whether differences in the efficiency of the investigated components can explain reading competence as well as changes in reading competence after one year. The study is based on reading data from the Programme for International Student Assessment (PISA) 2012 and a longitudinal follow-up study in 2013 (PISA Plus). Additionally, basic skills in word recognition, semantic integration, and working memory were assessed as part of a national add-on study in 2012. The results show that reading competence in 2012 was predicted by the investigated cognitive basic skills. 
Although reading comprehension improved after one year, this change was explained neither by word recognition, nor semantic integration, nor working memory. Theoretical and practical implications are discussed critically. This paper deals with the item response theory scaling of the tests in the PISA longitudinal assessment 2012/2013. It presents analyses testing longitudinal measurement invariance of achievement tests in the domains of mathematics, science and reading comprehension. The analyses showed that conventional approaches to investigating longitudinal measurement invariance revealed no indication of meaningful violations of time-invariant item difficulties. On the other hand, taking into account complex booklet effects, extended analyses indicated the existence of test context effects due to item positioning. At both measurement points the PISA tests were affected by position effects, these effects being especially noteworthy and increasing over the time points in the group of nonacademic track schools. The findings of the presented study were used to derive corrections in order to counteract distortions of the results by the selective sample dropout as well as the increase in position effects. The limitations of these corrections are discussed. The present article is devoted to the IRT scaling of the tests used in the PISA longitudinal study 2012/2013 that verify educational standards for lower secondary school certification. It presents analyses investigating the agreement of freely estimated item parameters with the parameters calibrated in the national assessment 2012, and describes the estimation of competence levels. In addition, analyses are presented on the consequences of the unbalanced test design implemented in the retest assessment. Results indicated that the item parameters estimated for non-academic and academic track school types closely matched the pre-calibrated parameters. 
With few exceptions, increases in competence levels during the 10th grade were estimated for both academic and non-academic track school types on the basis of ability parameters estimated by means of the plausible value technique. Further analyses suggested, however, that distortions in the growth estimate are to be expected due to the unbalanced test design administered in the second survey period. Implications of these findings for the evaluation of the competence gains are discussed. This article presents the empirical basis of the PISA longitudinal study 2012-2013. The composition of the additional national sample of 9th grade students is illustrated. Findings with regard to non-participation (dropout) at the second point of measurement are presented at the level of schools and students. The results show that dropout is related to characteristics of family background, the level of competence in the three domains (mathematics, reading and science) in 2012, and other student and school demographic variables at the level of schools as well as students. These findings are taken up in order to derive weight adjustments, which can counteract a systematic bias of the results caused by the selective dropout. The limits of the possibilities for correction are discussed. The PISA longitudinal study is characterized by a mandatory assessment at the first point of measurement followed by a voluntary test at the second point of measurement. Thus, the present article provides valuable information about probable causes of non-participation in voluntary studies and the associated risk of systematic bias for population estimates. The customer value proposition (CVP) has a critical role in communicating how a company aims to provide value to customers. 
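The plausible value technique referred to in the scaling discussion above combines several imputed ability draws per student; a common way to pool them follows Rubin's rules for multiple imputation. The sketch below uses simulated plausible values, not the PISA data, and a deliberately simplified sampling-variance term.

```python
import numpy as np

# Hedged sketch of combining M plausible values (PVs) per student with
# Rubin's rules to estimate a group mean; PVs are simulated stand-ins.
rng = np.random.default_rng(2)
pvs = rng.normal(500, 90, size=(1000, 5))   # 1000 students, M = 5 PVs each

means = pvs.mean(axis=0)                    # point estimate from each PV
point = means.mean()                        # combined point estimate
# Simplified within-imputation sampling variance (real analyses would use
# replicate weights here):
within = (pvs.std(axis=0, ddof=1)**2 / len(pvs)).mean()
between = means.var(ddof=1)                 # between-imputation variance
total_var = within + (1 + 1/pvs.shape[1]) * between
```

The total variance inflates the naive sampling variance by the between-PV component, reflecting the measurement uncertainty that a single ability estimate per student would hide.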
Managers and scholars increasingly use CVP terminology, yet the concept remains poorly understood and implemented; relatively little research on this topic has been published, considering the vast breadth of investigations of the value concept. In response, this article offers a comprehensive review of fragmented CVP literature, highlighting the lack of a strong theoretical foundation; distinguishes CVPs from related concepts; proposes a conceptual model of the CVP that includes antecedents, consequences, and moderators, together with several research propositions; illustrates the application of the CVP concept to four contrasting companies; and advances a compelling agenda for research. Manufacturers invest in customer solutions to differentiate their offerings and sustain profitability despite declining margins from goods sales. Notwithstanding strong managerial and academic interest, an examination of whether and explanations for when and why solutions translate into superior performance are lacking. We test hypotheses developed from the resource-based theory and transaction cost economics, supplemented with in-depth theory-in-use interviews, on primary and secondary data collected from 175 manufacturers. From a model that corrects for endogeneity, the findings suggest that, compared with other service offerings, solutions are associated with increased return on sales. This positive profitability effect is enhanced in firms with greater sales capabilities; it is stronger in industries with greater buyer power but weaker in technology-intensive industries. These results caution against the simplistic view of solutions as a universal route to gaining competitive advantage and aid in better identifying the role of solutions in a manufacturer's offering portfolio. Substantial research has examined how stock market reactions to marketing actions affect subsequent marketing decisions. 
However, prior research provides limited insights into whether abnormal stock returns to a marketing action actually predict the future performance resulting from that action. This study focuses on new product preannouncements (NPPAs) and investigates the relationship between short-term stock market returns to an NPPA and post-launch new product performance under various industry and firm conditions. Findings based on a dynamic panel data analysis of 208 NPPAs in the U.S. automotive industry between 2001 and 2014 reveal that stock returns associated with an NPPA are not an appropriate forward-looking measure of future product performance. However, under specific conditions (i.e., when the preannouncement is specific, the preannounced new product has low innovativeness, the preannouncing firm has a high reputation and invests heavily in advertising, and the preannouncement environment is less competitive), abnormal stock returns to NPPAs actually predict the future performance of new products. Thus, this study extends the marketing-finance and innovation literature with its focus on the conditions that affect the predictive power of immediate stock returns for the future performance of new products. It is widely accepted, and demonstrated in the marketing literature, that negative online word of mouth (NOWOM) has a negative impact on brands. The present research, however, finds the opposite effect among individuals who feel a close personal connection to the brand, a group that often contains the brand's best customers. A series of three studies show that, when self-brand connection (SBC) is high, consumers process NOWOM defensively, a process that actually increases their behavioral intentions toward the brand. Study 1 demonstrates this effect using an experimental manipulation of SBC related to clothing brands, and provides process evidence by analyzing coded thought listings. 
Study 2 provides convergent evidence by measuring SBC associated with smartphones, and follow-up analyses show that as SBC increases, the otherwise negative effect of NOWOM steadily transforms to become significantly positive. Study 3 replicates these results using a combination of a national survey conducted by J.D. Power investigating hotel stays and data drawn from TripAdvisor. Results of all three studies, set in product categories with varying levels of identity relevance, support the positive effects of NOWOM for high-SBC customers and have implications for both managers and researchers. For suppliers to secure a positive return on their investment in the creation of component innovations, they must ensure that original equipment manufacturers (OEMs) implement these component innovations. The objectives of this study are to (1) identify the activities that suppliers undertake to foster innovation implementation, (2) articulate the theoretical mechanisms that mediate the impact of these activities on innovation implementation, and (3) establish the inter-related effects of these mediating theoretical mechanisms on innovation implementation. The results from a survey of 173 supplier-OEM dyads reveal that functional advantage is the theoretical mechanism that mediates the impact of two supplier actions (knowledge acquisition and installation support) on innovation implementation. The results also show that reputational advantage mediates the effect of innovative OEM endorsement and that relational advantage mediates the effects of supplier asset specificity and supplier innovativeness on innovation implementation. In terms of the inter-related effects of the mediating mechanisms on innovation implementation, the results indicate that relational advantage complements the positive impact of functional advantage on innovation implementation. 
In addition, the results reveal that innovation implementation has a positive impact on the financial performance of component innovations. A top priority among retailers is enhancing the consumer's shopping experience. With the number of private label products increasing at the same time retailers are shedding slower-moving products, understanding how private labels impact the consumer's experience at the retail shelf becomes critical. While one might think that private labels, in particular those that look similar to their national brand counterparts (i.e., copycat private labels), may hinder the shopping experience by making it more difficult for consumers to choose a product, we find the exact opposite. Adopting a fluency perspective, we show that when copycat private labels are included in a shelf set, consumers with high knowledge of the category experience greater choice ease, and as a result they subsequently evaluate their chosen product more favorably. Importantly, the choice ease and evaluations of novice consumers are found to be unaffected. Consequently, this research provides insights for retailers and manufacturers on how and why copycat private labels positively impact an important aspect of the consumer's shopping experience (i.e., choice ease). Although the robust point matching algorithm has been demonstrated to be effective for non-rigid registration, there are several issues with the adopted deterministic annealing optimization technique. First, it is not globally optimal and regularization on the spatial transformation is needed for good matching results. Second, it tends to align the mass centers of the two point sets. To address these issues, we propose a globally optimal algorithm for the robust point matching problem in the case that each model point has a counterpart in the scene set. 
By eliminating the transformation variables, we show that the original matching problem is reduced to a concave quadratic assignment problem where the objective function has a low-rank Hessian matrix. This facilitates the use of large-scale global optimization techniques. We propose a modified normal rectangular branch-and-bound algorithm to solve the resulting problem, where multiple rectangles are simultaneously subdivided to increase the chance of shrinking the rectangle containing the global optimal solution. In addition, we present an efficient lower bounding scheme which has a linear assignment formulation and can be efficiently solved. Extensive experiments on synthetic and real datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods in terms of robustness to outliers, matching accuracy, and run-time. Parametric maximum likelihood (ML) estimators of probability density functions (pdfs) are widely used today because they are efficient to compute and have several nice properties such as consistency, fast convergence rates, and asymptotic normality. However, data is often complex, making parametrization of the pdf difficult, and nonparametric estimation is required. Popular nonparametric methods, such as kernel density estimation (KDE), produce consistent estimators but are not ML and have slower convergence rates than parametric ML estimators. Further, these nonparametric methods do not share the other desirable properties of parametric ML estimators. This paper introduces a nonparametric ML estimator that assumes that the square root of the underlying pdf is band-limited (BL) and hence "smooth". The BLML estimator is computed and shown to be consistent. Although convergence rates are not theoretically derived, the BLML estimator exhibits faster convergence rates than state-of-the-art nonparametric methods in simulations. 
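The kernel density estimation baseline that the BLML estimator is compared against above can be sketched in a few lines: the estimate at a point is the average of Gaussian kernels centred on the sample points. The data and bandwidth below are illustrative, not from the paper's simulations.

```python
import numpy as np

# Hedged sketch of a Gaussian kernel density estimator (the nonparametric
# baseline discussed in the text); data and bandwidth are illustrative.
rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, 500)           # simulated sample from N(0, 1)

def kde(x, sample, h):
    """Gaussian KDE evaluated at x with bandwidth h."""
    z = (x - sample) / h
    return np.exp(-0.5 * z**2).sum() / (len(sample) * h * np.sqrt(2 * np.pi))

density_at_0 = kde(0.0, data, h=0.3)       # true N(0,1) density at 0 is about 0.399
```

The bandwidth h governs the bias-variance trade-off that limits KDE's convergence rate, which is the property the band-limited ML estimator aims to improve on.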
Further, algorithms to compute the BLML estimator with lower computational complexity than that of KDE methods are presented. The efficacy of the BLML estimator is shown by applying it to (i) density tail estimation and (ii) density estimation of complex neuronal receptive fields, where it outperforms state-of-the-art methods used in neuroscience. This paper proposes a unified theory for calibrating a wide variety of camera models such as pinhole, fisheye, catadioptric, and multi-camera networks. We model any camera as a set of image pixels and their associated camera rays in space. Every pixel measures the light traveling along a (half-) ray in 3-space associated with that pixel. By this definition, calibration simply refers to the computation of the mapping between pixels and the associated 3D rays. Such a mapping can be computed using images of calibration grids, which are objects with known 3D geometry, taken from unknown positions. This general camera model allows representing non-central cameras; we also consider two special subclasses, namely central and axial cameras. In a central camera, all rays intersect in a single point, whereas the rays are completely arbitrary in a non-central one. Axial cameras are an intermediate case: the camera rays intersect a single line. In this work, we show the theory for calibrating central, axial and non-central models using calibration grids, which can be either three-dimensional or planar. In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. 
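The "camera = pixels plus rays" view in the calibration abstract above has a simple concrete instance for a central pinhole camera: each pixel maps to a 3D ray through the camera centre via the inverse intrinsic matrix. The intrinsic parameters below are illustrative, not from the paper.

```python
import numpy as np

# Hedged sketch of the pixel-to-ray mapping for the central pinhole special
# case of the general camera model; K is an illustrative intrinsic matrix.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])      # focal length 800 px, principal point (320, 240)

def pixel_to_ray(u, v):
    """Unit direction of the 3D ray associated with pixel (u, v)."""
    d = np.linalg.solve(K, np.array([u, v, 1.0]))   # back-project homogeneous pixel
    return d / np.linalg.norm(d)

centre_ray = pixel_to_ray(320, 240)  # the principal point looks straight along +z
```

In the general (non-central) model no such closed form exists; the calibration instead estimates one ray per pixel directly from grid observations.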
By changing the net structures and training strategies, and adding and removing some key components in the detection pipeline, a set of models with large diversity is obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [1], which was the state-of-the-art, from 31 to 50.3 percent on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1 percent. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for understanding the deep learning object detection pipeline. Complex geometric variations of 3D models usually pose great challenges in 3D shape matching and retrieval. In this paper, we propose a novel 3D shape feature learning method to extract high-level shape features that are insensitive to geometric deformations of shapes. Our method uses a discriminative deep auto-encoder to learn deformation-invariant shape features. First, a multiscale shape distribution is computed and used as input to the auto-encoder. We then impose the Fisher discrimination criterion on the neurons in the hidden layer to develop a deep discriminative auto-encoder. Finally, the outputs from the hidden layers of the discriminative auto-encoders at different scales are concatenated to form the shape descriptor. The proposed method is evaluated on four benchmark datasets that contain 3D models with large geometric variations: the McGill, SHREC'10 ShapeGoogle, SHREC'14 Human and SHREC'14 Large Scale Comprehensive Retrieval Track Benchmark datasets. Experimental results on the benchmark datasets demonstrate the effectiveness of the proposed method for 3D shape retrieval. This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. 
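The mean average precision numbers quoted in the detection abstract above are built from per-class average precision (AP). One simple, non-interpolated AP variant (mean precision at each true-positive rank) can be sketched as follows; the ranked labels below are a toy example, not ILSVRC detections.

```python
import numpy as np

# Hedged sketch of a simple average precision (AP) computation over a
# confidence-ranked detection list; labels are a toy illustrative example.
labels = np.array([1, 1, 0, 1, 0, 0])   # detections sorted by confidence; 1 = true positive
tp = np.cumsum(labels)                  # running true-positive count
precision = tp / np.arange(1, len(labels) + 1)
recall = tp / labels.sum()

# Non-interpolated AP: mean precision at the ranks of the true positives.
ap = precision[labels == 1].mean()
```

Benchmarks such as PASCAL VOC and ILSVRC use interpolated variants of this quantity, and mAP averages the per-class APs.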
Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time-oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First-layer feature units operate on groups of pixels, while subsequent-layer feature units operate on the output of lower-level feature units. We report results on a previously published 36-class character recognition task and a four-class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven-class moving face recognition task, achieving 79 percent accuracy. In this paper, we present a label transfer model from texts to images for image classification tasks. The problem of image classification is often much more challenging than text classification. On the one hand, labeled text data is more widely available than labeled images for classification tasks. On the other hand, text data tends to have natural semantic interpretability, and it is often more directly related to class labels. In contrast, image features are not directly related to the concepts inherent in class labels. One of our goals in this paper is to develop a model for revealing the functional relationships between text and image features so as to directly transfer intermodal and intramodal labels to annotate the images. 
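A time-surface, as described in the event-based recognition abstract above, can be sketched as an exponential decay applied to the time elapsed since each pixel's most recent event in a local neighbourhood. The timestamps, window size, and decay constant below are illustrative, not sensor data.

```python
import numpy as np

# Hedged sketch of a time-surface over a 3x3 neighbourhood: exponentially
# decayed "time since last event" per pixel; values are a toy example.
tau = 50.0                                      # decay constant (ms), illustrative
t_now = 100.0                                   # timestamp of the current event (ms)
last_event_time = np.array([[90.0, 10.0, 95.0],
                            [60.0, 100.0, 30.0],
                            [0.0, 80.0, 70.0]]) # latest event time at each pixel

time_surface = np.exp(-(t_now - last_event_time) / tau)
```

Recently active pixels map to values near 1 and stale pixels decay toward 0, so the surface encodes local spatio-temporal context that higher layers can pool over larger windows.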
This is implemented by learning a transfer function as a bridge to propagate the labels between the two multimodal spaces. However, the intermodal label transfer can be undermined by blindly transferring the labels of noisy texts to annotate images. To mitigate this problem, we present an intramodal label transfer process, which complements the intermodal label transfer by transferring the image labels instead when relevant text is absent from the source corpus. In addition, we generalize the intermodal label transfer to the zero-shot learning scenario, where only text examples are available to label unseen classes of images without any positive image examples. We evaluate our algorithm on an image classification task and show its effectiveness with respect to the other compared algorithms. In this paper we present the latent regression forest (LRF), a novel framework for real-time, 3D hand pose estimation from a single depth image. Prior discriminative methods often fall into two categories: holistic and patch-based. Holistic methods are efficient but less flexible due to their nearest-neighbour nature. Patch-based methods can generalise to unseen samples by considering local appearance only. However, they are complex because each pixel needs to be classified or regressed during testing. In contrast to these two baselines, our method can be considered a structured coarse-to-fine search, starting from the centre of mass of a point cloud until locating all the skeletal joints. The searching process is guided by a learnt latent tree model which reflects the hierarchical topology of the hand. Our main contributions can be summarised as follows: (i) learning the topology of the hand in an unsupervised, data-driven manner; (ii) a new forest-based, discriminative framework for structured search in images, as well as an error regression step to avoid error accumulation; (iii) a new multi-view hand pose dataset containing 180 K annotated images from 10 different subjects. 
Our experiments on two datasets show that the LRF outperforms baselines and prior art in both accuracy and efficiency. A well-defined deformation model can be vital for non-rigid structure from motion (NRSfM). Most existing methods restrict the deformation space by assuming a fixed rank or smooth deformation, which is not exactly true in the real world, and they require the degree of deformation to be predetermined, which is impractical. Meanwhile, errors in rotation estimation can have severe effects on performance, e.g., they can cause a rigid motion to be misinterpreted as a deformation. In this paper, we propose an alternative to resolve these issues, motivated by an observation that non-rigid deformations, excluding rigid changes, can be concisely represented in a linear subspace without imposing any strong constraints, such as smoothness or low rank. This observation is embedded in our new prior distribution, the Procrustean normal distribution (PND), which is a shape distribution exclusively for non-rigid deformations. Because of this unique characteristic of the PND, rigid and non-rigid changes can be strictly separated, which leads to better performance. The proposed algorithm, EM-PND, fits a PND to given 2D observations to solve NRSfM without any user-determined parameters. The experimental results show that EM-PND gives state-of-the-art performance on the benchmark data sets, confirming the adequacy of the new deformation model. B-splines are commonly utilized to construct the transformation model in free-form deformation (FFD) based registration. B-splines become smoother with increasing spline order. However, a higher-order B-spline requires a larger support region involving more control points, which means higher computational cost. In general, the third-order B-spline is considered a good compromise between spline smoothness and computational cost. 
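The cubic (third-order) B-spline mentioned above as the usual compromise in FFD registration has a standard closed-form basis: four weight functions blend the four nearest control points along each dimension. The sketch below evaluates those weights at a local coordinate; it is a generic textbook form, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of the standard cubic B-spline basis used in FFD transforms;
# t is the local coordinate within a control-point cell.
def cubic_bspline_basis(t):
    """Weights of the 4 neighbouring control points at local coordinate t in [0, 1)."""
    return np.array([(1 - t)**3 / 6.0,
                     (3*t**3 - 6*t**2 + 4) / 6.0,
                     (-3*t**3 + 3*t**2 + 3*t + 1) / 6.0,
                     t**3 / 6.0])

w = cubic_bspline_basis(0.5)   # weights form a partition of unity (sum to 1)
```

A first-order (linear) spline would need only two control points per dimension, which is the computational saving the lower-order RPFFD variants exploit, at the cost of smoothness that the random perturbation technique then restores.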
A lower-order function is seldom used to construct the transformation model for registration since it is less smooth. In this research, we investigated whether lower-order B-spline functions can be utilized for more efficient registration, while preserving the smoothness of the deformation, by using a novel random perturbation technique. With the proposed perturbation technique, the expected value of the cost function given the probability density function (PDF) of the perturbation is minimized by a stochastic gradient descent optimization. Extensive experiments on 2D synthetically deformed brain images, and real 3D lung and brain scans, demonstrated that the novel randomly perturbed free-form deformation (RPFFD) approach improves the registration accuracy and transformation smoothness. Meanwhile, lower-order RPFFD methods reduce the computational cost substantially. This paper addresses classification tasks on a particular target domain in which labeled training data are only available from source domains different from (but related to) the target. Two closely related frameworks, domain adaptation and domain generalization, are concerned with such tasks, where the only difference between those frameworks is the availability of the unlabeled target data: domain adaptation can leverage unlabeled target information, while domain generalization cannot. We propose Scatter Component Analysis (SCA), a fast representation learning algorithm that can be applied to both domain adaptation and domain generalization. SCA is based on a simple geometrical measure, i.e., scatter, which operates on a reproducing kernel Hilbert space. SCA finds a representation that trades between maximizing the separability of classes, minimizing the mismatch between domains, and maximizing the separability of data, each of which is quantified through scatter. The optimization problem of SCA can be reduced to a generalized eigenvalue problem, which results in a fast and exact solution. 
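A reduction of this kind, maximizing one scatter combination while normalizing another, leads to a generalized eigenvalue problem of the form B v = lambda W v, which has an exact solution. The following sketch illustrates the mechanics with made-up scatter matrices; the names B, W and the data are ours for illustration, not the paper's notation:

```python
import numpy as np

# Illustrative generalized eigenvalue solve of the kind SCA reduces to:
# maximize v^T B v subject to v^T W v = 1, i.e. solve B v = lambda W v.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
B = X.T @ X / 100 + np.eye(6)                  # stand-in "maximize" scatter
W = 0.5 * X[::2].T @ X[::2] / 50 + np.eye(6)   # stand-in "minimize" scatter

# Whiten W via its Cholesky factor, then use a symmetric eigendecomposition
L = np.linalg.cholesky(W)
Li = np.linalg.inv(L)
evals, U = np.linalg.eigh(Li @ B @ Li.T)
V = Li.T @ U                                   # generalized eigenvectors

# Keep the top-k directions as the learned projection
k = 2
P = V[:, np.argsort(evals)[::-1][:k]]
Z = X @ P                                      # data in the reduced representation
```

Because the problem reduces to a single symmetric eigendecomposition, the solution is exact and fast, with no iterative optimization.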
Comprehensive experiments on benchmark cross-domain object recognition datasets verify that SCA performs much faster than several state-of-the-art algorithms and also provides state-of-the-art classification accuracy in both domain adaptation and domain generalization. We also show that scatter can be used to establish a theoretical generalization bound in the case of domain adaptation. Scale invariant feature detectors often find stable scales in only a few image pixels. Consequently, methods for feature matching typically choose one of two extreme options: matching a sparse set of scale invariant features, or dense matching using arbitrary scales. In this paper, we turn our attention to the overwhelming majority of pixels, those where stable scales are not found by standard techniques. We ask, is scale selection necessary for these pixels, when dense, scale-invariant matching is required, and if so, how can it be achieved? We make the following contributions: (i) We show that features computed over different scales, even in low-contrast areas, can be different, and selecting a single scale, arbitrarily or otherwise, may lead to poor matches when the images have different scales. (ii) We show that representing each pixel as a set of SIFTs, extracted at multiple scales, allows for far better matches than single-scale descriptors, but at a computational price. Finally, (iii) we demonstrate that each such set may be accurately represented by a low-dimensional, linear subspace. A subspace-to-point mapping may further be used to produce a novel descriptor representation, the Scale-Less SIFT (SLS), as an alternative to single-scale descriptors. These claims are verified by quantitative and qualitative tests, demonstrating significant improvements over existing methods. A preliminary version of this work appeared in [1]. 
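The subspace-to-point idea above can be sketched compactly: fit a low-rank subspace to the per-pixel multi-scale descriptor set via SVD, then flatten the (basis-independent) projection matrix into one fixed-length vector. Dimensions and data here are illustrative, not the actual SIFT sizes:

```python
import numpy as np

# One pixel's descriptors extracted at several scales (illustrative sizes)
rng = np.random.default_rng(0)
descs = rng.normal(size=(8, 32))       # 8 scales x 32-D descriptors

k = 2                                  # subspace dimension
_, _, vt = np.linalg.svd(descs, full_matrices=False)
basis = vt[:k]                         # (k, 32), orthonormal rows

# Subspace-to-point mapping: the orthogonal projector onto the subspace is
# independent of the chosen basis, so its upper triangle is a well-defined
# single-point descriptor.
P = basis.T @ basis                    # (32, 32) orthogonal projector
point = P[np.triu_indices(32)]         # 528-D point descriptor
```

Flattening the projector rather than the basis matters: two different orthonormal bases of the same subspace produce the same projector, hence the same descriptor.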
We propose a novel approach to semantic scene labeling in urban scenarios, which aims to combine excellent recognition performance with the highest levels of computational efficiency. To that end, we exploit efficient tree-structured models on two levels: pixels and superpixels. At the pixel level, we propose to unify pixel labeling and the extraction of semantic texton features within a single architecture, so-called encode-and-classify trees. At the superpixel level, we put forward a multi-cue segmentation tree that groups superpixels at multiple granularities. Through learning, the segmentation tree effectively exploits and aggregates a wide range of complementary information present in the data. A tree-structured CRF is then used to jointly infer the labels of all regions across the tree. Finally, we introduce a novel object-centric evaluation method that specifically addresses the urban setting with its strongly varying object scales. Our experiments demonstrate competitive labeling performance compared to the state of the art, while achieving near real-time frame rates of up to 20 fps. We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult so there will often be a significant number of outliers to handle. To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. 
Experimental results are given demonstrating that it is possible to reliably estimate the camera pose despite cases with more than 99 percent outlier correspondences in city-scale models with several millions of 3D points. An intuition on human segmentation is that when a human is moving in a video, the video-context (e.g., appearance and motion clues) may potentially infer reasonable mask information for the whole human body. Inspired by this, based on popular deep convolutional neural networks (CNN), we explore a very weakly supervised learning framework for the human segmentation task, where only an imperfect human detector is available along with massive weakly-labeled YouTube videos. In our solution, the video-context guided human mask inference and the CNN-based segmentation network learning iterate to mutually enhance each other until no further improvement is gained. In the first step, each video is decomposed into supervoxels by unsupervised video segmentation. The superpixels within the supervoxels are then classified as human or non-human by graph optimization with unary energies from the imperfect human detection results and the confidence maps predicted by the CNN trained in the previous iteration. In the second step, the video-context derived human masks are used as direct labels to train the CNN. Extensive experiments on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate that the proposed framework already achieves superior results to all previous weakly-supervised methods with object class or bounding box annotations. In addition, by augmenting with the annotated masks from PASCAL VOC 2012, our method reaches a new state-of-the-art performance on the human segmentation task. In our recent work, we showed that solving the LP relaxation of the pairwise min-sum labeling problem (also known as MAP inference in graphical models or discrete energy minimization) is not much easier than solving any linear program. 
Precisely, the general linear program reduces in linear time (assuming the Turing model of computation) to the LP relaxation of the min-sum labeling problem. The reduction is possible, though in quadratic time, even to the min-sum labeling problem with planar structure. Here we prove similar results for the pairwise min-sum labeling problem with attractive Potts interactions (also known as the uniform metric labeling problem). Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them "Networks on Convolutional feature maps" (NoCs). We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas the latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in the ImageNet and MS COCO challenges 2015. The angle between the RGBs of the measured and estimated illuminant colors, known as the recovery angular error, has been used to evaluate the performance of illuminant estimation algorithms. However, we noticed that this metric is not in line with how the illuminant estimates are used. 
Normally, the illuminant estimates are 'divided out' from the image to, hopefully, provide image colors that are not confounded by the color of the light. However, even when the reproduction result is the same, the same scene might have a large range of recovery errors. In this work, the scale of the problem with the recovery error is quantified. Next, we propose a new metric for evaluating illuminant estimation algorithms, called the reproduction angular error, which is defined as the angle between the RGB of a white surface when the actual and estimated illuminations are 'divided out'. Our new metric ties algorithm performance to how the illuminant estimates are used. For a given algorithm, adopting the new reproduction angular error leads to different optimal parameters. Further, the ranked list of best to worst algorithms changes when the reproduction angular error is used. The importance of using an appropriate performance metric is established. In the literature, references to EM estimation of product mixtures are not very frequent. The simplifying assumption of product components, e.g. diagonal covariance matrices in the case of Gaussian mixtures, is usually considered only as a compromise because of computational constraints or a limited dataset. We have found that product mixtures are rarely used intentionally as a preferable approximating tool. Probably, most practitioners do not "trust" the product components because of their formal similarity to "naive Bayes models." Another reason could be an unrecognized numerical instability of the EM algorithm in multidimensional spaces. In this paper we recall that the product mixture model does not imply the assumption of independence of variables. It is not even restrictive if the number of components is large enough. In addition, the product components increase the numerical stability of the standard EM algorithm, simplify the EM iterations and have some other important advantages. 
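The two angular-error metrics from the color-constancy discussion above can be sketched directly from their definitions; the illuminant RGB values below are made up for illustration:

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two RGB vectors, in degrees."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def recovery_error(actual, estimate):
    # Angle between the measured and estimated illuminant RGBs
    return angle_deg(actual, estimate)

def reproduction_error(actual, estimate):
    # Angle between the white surface reproduced under the estimate
    # (actual illuminant 'divided out' channel-wise by the estimate)
    # and pure white
    reproduced = actual / estimate
    return angle_deg(reproduced, np.ones(3))

actual = np.array([0.9, 1.0, 0.7])     # illustrative illuminant RGBs
estimate = np.array([1.0, 1.0, 0.8])
```

A perfect estimate gives zero under both metrics, but for imperfect estimates the two can rank algorithms differently, which is the point of the comparison above.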
We discuss and explain the implementation details of the EM algorithm and summarize our experience in estimating product mixtures. Finally, we illustrate the wide applicability of product mixtures in pattern recognition and in other fields. Using clustering methods to detect useful patterns in large datasets has attracted considerable interest recently. The HKM clustering algorithm (hierarchical K-means) is very efficient in large-scale data analysis. It has been widely used to build visual vocabularies for large-scale video/image retrieval systems. However, both the speed and the accuracy of the hierarchical K-means clustering algorithm still leave room for improvement. In this paper, we propose a Parallel N-path quantification hierarchical K-means clustering algorithm which improves on the hierarchical K-means clustering algorithm in the following ways. Firstly, we replace the Euclidean kernel with the Hellinger kernel to improve the accuracy. Secondly, the Greedy N-best Paths Labeling method is adopted to improve the clustering accuracy. Thirdly, a multi-core processor-based parallel clustering algorithm is proposed. Our results confirm that the proposed clustering algorithm is much faster and more effective. With the development of detection technology using multispectral sensors, spectral decomposition (SD) has attracted more and more attention in biomedical signal processing and image processing. In this paper, a local smoothness constrained nonnegative matrix factorization (NMF) with nonlinear convergence rate (NMF-NCR) is proposed to solve the SD problem, and our contributions are as follows. First, we prove that the gradients of the cost function with respect to each variable matrix are Lipschitz continuous. Then, a proximal function is constructed for optimizing the cost function. As a result, our method can achieve an NCR much faster than the traditional methods. Simulations show the advantage of our algorithm over the compared methods in solving SD. 
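The product-mixture point discussed above (diagonal covariances make each Gaussian component a product of 1-D densities, so the E-step log-density is a simple sum over dimensions) can be sketched in a compact EM loop. This is an illustrative sketch on synthetic data, not the paper's implementation:

```python
import numpy as np

def em_diag_gmm(X, K, iters=100):
    """EM for a Gaussian mixture with diagonal covariances (product
    components). The E-step stays cheap and numerically stable because the
    component log-density is a sum of per-dimension 1-D Gaussian terms."""
    n, d = X.shape
    w = np.full(K, 1.0 / K)                              # mixing weights
    mu = X[np.linspace(0, n - 1, K).astype(int)].copy()  # spread-out init
    var = np.tile(X.var(axis=0) + 1e-6, (K, 1))          # diagonal variances
    for _ in range(iters):
        # E-step: log of the product of per-dimension 1-D Gaussians
        logp = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                        + np.log(2 * np.pi * var)).sum(axis=2)
                + np.log(w))
        logp -= logp.max(axis=1, keepdims=True)          # stabilize exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)                # responsibilities
        # M-step: closed-form, dimension-wise independent updates
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X**2) / nk[:, None] - mu**2 + 1e-6
    return w, mu, var

# Two well-separated 2-D clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(6.0, 1.0, (200, 2))])
w, mu, var = em_diag_gmm(X, K=2)
```

Note that despite the factorized components, the mixture as a whole still models dependent variables, which is the key point recalled in the abstract above.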
Interactive segmentation of images has become an integral part of image processing applications. Several graph-based segmentation techniques have been developed, which depend upon global minimization of the energy cost function. An adequate scheme of interactive segmentation still needs a skilled initialization of regions, with user-defined seed pixels distributed over the entire image. We propose an iterative segmentation technique based on a cellular automaton which aims to reduce the user effort required to provide initialization. The existing algorithms based on cellular automata only use a local smoothness term in label propagation, making them highly sensitive to user-defined seed pixels. To reduce the sensitivity towards the initial user definition of regions, global constraints are introduced along with local information to propagate labels. The results obtained are comparable to the state-of-the-art interactive segmentation techniques on a standard dataset. One-class extraction from remotely sensed imagery is researched with multi-class classifiers in this paper. With two supervised multi-class classifiers, a Bayesian classifier and a nearest neighbor (NN) classifier, we first analyzed the effect of data distribution partitioning on one-class extraction from remote sensing images. Data distribution partitioning refers to the way the data set is partitioned before classification. As a parametric method, the Bayesian classifier achieved good classification performance when the data distribution was partitioned appropriately. In contrast, as a nonparametric method, the NN classifier did not require a detailed partitioning of the data distribution. For simplicity, the data set can be partitioned into two classes, the class of interest and the remainder, to extract the specific class. With appropriate partitioning of the data set, the specific class of interest was well extracted from remotely sensed imagery in the experiments. 
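The local label-propagation rule used in cellular-automaton segmentation (in the GrowCut style: a labeled neighbor "attacks" a cell, attenuated by intensity similarity) can be sketched on a toy image. This is a minimal illustration of the local-smoothness rule only, without the global constraints proposed above; all sizes and seeds are made up:

```python
import numpy as np

def growcut(image, seeds, iters=50):
    """Toy GrowCut-style cellular automaton: each cell holds a label and a
    strength; a neighbor relabels a cell when its strength, attenuated by
    intensity similarity, exceeds the cell's current strength."""
    h, w = image.shape
    labels = seeds.copy()                  # 0 = unlabeled, 1 = fg, 2 = bg
    strength = (seeds > 0).astype(float)   # seed cells start at full strength
    g_max = image.max() - image.min() + 1e-9
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] > 0:
                        # Similarity-attenuated attack strength
                        g = 1.0 - abs(image[ny, nx] - image[y, x]) / g_max
                        if g * strength[ny, nx] > strength[y, x]:
                            labels[y, x] = labels[ny, nx]
                            strength[y, x] = g * strength[ny, nx]
    return labels

# Two flat regions separated by an intensity step, one seed per region
img = np.zeros((8, 8)); img[:, 4:] = 1.0
seeds = np.zeros((8, 8), dtype=int); seeds[4, 1] = 1; seeds[4, 6] = 2
out = growcut(img, seeds)
```

With a single seed per region, the labels flood each flat region and stop at the intensity step, which illustrates both the appeal of the rule and its sensitivity to seed placement that the global constraints above are meant to reduce.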
This study will be helpful for one-class extraction from remote sensing imagery with multi-class classifiers. It provides a way to improve one-class classification from the aspect of data distribution partitioning. Graph cuts is an image segmentation method by which the region and boundary information of objects can be exploited comprehensively. Because of the complex spatial characteristics of high-dimensional images, the time complexity and segmentation accuracy of graph cuts methods for high-dimensional images need to be improved. This paper proposes a new three-dimensional multilevel banded graph cuts model to increase accuracy and reduce complexity. Firstly, the three-dimensional image is viewed as a high-dimensional space in which to construct three-dimensional network graphs. A pyramid image sequence is created by a Gaussian pyramid downsampling procedure. Then, a new energy function is built according to the spatial characteristics of the three-dimensional image, in which adjacent points are expressed using a 26-connected system. Finally, the banded graph is constructed on a narrow band around the object/background. The graph cuts method is performed on the banded graph layer by layer to obtain the object region sequentially. In order to verify the proposed method, we performed an experiment on a set of three-dimensional colon CT images, and compared the results with the local-region active contour and Chan-Vese models. The experimental results demonstrate that the proposed method can segment colon tissues from three-dimensional abdominal CT images accurately. The segmentation accuracy can be increased to 95.1% and the time complexity is reduced by about 30% relative to the other two methods. Objects in scenes are thought to be important for scene recognition. In this paper, we propose to utilize scene-specific objects represented by deep features for scene categorization. 
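The pyramid construction step in the banded graph-cuts pipeline above can be sketched as repeated halving of a 3-D volume. Here a 2x2x2 box average stands in for Gaussian smoothing, and the volume is synthetic:

```python
import numpy as np

def downsample3d(vol):
    """Halve every dimension of a 3-D volume by averaging non-overlapping
    2x2x2 blocks (box filter standing in for a Gaussian kernel)."""
    d, h, w = (s // 2 * 2 for s in vol.shape)   # crop to even sizes
    v = vol[:d, :h, :w].astype(float)
    return v.reshape(d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))

def pyramid3d(vol, levels):
    """Coarse-to-fine sequence: level 0 is the full volume."""
    seq = [vol.astype(float)]
    for _ in range(levels - 1):
        seq.append(downsample3d(seq[-1]))
    return seq

vol = np.arange(8 * 8 * 8, dtype=float).reshape(8, 8, 8)
seq = pyramid3d(vol, 3)   # shapes (8,8,8), (4,4,4), (2,2,2)
```

A cut computed at the coarsest level is then refined only inside a narrow band around its boundary at each finer level, which is where the reported cost reduction comes from.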
Our approach combines the benefits of deep learning and the latent support vector machine (LSVM) to train a set of scene-specific object models for each scene category. Specifically, we first use deep convolutional neural networks (CNNs) pre-trained on the large-scale object-centric image database ImageNet to learn rich object features and a large number of general object concepts. Then, the pre-trained CNN is adopted to extract features from images in the target dataset and to initialize the learning of scene-specific object models for each scene category. After initialization, the scene-specific object models are obtained by alternating between searching over the most representative and discriminative regions of images in the target dataset and training linear SVM classifiers based on the obtained region features. As a result, for each scene category a set of object models that are representative and discriminative can be acquired. We use them to perform scene categorization. In addition, to utilize the global structure information of scenes, we use another CNN pre-trained on the large-scale scene-centric database Places to capture the structure information of scene images. By combining objects and structure information for scene categorization, we show superior performance to state-of-the-art approaches on three public datasets, i.e. MIT-indoor, UIUC-sports and SUN. Experimental results demonstrate the effectiveness of the proposed method. In this research, we provided a dictionary-based approach for identifying biomedical concepts in the literature. The approach first crawled an experimental corpus via E-utilities and built a concept dictionary. Then, we developed an algorithm called the Variable-step Window Identification Algorithm (VWIA) for matching biomedical concepts based on preprocessing, POS tagging and the formation of phrase blocks. The approach could identify embedded biomedical concepts and new concepts, allowing concepts to be identified more completely. 
The proposed approach obtains an overall F-measure of 95.0% on the test dataset. Thus, the method is promising for biomedical text mining. Linguistic variables can better approximate the fuzziness of human thinking, and they are important tools for multiple attribute decision-making problems. This paper establishes a possibility-based ELECTRE II model under the environment of uncertain linguistic fuzzy variables and uncertain weight information. By introducing the degree of possibility to the ELECTRE II model, the concordance set, the discordance set and the indifferent set are obtained, respectively. Furthermore, the concordance index is redefined by considering the deviation index under the same attribute, by which the strong and weak relationships are constructed, and then the ranking of alternatives is obtained. A numerical example about the evaluation of socio-economic systems is employed to illustrate the convenience and applicability of the proposed method. How to make existing models from different disciplines effectively interoperate and integrate is one of the primary challenges for scientists and decision-makers. The Heihe river Open Modeling Environment (HOME) provides a convenient model coupling platform that enables researchers to concentrate on the theory and applications of ecological and hydrological watershed models. Model parameter optimization is an important component and a key step that links models and watershed simulation. In this paper, by integrating modules of existing models, an improved ABC algorithm (ORABC) based on an optimization strategy and a reservation strategy for the best individuals was introduced into HOME as a hydrological model parameter optimization module, and coupled with the Xinanjiang hydrological model to automate the task of model parameter optimization. 
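The shape of such an automatic calibration loop can be illustrated with a toy example: a one-parameter linear-reservoir runoff model fitted to "observed" runoff by minimizing squared error. Plain random search stands in here for the ABC-style optimizer; the model, parameter and data are all synthetic:

```python
import numpy as np

def simulate(k, rain, s0=10.0):
    """Toy linear-reservoir runoff model: outflow q[t] = k * storage."""
    s, q = s0, []
    for r in rain:
        out = k * s
        q.append(out)
        s = s + r - out
    return np.array(q)

rng = np.random.default_rng(0)
rain = rng.uniform(0, 2, 100)
observed = simulate(0.3, rain)            # synthetic "truth" with k = 0.3

# Calibration: search the parameter space for the best fit to observations
best_k, best_err = None, np.inf
for k in rng.uniform(0.01, 0.9, 500):     # random search over k
    err = np.mean((simulate(k, rain) - observed) ** 2)
    if err < best_err:
        best_k, best_err = k, err
```

A population-based optimizer like ABC replaces the blind sampling with guided exploration (and, in the ORABC variant above, reservation of the best individuals), but the objective-evaluation loop against observed runoff has the same structure.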
Runoff simulation experiments in the Heihe river watershed were conducted to verify the parameter optimization in HOME, and the simulation results confirmed the efficiency and effectiveness of the method. It can significantly improve the simulation accuracy and efficiency of hydrological and ecological models, and promote scientific research on watershed issues. To solve the problems of spraying over the inner wall of an air-intake pipe, this paper introduces an algorithm for measurement path planning based on a spraying robot system and laser displacement sensor technology. Scanning measurement path planning is the premise and basis of model construction and spraying. Traditional methods, such as arc-length extrapolation and polynomial fitting, are applicable only to the measurement of a plane curve with finite maximum curvature. Drawing on existing methods, this paper focuses on the pre-scanning measurement method for different types of cross-section curves. Algorithm simulation and model reconstruction show that this study solves the problem of collision avoidance for scanning measurement of the inner wall of the air-intake pipe. To regain mobility, stroke patients need to receive repetitive and intensive therapy. Robot-assisted rehabilitation is an active area of research. Cheap robotic leg rehabilitation devices should be developed to meet the demand and assist most patients. A low-cost hip-knee exoskeleton prototype powered by pneumatic muscles was developed. On this basis, functional electrical stimulation (FES) was used to induce paralyzed muscles to realize ankle-joint rehabilitation training. Three ankle muscles, the tibialis anterior, the soleus, and the gastrocnemius, cooperated under electrical stimulation to realize optimally coordinated control of dorsiflexion and plantar-flexion movements. 
As both the pneumatic muscles and the FES-induced muscles possess highly nonlinear characteristics, a sliding control algorithm called Chattering mitigation Robust Variable Control (CRVC) was applied to leg hybrid rehabilitation. The combination of an exoskeleton and FES is a promising way to reduce the cost and the complexity of designing a hip-knee-ankle exoskeleton. The proposed hybrid method was verified by treadmill-based gait training experiments. This paper investigates optimal trading strategies in a financial market with multidimensional stock returns, where the drift is an unobservable multivariate Ornstein-Uhlenbeck process. Information about the drift is obtained by observing stock returns and expert opinions which provide unbiased estimates of the current state of the drift. The optimal trading strategy of investors maximizing expected logarithmic utility of terminal wealth depends on the filter, which is the conditional expectation of the drift given the available information. We state filtering equations to describe its dynamics for different information settings. At information dates, the expert opinions lead to an update of the filter which causes a decrease in the conditional covariance matrix. We investigate properties of these conditional covariance matrices. First, we consider the asymptotic behavior of the covariance matrices for an increasing number of expert opinions on a finite time horizon. Second, we state conditions for convergence in infinite time with regularly arriving expert opinions. Finally, we derive the optimal trading strategy of an investor. The optimal expected logarithmic utility of terminal wealth, the value function, is a functional of the conditional covariance matrices. Hence, our analysis of the covariance matrices allows us to deduce properties of the value function. 
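The covariance decrease at an information date has the familiar Kalman-update form: conditioning on an unbiased expert opinion with noise covariance Gamma shrinks the conditional covariance of the drift. A numerical sketch with illustrative (made-up) matrices:

```python
import numpy as np

# Prior conditional covariance of the drift and covariance of the
# expert-opinion noise (both values are illustrative only)
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
gamma = np.diag([0.05, 0.05])

# Kalman-type update at an information date: condition on an unbiased
# expert opinion z = drift + noise
k = sigma @ np.linalg.inv(sigma + gamma)   # gain
sigma_post = sigma - k @ sigma             # posterior conditional covariance

# The update can only reduce uncertainty: sigma - sigma_post is PSD
diff_eigs = np.linalg.eigvalsh(sigma - sigma_post)
```

Because the reduction term equals sigma (sigma + gamma)^{-1} sigma, which is positive semidefinite, each expert opinion weakly decreases the conditional covariance, consistent with the monotonicity properties analyzed above.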
This paper derives the theoretical underpinnings behind the following observed empirical facts in credit risk modeling: The probability of default, the seniority, the thickness of the tranche, the debt cushion, and macroeconomic factors are the important determinants of the conditional probability density function of the recovery rate given default (RGD) of a firm's debt and its tranches. In a portfolio of debt securities, the conditional probability density functions of the recovery rate given default of tranches have point probability masses near zero and one, and the expected value of the recovery rate given default increases as the seniority or debt cushion increases. The paper derives other results as well, such as the fact that the conditional probability distribution function associated with any senior tranche dominates that of any junior tranche in the sense of first-order stochastic dominance. The standard deviation of the recovery rate given default of a senior security need not be greater than that of a junior security. It is proved that the expected value of the recovery rate given default need not increase as the proportional thickness of the tranche increases. This paper studies arbitrage pricing theory in financial markets with implicit transaction costs. We extend the existing theory to include the more realistic possibility that the price at which the investors trade is dependent on the traded volume. The investors in the market always buy at the ask and sell at the bid price. Implicit transaction costs are composed of two terms: one captures the bid-ask spread, and the second the price impact. Moreover, a new definition of a self-financing portfolio is obtained. The self-financing condition suggests that continuous trading is possible, but is restricted to predictable trading strategies having cadlag (right-continuous with left limits) and caglad (left-continuous with right limits) paths of bounded quadratic variation and of finitely many jumps. 
That is, cadlag and caglad predictable trading strategies of infinite variation, with finitely many jumps and of finite quadratic variation, are allowed in our setting. Restricting ourselves to caglad predictable trading strategies, we show that the existence of an equivalent probability measure is equivalent to the absence of arbitrage opportunities, so that the first fundamental theorem of asset pricing (FFTAP) holds. It is also shown that the use of continuous and bounded variation trading strategies can improve the efficiency of hedging in a market with implicit transaction costs. To better understand how to apply the proposed theory, we provide an example of an implicit transaction cost economy that is linear and nonlinear in the order size. We present two models for the fair value of a self-funding instalment warrant. In both models we assume the underlying stock process follows a geometric Brownian motion. In the first model, we assume that the underlying stock pays a continuous dividend yield, and in the second we assume that it pays a series of discrete dividend yields. We show that both models admit similarity reductions and use these to obtain simple finite-difference and Monte Carlo solutions. We use the method of multiple scales to connect these two models and establish the first-order correction term to be applied to the first model in order to obtain the second, thereby establishing that the former model is justified when many dividends are paid during the life of the warrant. Further, we show that the functional form of this correction may be expressed in terms of the hedging parameters for the first model and is, from this point of view, independent of the particular payoff in the first model. In two appendices we present approximate solutions for the first model which are valid in the small volatility and the short time-to-expiry limits, respectively, obtained using singular perturbation techniques. 
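A Monte Carlo treatment of the kind used above rests on simulating geometric Brownian motion with a continuous dividend yield. A minimal sketch, with illustrative parameter values and a sanity check against the known risk-neutral mean:

```python
import numpy as np

def gbm_terminal(s0, r, q, vol, T, n_paths, seed=0):
    """Terminal stock prices under risk-neutral GBM with continuous
    dividend yield q, using the exact lognormal terminal distribution."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    return s0 * np.exp((r - q - 0.5 * vol**2) * T + vol * np.sqrt(T) * z)

s_T = gbm_terminal(s0=100.0, r=0.05, q=0.02, vol=0.2, T=1.0,
                   n_paths=200_000)

# Sanity check: the discounted mean should recover s0 * exp(-q*T)
est = np.exp(-0.05) * s_T.mean()
```

A warrant payoff would replace the bare terminal price in the discounted expectation; sampling the exact terminal distribution (rather than time-stepping) is enough when the payoff depends only on the terminal value.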
The small volatility solutions are used to check our finite-difference solutions, and the small time-to-expiry solutions are used as a means of systematically smoothing the payoffs so we may use pathwise sensitivities for our Monte Carlo methods. In this paper we discuss the common shortcomings of a large class of essentially-affine models in the current monetary environment of repressed rates, and we present a class of reduced-form stochastic-market-risk affine models that can overcome these problems. In particular, we look at the extension of a popular doubly-mean-reverting Vasicek model, but the idea can be applied to all essentially-affine models. The model straddles the P- and Q-measures. By allowing for a market price of risk whose stochasticity is not fully spanned by the yield-curve state variables that enter the model specification, we break the deterministic link between the yield-curve-based return-predicting factors and the market price of risk, but we retain, on average, the observed statistical regularities reported in the literature. We discuss in detail how this approach relates to the recent work by Joslin et al. (2014) [S. Joslin, M. Priebsch & K. J. Singleton (2014) Risk premiums in dynamic term structure models with unspanned macro risk, Journal of Finance LXIX (3), 1197-1233]. We show that the parameters of the model can be estimated in a simple and robust manner using survey-like information, and that the model we propose affords a more plausible decomposition of observed market yields into expectations and risk premia during an important recent market event than the one produced by mainstream essentially-affine models. In this paper, we explore two new tree lattice methods, the piecewise binomial tree and the piecewise trinomial tree, for both bond prices and European/American bond option prices, assuming that the short rate is given by a generalized skew Vasicek model with discontinuous drift coefficient. 
These methods build nonuniform-jump-size piecewise binomial/trinomial trees based on a tractable piecewise process, which is derived from the original process according to a transform. Numerical experiments on bonds and European/American bond options show that our approaches are efficient and reveal several price features of our model. Following the economic rationale of Peskir & Samee [The British put option, Applied Mathematical Finance 18 (6), 537-563 (2011); The British call option, Quantitative Finance 13 (1), 95-109 (2013)], we present a new class of asset-or-nothing put option where the holder enjoys the early exercise feature of the American asset-or-nothing put option, whereupon his payoff is the 'best prediction' of the European asset-or-nothing put option payoff under the hypothesis that the true drift equals a contract drift. Based on the observed price movements, if the option holder finds that the true drift of the stock price is unfavorable, then he can substitute it with the contract drift and minimize his losses. The key to the British asset-or-nothing put option is the protection feature: not only can the option holder exercise at or above the strike price to a substantial reimbursement of the original option price (covering the ability to sell in a liquid option market completely endogenously), but also when the stock price movements are favorable he will generally receive high returns. We derive a closed-form expression for the arbitrage-free price in terms of the rational exercise boundary and show that the rational exercise boundary itself can be characterized as the unique solution to a nonlinear integral equation. We also analyze the financial meaning of the British asset-or-nothing put option using the results above and show that with the contract drift properly selected, the British asset-or-nothing put option becomes a very attractive alternative to the classic European/American asset-or-nothing put option. 
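As background to the lattice experiments above: for the standard (non-skew) Vasicek model the zero-coupon bond price is available in closed form, which makes it a natural correctness check for any numerical scheme. A sketch comparing the closed form with an Euler Monte Carlo, using illustrative parameters:

```python
import numpy as np

# Standard Vasicek short rate: dr = a*(b - r)*dt + sigma*dW
a, b, sigma, r0, T = 0.5, 0.04, 0.01, 0.03, 1.0

# Closed-form zero-coupon bond price P(0, T) = A * exp(-B * r0)
B = (1 - np.exp(-a * T)) / a
A = np.exp((b - sigma**2 / (2 * a**2)) * (B - T) - sigma**2 * B**2 / (4 * a))
p_exact = A * np.exp(-B * r0)

# Monte Carlo with Euler time steps: P ~ E[exp(-integral of r dt)]
rng = np.random.default_rng(0)
n_steps, n_paths = 200, 100_000
dt = T / n_steps
r = np.full(n_paths, r0)
integral = np.zeros(n_paths)
for _ in range(n_steps):
    integral += r * dt
    r += a * (b - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
p_mc = np.exp(-integral).mean()
```

Skew or discontinuous-drift extensions, like the model above, lose this closed form, which is precisely why tree and transform methods become the practical tool there.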
To design an epitope-based vaccine for Human immunodeficiency virus (HIV), we previously predicted 20 potential HIV epitopes using bioinformatics approaches. The combination of these 20 epitopes has a theoretical coverage of 98.1% of the population for both the prevalent HIV genotypes and Chinese human leukocyte antigen DR types. To test the immunogenicity of this vaccine in vivo, a corresponding antigen needs to be prepared. To this end, we constructed a recombinant plasmid containing DNA encoding the epitopes, GPGPG spacers, and a 6-His tag for verification of protein expression and ease of purification, and then transformed Escherichia coli cells with the plasmid. After IPTG induction, the recombinant protein was expressed mainly in the form of inclusion bodies. To stabilize the structure of denatured inclusion bodies for efficient purification and renaturation in vitro, we transferred the dissolved inclusion bodies from 7 mol/L guanidine hydrochloride to 8 mol/L urea. Under denaturing conditions, the vaccine protein was purified by a 3-step process including ion-exchange chromatography and an affinity column, and then renatured by stepwise dialysis. Together, the above described procedures generated 43 mg of vaccine protein per litre of fermentation medium, and the final product reached approximately 95% purity. The purified protein was capable of eliciting antigen-specific T-cell responses in immunized mice. The Bitumount Provincial Historic site is the location of 2 of the world's first oil-extracting and -refining operations. Despite hydrocarbon levels ranging from 330 to 24 700 mg·(kg soil)^(-1), plants have been able to recolonize the site through natural revegetation. This study was designed to achieve a better understanding of the plant-root-associated bacterial partnerships occurring within naturally revegetated hydrocarbon-contaminated soils.
Root endophytic bacterial communities were characterized from representative plant species throughout the site by both high-throughput sequencing and culturing techniques. Population abundance of rhizosphere and root endosphere bacteria was significantly influenced (p < 0.05) by plant species and sampling location. In general, members of the Actinomycetales, Rhizobiales, Pseudomonadales, Burkholderiales, and Sphingomonadales were the most commonly identified orders. Community structure of root-associated bacteria was influenced by both plant species and sampling location. Quantitative real-time polymerase chain reaction was used to determine the potential functional diversity of the root endophytic bacteria. The gene copy numbers of 16S rRNA and 2 hydrocarbon-degrading genes (CYP153 and alkB) were significantly affected (p < 0.05) by the interaction of plant species and sampling location. Notably, some of the bacterial taxa detected are known to exhibit plant growth promotion characteristics. To investigate the physiological role of an extracellular aminopeptidase (BSAP168) encoded by the ywaD gene in Bacillus subtilis 168, we constructed a ywaD-deletion mutant (BS-AP-K). Compared with that of the wild-type strain, the maximum growth rate of BS-AP-K was reduced by 28% when grown in soybean protein medium at 37 degrees C, but not in Luria-Bertani medium. The impaired growth rate was more marked at higher temperatures and could be compensated for by supplementation of amino acids to the culture media. Further studies showed that the amino acid compositions and peptide distributions of the culture supernatants differed markedly between the wild-type and BS-AP-K strains. In addition, another mutant strain (BS-AP-R) was constructed by replacing ywaD with ywaD-Delta PA to evaluate the effect of the protease-associated domain in BSAP168 on growth.
All these findings indicated that BSAP168 played an important role in supplying the amino acids required for growth. In the present study, we investigated the spatial change of sediment nitrite-dependent anaerobic methane-oxidizing (n-damo) organisms in the mesotrophic freshwater Gaozhou Reservoir (6 different sampling locations and 2 sediment depths (0-5 cm, 5-10 cm)), one of the largest drinking water reservoirs in China. The abundance of sediment n-damo bacteria was quantified using a quantitative polymerase chain reaction assay, while the richness, diversity, and composition of n-damo pmoA gene sequences were characterized using clone library analysis. Vertical and horizontal changes in sediment n-damo bacterial abundance occurred in Gaozhou Reservoir, with 1.37 × 10^5 to 8.24 × 10^5 n-damo 16S rRNA gene copies per gram of dry sediment. Considerable horizontal and vertical variations of n-damo pmoA gene diversity (Shannon index = 0.32-2.50) and composition also occurred in this reservoir. Various types of sediment n-damo pmoA genes existed in Gaozhou Reservoir. A small proportion of n-damo pmoA gene sequences (19.1%) were related to those recovered from "Candidatus Methylomirabilis oxyfera". Our results suggested that sediment n-damo pmoA gene diversity might be regulated by nitrite, while n-damo pmoA gene richness might be governed by multiple environmental factors, including total organic carbon, total phosphorus, nitrite, and total nitrogen. The water-borne Gram-negative bacterium Legionella pneumophila (Lp) is the causative agent of Legionnaires' disease. Lp is typically transmitted to humans from water systems, where it grows inside amoebae. Survival of Lp in water is central to its transmission to humans. A transcriptomic study previously identified many genes induced by Lp in water. One such gene, lpg2524, encodes a putative LuxR family transcriptional regulator. It was hypothesized that this gene could be involved in the survival of Lp in water.
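For reference, the Shannon index values reported above for the n-damo pmoA gene libraries (0.32-2.50) follow the standard formula H' = -Σ p_i ln p_i over the relative abundances p_i of the sequence types in a library. A minimal sketch (the OTU labels below are hypothetical, purely for illustration):

```python
import math
from collections import Counter

def shannon_index(labels):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over observed types."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Hypothetical clone-library assignments: 10, 5, and 5 clones across 3 OTUs.
seqs = ["otu1"] * 10 + ["otu2"] * 5 + ["otu3"] * 5
print(round(shannon_index(seqs), 3))
```

H' is 0 when a single type dominates completely and reaches ln(S) for S equally abundant types, which is why richer, more even libraries score toward the upper end of the reported range.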
Deletion of lpg2524 does not affect the growth of Lp in rich medium, in the amoeba Acanthamoeba castellanii, or in human macrophage-like THP-1 cells, showing that Lpg2524 is not required for growth in vitro and in vivo. Nevertheless, deletion of lpg2524 results in a faster colony-forming unit (CFU) reduction in an artificial freshwater medium, Fraquil, indicating that Lpg2524 is important for Lp to survive in water. Overexpression of Lpg2524 also results in a survival defect, suggesting that a precise level of this transcriptional regulator is essential for its function. However, our results show that Lpg2524 is dispensable for survival in water when Lp is at a high cell density (10^9 CFU/mL), suggesting that its regulon is regulated by another regulator activated at high cell density. Rural communities rely on surface water reservoirs for potable water. Effective removal of chemical contaminants and bacterial pathogens from these reservoirs requires an understanding of the bacterial community diversity that is present. In this study, we used a 16S rRNA-based profiling approach to describe the bacterial consortia in the raw surface water entering the water treatment plants of 2 rural communities. Our results show that source water is dominated by the Proteobacteria, Bacteroidetes, and Cyanobacteria, with some evidence of seasonal effects altering the predominant groups at each location. A subsequent community analysis of transects of a biological carbon filter in the water treatment plant revealed a significant increase in the proportion of Proteobacteria, Acidobacteria, Planctomycetes, and Nitrospirae relative to raw water. Also, very few enteric coliforms were identified in either the source water or within the filter, although Mycobacterium was of high abundance and was found throughout the filter along with Aeromonas, Legionella, and Pseudomonas.
This study provides valuable insight into bacterial community composition within drinking water treatment facilities, and underscores the importance of implementing appropriate disinfection practices to ensure safe potable water for rural communities. In this work, we highlight the effects of pH on bacterial phenotypes when using the bacteriological dyes Aniline blue, Congo red, and Calcofluor white to analyze polysaccharide production. A study of galactose catabolism in Sinorhizobium meliloti led to the isolation of a dgoK1 mutant, which was observed to overproduce exopolysaccharides when grown in the presence of galactose. When this mutant strain was spotted onto plates containing Aniline blue, Congo red, or Calcofluor white, the intensity of the associated staining was strikingly different from that of the wild type. Additionally, a Calcofluor dull phenotype was observed, suggesting production of a polysaccharide other than succinoglycan. Further investigation of this phenotype revealed that these results were dependent on medium acidification, as buffering at pH 6 had no effect on these phenotypes, while medium buffered at pH 6.5 resulted in a reversal of the phenotypes. Screening for mutants of the dgoK1 strain that were negative for the Aniline blue phenotype yielded a strain carrying a mutation in tkt2, which is annotated as a putative transketolase. Consistent with the plate phenotypes, when this mutant was grown in broth cultures, it did not acidify its growth medium. Overall, this work shows that caution should be exercised in evaluating polysaccharide phenotypes based strictly on the use of dyes. Discrimination targeting lesbian, gay, bisexual, and transgender (LGBT) students occurs on college campuses. Bystander intervention is important in supporting targeted students and improving campus climate for LGBT students.
Peer-familiarity context (i.e., who the bystander knows in the situation) can play a role in bystander intervention, but researchers have not explored the nature of bystander intervention in specific peer-familiarity contexts concerning LGBT discrimination. Using hypothetical vignettes, we examine heterosexual students' (n = 1616) intention to intervene across 4 peer-familiarity contexts, namely, when the bystander knows no one, only witnesses or targets, only the perpetrator, or everyone. We explore the role of student inputs (sociodemographics, self-esteem, attitudes toward LGBT people, and political ideology) and experiences (LGBT social contacts, LGBT and social justice course content, and perceived and experienced campus climate) on their intentions to intervene in these contexts. Multiple regression results suggest that across all peer-familiarity contexts, being older, having higher self-esteem, having LGBT friends, taking courses with social justice content, and holding affirming attitudes toward LGBT people were independently associated with higher intentions to intervene. Males were more likely than females to intervene when they knew no one, while females were more likely to intervene in all other contexts. Race/ethnicity, religious affiliation, witnessing heterosexist harassment, perceptions of campus climate for LGBT students, and student standing were significant in particular peer contexts. Recommendations to promote bystander intervention and future research are presented. This qualitative case study examines the experiences of 27 university students participating in LEVEL, a campus-based group that pairs undergraduates (enactors) with classmates who self-identify as having physical disabilities (recipients). The purpose of this research was to understand how enactors' perceptions of physical disability were shaped by their participation in LEVEL.
The conceptual framework used to guide this research melds Gordon Allport's (1954) social theory of prejudice with disability-based constructs. In addition, it draws on Allport's contact theory to discuss ways to reduce prejudice, especially as it relates to individuals with disabilities. Findings reveal that contact promotes the formation of friendships; increased contact influences language and perceptions of disability; and physical barriers engender social barriers. Since there are few on-campus groups that bridge academic and social environments between students of all abilities in a positive and meaningful way, participation in LEVEL represents 1 such point of interaction. By creating social spaces for individuals with disabilities and their typically abled peers to connect, LEVEL offers a promising new way to think about how to meet the needs of an underserved population. Because research on social contact between college students and their peers with physical disabilities is limited, this study works to fill this void. The purpose of this quantitative study of 6,076 undergraduates in the United States (3,038 international and 3,038 domestic) was to examine leadership development outcomes for international students in the United States and the potential role of mentorship in this process. Data for this study were derived from the 2009 Multi-Institutional Study of Leadership. Two primary research questions guided this study: (a) Do differences in socially responsible leadership outcomes exist between domestic and international students? (b) How does mentorship contribute to socially responsible leadership development for international undergraduate students? Results of this study suggest a differential effect in which international students were not experiencing the same level of socially responsible leadership development outcomes relative to domestic peers. 
However, this difference appeared to be mediated by the presence of mentorship focused on personal development. As this type of mentorship increased for international students, their performance drew progressively closer to that of domestic students in terms of socially responsible leadership development. This scholarly essay interrogates the seemingly necessary engagement of normative and essentialist characterizations of identity in the historical study of race in U.S. higher education. The author's study of the experiences of Black collegians in private, liberal arts colleges in the Midwestern Great Lakes region between 1945 and 1965 grounds this discussion. Although engaging racial essentialism is necessary, the author presents alternative treatments of historicizing race to illustrate the benefits of a critical-realist approach to producing a synthetic cultural educational history. Legal decisions about affirmative action in higher education do more than impact how admissions policies are structured. The discourse produced in these decisions structures how race is talked about, understood, and enacted in the context of higher education and beyond. However, critique of affirmative action rhetoric in the legal realm tends to focus on the anti-affirmative action constructions of race, underanalyzing rhetoric favoring affirmative action. The current project uses critical discourse analysis to explore how dominant interests are challenged, produced, and sustained by pro-affirmative action rhetoric. Specifically, this project engaged Whiteness as a theoretical and analytical lens through which to critique the amicus briefs submitted in support of race-conscious admissions policies in the recent U.S. Supreme Court case, Fisher v. University of Texas (2013).
Our analysis revealed that pro-affirmative action arguments engaged the concepts of diversity and race in ways that reproduced the structural power of Whiteness, drawing upon individualism and market-driven rationales as discursive resources. The analysis suggests that even arguments supporting race-conscious admissions may inadvertently contribute to the reproduction of problematic racial hierarchies. The findings also note the potential transformative value of alternative rationales present in a small subset of amicus briefs submitted by African American organizations. Practical applications for pro-affirmative action advocates and policy-makers are offered. The purpose of this study is to enhance our understanding of how a Historically Black College and University (HBCU) is cultivating Black male achievement in STEM. In this in-depth qualitative case study, we explore 2 resource-intensive and successful STEM pathway programs at Morehouse College, the only all-male HBCU in this country, as an opportunity to examine the cultivation of Black male STEM scholars. Our study was guided by 2 overarching questions: What opportunities for participation in a rigorous STEM education do the programs provide? What individual and institutional practices contribute to STEM student persistence and learning? Myxozoans are a large group of poorly characterized cnidarian parasites. To gain further insight into their evolution, we sequenced the mitochondrial (mt) genome of Enteromyxum leei and reevaluated the mt genome structure of Kudoa iwatai. Although the typical animal mt genome is a compact, 13-25 kb, circular chromosome, the mt genome of E. leei was found to be fragmented into eight circular chromosomes of ~23 kb, making it the largest described animal mt genome. Each chromosome was found to harbor a large noncoding region (~15 kb), nearly identical between chromosomes.
The protein-coding genes show an unusually high rate of sequence evolution and possess little similarity to their cnidarian homologs. Only five protein-coding genes could be identified, and no tRNA genes were found. Surprisingly, the mt genome of K. iwatai was also found to be composed of two chromosomes. These observations confirm the remarkable plasticity of myxozoan mt genomes. The innovation of the eukaryote cytoskeleton enabled phagocytosis, intracellular transport, and cytokinesis, and is largely responsible for the diversity of morphologies among eukaryotes. Still, the relationship between phenotypic innovations in the cytoskeleton and their underlying genotype is poorly understood. To explore the genetic mechanism of morphological evolution of the eukaryotic cytoskeleton, we provide the first single-cell transcriptomes from uncultured, free-living unicellular eukaryotes: the polycystine radiolarian Lithomelissa setosa (Nassellaria) and Sticholonche zanclea (Taxopodida). A phylogenomic approach using 255 genes finds Radiolaria and Foraminifera as separate monophyletic groups (together as Retaria), while Cercozoa is shown to be paraphyletic where Endomyxa is sister to Retaria. Analysis of the genetic components of the cytoskeleton and mapping of the evolution of these on the revised phylogeny of Rhizaria reveal lineage-specific gene duplications and neofunctionalization of alpha and beta tubulin in Retaria, actin in Retaria and Endomyxa, and Arp2/3 complex genes in Chlorarachniophyta. We show how genetic innovations have shaped cytoskeletal structures in Rhizaria, and how single-cell transcriptomics can be applied to resolve deep phylogenies and study gene evolution in uncultured protist species. Protein transport systems are fundamentally important for maintaining mitochondrial function. Nevertheless, mitochondrial protein translocases such as the kinetoplastid ATOM complex have recently been shown to vary in eukaryotic lineages.
Various evolutionary hypotheses have been formulated to explain this diversity. To resolve these contradictions, it is necessary to estimate the primitive state and clarify the changes from it. Here, we present more likely primitive models of mitochondrial translocases, specifically the translocase of the outer membrane (TOM) and translocase of the inner membrane (TIM) complexes, using scrutinized phylogenetic profiles. We then analyzed the translocases' evolution in eukaryotic lineages. Based on those results, we propose a novel evolutionary scenario for diversification of the mitochondrial transport system. Our results indicate that presequence transport machinery was mostly established in the last eukaryotic common ancestor, and that primitive translocases already had a pathway for transporting presequence-containing proteins. Moreover, secondary changes, including convergent and migrational gains of a presequence receptor in the TOM and TIM complexes, respectively, likely resulted from constrained evolution. The nature of a targeting signal can constrain alteration to the protein transport complex. Lineage-specific gene losses can be driven by selection or environmental adaptations. However, a lack of studies on the original function of species-specific pseudogenes leaves a gap in our understanding of their role in evolutionary histories. Pseudogenes are of particular relevance for taste perception genes, which encode receptors that confer the ability to both identify nutritionally valuable substances and avoid potentially harmful substances. To explore the role of bitter taste pseudogenization events in human origins, we restored the open reading frames of the three human-specific pseudogenes and synthesized the reconstructed functional hTAS2R2, hTAS2R62, and hTAS2R64 receptors. We have identified ligands that differentially activate the human and chimpanzee forms of these receptors and several other human functional TAS2Rs.
We show that these receptors are narrowly tuned, suggesting that bitter-taste sensitivities evolved independently in different species, and that these pseudogenization events occurred because of functional redundancy. The restoration of function of lineage-specific pseudogenes can aid in the reconstruction of their evolutionary history, and in understanding the forces that led to their pseudogenization. Hybridization is often considered maladaptive, but sometimes hybrids can invade new ecological niches and adapt to novel or stressful environments better than their parents. The genomic changes that occur following hybridization and facilitate genome resolution and/or adaptation are not well understood. Here, we examine hybrid genome evolution using experimental evolution of de novo interspecific hybrid yeast Saccharomyces cerevisiae x Saccharomyces uvarum and their parentals. We evolved these strains in nutrient-limited conditions for hundreds of generations and sequenced the resulting cultures, identifying numerous point mutations, copy number changes, and loss of heterozygosity (LOH) events, including species-biased amplification of nutrient transporters. We focused on a particularly interesting example, in which we saw repeated LOH at the high-affinity phosphate transporter gene PHO84 in both intra- and interspecific hybrids. Using allele replacement methods, we tested the fitness of different alleles in hybrid and S. cerevisiae strain backgrounds and found that the LOH is indeed the result of selection on one allele over the other in both S. cerevisiae and the hybrids. This is an example where hybrid genome resolution is driven by positive selection on existing heterozygosity and demonstrates that even infrequent outcrossing may have lasting impacts on adaptation.
TYRO3, AXL, and MERTK (TAM) receptors are a family of receptor tyrosine kinases that maintain homeostasis through the clearance of apoptotic cells, and when defective, contribute to chronic inflammatory and autoimmune diseases such as atherosclerosis, multiple sclerosis, systemic lupus erythematosus, rheumatoid arthritis, and Crohn's disease. In addition, certain enveloped viruses utilize TAM receptors for immune evasion and entry into host cells, with several viruses preferentially hijacking MERTK for these purposes. Despite the biological importance of TAM receptors, little is understood of their recent evolution and its impact on their function. Using evolutionary analysis of primate TAM receptor sequences, we identified strong, recent positive selection in MERTK's signal peptide and transmembrane domain that was absent from TYRO3 and AXL. Reconstruction of hominid and primate ancestral MERTK sequences revealed three non-synonymous single nucleotide polymorphisms in the human MERTK signal peptide, with a G14C mutation resulting in a predicted non-B DNA cruciform motif, producing a significant decrease in MERTK expression with no significant effect on MERTK trafficking or half-life. Reconstruction of MERTK's transmembrane domain identified three amino acid substitutions and four amino acid insertions in humans, which led to significantly higher levels of self-clustering through the creation of a new interaction motif. This clustering counteracted the effect of the signal peptide mutations through enhancing MERTK avidity, whereas the lower MERTK expression led to reduced binding of Ebola virus-like particles. The decreased MERTK expression counterbalanced by increased avidity is consistent with antagonistic coevolution to evade viral hijacking of MERTK. The mu opioid receptor is involved in many natural processes including stress response, pleasure, and pain. 
Mutations in the gene also have been associated with opiate and alcohol addictions as well as with responsivity to medication targeting these disorders. Two common and mutually exclusive polymorphisms have been identified in humans, A118G (N40D), found commonly in non-African populations, and C17T (V6A), found almost exclusively in African populations. Although A118G has been studied extensively for associations and in functional assays, C17T is much less well understood. In addition to a parallel polymorphism previously identified in rhesus macaques (Macaca mulatta), C77G (P26R), resequencing in additional non-human primate species identifies further common variation: C140T (P47L) in cynomolgus macaques (Macaca fascicularis), G55C (D19H) in vervet monkeys (Chlorocebus aethiops sabeus), A111T (L37F) in marmosets (Callithrix jacchus), and C55T (P19S) in squirrel monkeys (Saimiri boliviensis peruviensis). Functional effects on downstream signaling are observed for each of these variants following treatment with the endogenous agonist beta-endorphin and the exogenous agonists morphine, DAMGO ([d-Ala(2), N-Me-Phe(4), Gly(5)-ol]-enkephalin), and fentanyl. In addition to demonstrating the importance of functional equivalency in reference to population variation for minority health, this also shows how common evolutionary pressures have produced similar phenotypes across species, suggesting a shared response to environmental needs and perhaps elucidating the mechanism by which these organism-environment interactions are mediated physiologically and molecularly. These studies set the stage for future investigations of shared functional polymorphisms across species as a new genetic tool for translational research. Phenotypic plasticity is increasingly recognized to facilitate adaptive change in plants and animals, including insects, nematodes, and vertebrates. Plasticity can occur as continuous or discrete (polyphenisms) variation. 
In social insects such as ants, for example, some species have workers of distinct size classes, while in other closely related species size variation may be continuous. Despite the abundance of examples in nature, how discrete morphs are specified currently remains unknown. In theory, polyphenisms might require robustness, whereby the distribution of morphologies would be limited by the same mechanisms that execute buffering from stochastic perturbations, a function attributed to heat-shock proteins of the Hsp90 family. However, this possibility has never been directly tested because plasticity and robustness are considered to represent opposite evolutionary principles. Here, we used a polyphenism of feeding structures in the nematode Pristionchus pacificus, together with geometric morphometrics of 20 mouth-form landmarks, to test the relationship between robustness and plasticity. We show that reducing heat-shock protein activity, which reduces developmental robustness, increases the range of mouth-form morphologies. Specifically, elevated temperature led to a shift within morphospace, pharmacological inhibition of all Hsp90 genes using radicicol treatment increased shape variability in both mouth-forms, and CRISPR/Cas9-induced Ppa-daf-21/Hsp90 knockout had a combined effect. Thus, Hsp90 canalizes the morphologies of plastic traits, resulting in the discrete polyphenism of mouth-forms. HIV significantly affects the immunological environment during tuberculosis coinfection, and therefore may influence the selective landscape upon which M. tuberculosis evolves. To test this hypothesis, whole genome sequences were determined for 169 South African M.
tuberculosis strains from HIV-1 coinfected and uninfected individuals and analyzed using two Bayesian codon-model-based selection analysis approaches: FUBAR, which was used to detect persistent positive and negative selection (selection respectively favoring and disfavoring nonsynonymous substitutions); and MEDS, which was used to detect episodic directional selection specifically favoring nonsynonymous substitutions within HIV-1-infected individuals. Among the 25,251 polymorphic codon sites analyzed, FUBAR revealed that 189-fold more were detectably evolving under persistent negative selection than were evolving under persistent positive selection. Three specific codon sites within the genes celA2b, katG, and cyp138 were identified by MEDS as displaying significant evidence of evolving under directional selection influenced by HIV-1 coinfection. All three genes encode proteins that may indirectly interact with human proteins that, in turn, interact functionally with HIV proteins. Unexpectedly, epitope-encoding regions were enriched for sites displaying weak evidence of directional selection influenced by HIV-1. Although the low degree of genetic diversity observed in our M. tuberculosis data set means that these results should be interpreted carefully, the effects of HIV-1 on epitope evolution in M. tuberculosis may have implications for the design of M. tuberculosis vaccines that are intended for use in populations with high HIV-1 infection rates. Dicentric chromosomes are products of genomic rearrangements that place two centromeres on the same chromosome. Due to the presence of two primary constrictions, they are inherently unstable and overcome their instability by epigenetically inactivating and/or deleting one of the two centromeres, thus resulting in functionally monocentric chromosomes that segregate normally during cell division.
Our understanding to date of dicentric chromosome formation, behavior, and fate has been largely inferred from observational studies in plants and humans as well as artificially produced de novo dicentrics in yeast and in human cells. We investigate the most recent product of a chromosome fusion event fixed in the human lineage, human chromosome 2, whose stability was acquired by the suppression of one centromere, resulting in a unique difference in chromosome number between humans (46 chromosomes) and our most closely related ape relatives (48 chromosomes). Using molecular cytogenetics, sequencing, and comparative sequence data, we deeply characterize the relicts of the chromosome 2q ancestral centromere and its flanking regions, gaining insight into the ancestral organization that can be easily broadened to all acrocentric chromosome centromeres. Moreover, our analyses offered the opportunity to trace the evolutionary history of rDNA and satellite III sequences among great apes, thus suggesting a new hypothesis for the preferential inactivation of some human centromeres, including 2q. Our results suggest two possible centromere inactivation models to explain the evolutionary stabilization of human chromosome 2 over the last 5-6 million years. Our results strongly favor centromere excision through a one-step process. Several authors reported lower frequencies of protein sequence convergence between more distantly related evolutionary lineages and attributed this trend to epistasis, which renders the acceptable amino acids at a site more different and convergence less likely in more divergent lineages. A recent primate study, however, suggested that this trend is at least partially and potentially entirely an artifact of gene tree discordance (GTD).
Here, we demonstrate in a genome-wide data set from 17 mammals that the temporal trend remains (1) after controlling for the level of GTD, (2) in genes whose genealogies are concordant with the species tree, and (3) for convergent changes, which are extremely unlikely to be caused by GTD. Similar results are observed in a comparable data set of 12 fruit flies in some but not all of these tests. We conclude that, at least in some cases, the temporal decline of convergence is genuine, reflecting an impact of epistasis on protein evolution. Plastid sequences are a cornerstone of plant systematic studies, and key aspects of their evolution, such as uniparental inheritance and absent recombination, are often treated as axioms. While exceptions to these assumptions can profoundly influence evolutionary inference, detecting them can require extensive sampling, abundant sequence data, and detailed testing. Using advancements in high-throughput sequencing, we analyzed the whole plastomes of 65 accessions of Picea, a genus of ~35 coniferous forest tree species, to test for deviations from canonical plastome evolution. Using complementary hypothesis and data-driven tests, we found evidence for chimeric plastomes generated by interspecific hybridization and recombination in the clade comprising Norway spruce (P. abies) and 10 other species. Support for interspecific recombination remained after controlling for sequence saturation, positive selection, and potential alignment artifacts. These results reconcile previous conflicting plastid-based phylogenies and strengthen the mounting evidence of reticulate evolution in Picea. Given the relatively high frequency of hybridization and biparental plastid inheritance in plants, we suggest interspecific plastome recombination may be more widespread than currently appreciated and could underlie reported cases of discordant plastid phylogenies.
The placental epigenome plays a vital role in regulating mammalian growth and development. Aberrations in placental DNA methylation are linked to several disease states, including intrauterine growth restriction and preeclampsia. Studying the evolution and development of the placental epigenome is critical to understanding the origin and progression of such diseases. Although high-resolution studies have found substantial variation between placental methylomes of different species, the nature of methylome variation has yet to be characterized within any individual species. We conducted a study of placental DNA methylation at high resolution in multiple strains and closely related species of house mice (Mus musculus musculus, Mus m. domesticus, and M. spretus), across developmental timepoints (embryonic days 15-18), and between two distinct layers (labyrinthine transport and junctional endocrine). We observed substantial genome-wide methylation heterogeneity in mouse placenta compared with other differentiated tissues. Species-specific methylation profiles were concentrated in retrotransposon subfamilies, specifically the RLTR10 and RLTR20 subfamilies. Regulatory regions such as gene promoters and CpG islands displayed cross-species conservation but showed strong differences between layers and developmental timepoints. Partially methylated domains exist in the mouse placenta and widen during development. Taken together, our results characterize the mouse placental methylome as a globally heterogeneous and deregulated landscape, intermixed with actively regulated promoter and retrotransposon sequences. Herpes simplex viruses 1 and 2 (HSV-1 and HSV-2) are seen as close relatives but are also unambiguously considered evolutionarily independent units. Here, we sequenced the genomes of 18 HSV-2 isolates characterized by divergent UL30 gene sequences to further elucidate the evolutionary history of this virus. 
Surprisingly, genome-wide recombination analyses showed that all HSV-2 genomes sequenced to date contain HSV-1 fragments. Using phylogenomic analyses, we could also show that two main HSV-2 lineages exist. One lineage is mostly restricted to sub-Saharan Africa, whereas the other has reached a global distribution. Interestingly, only the worldwide lineage is characterized by ancient recombination events with HSV-1. Our findings highlight the complexity of HSV-2 evolution, a virus of putative zoonotic origin that later recombined with its human-adapted relative. They also suggest that coinfections with HSV-1 and HSV-2 may have genomic and potentially functional consequences and should therefore be monitored more closely. The composition and structure of fleece variation observed in mammals are a consequence of strong selective pressure for fiber production after domestication. In sheep, fleece variation discriminates ancestral species carrying a long and hairy fleece from modern domestic sheep (Ovis aries) bearing a short and woolly fleece. Here, we report that the "woolly" allele results from the insertion of an antisense EIF2S2 retrogene (called asEIF2S2) into the 3' UTR of the IRF2BP2 gene, leading to an abnormal IRF2BP2 transcript. We provide evidence that this chimeric IRF2BP2/asEIF2S2 messenger RNA 1) targets the genuine sense EIF2S2 RNA and 2) creates a long endogenous double-stranded RNA which alters the expression of both EIF2S2 and IRF2BP2 mRNA. This represents a unique example of a phenotype arising via an RNA-RNA hybrid, itself generated through a retroposition mechanism. Our results bring new insights into sheep population history through the identification of the molecular origin of an evolutionary phenotypic variation. Although intratumor diversity driven by selection has been the prevailing view in cancer biology, recent population genetic analyses have been unable to reject the neutral interpretation. 
As the power to reject neutrality in tumors is often low, it is desirable to have an alternative means of testing selection directly. Here, we utilize gene expression data as a surrogate for functional significance in intra- and intertumor comparisons. The expression divergence between samples known to be driven by selection (e.g., between tumor and normal tissues) is always higher than the divergence between normal samples, which should be close to the neutral level of divergence. In contrast, the expression differentiation between regions of the same tumor, being lower than the neutral divergence, is incompatible with the hypothesis of selectively driven divergence. To further test the hypothesis of neutral evolution, we select a hepatocellular carcinoma tumor that has large intratumor SNV and CNV (single nucleotide variation and copy number variation, respectively) diversity. This tumor enables us to calibrate the level of expression divergence against that of genetic divergence. We observe that intratumor divergence in gene expression profile lags far behind genetic divergence, indicating insufficient phenotypic differences for selection to operate. All these expression analyses corroborate that natural selection does not operate effectively within tumors, supporting recent interpretations of within-tumor diversity. As the expected level of genetic diversity, and hence the potential for drug resistance, would be much higher under neutrality than under selection, the issue is of both theoretical and clinical significance. Insects with restricted diets rely on symbiotic bacteria to provide essential metabolites missing from their diet. The blood-sucking lice are obligate, host-specific parasites of mammals and are themselves host to symbiotic bacteria. In human lice, these bacterial symbionts supply the lice with B-vitamins. 
Here, we sequenced the genomes of the symbiotic and heritable bacteria of human, chimpanzee, gorilla, and monkey lice and used phylogenomics to investigate their evolutionary relationships. We find that these symbionts have a phylogenetic history reflecting the louse phylogeny, a finding contrary to previous reports of symbiont replacement. Examination of the highly reduced symbiont genomes (0.53-0.57 Mb) reveals that much of each genome is dedicated to vitamin synthesis. This holds even in the smallest symbiont genome, one that appears to have been reorganized. Specifically, symbionts from human lice, chimpanzee lice, and gorilla lice carry a small plasmid that encodes synthesis of vitamin B5, a vitamin critical to the bacteria-louse symbiosis. This plasmid is absent in an Old World monkey louse symbiont, where this pathway is located on its primary chromosome. This suggests that the unique genomic configuration brought about by the plasmid is not essential for symbiosis, but once obtained, it has persisted for up to 25 My. We also find evidence that human, chimpanzee, and gorilla louse endosymbionts have lost a pathway for synthesis of vitamin B1, whereas the monkey louse symbiont has retained this pathway. It is unclear whether these changes are adaptive, but they may point to evolutionary responses of louse symbionts to shifts in primate biology. Many bacteria, including the model bacterium Escherichia coli, can survive for years within spent media following resource exhaustion. We carried out evolutionary experiments, followed by whole-genome sequencing of hundreds of evolved clones, to study the dynamics by which E. coli adapts during the first 4 months of survival under resource exhaustion. Our results reveal that bacteria evolving under resource exhaustion are subject to intense selection, manifesting in rapid mutation accumulation, enrichment of functional mutation categories, and extremely convergent adaptation. 
In the most striking example of convergent adaptation, we found that across five independent populations, adaptation to conditions of resource exhaustion occurs through mutations at the same three specific positions of the RNA polymerase core enzyme. Mutations at these three sites are strongly antagonistically pleiotropic, in that they sharply reduce exponential growth rates in fresh media. Such antagonistically pleiotropic mutations, combined with the accumulation of additional mutations, severely reduce the ability of bacteria surviving under resource exhaustion to grow exponentially in fresh media. We further demonstrate that the three positions at which these resource-exhaustion mutations occur are conserved for the ancestral E. coli allele across bacterial phyla, with the exception of nonculturable bacteria that carry the resource-exhaustion allele at one of these positions at very high frequencies. Finally, our results demonstrate that adaptation to resource exhaustion is not limited by mutational input and that bacteria are able to rapidly adapt under resource exhaustion in a temporally precise manner through allele frequency fluctuations. Mutation is the ultimate source of genetic variation, and knowledge of mutation rates is fundamental to our understanding of all evolutionary processes. High-throughput sequencing of mutation accumulation lines has provided genome-wide spontaneous mutation rates in a dozen model species, but estimates from nonmodel organisms across much of the diversity of life remain very limited. Here, we report mutation rates in four haploid marine bacterial-sized photosynthetic eukaryotic algae: Bathycoccus prasinos, Ostreococcus tauri, Ostreococcus mediterraneus, and Micromonas pusilla. The spontaneous mutation rate between species varies from μ = 4.4 × 10⁻¹⁰ to 9.8 × 10⁻¹⁰ mutations per nucleotide per generation. 
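The per-site rates reported here follow from the standard mutation-accumulation (MA) calculation: observed mutations divided by the number of lines, callable sites, and generations. The sketch below illustrates that arithmetic, plus the textbook equilibrium GC content implied by opposing mutational biases; all input numbers are invented for illustration and are not taken from the study.

```python
# Hedged sketch of the standard MA-line mutation rate estimate.
# Illustrative numbers only; not data from the algae study.

def per_site_rate(total_mutations, n_lines, callable_sites, generations):
    """mu = observed mutations / (lines * callable sites * generations)."""
    return total_mutations / (n_lines * callable_sites * generations)

def equilibrium_gc(at_to_gc, gc_to_at):
    """Equilibrium GC content under mutation pressure alone:
    GC* = v / (u + v), where v = AT->GC rate and u = GC->AT rate."""
    return at_to_gc / (at_to_gc + gc_to_at)

# e.g., 40 mutations observed across 20 MA lines of a hypothetical
# 13 Mb genome propagated for 350 generations each:
mu = per_site_rate(40, 20, 13e6, 350)
print(f"mu = {mu:.2e} per site per generation")  # on the order of 1e-10

# A GC->AT mutational bias (u > v) pulls equilibrium GC below 50%:
print(f"GC* = {equilibrium_gc(1.0, 2.0):.2f}")
```

A genome whose observed GC content sits far from GC* is, by this logic, still moving under mutational pressure, which is one way the abstract's link between GC deviation and rate variation can be read.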
Within genomes, there is a twofold increase in the mutation rate in intergenic regions, consistent with an optimization of mismatch and transcription-coupled DNA repair in coding sequences. Additionally, we show that deviation from the equilibrium GC content increases the mutation rate by ~2% to ~12% because of a GC bias in coding sequences. More generally, the difference between the observed and equilibrium GC content of genomes explains some of the interspecific variation in mutation rates. Kin selection is thought to drive the evolution of cooperation and conflict, but the specific genes and genome-wide patterns shaped by kin selection are unknown. We identified thousands of genes associated with the sterile ant worker caste, the archetype of an altruistic phenotype shaped by kin selection, and then used population and comparative genomic approaches to study patterns of molecular evolution at these genes. Consistent with population genetic theoretical predictions, worker-upregulated genes experienced reduced selection compared with genes upregulated in reproductive castes. Worker-upregulated genes included more taxonomically restricted genes, indicating that the worker caste has recruited more novel genes, yet these genes also experienced reduced selection. Our study identifies a putative genomic signature of kin selection and helps to integrate emerging sociogenomic data with longstanding social evolution theory. The human genome is dominated by large tracts of DNA with extensive biochemical activity but no known function. In particular, it is well established that transcriptional activities are not restricted to known genes. However, whether this intergenic transcription represents activity with functional significance or noise is under debate, highlighting the need for an effective method of defining functional genomic regions. 
Moreover, these discoveries raise the question of whether genomic regions can be defined as functional based solely on the presence of biochemical activities, without considering evolutionary (conservation) and genetic (effects of mutations) evidence. Here, computational models integrating genetic, evolutionary, and biochemical evidence are established that provide reliable predictions of human protein-coding and RNA genes. Importantly, in addition to sequence conservation, biochemical features allow accurate predictions of genic sequences with phenotypic evidence under strong purifying selection, suggesting that they can be used as an alternative measure of selection. Moreover, 18.5% of annotated noncoding RNAs exhibit higher degrees of similarity to phenotype genes and, thus, are likely functional. However, 64.5% of noncoding RNAs appear to belong to a sequence class of their own, and the remaining 17% are more similar to pseudogenes and random intergenic sequences that may represent noisy transcription. With the advent of low-cost, high-throughput genome sequencing technology, population genomic data sets are being generated for hundreds of species of pathogenic, industrial, and agricultural importance. The challenge is how best to analyze and visually display these complex data sets to yield intuitive representations capable of capturing complex evolutionary relationships. Here we present PopNet, a novel computational method that identifies regions of shared ancestry in the chromosomes of related strains through clustering patterns of genetic variation. These relationships are subsequently visualized within a network by a novel implementation of chromosome painting. We apply PopNet to three diverse populations that feature differential rates of recombination and demonstrate its ability to capture evolutionary relationships as well as associate traits with specific loci. 
Compared with existing tools, PopNet provides substantial advances, both by removing the need to predefine a single reference genome, which can bias interpretation of population structure, and through its ability to visualize multiple evolutionary relationships, such as recombination events and shared ancestry, across hundreds of strains. Evolutionary information on species divergence times is fundamental to studies of biodiversity, development, and disease. Molecular dating has enhanced our understanding of the temporal patterns of species divergences over the last five decades, and the number of studies is increasing quickly due to exponential growth in the available collection of molecular sequences from diverse species and a large number of genes. Our TimeTree resource is a public knowledge base with the primary focus of making all species divergence times derived using molecular sequence data available to scientists, educators, and the general public in a consistent and accessible format. Here, we report a major expansion of the TimeTree resource, which more than triples the number of species (>97,000) and the number of studies assembled (>3,000). Furthermore, scientists can access not only the divergence time between two species or higher taxa, but also a timetree of a group of species and a timeline that traces a species' evolution through time. The new timetree and timeline visualizations are integrated with displays of events in Earth and environmental history over geological time, which will lead to a broader and better understanding of the interplay between changes in the biosphere and the diversity of species on Earth. The next-generation TimeTree resource is publicly available online at http://www.timetree.org. 
Background: There is a distinct gap between theory and practice with respect to research use in clinical practice, particularly in critical care units, that could be related to the presence of a number of barriers that hinder the use of research findings. Aims: The aims of the study were to identify barriers and facilitators to research use as perceived by Jordanian nurses in critical care units and to examine the predictors of research use among those nurses. Methods: The study used a cross-sectional, correlational design. The self-administered "Barriers Scale" was introduced to 200 registered critical care nurses, using the drop-and-collect technique, between October and November 2015. Results: The results revealed that "nurse does not have time to read research at work" was the top-ranked barrier hindering research use (mean [SD], 3.45 [0.79]). The first 7 ranked barriers were related to the organizational subscale. Managerial support was the top perceived facilitator for research use. Only "attending special training courses in nursing research" was a significant predictor of research use and explained 59.1% of the variance in research use, t(190) = -3.93, P = .003. The most identified barriers to research use revealed by the qualitative data include dominant routine nursing tasks, the existence of a gap between theory and practice, a shortage of nursing staff, and a negative public image of the nursing profession. Participants suggested the importance of increasing organizational support and creating an organizational research culture to further promote research use in clinical nursing practice. Conclusions: Research use has not yet been widely implemented in Jordan because of various barriers. The organization-related barriers were the most influential. Factors hindering research use are multidimensional, and optimizing them should be a shared responsibility of nurse managers, researchers, clinicians, and academicians. 
Further initiatives are required to raise awareness of the importance of using evidence-based practice. Critical care environments are known for provoking anxiety, pain, and sleeplessness. Often, these symptoms are attributed to patients' underlying physiological conditions; life-sustaining or life-prolonging treatments such as ventilators, invasive procedures, tubes, and monitoring lines; and the noise and fast-paced technological nature of the critical care environment. This, in turn, possibly increases length of stay and morbidity and challenges the recovery and healing of critically ill patients. Complementary therapies can be used as adjunctive therapies alongside pharmacological interventions and modalities. One complementary therapy with promise in critical care for improving symptoms of anxiety, pain, and sleeplessness is music. A review of current literature from Ovid MEDLINE, Cumulative Index to Nursing and Allied Health Literature, and PubMed was conducted to examine the evidence for the use of this complementary therapy in critical care settings. This review presents the evidence on the effectiveness of music for the symptom management of anxiety, pain, and insomnia in critically ill adult patients. The evidence from this review supports the use of music in the symptom management of pain, insomnia, and anxiety in critically ill patients. This review provides practice recommendations, generates dialog, and promotes future research. This review is part I of a 2-part series that focuses on evidence for the use of music, aromatherapy, and guided imagery for improving anxiety, pain, and sleeplessness in critically ill patients. The persistent concern with achieving rigor in qualitative research still raises issues, even now in the 21st century. There is also a continuing debate about the analogous terms reliability and validity in naturalistic inquiries as opposed to quantitative investigations. 
This article presents the concept of rigor in qualitative research, using a phenomenological study as an exemplar to further illustrate the process. Elaborating on epistemological and theoretical conceptualizations by Lincoln and Guba, strategies congruent with a qualitative perspective for ensuring validity to establish the credibility of the study are described. A synthesis of the historical development of validity criteria evident in the literature over the years is explored. Recommendations are made for use of the term rigor instead of trustworthiness and for the reconceptualization and renewed use of the concepts of reliability and validity in qualitative research; that strategies for ensuring rigor must be built into the qualitative research process rather than evaluated only after the inquiry; and that qualitative researchers and students alike must be proactive and take responsibility for ensuring the rigor of a research study. The insights garnered here will move novice researchers and doctoral students toward a better conceptual grasp of the complexity of reliability and validity and their ramifications for qualitative inquiry. Background: Promoting a culture of evidence-based practice within a health care facility is a priority for health care leaders and nursing professionals; however, tangible methods to promote translation of evidence to bedside practice are lacking. Objectives: The purpose of this quality improvement project was to design and implement a nursing education intervention demonstrating to the bedside nurse how current evidence-based guidelines are used when creating standardized stroke order sets at a primary stroke center, thereby increasing confidence in the use of standardized order sets at the point of care and supporting an evidence-based culture within the health care facility. Methods: This educational intervention took place at a 286-bed community hospital certified by the Joint Commission as a primary stroke center. 
Bedside registered nurse (RN) staff from 4 units received a poster presentation linking the American Heart Association's and American Stroke Association's current evidence-based clinical practice guidelines to standardized stroke order sets and bedside nursing care. The 90-second oral poster presentation was delivered by a graduate nursing student during preshift huddle. The poster and supplemental materials remained in the unit break room for 1 week for RN viewing. After the pilot unit, a PDF of the poster was also delivered via an e-mail attachment to all RNs on the participating unit. A preintervention online survey measured nurses' self-perceived likelihood of performing an ordered intervention based on whether they were confident the order was evidence based. The preintervention survey also measured nurses' self-reported confidence in their ability to explain how the standardized order sets are derived from current evidence. The postintervention online survey again measured nurses' self-reported confidence level. However, the postintervention survey was modified midway through data collection, allowing the final 20 survey respondents to retrospectively rate their confidence before and after the educational intervention. This modification ensured that the responses for each individual participant in this group were matched. Results: Registered nurses reported a significant increase in perceived confidence in their ability to explain how standardized stroke order sets reflect current evidence after the intervention (n = 20, P < .001). This sample was matched for each individual respondent. No significant change was shown in unmatched group mean self-reported confidence ratings overall after the intervention or separately by unit for the progressive care unit, critical care unit, or intensive care unit (n = 89 preintervention, n = 43 postintervention). 
However, the emergency department demonstrated a significant increase in group mean perceived confidence scores (n = 20 preintervention, n = 11 postintervention, P = .020). Registered nurses reported a significantly higher self-perceived likelihood of performing an ordered nursing intervention when they were confident that the order was evidence based compared with when they were unsure the order was evidence based (n = 88, P < .001). Discussion: This nurse education strategy increased RNs' confidence in their ability to explain the path from evidence to bedside nursing care by demonstrating how evidence-based clinical practice guidelines provide the current evidence used to create standardized order sets. Although further evaluation of the intervention's effectiveness is needed, this educational intervention has the potential for generalization to different types of standardized order sets to increase nurse confidence in the utilization of evidence-based practice. Background: Critical-care nurses (CCNs) provide end-of-life (EOL) care on a daily basis, as 1 in 5 patients dies while in intensive care units. Critical-care nurses overcome many obstacles to perform quality EOL care for dying patients. Objectives: The purposes of this study were to collect CCNs' current suggestions for improving EOL care and to determine whether EOL care obstacles have changed by comparing results to data gathered in 1998. Methods: A 72-item questionnaire regarding EOL care perceptions was mailed to a national, geographically dispersed, random sample of 2000 members of the American Association of Critical-Care Nurses. One of 3 qualitative questions asked CCNs for suggestions to improve EOL care. Comparative obstacle size (quantitative) data were previously published. Results: Of the 509 returned questionnaires, 322 (63.3%) contained 385 written suggestions for improving EOL care. 
Major themes identified were ensuring the characteristics of a good death, improving physician communication with patients and families, adjusting nurse-to-patient ratios to 1:1, recognizing and avoiding futile care, increasing EOL education, having physicians who are present and "on the same page," not allowing families to override patients' wishes, and the need for more support staff. When compared with data gathered 17 years previously, major themes remained the same but in a few cases changed in order and possible causation. Conclusion: Critical-care nurses' suggestions were similar to the recommendations from 17 years ago. Although the order of importance changed minimally, the number of similar themes indicated that obstacles to providing EOL care to dying intensive care unit patients continue to exist over time. BACKGROUND: Many students do not receive return to learn (RTL) services upon return to academics following a concussion. METHODS: Using a mixed-methods approach, we conducted a survey of RTL practices and experiences in Washington State schools between January 2015 and June 2015. We then held a statewide summit of RTL stakeholders and used a modified Delphi process to develop a consensus-based RTL implementation model and process. RESULTS: Survey participants included 83 educators, 57 school nurses, 14 administrators, and 30 parents, representing 144 schools in rural and urban areas. Unmet need domains and recommendations identified were (1) a current lack of school policies; (2) barriers to providing or receiving accommodations; (3) wide variability in communication patterns; and (4) recommendations shared by all stakeholder groups (including a desire for readily available best practices, development of a formal school RTL policy for easy adoption, and more training). Using stakeholder input from RTL summit participants and survey responses, we developed an RTL implementation model and checklist for RTL guideline adoption. 
CONCLUSIONS: Washington State children have unmet needs upon returning to public schools after concussion. The student-centered RTL model and checklist for implementing RTL guidelines can help schools provide timely RTL services following concussion. BACKGROUND: The National School Breakfast Program (SBP) is a federally funded program that allows states to offer nutritious breakfast to K-12 students. However, rates of SBP participation are low in some rural states, and the reasons are not well understood. The purpose of the study was to explore administrators' perceptions, attitudes, and beliefs related to the SBP, and the factors they identify as barriers or facilitators to increased participation. METHODS: Data were collected from a cross-sectional, online survey of K-12 school administrators (N = 152) in a rural, midsized Midwestern state, fielded over an academic year. Descriptive statistics were calculated, and open-ended questions were coded and analyzed for relevant themes. RESULTS: Administrators identified busing schedules, time constraints, and a lack of flexibility within the school schedule to accommodate breakfast as primary structural barriers to SBP participation. Administrators described family-centered norms as reasons for low participation in rural areas. Administrators are at varying stages of readiness to work on improving participation. CONCLUSIONS: Low SBP participation can be explained in part by a convergence of factors related to access, community norms, and structural barriers. Results may be used to inform ways in which administrators at the state, district, and school levels can work to increase participation. BACKGROUND: Previous research on associations between early sexual debut and other health risk behaviors has not examined the effect of forced sexual intercourse on those associations. 
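The study that follows reports adjusted prevalence ratios (APRs), which come from regression models adjusting for covariates such as forced sexual intercourse and race/ethnicity. As a hedged, minimal illustration of the underlying quantity only, the sketch below computes a crude (unadjusted) prevalence ratio; the counts are invented and do not come from the study.

```python
# Hedged sketch: a crude prevalence ratio (PR), the unadjusted analogue of
# the APRs reported in the study below. Counts are invented for illustration.

def prevalence_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """PR = prevalence among the exposed / prevalence among the unexposed."""
    p_exposed = cases_exposed / n_exposed
    p_unexposed = cases_unexposed / n_unexposed
    return p_exposed / p_unexposed

# e.g., 60 of 120 early-debut students vs 150 of 1,200 later-debut students
# reporting a given risk behavior (hypothetical numbers):
pr = prevalence_ratio(60, 120, 150, 1200)
print(f"crude PR = {pr:.2f}")  # 0.50 / 0.125 = 4.00
```

An adjusted PR replaces this ratio of raw proportions with model-based prevalences, which is why the study's APRs can differ from any crude calculation.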
METHODS: We analyzed data from a nationally representative sample of 19,240 high school students in the United States, aged ≥16 years, to describe the effect of forced sexual intercourse on associations between early sexual debut and other health risk behaviors using adjusted prevalence ratios (APRs). RESULTS: Early sexual debut and forced sexual intercourse were simultaneously and independently associated with sexual risk-taking, violence-related behaviors, and substance use. For example, even after controlling for forced sexual intercourse and race/ethnicity, students who experienced their first sexual intercourse before age 13 years were more likely than students who initiated sexual intercourse at age ≥16 years to have had ≥4 sexual partners during their lifetime (girls, APR = 4.55; boys, APR = 5.82) and to have not used a condom at last sexual intercourse (girls, APR = 1.74; boys, APR = 1.47). CONCLUSIONS: Associations between early sexual debut and other health risk behaviors occur independently of forced sexual intercourse. School-based sexual health education programs might appropriately include strategies that encourage delay of initiation of sexual intercourse and coordinate with violence and substance use prevention programs. BACKGROUND: We assessed the associations of 5 school and 7 neighborhood variables with fifth-grade students achieving Healthy Fitness Zone (HFZ) or Needs Improvement-Health Risk (NI-HR) on the aerobic capacity (AC) and body composition (BC) physical fitness components of the state-mandated FITNESSGRAM® physical fitness test. METHODS: Data for outcome (physical fitness) and predictor (school and neighborhood) variables were extracted from various databases (eg, Data Quest, Walk Score®) for 160 schools located in San Diego, California. Predictor variables that were at least moderately correlated (|r| ≥ .30) with ≥1 outcome variable in univariate analyses were retained for ordinary least squares regression analyses. 
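The two-step screening just described (keep predictors with |r| ≥ .30 against an outcome, then fit ordinary least squares) can be sketched in pure Python. This is a single-predictor illustration with invented numbers, not the study's multi-predictor analysis.

```python
# Hedged sketch of correlation screening followed by OLS, single predictor.
# Data values are invented for illustration.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def ols_fit(x, y):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

x = [10, 20, 30, 40, 50]   # e.g., % free/reduced-price lunch (hypothetical)
y = [80, 72, 65, 55, 50]   # e.g., % of students in the fitness zone (hypothetical)

r = pearson_r(x, y)
if abs(r) >= 0.30:         # the screening threshold used in the study
    slope, intercept = ols_fit(x, y)
    print(f"r = {r:.2f}, slope = {slope:.2f}")  # strong negative association
```

The threshold screen simply drops weakly correlated predictors before the regression; with many predictors, the retained set would enter a multiple-regression fit instead of the single-variable version shown here.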
RESULTS: The mean percentages of students achieving the HFZ for AC (65.7%) and BC (63.5%) were similar (t = 1.13, p = .26), while those for the NI-HR zones were significantly different (AC = 6.0% vs BC = 18.6%; t = 12.60, p < .001). Correlations were greater in magnitude for school than for neighborhood demographics and stronger for BC than for AC. The school variables free/reduced-price lunch (negative) and math achievement (positive) predicted fitness scores. Among neighborhood variables, percent Hispanic predicted failure to meet the HFZ BC criterion. CONCLUSIONS: Creating school and neighborhood environments conducive to promoting physical activity and improving fitness is warranted. BACKGROUND: We analyze trends in bullying victimization prevalence in a representative sample of Spanish adolescent schoolchildren in 2006, 2010, and 2014. METHODS: We distinguish between reported bullying, which is assessed via the global question in the Revised Bully/Victim Questionnaire by Olweus, and observed bullying, which is a measure developed from the answers that the adolescents gave to specific items referring to different types of bullying, codified as physical, verbal, and relational bullying. RESULTS: Comparing 2006 with 2010/2014, the results show stability in the assessment of reported bullying and an increase in observed bullying, analyzed both globally and within the 3 categories: physical, verbal, and relational. CONCLUSIONS: A valid, reliable, and accurate measure to detect cases of bullying is necessary, as is continuing the efforts devoted to raising awareness and preventing this phenomenon. BACKGROUND: Schools may be an effective avenue for interventions that prevent childhood obesity. I am Moving I am Learning/Choosy Kids© (IMIL/CK) is a curriculum recommended by Head Start (HS) for education in nutrition, physical activity, and healthy lifestyle habits. 
METHODS: We formed an academic-community partnership (ACP), the Springfield Collaborative for Active Child Health, to promote prevention of childhood obesity, in part, to implement the IMIL/CK curriculum in local HS sites. The ACP included a medical school, HS program, public school district, and state health department. RESULTS: Community-based participatory research principles helped identify and organize important implementation activities: community engagement, curriculum support, professional teacher training, and evaluation. IMIL/CK was piloted in 1 school then implemented in all local HS sites. All sites were engaged in IMIL/CK professional teacher training, classroom curriculum delivery, and child physical activity assessments. Local HS policy changed to include IMIL/CK in lesson plans and additional avenues of collaboration were initiated. Furthermore, improvements in physical activity and/or maintenance or improvement of healthy weight prevalence were seen in 4 of the 5 years evaluated. CONCLUSIONS: An ACP is an effective vehicle to implement and evaluate childhood obesity prevention programming in HS sites. BACKGROUND: Gay-Straight Alliances (GSAs) are school-based clubs that can contribute to a healthy school climate for lesbian, gay, bisexual, transgender, and questioning (LGBTQ) youth. While positive associations between health behaviors and GSAs have been documented, less is known about how youth perceive GSAs. METHODS: A total of 58 LGBTQ youth (14-19 years old) mentioned GSAs during go-along interviews in 3 states/provinces in North America. These 446 comments about GSAs were thematically coded and organized using Atlas.ti software by a multidisciplinary research team. RESULTS: A total of 3 themes describe youth-perceived attributes of GSAs. First, youth identified GSAs as an opportunity to be members of a community, evidenced by their sense of emotional connection, support and belonging, opportunities for leadership, and fulfillment of needs.
Second, GSAs served as a gateway to resources outside of the GSA, such as supportive adults and informal social locations. Third, GSAs represented safety. CONCLUSIONS: GSAs positively influence the physical, social, emotional, and academic well-being of LGBTQ young people and their allies. School administrators and staff are positioned to advocate for comprehensive GSAs. Study findings offer insights about the mechanisms by which GSAs benefit youth health and well-being. BACKGROUND: To improve school store food environments, the South Korean government implemented 2 policies restricting unhealthy food sales in school stores. A food-based policy enacted in 2007 restricts specific food sales (soft drinks), and a nutrient-based policy enacted in 2009 restricts energy-dense and nutrient-poor (EDNP) food sales. The purpose of the study was to assess how the 2 policies have changed the school store food environment. METHODS: Foods sold in school stores in Seoul, South Korea were observed before (2006, 15 stores) and after (2013, 12 stores) implementation of the school store policies. Food availability in school stores in 2006 and 2013 was compared and EDNP food availability in 2013 was examined. RESULTS: When controlling for the total number of foods sold in school stores and school characteristics, the mean number of soft drinks sold in a school store in 2013 (0.3 items) was significantly lower than in 2006 (1.9 items, p = .032). Soft drinks were still available in 50% of school stores observed in 2013, and all school stores were selling EDNP foods in 2013. CONCLUSIONS: South Korean policies have had a modest influence on availability of unhealthy school store foods. Alternative strategies to improve school store food environments are needed. BACKGROUND: Computerized surveys present many advantages over paper surveys. However, school-based adolescent research questionnaires still mainly rely on paper-and-pencil surveys as access to computers in schools is often not practical.
Tablet-assisted self-interviews (TASI) present a possible solution, but their use is largely untested. This paper presents a method for and our experiences with implementing a TASI in a school setting. METHODS: A TASI was administered to 3907 middle and high school students from 79 schools. The survey assessed use of tobacco products and exposure to tobacco marketing. To assess in-depth tobacco use behaviors, the TASI employed extensive skip patterns to reduce the number of not-applicable questions that nontobacco users received. Pictures were added to help respondents identify the tobacco products they were being queried about. RESULTS: Students were receptive to the tablets and required no instructions in their use. None were lost, stolen, or broken. Item nonresponse (unanswered questions) was a pre-administration concern; however, 92% of participants answered 96% or more of the questions. CONCLUSIONS: This method was feasible and successful among a diverse population of students and schools. It generated a unique dataset of in-depth tobacco use behaviors that would not have been possible through a paper-and-pencil survey. BACKGROUND: This process study is a companion to a randomized evaluation of a school-based, peer-led comprehensive sexual health education program, Teen Prevention Education Program (Teen PEP), in which 11th- and 12th-grade students are trained by school health educators to conduct informative workshops with ninth-grade peers in schools in North Carolina. The process study was designed to understand youth participants' perspectives on the program in order to gain insight into program effectiveness. METHODS: This is a mixed-methods study in 7 schools, with online surveys (N = 88) and 8 focus groups with peer educators (N = 116), end-of-program surveys (N = 1122), 8 focus groups with ninth-grade workshop participants (N = 89), and observations of the Teen PEP class and workshops during the semester of implementation in each school, 2012-2014.
RESULTS: Both peer educators and ninth graders perceived benefits of participating in Teen PEP across a range of domains, including intentions, skills, and knowledge, and that the peer education modality was important in their valuation of the experience. CONCLUSIONS: Our findings suggest that the peer-led comprehensive sexual health education approach embodied in Teen PEP can be an important educational mechanism for teaching students information and skills to promote sexual health. BACKGROUND: Project Connect is a national program to build partnerships among public health agencies and domestic violence services to improve the health care sector response to partner and sexual violence. Pennsylvania piloted the first school nurse-delivered adolescent relationship abuse intervention in the certified school nurses' office setting. The purpose of this study was to assess the feasibility of implementing this prevention intervention. METHODS: In 5 schools in Pennsylvania, school nurses completed a survey before and 1 year after receiving training on implementing the intervention, as well as a phone interview. Students seeking care at the nurses' offices completed a brief anonymous feedback survey after their nurse visit. RESULTS: The school nurses adopted the intervention readily, finding ways to incorporate healthy relationship discussions into interactions with students. School nurses and students found the intervention to be acceptable. Students were positive in their feedback. Barriers included difficulty with school buy-in and finding time and private spaces to deliver the intervention. CONCLUSIONS: A school nurse healthy relationships intervention was feasible to implement and acceptable to the students as well as the implementing nurses. While challenges arose with the initial uptake of the program, school nurses identified strategies to achieve school and student support for this intervention.
BACKGROUND: We examined longitudinal changes in children's physical activity during the school day, afterschool, and evening across fifth, sixth, and seventh grades. METHODS: The analytical sample included children who had valid accelerometer data in fifth grade and at least one other time-point, and provided complete sociodemographic information (N = 768, 751, and 612 for the 3 time-periods studied). Accelerometer-derived total physical activity (TPA) and moderate-to-vigorous physical activity (MVPA) were expressed in minutes per hour for the school day (approximately 7:45 am to 3:30 pm), afterschool (approximately 2:25 to 6:00 pm), and evening (6:00 to 10:00 pm) periods. We used growth curve analyses to examine changes in TPA and MVPA. RESULTS: School day TPA and MVPA declined significantly; we observed a greater decrease from fifth to sixth grades than from sixth to seventh grades. Afterschool TPA declined significantly, but MVPA increased significantly among girls and remained stable for boys. Evening TPA decreased significantly and MVPA declined significantly in girls and remained stable among boys. CONCLUSIONS: To inform the development of effective intervention strategies, research should focus on examining factors associated with the decline in physical activity during the transition from elementary to middle school, particularly during the hours when children are in school. BACKGROUND: Schools are an important setting for improving behaviors associated with obesity, including physical activity. However, within schools there is often a tension between spending time on activities promoting academic achievement and those promoting physical activity. METHODS: A community-based intervention provided administrators and teachers with training on evidence-based public health and then collaborated with them to identify and implement environmental (walking track) and local school policy interventions (brain breaks).
The evaluation included conducting in-depth interviews and SOPLAY observations to assess the facilitators, barriers, and impact of the dissemination of environmental and policy changes. RESULTS: Individual, organizational, intervention, and contextual factors influenced dissemination. Teachers reported that brain breaks increased student focus and engagement with classroom material and decreased student behavioral problems. Students decreased sedentary behavior and increased vigorous behavior. Of the 4 schools, 2 increased walking. CONCLUSIONS: Active dissemination of environmental and policy interventions by engaging school administrators and teachers in planning and implementation shows potential for increasing physical activity in rural school settings. BACKGROUND: School closure is one of the primary measures considered during severe influenza pandemics and other emergencies. However, prolonged school closures may cause unintended adverse consequences to schools, students, and their families. A better understanding of these consequences will inform prepandemic planning, and help public health and education authorities in making informed decisions when considering school closures. METHODS: We conducted a household survey and interviewed school officials following an 8-day long closure of a school district in rural Illinois. We described household responses regarding difficulties of school closure, and summarized main themes from school official interviews. RESULTS: A total of 208 (27%) household surveys were completed and returned. This school closure caused difficulties to 36 (17%) households; uncertain duration of closure, childcare arrangements, and lost pay were the most often reported difficulties. Having 1 adult in the household losing pay and household income below $25,000 were significantly associated with overall difficulty during this school closure. Concern about student health and safety was the most frequent theme in school administrator interviews.
CONCLUSIONS: Whereas the majority of responding households did not report difficulties during this school closure, households with 1 adult losing pay during the closure reported incurring additional expenses for childcare. Background: The denervated hippocampus provides a proper microenvironment for the survival and neuronal differentiation of neural progenitors. While thousands of lncRNAs have been identified, only a few lncRNAs that regulate neurogenesis in the hippocampus have been reported. The present study aimed to perform microarray expression profiling to identify long noncoding RNAs (lncRNAs) that might participate in hippocampal neurogenesis, and to investigate the potential roles of the identified lncRNAs in hippocampal neurogenesis. Results: In this study, the profiling suggested that 74 activated and 29 repressed (|log fold-change| > 1.5) lncRNAs were differentially expressed between the denervated and the normal hippocampi. Furthermore, differentially expressed lncRNAs associated with neurogenesis were found. According to the tissue-specific expression profiles, a novel lncRNA (lncRNA2393) was identified as a neural regulator in the hippocampus in this study. The expression of lncRNA2393 was activated in the denervated hippocampus. FISH showed that lncRNA2393 was specifically present in the sub-granular zone of the dentate gyrus in the hippocampus and in the cytoplasm of neural stem cells (NSCs). Knockdown of lncRNA2393 depleted EdU-positive NSCs. Besides, the increased expression of lncRNA2393 was found to be triggered by the change in the microenvironment. Conclusion: We concluded that lncRNA expression changes occur in the microenvironment of the denervated hippocampus, which promotes hippocampal neurogenesis. The identified lncRNA2393 is expressed in neural stem cells located in the sub-granular zone of the dentate gyrus and can promote NSC proliferation in vitro.
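The differential-expression screen described above (call an lncRNA activated or repressed when |log fold-change| between denervated and normal hippocampus exceeds 1.5) can be sketched as follows. Probe names and expression values below are hypothetical illustrations, not the study's data.

```python
# Minimal sketch of a log2 fold-change screen for differentially expressed
# lncRNAs: "activated" if log2 FC > 1.5, "repressed" if log2 FC < -1.5.
# Expression values are invented for illustration only.
from math import log2

def classify(expr_denervated, expr_normal, cutoff=1.5):
    """Split probes into activated / repressed lists by log2 fold-change."""
    activated, repressed = [], []
    for probe in expr_denervated:
        lfc = log2(expr_denervated[probe] / expr_normal[probe])
        if lfc > cutoff:
            activated.append(probe)
        elif lfc < -cutoff:
            repressed.append(probe)
    return activated, repressed

# Hypothetical normalized microarray intensities.
denervated = {"lncRNA2393": 96.0, "lncRNA_A": 10.0, "lncRNA_B": 50.0}
normal     = {"lncRNA2393": 12.0, "lncRNA_A": 40.0, "lncRNA_B": 48.0}

print(classify(denervated, normal))  # (['lncRNA2393'], ['lncRNA_A'])
```

A real microarray analysis would also apply a statistical test and multiple-testing correction on top of the fold-change cutoff.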
The question remains exactly which part of the denervated hippocampus induced the expression of lncRNA2393. Further studies should aim to explore the exact molecular mechanism behind the expression of lncRNA2393 in the hippocampus, to lay the foundation for the clinical application of NSCs in treating diseases of the central nervous system. PDZ-binding kinase (PBK/TOPK) acts as an oncogene in various cancers and correlates with drug response. However, few studies have examined the expression and roles of PBK in colorectal cancer (CRC). In this study, we found a significant increase in the expression of PBK in CRC tissues and cell lines. While overexpression of PBK promoted cell growth and decreased the toxic effect of oxaliplatin (OXA), targeting PBK with short hairpin RNA (shRNA) or the novel PBK inhibitor HI-TOPK-032 effectively suppressed tumor growth and potentiated chemosensitivity in vitro and in vivo. Furthermore, there was a significant inverse correlation between the expression of miR-216b and PBK. Further analysis found that miR-216b could down-regulate PBK levels by binding to the 3' untranslated region (3'UTR) of PBK. Notably, while miR-216b decreased cell proliferation and enhanced sensitivity of CRC cells to oxaliplatin, re-expression of PBK dramatically reversed these events. Collectively, our data indicated that miR-216b may function as a tumor suppressor through regulating PBK expression, which provided promising targets and possible therapeutic strategies for CRC treatment. (C) 2017 Elsevier Inc. All rights reserved. CRM1 (chromosome maintenance region 1, Exportin 1) binds to nuclear export signals and is required for nucleocytoplasmic transport of a large variety of proteins and RNP complexes. Leptomycin B (LMB), the first specific inhibitor of CRM1 identified, binds covalently to cysteine 528 in the nuclear export signal binding region of CRM1, leading to the inhibition of protein nuclear export.
Although the biochemical mechanisms of action of CRM1 inhibitors such as LMB are well studied, the subcellular effects of inhibition on CRM1 are unknown. We have found that LMB causes CRM1 to redistribute from the nucleus to the cytoplasm in A549 cells. A significant decrease in nuclear CRM1 coupled with an increase in cytoplasmic CRM1 was sustained for up to 4 h, while there was no change in total CRM1 protein in fractionated cells. Cells expressing an LMB-insensitive HA-tagged CRM1-C528S protein were unaffected by LMB treatment, whereas HA-tagged wildtype CRM1 redistributed from the nucleus to the cytoplasm with LMB treatment, similar to endogenous CRM1. GFP-tagged CRM1 protein microinjected into the cytoplasm of A549 cells distributed throughout the cell in untreated cells but remained primarily cytoplasmic in LMB-treated cells. Upon nuclear microinjection, GFP-CRM1 translocated to and accumulated in the cytoplasm of LMB-treated cells. Thus, LMB binds to CRM1 and causes its redistribution to the cytoplasm by inhibiting its nuclear import. Decreasing the nuclear availability of CRM1 likely contributes to the accumulation of CRM1 cargo proteins in the nucleus, suggesting a new mechanism of action for LMB. (C) 2017 Elsevier Inc. All rights reserved. Lipases play an important role in physiological metabolism and diseases, and also have multiple industrial applications. Rational modification of lipase specificity may increase the commercial utility of this group of enzymes, but is hindered by insufficient mechanistic understanding. Here, we report the 2.0 angstrom resolution crystal structure of a mono- and di-acylglycerol lipase from Malassezia globosa (MgMDL2). Interestingly, residues Phe278 and Glu282 were found to be involved in substrate recognition, because mutation of either residue converted MgMDL2 into a triacylglycerol (TAG) lipase. The Phe278Ala and Glu282Ala mutants also acquired the ability to synthesize TAGs by esterification of glycerol and fatty acids.
In silico analysis suggested that steric hindrance from these residues is a key factor in the altered substrate specificity. Our work may shed light on understanding the unique substrate selectivity mechanism of mono- and di-acylglycerol lipases, and provide new insight for engineering biocatalysts with desired catalytic behaviors for biotechnological application. (C) 2017 Elsevier Inc. All rights reserved. Drug resistance is a major challenge in targeted therapy of EGFR-mutated non-small cell lung cancers (NSCLCs). Third-generation irreversible inhibitors such as AZD9291, CO-1686, and WZ4002 can overcome the EGFR T790M drug-resistance mutant through covalent binding via Cys 797, but ultimately lose their efficacy upon emergence of the new mutation C797S. Developing new reversible inhibitors that do not rely on covalent binding via Cys 797 is therefore urgently needed. Go6976 is a staurosporine-like reversible inhibitor targeting T790M while sparing the wild-type EGFR. In the present work, we report the complex crystal structures of EGFR T790M/C797S + Go6976 and T790M + Go6976, along with enzyme kinetic data for EGFR wild-type, T790M, and T790M/C797S. These data showed that the C797S mutation does not significantly alter the structure and function of the EGFR kinase, but increases the local hydrophilicity around residue 797. The complex crystal structures also elucidated the detailed binding mode of Go6976 to EGFR and explained why this compound prefers binding to the T790M mutant. These structural pharmacological data would facilitate future drug development studies. (C) 2017 Elsevier Inc. All rights reserved. Acquisition of sperm capacitation post-ejaculation into the female reproductive tract is an essential step in the fertilization process. Accordingly, during in vitro fertilization, successful fertilization requires the induction of capacitation in epididymis-retrieved sperm.
To date, many candidate substances have been considered as capacitation inducers; however, there are no reports comparing the efficiency of sperm capacitation induction among these diverse inducers. Therefore, we attempted to determine the best capacitation inducer for inbred mouse sperm by comparing the capacitation performance of a variety of inducers and the percentage of in vitro fertilization-generated zygotes. Calcium chloride, progesterone, bovine serum albumin (BSA), heparin, and lysophosphatidylcholine (Lyso-PC) were used as candidate capacitation inducers. Optimized concentrations of each inducer (2.7 mM calcium, 15 mu M progesterone, 0.3% (w/v) BSA, 50 mM heparin, and 100 mu M Lyso-PC) were determined based on the ratio of sperm showing an acrosome reaction using Coomassie G-250 blue staining. Calcium at 2.7 mM showed the strongest capacitation induction compared to the other inducers. In vitro fertilization was performed using sperm incubated in each inducer and the ratio of fertilized oocytes was determined. Calcium at 2.7 mM and 0.3% (w/v) BSA showed the highest fertilization rates compared to 15 mu M progesterone, 50 mM heparin, and 100 mu M Lyso-PC. From these results, we found that 2.7 mM calcium and 0.3% (w/v) BSA were the most effective sperm capacitation inducers of mouse sperm for in vitro fertilization. (C) 2017 Elsevier Inc. All rights reserved. Recent studies have demonstrated that remote ischemic conditioning (RIC) confers cardioprotection against ischemia/reperfusion injury and myocardial infarction (MI); however, the effects of non-invasive remote ischemic conditioning (nRIC) on prognosis and rehabilitation after MI (post-MI) remain unknown. We successfully established MI models in healthy adult male Sprague-Dawley rats. The nRIC group repeatedly underwent 5 min of ischemia and 5 min of reperfusion in the left hind limb for three cycles every other day until weeks 4, 6, and 8 after MI.
nRIC improved cardiac hemodynamic function and mitochondrial respiratory function by increasing myocardial levels of mitochondrial respiratory chain complexes I, II, III, and IV and adenosine triphosphate (ATP) and decreasing the activity of nitric oxide synthase (NOS). nRIC inhibited cardiomyocyte apoptosis and reduced myocardial injury by raising the expression of Bcl-2 and reducing the content of creatine kinase-MB, cardiac troponin I, and Bax. The results indicated that long-term nRIC could accelerate recovery and improve prognosis and rehabilitation in post-MI rats. (C) 2017 Elsevier Inc. All rights reserved. Human AlkB homolog 3 (ALKBH3) is overexpressed in non-small cell lung cancers (NSCLC) and its high expression is significantly correlated with poor prognosis. While ALKBH3 knockdown induces apoptosis in NSCLC cells, the underlying anti-apoptotic mechanisms of ALKBH3 in NSCLC cells remain unclear. Here we show that ALKBH3 knockdown induces cell cycle arrest or apoptosis depending on the TP53 gene status in NSCLC cells. In comparison to parental cells, TP53-knockout A549 cells showed a DNA damage response signal induced by ALKBH3 knockdown. TP53 knockout shifted the phenotypes of A549 cells induced by ALKBH3 knockdown from cell cycle arrest to apoptosis induction, suggesting that the TP53 gene status is a critical determinant of the phenotypes induced by ALKBH3 knockdown in NSCLC cells. (C) 2017 Elsevier Inc. All rights reserved. Ubiquitination of proteins is prevalent and important in both normal and pathological cellular processes. Deubiquitinating enzymes (DUBs) can remove the ubiquitin tags on substrate proteins and dynamically regulate the ubiquitination process. The PPPDE family proteins were predicted to be a novel class of deubiquitinating peptidases, but this has not yet been experimentally proven.
Here we validated the deubiquitinating activity of PPPDE1 and revealed its isopeptidase activity against ubiquitin conjugated through Lys 48 and Lys 63. We also identified ribosomal protein S7, RPS7, as a substrate protein of PPPDE1. Moreover, PPPDE1 could mediate the ubiquitin chain editing of RPS7, deubiquitinating Lys 48-linked ubiquitination and finally stabilizing RPS7 proteins. Taken together, we report that PPPDE1 is a novel deubiquitinase that belongs to a cysteine isopeptidase family. (C) 2017 Elsevier Inc. All rights reserved. The Hippo signaling pathway is an evolutionarily conserved developmental network that governs the downstream transcriptional co-activators YAP and TAZ, which bind to and activate TEADs, whose transcriptional output is responsible for cell proliferation, apoptosis, and stem cell self-renewal. Emerging evidence has shown the tumor suppressor properties of Hippo signaling. However, limited knowledge is available concerning the downstream transcription factors of the Hippo pathway in osteosarcoma (OS). In this study, we demonstrated that TEAD1 was the major transcription factor of the Hippo signaling pathway in OS. Genetic silencing of TEAD1 suppressed multiple malignant phenotypes of OS cells including cell proliferation, apoptosis resistance, and invasive potential. Mechanistically, we showed that TEAD1 largely exerted its transcriptional control through its functional targets, PTGS2 and CYR61. Collectively, this work identifies the YAP1/TEAD1 complex as the representative dysregulated profile of Hippo signaling in OS and provides proof-of-principle that targeting TEAD1 may be a therapeutic strategy for osteosarcoma. (C) 2017 Published by Elsevier Inc. CXCL12 overexpression improves neurobehavioral recovery after ischemic stroke through multiple mechanisms, including promoting endothelial progenitor cell function in animal models. It has been proposed that the monomer and dimer forms possess differential chemotactic and regulatory functions.
The aim of the present study was to explore whether monomeric or dimeric CXCL12 plays a different role in endothelial progenitor cell proliferation, migration, and tube formation in vitro. In this study, we transferred monomeric, dimeric, and wild type CXCL12 genes into endothelial progenitor cells via lentiviral vectors. We investigated endothelial progenitor cell function following the interaction of CXCL12/CXCR4 or CXCL12/CXCR7 and downstream signaling pathways. Our results showed that the monomeric CXCL12-transfected endothelial progenitor cells had enhanced ability in cell proliferation, migration, and tube formation compared to that in dimeric or wild type controls (p < 0.05). Both CXCR4 and CXCR7 were significantly overexpressed in the monomeric CXCL12-transfected endothelial progenitor cells compared to that in the dimeric or wild type controls (p < 0.05). The function of migration, but not proliferation or tube formation, was significantly reduced in the monomeric CXCL12-transfected endothelial progenitor cells when the cells were pre-treated with either the CXCR4 inhibitor AMD3100 or siCXCR7 (p < 0.05), suggesting that this cell function was partially regulated by the CXCL12/CXCR4 and CXCL12/CXCR7 signaling pathways. Our study demonstrated that monomeric CXCL12 was the fundamental form, which played important roles in endothelial progenitor cell proliferation, migration, and tube formation. (C) 2017 Elsevier Inc. All rights reserved. Immuno-PCR (IPCR) combines versatile ELISA antigen detection with ultrasensitive PCR signal amplification, thereby enabling the highly sensitive detection of a broad range of targets with a typically very large dynamic detection range. The quantification of the antigen is usually achieved by real-time PCR, which provides a correlation between the target concentration and amplified DNA marker.
Here we report on the implementation of digital droplet PCR as a means for direct quantification of DNA copies to enable the highly sensitive detection of protein biomarkers. To this end, two alternative approaches, based on either magnetic microbead-based IPCR or a microplate-release IPCR, were tested. The latter format worked well and revealed extraordinarily high robustness and sensitivity. While rtIPCR already fulfills typical immunoassay acceptance criteria, ddIPCR enables improved accuracy and precision of the assay because signal response and analyte concentrations are directly correlated. The utility of the novel ddIPCR technology is demonstrated using two cytokines, interleukin 2 and interleukin 6 (IL2, IL6, respectively), with an overall average CV% of 5.0 (IL2) and 7.4 (IL6). (C) 2017 Elsevier Inc. All rights reserved. Huntington's disease (HD) is a fatal genetic disease caused by abnormal aggregation of mutant huntingtin protein (mHtt). Reduction of mHtt aggregation decreases cell death in the brain and is a promising therapeutic strategy for HD. MicroRNAs are short non-coding RNAs which modulate various genes and are dysregulated in many diseases, including HD. MicroRNA miR-27a was reported to be reduced in the brain of the R6/2 HD mouse model and to modulate multidrug resistance protein-1 (MDR-1). Using subventricular zone-derived neuronal stem cells (NSCs), we used an in vitro HD model to test the effect of miR-27a on MDR-1 and mHtt aggregation. R6/2-derived NSCs can be differentiated under conditions of growth factor deprivation, and the progression of differentiation leads to a decrease in MDR-1 level and efflux function of cells. Immunocytochemistry results also confirmed that mHtt aggregation increased with differentiation. We transfected miR-27a into the R6/2-derived differentiated NSCs and examined the HD phenotype of mHtt aggregation. As a result, miR-27a transfection resulted in reduction of mHtt aggregation in HD cells.
In addition, the protein level of MDR-1, which can transport mHtt, was increased by miR-27a transfection. Conversely, knock-down of MDR-1 through MDR-1 siRNA increased mHtt aggregation in vitro. Our results indicate that miR-27a could reduce the mHtt level of HD cells by augmenting MDR-1 function. (C) 2017 Published by Elsevier Inc. Paclitaxel (PTX) is a cytotoxic chemotherapy drug with encouraging activity in human malignancies. However, free PTX has a very low oral bioavailability due to its low aqueous solubility and the gastrointestinal drug barrier. In order to overcome this obstacle, we have designed erythrocyte membrane nanoparticles (EMNP) using a sonication method. The permeability of PTX delivered by EMNP was 3.5-fold (P-app = 0.425 nm/s) and 16.2-fold (P-app = 394.1 nm/s) higher than free PTX in MDCK-MDR1 cell monolayers and intestinal mucosal tissue, respectively. The in vivo pharmacokinetics indicated that the AUC(0-t) (mu g/mL.h) and C-max (mu g/mL) of EMNP were 14.2-fold and 6.0-fold higher than those of free PTX, respectively. In summary, EMNP appears to be a promising nanoformulation to enhance the oral bioavailability of insoluble and poorly permeable drugs. (C) 2017 Elsevier Inc. All rights reserved. N-methyl-D-aspartate (NMDA) receptor activation increases regional cerebral blood flow (rCBF) and induces neuronal injury, but the relationship between these processes is poorly understood. In this study, by measuring rCBF in vivo, we identified a clear correlation between cerebral hyperemia and brain injury. NMDA receptor activation induced brain injury as a result of the rCBF increase, which was attenuated by an inhibitor of mitogen-activated protein kinase or calcineurin. Moreover, NMDA induced phosphorylation of extracellular signal-regulated kinase (ERK) and nuclear translocation of nuclear factor of activated T cells (NFAT) in neurons.
Therefore, a MEK/ERK- and calcineurin/NFAT-mediated mechanism of neurovascular coupling underlies the pathophysiology of neurovascular disorders. (C) 2017 Elsevier Inc. All rights reserved. The living gills of the fungus Mycena chlorophos spontaneously emit green light. It was previously reported that trans-4-hydroxycinnamic acid and trans-3,4-dihydroxycinnamic acid are essential for the bright light production in the living gills. However, the chemical mechanisms underlying their bioluminescence are unknown. In the present study, trans-4-aminocinnamic acid was found to inhibit light production in the living gills. The concentrations of trans-4-aminocinnamic acid that inhibited the bioluminescence intensity by 50% of initial values for immature and mature gills were 0.07 mu M and 4 mu M, respectively. Approximately 20% of the bioluminescence intensity of the immature and mature gills was not inhibited by a further increase in the concentration of trans-4-aminocinnamic acid. Moreover, the bioluminescence that was activated by trans-4-hydroxycinnamic acid or trans-3,4-dihydroxycinnamic acid (0.01 mM) was completely inhibited by trans-4-aminocinnamic acid. Therefore, trans-4-hydroxycinnamic acid and trans-3,4-dihydroxycinnamic acid supported the bioluminescence that was inhibited by trans-4-aminocinnamic acid. trans-4-Aminocinnamic acid strongly bound to the bioluminescence system(s) and withstood rinsing of the gills with 10 mM phosphate buffer (pH = 7), and high concentrations of trans-4-hydroxycinnamic acid (1 mM) and trans-3,4-dihydroxycinnamic acid (0.1 mM) functioned to displace trans-4-aminocinnamic acid from the bioluminescence system(s) and reactivate bioluminescence. Benzenamine, trans-cinnamic acid, trans-2-aminocinnamic acid, and trans-3-aminocinnamic acid did not inhibit bioluminescence.
Therefore, the structure-specific inhibition by trans-4-aminocinnamic acid suggests that the 4-hydroxy group of the trans-4-hydroxycinnamic acid and trans-3,4-dihydroxycinnamic acid molecules plays a functional role in the bioluminescence reaction. (C) 2017 Elsevier Inc. All rights reserved. Malignant neoplasms exhibit an elevated rate of glycolysis and a higher demand for glucose than normal cells. This characteristic can be exploited for in vivo imaging and tumor targeting. In this manuscript, we describe the synthesis of near-infrared (NIR) fluorochrome IR-822-labeled 2-amino-2-deoxy-D-glucose (DG) for optical imaging of tumors in mice. The NIR fluorescent dye IR-820 was conjugated with 3-mercaptopropionic acid and 2-amino-2-deoxy-D-glucose to form IR-822-DG. Cell experiments and acute toxicity studies demonstrated the low toxicity of IR-822-DG to normal cells and tissues. The dynamic behavior and targeting ability of IR-822-DG in normal mice were investigated with a NIR fluorescence imaging system. The in vitro and in vivo tumor-targeting capabilities of IR-822-DG were evaluated in tumor cells and tumor-bearing mice, respectively. The results demonstrated that IR-822-DG actively and efficiently accumulated at the tumor site. The probe also exhibited good photostability and excellent cell membrane permeability. The study indicates the broad applicability of IR-822-DG for tumor diagnosis, especially in glucose-related pathologies. (C) 2017 Elsevier Inc. All rights reserved. Long noncoding RNAs (lncRNAs) are important regulators of various biological processes, but few studies have identified lncRNAs in plants; genome-wide discovery of novel lncRNAs is thus required. We used deep strand-specific sequencing (s5RNA-seq) to obtain approximately 62 million reads from all developmental stages of Arabidopsis thaliana and identified 156 novel lncRNAs that we classified according to their localization. 
These newly identified lncRNAs showed low expression levels and low sequence conservation. Bioinformatic analysis predicted potential target genes or cis-regulated genes of 91 antisense and 32 intergenic lncRNAs. Functional annotation of these potential targets and sequence motif analysis indicated that the lncRNAs participate in various biological processes underlying Arabidopsis growth and development. Seventeen of the lncRNAs were predicted targets of 22 miRNAs, and a network of interactions between ncRNAs and mRNAs was constructed. In addition, nine lncRNAs functioned as miRNA precursors. Finally, qRT-PCR revealed that the novel lncRNAs have stage- and tissue-specific expression patterns in A. thaliana. Our study provides insight into the potential functions and regulatory interactions of novel Arabidopsis lncRNAs and enhances our understanding of plant lncRNAs, which will facilitate functional research. (C) 2017 Elsevier Inc. All rights reserved. RAPTA compounds ([Ru(eta(6)-arene)(PTA)Cl-2], PTA = 1,3,5-triaza-7-phosphaadamantane) have been reported to overcome drug resistance in cisplatin-resistant cells. However, the exact mechanism of these complexes is still largely unexplored. In this study, the interaction of some RAPTA compounds with the N-terminal fragment of the BRCA1 RING domain protein was investigated. The binding of the RAPTA compounds to the BRCA1 protein resulted in a release of Zn2+ ions in a dose- and time-dependent manner, as well as thermal alteration of the ruthenated BRCA1 proteins. Electron transfer dissociation (ETD) fragmentation mass spectrometry revealed the preferential binding sites of the RAPTA complexes on the BRCA1 zinc finger RING domain at a similar short peptide stretch, Cys(24)Lys(25)Phe(26)Cys(27)Met(28)Leu(29) and Lys(35) (residues 44-49 and 55 on full-length BRCA1). 
Changes in the conformation and binding constants of the ruthenium-BRCA1 adducts were established, resulting in inactivation of the RING heterodimer BRCA1/BARD1-mediated E3 ubiquitin ligase function. These findings could provide mechanistic insight into the mode of action of RAPTA complexes on the tested BRCA1 model protein. (C) 2017 Elsevier Inc. All rights reserved. The beta 1-adrenergic receptor (Adrb1) belongs to the superfamily of G-protein-coupled receptors (GPCRs) and plays a critical role in the regulation of heart rate and myocardial contraction force. GPCRs are phosphorylated at multiple sites to regulate distinct signal transduction pathways in different tissues. However, little is known about the location and function of the distinct phosphorylation sites of Adrb1 in vivo. To clarify the mechanisms underlying the functional regulation associated with Adrb1 phosphorylation in vivo, we aimed to identify Adrb1 phosphorylation sites in the mouse heart using phosphoproteomics techniques with nano-flow liquid chromatography/tandem mass spectrometry (LC-MS/MS). We identified the phosphorylated residues of Adrb1 as Ser274 and Ser280 in the third intracellular loop and Ser412, Ser417, Ser450, Ser451, and Ser462 at the C-terminus. We also found that phosphorylation at Ser274, Ser280, and Ser462 was enhanced in response to stimulation with an Adrb1 agonist. This is the first study to identify Adrb1 phosphorylation sites in vivo. These findings will provide novel insights into the regulatory mechanisms mediated by Adrb1 phosphorylation. (C) 2017 Elsevier Inc. All rights reserved. Skeletal muscle consists of contractile myofibers and plays essential roles in the maintenance of body posture, movement, and metabolic regulation. During the development and regeneration of skeletal muscle tissue, myoblasts fuse into multinucleated myotubes that subsequently form myofibers. 
Transplantation of myoblasts may enable novel regenerative therapies for defects or dysfunction of the skeletal muscle. Rodent fibroblasts have been reported to be converted into myoblast-like cells that fuse to form syncytia after forced expression of exogenous myogenic differentiation 1 (MYOD1), a key transcription factor for myoblast differentiation. However, human fibroblasts are less efficiently converted into myoblasts and are rarely fused by MYOD1 alone. Here we found that transduction of the v-myc avian myelocytomatosis viral oncogene lung carcinoma derived homolog (MYCL) gene in combination with the MYOD1 gene induced myoblast-like phenotypes in human fibroblasts more strongly than the MYOD1 gene alone. The rate of conversion was approximately 90%. The directly converted myoblasts (dMBs) underwent fusion in an ERK5 pathway-dependent manner. The dMBs also formed myofiber-like structures in vivo after inoculation into the subcutaneous tissue of mice. These results strongly suggest that the combination of MYCL plus MYOD1 may promote direct conversion of human fibroblasts into functional myoblasts that could potentially be used in regenerative therapy for muscle diseases and congenital muscle defects. (C) 2017 Elsevier Inc. All rights reserved. The pathogenic mechanism of polycystic kidney disease (PKD) is unclear. Abnormal glucose metabolism may be involved in the hyper-proliferation of renal cyst epithelial cells. Mini-pigs are more similar to humans than rodents are and are therefore an ideal large animal model. In this study, for the first time, we systematically investigated the changes in glucose metabolism and cell proliferation signaling pathways in the kidney tissues of chronic progressive PKD mini-pig models created by knocking out the PKD1 gene. The results showed that in the kidneys of PKD mini-pigs, glycolysis was increased and the expression of the key oxidative phosphorylation enzymes Complexes I and IV was significantly decreased. 
The activities of mitochondrial respiratory chain Complexes I and IV were significantly decreased; the phosphorylation level of the key metabolism-modulating molecule AMP-activated protein kinase (AMPK) was significantly decreased; and the mammalian target of rapamycin (mTOR) and extracellular signal-regulated kinase (ERK) signaling pathways were markedly activated. This study showed that in the kidneys of PKD mini-pigs, the level of glycolysis was significantly increased, oxidative phosphorylation was significantly decreased, and cell proliferation signaling pathways were significantly activated, suggesting that metabolic changes may contribute to the occurrence and development of PKD through the activation of proliferation signaling pathways. (C) 2017 Elsevier Inc. All rights reserved. As a novel class of endogenous non-coding RNAs, circular RNAs (circRNAs) have become a new research hotspot in recent years. The wide distribution of circRNAs in different plant species has been demonstrated. Furthermore, circRNAs show significant tissue-specific expression patterns in plant development and are responsive to a variety of biotic and abiotic stresses, indicating that circRNAs might have important biological functions in plant development. Here, we summarize the current knowledge of plant circRNAs and discuss views and perspectives on their possible regulatory roles, including functioning as miRNA sponges, regulating the expression of their parental genes or linear mRNAs, translating into peptides or proteins, and responding to different stresses. These advances have sculpted a framework of plant circRNAs and provide new insights for functional RNA regulation research in the future. (C) 2017 Elsevier Inc. All rights reserved. Aside from its role in clot dissolution, the fibrinolytic factor plasmin is implicated in tumorigenesis. 
Although abnormalities of coagulation and fibrinolysis have been reported in multiple myeloma (MM) patients, the biological roles of fibrinolytic factors in MM have not been elucidated using in vivo models. In this study, we established a murine model of fulminant MM with bone marrow and extramedullary engraftment after intravenous injection of B53 cells. We found that the fibrinolytic factor expression pattern in murine B53 MM cells is similar to the expression pattern reported in primary human MM cells. Pharmacological targeting of plasmin with the plasmin inhibitor YO-2 did not change disease progression in MM cell-bearing mice, although systemic plasmin levels were suppressed. Although plasmin has been suggested to be a driver of disease progression in MM, based largely on in vitro studies of clinical patient samples, here we demonstrate that suppression of plasmin generation or inhibition of plasmin does not alter MM progression in vivo. (C) 2017 Elsevier Inc. All rights reserved. Prolyl-tRNA synthetase (PRS) is a member of the aminoacyl-tRNA synthetase family of enzymes and catalyzes the synthesis of prolyl-tRNA(Pro) using ATP, L-proline, and tRNA(Pro) as substrates. An ATP-dependent PRS inhibitor, halofuginone, was shown to suppress autoimmune responses, suggesting that inhibition of PRS is a potential therapeutic approach for inflammatory diseases. Although a few PRS inhibitors have been derivatized from natural sources or substrate mimetics, small-molecule human PRS inhibitors have not been reported. In this study, we discovered a novel series of pyrazinamide PRS inhibitors from a compound library using the pre-transfer editing activity of the human PRS enzyme. Steady-state biochemical analysis revealed a distinctive mode of inhibition: uncompetitive with respect to proline and competitive with respect to ATP. 
The binding activity of a representative compound was time-dependently potentiated by the presence of L-proline, with a K-d of 0.76 nM. Thermal shift assays demonstrated the stabilization of PRS in complex with L-proline and pyrazinamide PRS inhibitors. The binding mode of the PRS inhibitor at the ATP site of the PRS enzyme was elucidated using the ternary complex crystal structure with L-proline. The results demonstrated that the inhibitory and binding modes of the pyrazinamide PRS inhibitors differ from those of halofuginone. Furthermore, the PRS inhibitor inhibited intracellular protein synthesis via a different mode than halofuginone. In conclusion, we have identified a novel drug-like PRS inhibitor with a distinctive binding mode that was effective in a cellular context. This series of PRS inhibitors is therefore considered suitable for further development, differentiated from halofuginone. (C) 2017 Elsevier Inc. All rights reserved. Alloxan has been used as a diabetogenic agent to induce diabetes. It selectively induces pancreatic beta-cell death; the basis of this specific toxicity, however, is not fully understood. In this study, we examined the effect of alloxan on proteasome function. We found that alloxan caused the accumulation of ubiquitinated proteins in NRK cells through inhibition of the proteolytic activities of the proteasome. Biochemical experiments with purified 26S and 20S proteasomes revealed that alloxan directly acts on the chymotrypsin- and trypsin-like peptidase activities. These results demonstrate that alloxan is a proteasome inhibitor, which suggests that its specific toxicity toward beta-cells is at least in part due to proteasome inhibition. (C) 2017 Elsevier Inc. All rights reserved. The tRNA methyltransferases J (TrmJ) and D (TrmD) catalyze the transfer of a methyl group to the tRNA anticodon loop. Both contain an N-terminal domain (NTD) and a C-terminal domain (CTD). 
Whereas two monomeric CTDs interact symmetrically with a dimeric NTD in TrmD, in TrmJ a CTD dimer has been shown to interact asymmetrically with the NTD dimer in the presence of a product. The apo structure of full-length TrmJ from Zymomonas mobilis ZM4 elucidated here shows a dimeric CTD that interacts asymmetrically with the NTD dimer, thereby distributing a non-symmetrical charge potential on both sides of the protein surface. Comparison with the product-bound structures reveals a local reorientation of the two arginine-containing loops at the active site, which interact with the product. Further, the CTD dimers adopt diverse orientations relative to the NTD dimers, suggesting their flexibility. These data indicate that an asymmetric interaction between the NTD dimer and the CTD dimer is a common structural feature among TrmJ proteins, regardless of the presence of a substrate or a product. (C) 2017 Elsevier Inc. All rights reserved. Bacterial lipid modification of proteins is an essential post-translational event carried out by phosphatidylglycerol:prolipoprotein diacylglyceryl transferase (Lgt), which catalyses diacylglyceryl transfer from phosphatidylglycerol to the cysteine present in the characteristic 'lipobox' ([LVI] ((-3)) [ASTVI] ((-2)) [GAS] ((-1)) C ((+1))) of prolipoprotein signal peptides. This is followed by cleavage of the signal peptide by lipoprotein-specific signal peptidase (LspA). It has long been known that threonine at the -1 position allows diacylglyceryl modification by Lgt, but not signal peptide cleavage by LspA. We have addressed this unexplained stringency by computational analysis of the recently published 3D structure of LspA with its competitive inhibitor and transition-state analogue, globomycin, using the PyMOL viewing tool and the VADAR (Volume, Area, Dihedral Angle Reporter) web server. 
The propensity to form a hydrogen bond (2.9 Å) between the hydroxyl group of threonine (not possible with serine) and the NH of the lipid-modified cysteine, possible only in the transition state, will prevent protonation of the NH of the leaving peptide and therefore its cleavage. This knowledge could be useful for designing inhibitors of this essential pathway in bacteria or for engineering LspA. (C) 2017 Elsevier Inc. All rights reserved. The gut microbiota is critical for maintaining body immune homeostasis and thus affects tumor growth and therapeutic efficiency. Here, we investigated the link between the microbiota and tumorigenesis in a mouse model of subcutaneous melanoma cell transplantation, and explored the underlying mechanism. We found that disruption of the gut microbiota by pretreating mice with antibiotics promoted tumor growth and remodeled the immune compartment within the primary tumor. Indeed, gut microbial dysbiosis reduced the tumor infiltration of mature antigen-presenting cells, together with lower levels of co-stimulators, such as CD80, CD86 and MHCII, as well as defective Th1 cytokines, including IFN gamma, TNF alpha, IL12p40, and IL12p35. Meanwhile, splenic APCs displayed a blunted ability to trigger T cell proliferation and IFN gamma secretion. However, oral administration of LPS restored the immune surveillance effects and thus inhibited tumor growth in the antibiotic-induced gut microbiota dysbiosis group. Taken together, these data strongly support that antibiotic-induced gut microbiota dysbiosis promotes tumor initiation, while LPS supplementation restores effective immune surveillance and represses tumor initiation. (C) 2017 Elsevier Inc. All rights reserved. Cancer immunotherapy has seen many great achievements in recent years. One of the most promising cancer immunotherapies is PD-1/PD-L1 pathway blockade. MicroRNAs (miRNAs) are small noncoding RNAs that can regulate gene expression by binding to the 3'UTR. 
Many miRNAs can inhibit cancer growth by regulating PD-L1 expression in cancer cells. Herein, using bioinformatic methods, we first found that PD-L1 could be a target of miR-142-5p; we then conducted luciferase activity assays, RT-PCR, and western blot experiments to demonstrate that miR-142-5p can regulate PD-L1 expression by binding to its 3'UTR. In vivo experiments confirmed that miR-142-5p overexpression can inhibit pancreatic cancer growth. Flow cytometry and RT-PCR experiments demonstrated that miR-142-5p overexpression in tumor cells inhibits PD-L1 expression on tumor cells, resulting in increases in CD4(+) and CD8(+) T lymphocytes, a decrease in PD-1(+) T lymphocytes, and increases in IFN-gamma and TNF-alpha. Thus, miR-142-5p overexpression can enhance anti-tumor immunity by blocking the PD-L1/PD-1 pathway. Our results identify a novel mechanism by which PD-L1 is regulated by miR-142-5p, and overexpression of miR-142-5p could enhance anti-tumor immunity. (C) 2017 Elsevier Inc. All rights reserved. The resuscitation of the adult trauma patient has been researched and written about for the past century. Throughout those discussions, 2 major controversies persist when discussing resuscitation methods: (1) the ideal choice of fluid type to use during the initial resuscitation period, and (2) the ideal fluid volume to infuse during the initial resuscitation period. This article presents a brief historical perspective of fluids used during a trauma resuscitation, along with the latest research findings as they relate to the 2 stated issues. Chronic heart failure is a condition associated with increased health care expenditures and high rates of morbidity and mortality. A mainstay of heart failure management has been the prescription of fluid restriction. The purpose of this article is to review the available evidence for fluid restriction in chronic heart failure patients. 
Ultrasonography is a first-line diagnostic tool when evaluating volume status in the critical care patient population. Ultrasonography leads to a prompt diagnosis and a more appropriate management plan, while decreasing health care costs, time to diagnosis, hospital length of stay, time to definitive operation, and mortality. It is recommended that critical care providers treating critically ill patients be skilled and competent in critical care ultrasonography. As the critical care population and the shortage of critical care physicians increase, advanced practice providers are becoming more prevalent in critical care areas and should be competent in this skill as well. Many urologic reconstructive techniques involve the use of autologous bowel for urinary diversion and bladder augmentation. The resection of bowel and its reimplantation into the urinary system often comes with a variety of metabolic and electrolyte derangements, depending on the type of bowel used and the quantity of urine it is exposed to in its final anatomic position. Clinicians should be aware of these potential complications due to the serious consequences that may result from uncorrected electrolyte disturbances. This article reviews the common electrolyte complications related to both bowel resection and the interposition of bowel within the urinary tract. The microcirculation is responsible for blood flow regulation and red blood cell distribution throughout individual organs. Patients with circulatory shock have acute failure of the cardiovascular system in which there is insufficient delivery of oxygen to meet metabolic tissue requirements. All subtypes of shock pathophysiology have a hypovolemic component. Fluid resuscitation guided by systemic hemodynamic end points is a common intervention. Evidence shows that microcirculatory shock persists even after optimization of macrocirculatory hemodynamics. 
The ability of nurses to assess the microcirculation at the bedside in real time during fluid resuscitation could lead to improved algorithms designed to resuscitate the microcirculation. Overall, there is a lack of randomized controlled trials examining the correlation between fluid volume delivery and outcomes in postoperative lung transplant patients. However, using thoracic surgery patients as a guide, the evidence suggests that hypervolemia correlates with pulmonary edema and should be avoided in lung transplant patients. It is recognized that patients with hemodynamic instability may require volume, but this need can likely be mitigated with the use of inotropic medication to maintain adequate perfusion and avoid the development of edema. Tumor lysis syndrome (TLS) is a life-threatening disorder and an oncologic emergency. Risk factors for TLS are well known, but the current literature includes case descriptions of unexpected acute TLS. Solid tumors and untreated hematologic tumors can lyse under various circumstances in children and adults. International guidelines and recommendations, including the early involvement of the critical care team, have been put forward to help clinicians properly manage the syndrome. Advanced practice nurses may be in the position of triaging and initiating treatment of patients with TLS, and need a thorough understanding of the syndrome and its treatment. Dysnatremia is a common finding in the intensive care unit (ICU) and may be a predictor of mortality and poor clinical outcomes. Depending on the time of onset (ie, on admission vs later in the ICU stay), the incidence of dysnatremias in critically ill patients ranges from 6.9% to 15%, respectively. The symptoms of sodium derangement and their effect on brain physiology make early recognition and correction paramount in the neurologic ICU. 
Hyponatremia in brain-injured patients can lead to life-threatening conditions such as seizures, and may worsen cerebral edema and contribute to alterations in intracranial pressure. Fluid resuscitation is a primary concern of nurse clinicians. Excessive resuscitation with crystalloids places patients at particular risk for many subsequent complications that carry associated increases in mortality and morbidity. Intra-abdominal hypertension and abdominal compartment syndrome are deadly complications of third spacing and capillary leak that occur secondary to excessive fluid resuscitation. Careful consideration is necessary when achieving fluid balance in acutely ill patients, including reducing the use of crystalloids, implementing damage control resuscitation, and establishing measurable resuscitation endpoints. Nurse clinicians are capable of reducing mortality in intra-abdominal hypertension and abdominal compartment syndrome patients by incorporating the latest evidence in fluid resuscitation techniques. Background: Poor antiretroviral therapy (ART) adherence leads to drug resistance and treatment failures. The options for second- and third-line ART regimens, particularly for pediatric patients, are very limited in low- and middle-income countries. HIV-infected children are mostly passive drug-takers; thus, caretakers play a very important role in assuring ART adherence. Pediatric ART adherence is still a challenging problem in Vietnam, since non-adherence is the major risk factor for treatment failure. Our study explores and measures caretakers' barriers in order to improve pediatric ART adherence in the future. Methods: Caretakers' barriers were explored through a qualitative study with Focus Group Discussions (FGDs) on two topics: 1. Current society and family support, and difficulties in taking care of children under ART; 2. Stigma experience. 
Based on the findings from the qualitative study, a quantitative study measuring caretakers' barriers was conducted through a designed questionnaire. Study methods strictly followed the consolidated criteria, with a 32-item checklist, for interviews and focus groups. Results: In total, eight FGDs with 53 participants were conducted. Common caretakers' barriers to children's ART adherence were financial burden, lack of ART KP (Knowledge-Practice), stigma, depression, shifting caretakers, drug taste and side effects, lack of family support, fixed health check-up schedules and HIV non-disclosure. A total of 209 caretakers participated in the questionnaire study. The most commonly reported caretakers' barriers were financial burden (144; 69%), KP burden (143; 68%), depression (85; 41%) and stigma (30; 14.8%). Some caretaker characteristics were significantly associated with reported barriers (p < 0.05). Rural caretakers reported significantly more financial burden (OR = 2.26) and stigma (OR = 3.53) than urban caretakers. Caretakers with less than high school education reported significantly more financial burden (OR = 2.08) and stigma (OR = 4.15) compared to caretakers with high school education or above. Conclusion: Financial burden, KP burden, depression and stigma were commonly reported caretakers' barriers to pediatric ART adherence. Family residence, caretaker's education level and job were considered the key factors determining caretakers' barriers related to financial burden and stigma. These findings may be important for policy makers and researchers in developing effective interventions regarding caretakers' burdens and associated factors. Furthermore, a tool for nurses to monitor caretakers' barriers to pediatric ART adherence was developed, first with FGDs and then with an interview questionnaire. This tool could easily be applied and modified in any pediatric ART clinic setting in accordance with local economic, social and cultural circumstances. (C) 2017 Elsevier Inc. 
All rights reserved. Objectives: Literature data show that excess of, and primary deficiency in, particular nutrients, vitamins and minerals may lead to pre-eclampsia, gestational diabetes, hypertension and neural tube defects in the foetus. The aim of the study was to determine differences in the average daily consumption of selected nutrients during pregnancy in women who did not supplement their diet, and to evaluate the influence of dietary habits on the occurrence of pre-term delivery and hypertension in pregnant women. Sample group and methods: Information on the course of pregnancy and the newborn's health status at birth was derived from the Charter of Pregnancy and documents recorded by the hospital. Women's eating habits and dietary composition were analyzed on the basis of a dietary questionnaire. The sample was divided into four groups: women who delivered neonates appropriate for gestational age (AGA), women with gestosis who delivered AGA neonates by caesarean section, women who delivered pre-term neonates (PTB), and women with gestosis who delivered PTB by caesarean section. Results: Among women with vaginal delivery at term, the average intake of iodine was always higher than in the other groups. Analysis of average daily intake of folates revealed a higher intake in the group of women who gave birth to full-term neonates with proper neonatal weight in comparison with the groups of women with pre-term delivery (P <= 0.05). Conclusions: Statistically significant differences in average daily intake of folates, iodine, retinol, magnesium and iron were observed between the group of women with vaginal delivery at term and the groups of women with diagnosed hypertension who delivered preterm. A correlation was demonstrated between the average daily intake of iodine and vitamin D and the occurrence of arterial hypertension. 
Supplementation of the diet of women in the preconception and prenatal period with minerals and vitamins should be considered. (C) 2017 Elsevier Inc. All rights reserved. Background: With the worldwide increase in the incidence and prevalence of diabetes, there has been an increase in the scope and scale of nursing care and education required for patients with diabetes. The high prevalence of diabetes in Saudi Arabia makes this a particular priority for this country. Aim: The aim of this study was to examine nurses' perceived and actual knowledge of diabetes and its care and management in Saudi Arabia. Methods: A convenience sample of 423 nurses working in Prince Sultan Medical Military City in Saudi Arabia was surveyed in this descriptive, cross-sectional study. Perceived knowledge was assessed using the Diabetes Self-Report Tool, while the Diabetes Basic Knowledge Tool was used to assess the actual knowledge of participants. Results: The nurses generally had a positive view of their diabetes knowledge, with a mean (SD) score of 46.9 (6.1) (of a maximum of 60) for the Diabetes Self-Report Tool. Their actual knowledge scores ranged from 2 to 35, with a mean (SD) score of 25.4 (6.2) (of a maximum of 49). Nurses' perceived and actual knowledge of diabetes varied according to their demographic and practice details. Perceived competency, current provision of diabetes care, education level and attendance at any diabetes education programs predicted perceived knowledge; these factors, together with gender, predicted actual diabetes knowledge scores. Conclusion: In this multi-ethnic workforce, findings indicated a significant gap between participants' perceived and actual knowledge. Factors predictive of high levels of knowledge provide pointers to ways to improve diabetes knowledge amongst nurses. Crown Copyright (C) 2017 Published by Elsevier Inc. All rights reserved. 
Aim: To describe how frequently RNs provide 17 spiritual care therapeutics (or interventions) during a 72-80 h timeframe. Background: Plagued by conceptual muddiness as well as weak methods, research quantifying the frequency of spiritual care is not only methodologically limited but also sparse. Methods: Secondary analysis of data from four studies that used the Nurse Spiritual Care Therapeutics Scale (NSCTS). Data from US RNs who responded to online surveys about spiritual care were analyzed. The four studies included intensive care unit nurses in Ohio (n = 93), hospice and palliative care nurses across the US (n = 104), nurses employed in a Christian health care system (n = 554), and nurses responding to an invitation to participate found on a journal website (n = 279). Results: The NSCTS mean of 38 (with a range from 17 to 79 [of 85 possible]) suggested that respondents include spiritual care therapeutics infrequently in their nursing care. Particularly concerning is the finding that 17-33% of respondents (depending on the NSCTS item) never completed a spiritual screening during the timeframe. "Remaining present just to show caring" was the most frequent therapeutic (3.4 on a 5-point scale); those who practiced presence at least 12 times during the timeframe provided other spiritual care therapeutics more frequently than those who offered presence less often. Conclusion: Findings affirm previous research suggesting that nurses provide spiritual care infrequently. These findings likely provide the strongest evidence yet of the need to improve spiritual care education and support for nurses. (C) 2017 Elsevier Inc. All rights reserved. Aims: The main goal of this study was to explore the relationships between empathy, empathy-based pathogenic guilt and professional quality of life (burnout and compassion fatigue). 
We aim to test a model in which we hypothesize that when empathic feelings are related to pathogenic guilt, burnout and compassion fatigue symptoms may be increased. Background: Empathy is at the core of nursing practice and has been associated with positive outcomes not only for the healthcare provider but also for the patient. However, empathy is also at the core of guilt feelings that, when excessive and misdirected, can lead to pathogenic guilt beliefs. We focused on two types of empathy-based guilt characterized by excessive responsibility for others' well-being, and on how these can be related to professional quality of life. Methods and participants: This study is a cross-sectional self-report survey. Data were collected during 2014 and 2015. Two hundred ninety-eight nurses from public hospitals in the northern and central regions of Portugal were surveyed. Professional quality of life (burnout and compassion fatigue), empathy, and empathy-based guilt were measured using validated self-report measures. Results: Correlation analyses showed that empathy-based guilt was positively associated with empathy, and with burnout and compassion fatigue. Results from multiple mediation models further indicated that when empathy is associated with empathy-based guilt, it leads to greater levels of burnout and compassion fatigue. Conclusions: Given the nature of their work, nurses who experience pathogenic guilt feelings may have compromised well-being, and this should be addressed in training programs aiming at preventing or treating burnout and compassion fatigue. (C) 2017 Elsevier Inc. All rights reserved. Context: Between 7% and 30% of people with treated coeliac disease suffer from residual symptoms, and there is a knowledge gap about their own management of these symptoms. Aim: To explore experiences and management concerning residual symptoms, despite a gluten-free diet, in people with coeliac disease. 
Methods: A qualitative explorative design with semi-structured interviews with 22 adults with coeliac disease in Sweden. Data were analysed using qualitative content analysis. Results: The informants had, at diagnosis, thought that their symptoms would disappear if they followed a gluten-free diet, but the disease was continuing to have a substantial impact on their lives, despite several years of treatment. They experienced cognitive, somatic as well as mental symptoms, including impact on personality (e.g. having a "shorter fuse", being more miserable or tired). However, only a few informants had sought medical care for persistent symptoms. Instead they tried to manage these by themselves, e.g. by abstaining from food during periods of more intense symptoms, or by using distraction. The management of persistent symptoms resembled thorough detective work. To prevent problems related to residual symptoms the informants used withdrawal of social contact as well as acceptance of their situation. Conclusion: People with treated coeliac disease may experience residual symptoms of both a physical and psychological nature, causing major negative impacts on their lives in different ways. In the light of this, healthcare staff should change their practices regarding the follow-up of these people, and in addition to medical care should provide guidance on management strategies to facilitate daily life. Furthermore, information to newly diagnosed persons should make them aware of the possibility of experiencing continued symptoms, despite treatment. (C) 2017 Elsevier Inc. All rights reserved. Background: Implementation of evidence-based practice (EBP) remains limited in healthcare settings, and knowledge of predictors of healthcare professionals' EBP activities is lacking. Aim: To describe nurses' readiness for EBP and identify related predictors in Greek healthcare settings, we conducted a survey. 
Results: Nurses scored high on the EBP readiness scale, reflecting significant positive readiness toward EBP, and consistently reported favourable attitudes toward and beliefs about EBP. However, half of them were unsure about their ability to engage in EBP, despite the fact that they valued research-based practice as important. EBP-specific domains, including "EBP-attitude", "EBP-knowledge", "Informational needs" and "Workplace culture", as well as nurses' demographics, were found to be strong predictors of EBP readiness among Greek nurses. Conclusion: As nurses are now more aware of and open to the idea of EBP, diverse strategies and well-designed interventions to facilitate the desired change to practice are needed. (C) 2017 Elsevier Inc. All rights reserved. Objective: To analyze nursing care in intensive care units from the perspective of patient safety, based on health evaluation. Methods: This evaluation study, conducted for the purpose of passing judgment on a given system, was carried out in six intensive care units. Data collection occurred from April to July 2014, in loco, with a validated instrument containing 97 questions related to patient safety. Of these, 73 items analyzed the "process" element of patient-safety nursing care. The 73 items were grouped into three elements of patient safety: "Communication and Identification", "Health and Comfort" and "Drug and Nutritional Therapy". Data were analyzed using the Kappa measure, observations conducted by the evaluators, and the literature on the theme. Results: Across the three elements, 23 items (31.5%) were considered adequate and 50 (68.4%) non-compliant with the required standards for reliable care. 
Of these, 29 (39.7%) were classified as partially adequate and 21 (28.7%) as inadequate, indicating worrying conditions with respect to safe care and a high probability of undesirable events. Conclusion: The classification of items as unsuitable prevailed. Patient safety is impaired by unsafe actions in nursing care processes. Unsafe actions in care processes increase risks to patient safety, such as falls, medication administration errors, communication difficulties, and breaks in continuity of care. Thus, immediate interventions are imperative to implement a safety culture and to avoid negligence in relation to care. (C) 2017 Elsevier Inc. All rights reserved. Purpose: To examine nurses' health-promoting lifestyle behaviors, describe their self-reported engagement in employee wellness program benefit options, and explore relationships between nurse demographic factors, health characteristics and lifestyle behaviors. Background: Nurses adopting unhealthy lifestyle behaviors are at significantly higher risk of developing a number of chronic diseases and are at increased susceptibility to exhaustion, job dissatisfaction and turnover. Strengthening professional nurses' abilities to engage in healthy lifestyle behaviors could serve as a valuable tool in combating negative workplace stress, promoting improved work-life balance and personal well-being, and helping retain qualified health-care providers. Methods: In a 187-bed community hospital in the Washington, D.C. metropolitan area, we conducted an IRB-approved exploratory descriptive study. We examined 127 nurses' demographic characteristics and self-reported employer wellness program use, and measured their healthy lifestyle behaviors using the 52-item Health Promoting Lifestyle Profile-II (HPLP-II) survey instrument. Nurse demographic data and HPLP-II scores were analyzed in SPSS v20.0. 
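The intensive care evaluation above analyzed agreement data with the Kappa measure. As a hedged illustration of that statistic (the ratings below are invented for the example, not the study's data), Cohen's kappa for two evaluators rating the same items can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement under independence of the two raters
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# hypothetical example: two evaluators rating 10 items adequate (1) / inadequate (0)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # -> 0.52
```

Values near 1 indicate near-perfect agreement beyond chance; values near 0 indicate agreement no better than chance.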
Inferential univariate statistical testing examined relationships between nurse demographic factors, health and job characteristics, and HPLP-II score outcomes. Results: Nurses over 40 years old were more likely to report participation in hospital wellness program options. Statistically significant age differences were identified in total HPLP-II score (p = 0.005) and in two subscale scores, spiritual growth (p = 0.002) and interpersonal relations (p < 0.001). Post-hoc testing identified that nurse participants 40-49 years old and >= 50 years old had slightly lower total HPLP-II and subscale scores in comparison to younger colleagues. Conclusions: Nurses >= 40 years old may benefit from additional employer support and guidance to promote and maintain healthy lifestyles, personal well-being, and positive interpersonal relationships. (C) 2017 Elsevier Inc. All rights reserved. Research on aftercare for human trafficking survivors highlights the limited knowledge of the needs of survivors, the evaluation of current aftercare, and the process of recovery navigated by the survivor in aftercare (Oram et al., 2012; Locke, 2010; Hacker & Cohen, 2012). Furthermore, there has been a transition in aftercare whereby the victim or survivor, who was previously seen as a passive victim of the circumstances of their life and in need of therapeutic intervention, is now seen as having an active role in their recovery, thus facilitating recovery (Hacker & Cohen, 2012). The need for a theory grounded in survivors' voices therefore motivated this grounded theory study, underpinned by Freire's (1970) Pedagogy of the Oppressed. The aim of the theory is to inform nursing care of human trafficking survivors in low-resource settings. The findings elicit a theoretical model of the renewed self, and the conditions that facilitate this process in the care of human trafficking survivors. 
The recommendations of this paper may improve the nursing care provided to human trafficking survivors and equip nurses and other health professionals with the knowledge and skills to promote the renewal of human trafficking survivors. (C) 2017 Elsevier Inc. All rights reserved. Patients with multiple myeloma and their family caregivers must master self-management tasks related not only to the disease and treatment, but also to transitioning to living with chronic illness. The aim of this study was to assess the feasibility, acceptability, safety, and fidelity of an intervention that had a psychoeducational approach and included a low-impact, home-based walking activity. A secondary aim was to obtain preliminary data on the effect of the intervention, as compared to an attention-control group, on anxiety, activation for self-management, fatigue, depression and health-related quality of life (HRQOL). A sample of 15 adult patients with multiple myeloma and their family caregivers were randomized into either an intervention or attention-control group. The intervention was delivered to the dyad in one session, and booster calls were made at 1 and 3 weeks. The control group received printed educational resources and telephone contacts. Measurements were taken at baseline and at 6 and 12 weeks. Descriptive statistics were used. The intervention was safe, feasible, and acceptable to patients and caregivers. Fidelity was high for the initial session, but low for the booster calls. Improvement in scores for activation, fatigue, depression, anxiety, physical HRQOL, and emotional distress was seen in at least 40% of patients in the intervention group. Fewer caregivers in the intervention group showed improvement on the outcome variables. Leveraging a behavioral strategy such as walking, along with supportive and educational resources, is promising for promoting well-being within the patient/caregiver dyad. 
Further refinement of the intervention is needed to strengthen its efficacy for the caregiver, and exploratory work is essential to understand the interpersonal supportive processes associated with the walking activity. (C) 2017 Elsevier Inc. All rights reserved. Aim: This paper compares two qualitative approaches used to thematically analyse data obtained from focus groups conducted with critical care nurses from Australia. Background: Focus groups are an effective mechanism to generate understanding and gain insight into the research participants' world. Traditional verbatim transcription of participants' recorded words necessitates significant investment of time and resources. An alternative approach, underreported in the literature, is to directly analyse the audio recordings. To identify the effectiveness of the audio-recording-only approach, the study aimed to independently compare two qualitative methods of data analysis, namely the traditional transcribed method and the audio recording method. Methods: The study to revise the specialist critical care competency standards included focus groups conducted in each state in Australia (n = 12), facilitated by experienced researchers. Two of the research team analysed transcribed focus group data, and two team members were blinded to the transcription process and directly analysed audio recordings from the focus groups. A process of thematic analysis, applied independently by the two teams, was used to identify themes. Results: When the findings were compared, the themes generated using each technique were consistent, and no differing themes or subthemes were identified. The two techniques appeared to be comparable, and the over-arching key themes were consistent across both approaches. Conclusion: The direct analysis method appears to have advantages. It is cost effective, trustworthy and possibly a superior alternative when used with focus group data. 
However, the audio-only method requires experienced researchers who understand the context, and combining the two approaches takes additional time. (C) 2017 Elsevier Inc. All rights reserved. Current research indicates a relationship between EI, stress, coping strategies, well-being and mental health. Emotional intelligence skills and knowledge, and coping strategies, can be increased with training. Objective: The aims of this study were to use a controlled design to test the impact of theoretically based training on the different components of EI and coping styles in a sample of nurses working with older adults. Methods: A group of 92 professionals (RNs and CNAs) who attended a workshop on EI were included in the study. They completed a self-reported measure of EI and coping styles on three occasions: pre- and post-workshop and at one-year follow-up. The EI workshop consisted of four 4-h sessions conducted over a four-week period. Sessions were held at one-week intervals; this interval allowed participants to apply what was taught during each session to their daily life. The instruments used to measure EI and coping were the Trait Meta-Mood Scale and the CAE test. Results: There were significant differences between the pre- and post-workshop measures, both at the end of the workshop and up to one year later, for both the Trait Meta-Mood Scale scores and the CAE test. There was a significant increase in EI and coping styles after the workshop and one year thereafter. Conclusion: The workshop was useful for developing EI in the professionals. The immediate impact on the emotional consciousness of individuals was particularly significant for all participants. The long-term impact was notable for the significant increase in EI and most coping styles. (C) 2017 Elsevier Inc. All rights reserved. Background: After being diagnosed with breast cancer, women must make a number of decisions about their treatment and management. 
When the decision-making process among breast cancer patients is ineffective, it results in harm to their health. Little is known about the decision-making process of breast cancer patients during the entire course of treatment and management. Objectives: We investigated women with breast cancer to explore the decision-making processes related to treatment and management. Methods: Eleven women participated, all of whom were receiving treatment or management in Korea. The average participant age was 43.5 years. For data collection and analysis, a grounded theory methodology was used. Results: Through constant comparative analyses, a core category emerged that we referred to as "finding the right individualized healthcare trajectory." The decision-making process occurred in four phases: turmoil, exploration, balance, and control. The turmoil phase included weighing the credibility of information and lowering the anxiety level. The exploration phase included assessing the expertise/promptness of medical treatment and evaluating the effectiveness of follow-up management. The balance phase included performing analyses from multiple angles and rediscovering value as a human being. The control phase included constructing an individualized management system and following prescribed and other management options. Conclusions: It is important to provide patients with accurate information related to the treatment and management of breast cancer so that they can make effective decisions. Healthcare providers should engage with patients on issues related to their disease, understand the burden placed on patients because of issues related to their sex, and ensure that the patient has a sufficient support system. The results of this study can be used to develop phase-specific, patient-centered, and tailored interventions for breast cancer patients. (C) 2017 Elsevier Inc. All rights reserved. 
Multiple myeloma is a hematologic disease characterized by an excessive number of abnormal plasma cells that infiltrate the bone marrow and overproduction of monoclonal immunoglobulins. It is the third most prevalent hematologic disease and makes up just over 15% of all hematologic malignancies. Patients may present with 1 or more nonspecific symptoms of hypercalcemia, renal insufficiency, anemia, or new-onset bone pain. Early recognition, referral, and monitoring may improve the quality of life and extend the survival of patients. Attention deficit hyperactivity disorder (ADHD) is one of the most common childhood neurodevelopmental diseases, and nearly two thirds of children with ADHD have symptoms that persist into adulthood. Approximately 750,000 children with special health care needs transition from pediatric to adult health care annually in the United States. For youth with ADHD, organized, coordinated, and systematic care transition from pediatric to adult health care providers is essential to prevent negative consequences related to unmanaged ADHD symptoms and to optimize health and promote maximum functioning. The Got Transition model's 6 core elements provide a guide to support successful transition for adolescents with ADHD. Many health care providers are uncomfortable having conversations with patients about their sexual identity or sexual behaviors. Avoiding this discomfort is causing a serious threat to the mental and physical health of Americans, particularly those in the lesbian, gay, bisexual, transgender, questioning, or intersex (LGBTQI) community. The health-related disparities among LGBTQI patients range from bullying and physical assault to refusal of health care and housing. Many individuals choose not to seek health care due to fear of being judged, marginalized, or abused. 
This article focuses on the many disparities faced by the LGBTQI community and describes how simple changes in the practices of health care providers can potentially improve their health outcomes. The purpose of this article is to assist nurse practitioners (NPs) and other primary care providers in differentiating between lactose intolerance, celiac disease, and diarrhea-predominant irritable bowel syndrome in adults. Based on subtle characteristics gathered from the history and physical examination, the NP's examination and approach to testing will help distinguish between the 3 conditions. NPs should use a sequential process of examination and testing to distinguish gastrointestinal disorders that share common symptoms. A best practice algorithm is provided. Patients with foot pain present to their primary care providers for treatment. Plantar fasciitis is easily diagnosed based on history and exam, with little to no need for diagnostic testing. Initial treatment is conservative and is easily initiated in the primary care office, with the focus on alleviation and resolution of the foot pain. Initial treatment modalities include taping, icing, proper footwear, stretching, and rest. When pain persists, other options include night splints or referral to physical therapy, podiatry, or orthopedic or sports medicine for a corticosteroid injection. Patient enablement after consultations has not yet been adequately investigated among patients of nurse practitioners (NPs) in primary health care. The lens of enablement and a qualitative parallel multistrand approach were used to explore patients' experiences and NPs' perspectives of consultations. Metainferences made from this study suggest NPs enable patients by creating opportunities for education and knowledge transference, building on patients' strengths, and promoting self-efficacy. 
Three existential components of the experience of consultations (ie, relationality, temporality, and corporality) also played a role. These findings were used to develop a conceptual framework of how patient enablement is experienced within an NP consultation. Research indicates patient-centered care (PCC) has the potential to enhance patient outcomes and relationships within health care systems. Incorporating spiritual care (SC) into practice is 1 intervention nurse practitioners (NPs) can use to promote a PCC practice model and develop the relationships and connections vital to SC. This descriptive study identifies a lack of education on SC as 1 barrier to incorporating SC in NP practice within a PCC model. Concepts for inclusion in education are suggested based on the findings. The synthesis of specific deuterated derivatives of the long-chained ceramides [EOS] and [EOP] is described. The structural differences from the natural compounds lie in the substitution of the linoleic acid, which contains 2 double bonds, by a palmitic acid branched with a methyl group at the 10-position. The specific deuteration is introduced both in the branched and in the terminal methyl group, which was realized by common methods of successive deuteration of carboxylic groups in 3 steps. These modified fatty acids and the corresponding ceramides [EOS] and [EOP] were prepared for neutron scattering investigations. First results of these investigations, presented in this manuscript, show that the deuterated compounds could be detected in stratum corneum lipid model membranes. The deuterated ceramides [EOS] and [EOP] are valuable tools to investigate the influence of these long-chained ceramide species on the nanostructure of stratum corneum lipid model membranes. N-(2-[F-18]Fluoropropionyl)-l-glutamic acid ([F-18]FPGLU) is a potential amino acid tracer for tumor imaging with positron emission tomography. 
However, due to the complicated multistep synthesis, the routine production of [F-18]FPGLU presents many challenging laboratory requirements. To simplify the synthesis of this radiopharmaceutical, an efficient automated synthesis of [F-18]FPGLU was performed on a modified commercial fluorodeoxyglucose synthesizer via a 2-step procedure comprising F-18-fluorination and an on-column hydrolysis reaction. [F-18]FPGLU was synthesized in 12 +/- 2% (n=10, uncorrected) radiochemical yield based on [F-18]fluoride using the tosylated precursor 2. The radiochemical purity was 98%, and the overall synthesis time was 35 minutes. To further optimize the radiosynthesis conditions of [F-18]FPGLU, a brominated precursor 3 was also used for the preparation of [F-18]FPGLU, and the improved radiochemical yield was up to 20 +/- 3% in 35 minutes. Moreover, all these results were achieved using a similar on-column hydrolysis procedure on the modified fluorodeoxyglucose synthesis module. O-(2-Fluoroethyl)-O-(p-nitrophenyl) methylphosphonate 1 is an organophosphate cholinesterase inhibitor that creates a phosphonyl-serine covalent adduct at the enzyme active site, blocking cholinesterase activity in vivo. The corresponding radiolabeled O-(2-[F-18]fluoroethyl)-O-(p-nitrophenyl) methylphosphonate, [F-18]1, has been prepared previously and found to be an excellent positron emission tomography imaging tracer for assessment of cholinesterases in live brain, peripheral tissues, and blood. However, the previously reported [F-18]1 tracer synthesis was slow even with microwave acceleration, required high-performance liquid chromatography separation of the tracer from impurities, and gave suboptimal radiochemical yields. In this paper, we report a new synthetic approach to circumvent these shortcomings that relies on the facile reactivity of bis-(O,O-p-nitrophenyl) methylphosphonate, 2, with 2-fluoroethanol in the presence of DBU. 
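The radiochemical yields quoted above are reported as uncorrected (non-decay-corrected) values. As an illustrative aside, not part of the original studies, such a yield can be decay-corrected back to the start of synthesis using the physical half-life of fluorine-18 (109.77 min); the 20% / 35-minute figures below simply reuse numbers from the abstract:

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18, in minutes

def decay_corrected_yield(uncorrected_pct: float, synthesis_time_min: float) -> float:
    """Correct a radiochemical yield back to the start of synthesis.

    corrected = uncorrected * exp(lambda * t), with lambda = ln(2) / T_half.
    """
    decay_constant = math.log(2) / F18_HALF_LIFE_MIN
    return uncorrected_pct * math.exp(decay_constant * synthesis_time_min)

# e.g. a 20% uncorrected yield after a 35-minute synthesis
print(round(decay_corrected_yield(20.0, 35.0), 1))  # -> 24.9
```

The correction grows with synthesis time, which is why uncorrected yields understate short-half-life tracer syntheses.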
The cold synthesis was successfully translated to provide a more robust radiosynthesis. Using this new strategy, the desired tracer, [F-18]1, was obtained in a non-decay-corrected radiochemical yield of 8 +/- 2% (n=7), in >99% radiochemical and >95% chemical purity, with a specific activity of 3174 +/- 345 Ci/mmol (EOS). This new facile radiosynthesis routinely affords highly pure quantities of [F-18]1, which will further enable tracer development of OP cholinesterase inhibitors and their evaluation in vivo. We have evaluated the commercially available Burgess catalyst in hydrogen isotope exchange reactions with several substrates bearing different directing-group functionalities and have obtained moderate to high (50%-97% D) deuterium incorporations. The broad applicability in hydrogen isotope exchange reactions makes the Burgess catalyst a possible alternative to other commercially available iridium(I) catalysts. Detecting and measuring the dynamic redox events that occur in vivo is a prerequisite for understanding the impact of oxidants and redox events in normal and pathological conditions. These aspects are particularly relevant in cardiovascular tissues, wherein alterations of the redox balance are associated with stroke, aging, and pharmacological intervention. An ambiguous aspect of redox biology is how redox events occur in subcellular organelles, including mitochondria and nuclei. Genetically encoded roGFP2 fluorescent probes have become powerful tools for real-time detection of redox events. These probes detect hydrogen peroxide (H2O2) levels and the glutathione redox potential (EGSH), both with high spatiotemporal resolution. By generating novel transgenic (Tg) zebrafish lines that express compartment-specific roGFP2-Orp1 and Grx1-roGFP2 sensors, we analyzed the cytosolic, mitochondrial, and nuclear redox state of endothelial cells and cardiomyocytes of living zebrafish embryos. 
We provide evidence for the usefulness of these Tg lines for pharmacological compound screening by blocking the pentose phosphate pathway (PPP) and glutathione synthesis, thus altering the subcellular redox state in vivo. roGFP2-based transgenic zebrafish lines represent valuable tools to characterize the impact of redox changes in living tissues and offer new opportunities for studying metabolically driven antioxidant responses in biomedical research. The process of cold acclimation is an important adaptive response whereby many plants from temperate regions increase their freezing tolerance after being exposed to low non-freezing temperatures. The correct development of this response relies on proper accumulation of a number of transcription factors that regulate expression patterns of cold-responsive genes. Multiple studies have revealed a variety of molecular mechanisms involved in promoting the accumulation of these transcription factors. Interestingly, however, the mechanisms implicated in controlling such accumulation to ensure their adequate levels remain largely unknown. In this work, we demonstrate that prefoldins (PFDs) control the levels of HY5, an Arabidopsis transcription factor with a key role in cold acclimation through activating anthocyanin biosynthesis, in response to low temperature. Our results show that, under cold conditions, PFDs accumulate in the nucleus through a DELLA-dependent mechanism, where they interact with HY5, triggering its ubiquitination and subsequent degradation. The degradation of HY5 would result, in turn, in attenuation of anthocyanin biosynthesis, ensuring the accurate development of cold acclimation. These findings uncover an unanticipated nuclear function for PFDs in plant responses to abiotic stresses. Deposition of cell wall-reinforcing papillae is an integral component of the plant immune response. 
The Arabidopsis PENETRATION 3 (PEN3) ATP binding cassette (ABC) transporter plays a role in defense against numerous pathogens and is recruited to sites of pathogen detection, where it accumulates within papillae. However, the trafficking pathways and regulatory mechanisms contributing to recruitment of PEN3 and other defenses to the host-pathogen interface are poorly understood. Here, we report a confocal microscopy-based screen to identify mutants with altered localization of PEN3-GFP after inoculation with powdery mildew fungi. We identified a mutant, aberrant localization of PEN3 3 (alp3), displaying accumulation of the normally plasma membrane (PM)-localized PEN3-GFP in endomembrane compartments. The mutant was found to be disrupted in the P4-ATPase AMINOPHOSPHOLIPID ATPASE 3 (ALA3), a lipid flippase that plays a critical role in vesicle formation. We provide evidence that PEN3 undergoes continuous endocytic cycling from the PM to the trans-Golgi network (TGN). In alp3, PEN3 accumulates in the TGN, causing delays in recruitment to the host-pathogen interface. Our results indicate that PEN3 and other defense proteins continuously cycle through the TGN and that timely exit of these proteins from the TGN is critical for effective pre-invasive immune responses against powdery mildews. R-loop structures (RNA:DNA hybrids) have important functions in many biological processes, including transcriptional regulation and genome instability, among diverse organisms. DNA topoisomerase 1 (TOP1), an essential manipulator of DNA topology during RNA transcription and DNA replication, can prevent R-loop accumulation by removing the positive and negative DNA supercoiling generated by RNA polymerases during transcription. TOP1 is required for plant development, but little is known about its function in preventing co-transcriptional R-loop accumulation in various biological processes in plants. 
Here we show that knockdown of OsTOP1 strongly affects rice development, causing defects in root architecture and gravitropism, which are the consequences of misregulation of auxin signaling and transporter genes. We found that R-loops are naturally formed at rice auxin-related gene loci and overaccumulate when OsTOP1 is knocked down or OsTOP1 protein activity is inhibited. OsTOP1 therefore sets the accurate expression levels of auxin-related genes by preventing the overaccumulation of inherent R-loops. Our data reveal R-loops as important factors in polar auxin transport and plant root development, highlight that OsTOP1 functions as a key link between transcriptional R-loops and plant hormone signaling, and provide new insights into the transcriptional regulation of hormone signaling in plants. Seed germination is a crucial checkpoint for plant survival under unfavorable environmental conditions. Abscisic acid (ABA) signaling plays a vital role in integrating environmental information to regulate seed germination. It is well known that MCM1/AGAMOUS/DEFICIENS/SRF (MADS)-box transcription factors are key regulators of seed and flower development in Arabidopsis. However, little is known about their functions in seed germination. Here we report that the MADS-box transcription factor AGL21 is a negative regulator of seed germination and post-germination growth, controlling the expression of ABA-INSENSITIVE 5 (ABI5) in Arabidopsis. The AGL21-overexpressing plants were hypersensitive to ABA, salt, and osmotic stresses during seed germination and early post-germination growth, whereas agl21 mutants were less sensitive. We found that AGL21 positively regulated ABI5 expression in seeds. Consistently, genetic analyses showed that AGL21 is epistatic to ABI5 in controlling seed germination. Chromatin immunoprecipitation assays further demonstrated that AGL21 could directly bind to the ABI5 promoter in plant cells. 
Moreover, we found that AGL21 responded to multiple environmental stresses and plant hormones during seed germination. Taken together, our results suggest that AGL21 acts as a surveillance integrator that incorporates environmental cues and endogenous hormonal signals into ABA signaling to regulate seed germination and early post-germination growth. The switch from skotomorphogenesis to photomorphogenesis is a key developmental transition in the life of seed plants. While much of the underpinning proteome remodeling is driven by light-induced changes in gene expression, the proteolytic removal of specific proteins by the ubiquitin-26S proteasome system is also likely paramount. Through mass spectrometric analysis of ubiquitylated proteins affinity-purified from etiolated Arabidopsis seedlings before and after red-light irradiation, we identified a number of influential proteins whose ubiquitylation status is modified during this switch. We observed a substantial enrichment for proteins involved in auxin, abscisic acid, ethylene, and brassinosteroid signaling, peroxisome function, disease resistance, protein phosphorylation and light perception, including the phytochrome (Phy) A and phototropin photoreceptors. Soon after red-light treatment, PhyA becomes the dominant ubiquitylated species, with ubiquitin attachment sites mapped to six lysines. A PhyA mutant protected from ubiquitin addition at these sites is substantially more stable in planta upon photoconversion to Pfr and is hyperactive in driving photomorphogenesis. However, light still stimulates ubiquitylation and degradation of this mutant, implying that other attachment sites and/or proteolytic pathways exist. Collectively, we expand the catalog of ubiquitylation targets in Arabidopsis and show that this post-translational modification is central to the rewiring of plants for photoautotrophic growth. 
Tea is the world's oldest and most popular caffeine-containing beverage with immense economic, medicinal, and cultural importance. Here, we present the first high-quality nucleotide sequence of the repeat-rich (80.9%), 3.02-Gb genome of the cultivated tea tree Camellia sinensis. We show that the extraordinarily large genome of the tea tree resulted from the slow, steady, and long-term amplification of a few LTR retrotransposon families. In addition to a recent whole-genome duplication event, lineage-specific expansions of genes associated with flavonoid biosynthesis were discovered, which enhance catechin production, terpene enzyme activation, and stress tolerance, important features for tea flavor and adaptation. We demonstrate an independent and rapid evolution of the tea caffeine synthesis pathway relative to cacao and coffee. A comparative study among 25 Camellia species revealed that higher expression levels of most flavonoid- and caffeine-related, but not theanine-related, genes contribute to the increased production of catechins and caffeine and thus enhance tea-processing suitability and tea quality. These novel findings pave the way for further metabolomic and functional genomic refinement of characteristic biosynthesis pathways and will help develop a more diversified set of tea flavors that would eventually satisfy and attract more tea drinkers worldwide. Harnessing natural variation in photosynthetic capacity is a promising route toward yield increases, but physiological phenotyping is still too laborious for large-scale genetic screens. Here, we evaluate the potential of leaf reflectance spectroscopy to predict parameters of photosynthetic capacity in Brassica oleracea and Zea mays, a C-3 and a C-4 crop, respectively. To this end, we systematically evaluated properties of reflectance spectra and found that they are surprisingly similar over a wide range of species. 
We assessed the performance of a wide range of machine learning methods and selected recursive feature elimination on untransformed spectra followed by partial least squares regression as the preferred algorithm that yielded the highest predictive power. Learning curves of this algorithm suggest optimal species-specific sample sizes. Using the Brassica relative Moricandia, we evaluated the model transferability between species and found that cross-species performance cannot be predicted from phylogenetic proximity. The final intra-species models predict crop photosynthetic capacity with high accuracy. Based on the estimated model accuracy, we simulated the use of the models in selective breeding experiments, and showed that high-throughput photosynthetic phenotyping using our method has the potential to greatly improve breeding success. Our results indicate that leaf reflectance phenotyping is an efficient method for improving crop photosynthetic capacity. Traversal of a complicated route is often facilitated by considering it as a set of related sub-spaces. Such compartmentalization processes could occur within retrosplenial cortex, a structure whose neurons simultaneously encode position within routes and other spatial coordinate systems. Here, retrosplenial cortex neurons were recorded as rats traversed a track having recurrent structure at multiple scales. Consistent with a major role in compartmentalization of complex routes, individual retrosplenial cortex (RSC) neurons exhibited periodic activation patterns that repeated across route segments having the same shape. Concurrently, a larger population of RSC neurons exhibited single-cycle periodicity over the full route, effectively defining a framework for encoding of sub-route positions relative to the whole. The same population simultaneously provides a novel metric for distance from each route position to all others. 
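As a brief illustration of the recursive feature elimination idea in the reflectance-spectroscopy abstract above: wavelengths carrying the least predictive weight are discarded iteratively until a compact subset remains. The sketch below is a hypothetical toy, using ordinary least squares in place of partial least squares regression and synthetic data standing in for real spectra; it is not the authors' implementation.

```python
import numpy as np

def rfe_rank(X, y, n_keep):
    """Recursively drop the feature with the smallest absolute
    coefficient until n_keep features remain; returns kept indices."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        coef, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(coef))))
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))          # 60 "leaves" x 8 "wavelengths"
y = 3.0 * X[:, 2] - 2.0 * X[:, 5] + 0.1 * rng.normal(size=60)
print(rfe_rank(X, y, 2))              # the two informative bands survive
```

With real spectra one would substitute a PLS fit for the least-squares step and cross-validate the number of retained bands, as the abstract describes.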
Together, the findings implicate retrosplenial cortex in the extraction of path sub-spaces, the encoding of their spatial relationships to each other, and path integration. The human brain is organized into large-scale functional modules that have been shown to evolve in childhood and adolescence. However, it remains unknown whether the underlying white matter architecture is similarly refined during development, potentially allowing for improvements in executive function. In a sample of 882 participants (ages 8-22) who underwent diffusion imaging as part of the Philadelphia Neurodevelopmental Cohort, we demonstrate that structural network modules become more segregated with age, with weaker connections between modules and stronger connections within modules. Evolving modular topology facilitates global network efficiency and is driven by age-related strengthening of hub edges present both within and between modules. Critically, both modular segregation and network efficiency are associated with enhanced executive performance and mediate the improvement of executive functioning with age. Together, results delineate a process of structural network maturation that supports executive function in youth. In morphological terms, "form" is used to describe an object's shape and size. In dogs, facial form is stunningly diverse. Facial retrusion, the proximodistal shortening of the snout and widening of the hard palate, is common to brachycephalic dogs and is a welfare concern, as the incidence of respiratory distress and ocular trauma observed in this class of dogs is highly correlated with their skull form. Progress in identifying the molecular underpinnings of facial retrusion has been limited to the association of a missense mutation in BMP3 among small brachycephalic dogs. Here, we used morphometrics of skull isosurfaces derived from 374 pedigree and mixed-breed dogs to dissect the genetics of skull form. 
Through deconvolution of facial forms, we identified quantitative trait loci that are responsible for canine facial shapes and sizes. Our novel insights include recognition that the FGF4 retrogene insertion, previously associated with appendicular chondrodysplasia, also reduces neurocranium size. Focusing on facial shape, we resolved a quantitative trait locus on canine chromosome 1 to a 188-kb critical interval that encompasses SMOC2. An intronic transposable element within SMOC2 promotes the utilization of cryptic splice sites, causing its incorporation into transcripts, and drastically reduces SMOC2 gene expression in brachycephalic dogs. SMOC2 disruption affects the facial skeleton in a dose-dependent manner. The size effects of the associated SMOC2 haplotype are profound, accounting for 36% of facial length variation in the dogs we tested. Our data bring new focus to SMOC2 by highlighting its clinical implications in both human and veterinary medicine. To better understand how a stream of sensory data is transformed into a percept, we examined neuronal activity in vibrissal sensory cortex, vS1, together with vibrissal motor cortex, vM1 (a frontal cortex target of vS1), while rats compared the intensity of two vibrations separated by an interstimulus delay. Vibrations were "noisy", constructed by stringing together over time a sequence of velocity values sampled from a normal distribution; each vibration's mean speed was proportional to the width of the normal distribution. Durations of both stimulus 1 and stimulus 2 could vary from 100 to 600 ms. Psychometric curves reveal that rats overestimated the longer-duration stimulus; thus, the perceived intensity of a vibration grew over the course of hundreds of milliseconds even while the sensory input remained, on average, stationary. Human subjects demonstrated the identical perceptual phenomenon, indicating that the underlying mechanisms of temporal integration generalize across species. 
The time dependence of the percept allowed us to ask to what extent neurons encoded the ongoing stimulus stream versus the animal's percept. We demonstrate that vS1 firing correlated with the local features of the vibration, whereas vM1 firing correlated with the percept: the final vM1 population state varied, as did the rat's behavior, according to both stimulus speed and stimulus duration. Moreover, vM1 populations appeared to participate in the trace of the percept of stimulus 1 as the rat awaited stimulus 2. In conclusion, the transformation of sensory data into the percept appears to involve the integration and storage of vS1 signals by vM1. In most sexually reproducing plants, a single somatic, sub-epidermal cell in an ovule is selected to differentiate into a megaspore mother cell, which is committed to giving rise to the female germline. However, it remains unclear how intercellular signaling among somatic cells results in only one cell in the sub-epidermal layer differentiating into the megaspore mother cell. Here we uncovered a role of the THO complex in restricting the megaspore mother cell fate to a single cell. Mutations in TEX1, HPR1, and THO6, components of the THO/TREX complex, led to the formation of multiple megaspore mother cells, which were able to initiate gametogenesis. We demonstrated that TEX1 repressed the megaspore mother cell fate by promoting the biogenesis of TAS3-derived trans-acting small interfering RNA (ta-siRNA), which represses ARF3 expression. The TEX1 protein was present in epidermal cells, but not in the germline, and, through TAS3-derived ta-siRNA, restricted ARF3 expression to the medio domain of ovule primordia. Expansion of ARF3 expression into lateral epidermal cells in a TAS3 ta-siRNA-insensitive mutant led to the formation of supernumerary megaspore mother cells, suggesting that TEX1- and TAS3-mediated restriction of ARF3 expression limits excessive megaspore mother cell formation non-cell-autonomously. 
Our findings reveal the role of a small-RNA pathway in the regulation of female germline specification in Arabidopsis. Many insect species use multi-component sex pheromones to discriminate among potential mating partners [1-5]. In moths, pheromone blends tend to be dominated by one or two major components, but behavioral responses are frequently optimized by the inclusion of less abundant minor components [6]. An increasing number of studies have shown that female insects use these chemicals to convey their mating availability to males, who can assess the maturity of females and thus decide when to mate [7, 8]. However, little is known about the biological mechanisms that enable males to assess female reproductive status. In this study, we found that females of Helicoverpa armigera avoid nonoptimal mating by inhibiting males with the pheromone antagonist cis-11-hexadecenol (Z11-16:OH). We also show that this antagonist-mediated optimization of mating time ensures maximum fecundity. To further investigate molecular aspects of this phenomenon, we used the CRISPR/Cas9 system to knock out odorant receptor 16 (OR16), the only pheromone receptor tuned to Z11-16:OH. In mutant males, electrophysiological and behavioral responses to Z11-16:OH were abolished. Inability to detect Z11-16:OH prompted the males to mate with immature females, which resulted in significantly reduced viability of eggs. In conclusion, our study demonstrates that the sensitivity of OR16 to Z11-16:OH regulates optimal mating time and thus ensures maximum fecundity. These results may suggest novel strategies to disrupt pest insect mating. Recent climate change on the Antarctic Peninsula is well documented [1-5], with warming, alongside increases in precipitation, wind strength, and melt season length [1, 6, 7], driving environmental change [8, 9]. 
However, meteorological records mostly began in the 1950s, and paleoenvironmental datasets that provide a longer-term context to recent climate change are limited in number and often from single sites [7] and/or discontinuous in time [10, 11]. Here we use moss bank cores from a 600-km transect from Green Island (65.3 degrees S) to Elephant Island (61.1 degrees S) as paleoclimate archives sensitive to regional temperature change, moderated by water availability and surface microclimate [12, 13]. Mosses grow slowly, but cold temperatures minimize decomposition, facilitating multiproxy analysis of preserved peat [14]. Carbon isotope discrimination (Δ13C) in cellulose indicates the favorability of conditions for photosynthesis [15]. Testate amoebae are representative heterotrophs in peatlands [16-18], so their populations are an indicator of microbial productivity [14]. Moss growth and mass accumulation rates represent the balance between growth and decomposition [19]. Analyzing these proxies in five cores at three sites over 150 years reveals increased biological activity over the past ca. 50 years, in response to climate change. We identified significant changepoints in all sites and proxies, suggesting fundamental and widespread changes in the terrestrial biosphere. The regional sensitivity of moss growth to past temperature rises suggests that terrestrial ecosystems will alter rapidly under future warming, leading to major changes in the biology and landscape of this iconic region: an Antarctic greening to parallel well-established observations in the Arctic [20]. Melanopsin photoreception enhances retinal responses to variations in ambient light (irradiance) and drives non-image-forming visual reflexes such as circadian entrainment [1-6]. Melanopsin signals also reach brain regions responsible for form vision [7-9], but melanopsin's contribution, if any, to encoding visual images remains unclear. 
We addressed this deficit using principles of receptor silent substitution to present images in which visibility for melanopsin versus rods+cones was independently modulated, and we recorded evoked responses in the mouse dorsal lateral geniculate nucleus (dLGN; thalamic relay for cortical vision). Approximately 20% of dLGN units responded to patterns visible only to melanopsin, revealing that melanopsin signals alone can convey spatial information. Spatial receptive fields (RFs) mapped using melanopsin-isolating stimuli had ON centers with diameters of approximately 13 degrees. Melanopsin and rod+cone responses differed in the temporal domain, and responses to slow changes in radiance (< 0.9 Hz) and stationary images were deficient when stimuli were rendered invisible for melanopsin. We employed these data to devise and test a mathematical model of melanopsin's involvement in form vision and applied it, along with further experimental recordings, to explore melanopsin signals under simulated active view of natural scenes. Our findings reveal that melanopsin enhances the thalamic representation of scenes containing local correlations in radiance, compensating for the high temporal frequency bias of cone vision and the negative correlation between magnitude and frequency for changes in direction of view. Together, these data reveal a distinct melanopsin contribution to encoding visual images, predicting that, under natural view, melanopsin augments the early visual system's ability to encode patterns over moderate spatial scales. A direct retinal projection targets the suprachiasmatic nucleus (SCN) (an important hypothalamic control center). The accepted function of this projection is to convey information about ambient light (irradiance) to synchronize the SCN's endogenous circadian clock with local time and drive the diurnal variations in physiology and behavior [1-4]. Here, we report that it also renders the SCN responsive to visual images. 
We map spatial receptive fields (RFs) for SCN neurons and find that only a minority are excited (or inhibited) by light from across the scene as expected for irradiance detectors. The most commonly encountered units have RFs with small excitatory centers, combined with very extensive inhibitory surrounds that reduce their sensitivity to global changes in light in favor of responses to spatial patterns. Other units have larger excitatory RF centers, but these always cover a coherent region of visual space, implying visuotopic order at the single-unit level. Approximately 75% of light-responsive SCN units modulate their firing according to simple spatial patterns (drifting or inverting gratings) without changes in irradiance. The time-averaged firing rate of the SCN is modestly increased under these conditions, but including spatial contrast did not significantly alter the circadian phase resetting efficiency of light. Our data indicate that the SCN contains information about irradiance and spatial patterns. This newly appreciated sensory capacity provides a mechanism by which behavioral and physiological systems downstream of the SCN could respond to visual images [5]. The collapse of marine ecosystems during the end-Cretaceous mass extinction involved the base of the food chain [1] up to ubiquitous vertebrate apex predators [2-5]. Large marine reptiles became suddenly extinct at the Cretaceous-Paleogene (K/Pg) boundary, whereas other contemporaneous groups such as bothremydid turtles or dyrosaurid crocodylomorphs, although affected at the familial, genus, or species level, survived into post-crisis environments of the Paleocene [5-9] and could have found refuge in freshwater habitats [10-12]. A recent hypothesis proposes that the extinction of plesiosaurians and mosasaurids could have been caused by an important drop in sea level [13]. 
Mosasaurids are unusually diverse and locally abundant in the Maastrichtian phosphatic deposits of Morocco and, together with large sharks and the one species of elasmosaurid plesiosaurian recognized so far, contribute to an overabundance of apex predators [3, 7, 14, 15]. This high local diversity of marine reptiles, exhibiting different body masses and a wealth of tooth morphologies, hints at complex trophic interactions within this latest Cretaceous marine ecosystem. Using calcium isotopes, we investigated the trophic structure of this extinct assemblage. Our results are consistent with a calcium isotope pattern observed in modern marine ecosystems and show that plesiosaurians and mosasaurids indiscriminately fall into the tertiary piscivore group. This suggests that marine reptile apex predators relied on a single dietary calcium source, compatible with the vulnerable wasp-waist food webs of the modern world [16]. This inferred peculiar ecosystem structure may help explain plesiosaurian and mosasaurid extinction following the end-Cretaceous biological crisis. X-cells have long been associated with tumor-like formations (xenomas) in marine fish, including many of commercial interest. The name was first used to refer to the large polygonal cells that were found in epidermal xenomas from flatfish from the Pacific Northwest [1]. Similar-looking cells from pseudobranchial xenomas had previously been reported from cod in the Atlantic [2] and Pacific Oceans [3]. X-cell pathologies have been reported from five teleost orders: Pleuronectiformes (flatfish), Perciformes (perch-like fish), Gadiformes (cods), Siluriformes (catfish), and Salmoniformes (salmonids). Various explanations have been advanced for their etiology, including being adenomas or adenocarcinomas [4, 5], virally transformed fish cells [6-8], or products of coastal pollution [9, 10]. 
It was hypothesized that X-cells were protozoan parasites [1, 11-13], and although recent molecular analyses have confirmed this, they have failed to place them in any phylum [14-18], demonstrating weak phylogenetic associations with the haplosporidians [16] or the alveolates [15]. Here, we sequenced rRNA genes from European and Japanese fish that are known to develop X-cell xenomas. We also generated a metagenomic sequence library from X-cell xenomas of blue whiting and Atlantic cod and assembled 63 X-cell protein-coding genes for a eukaryote-wide phylogenomic analysis. We show that X-cells group in two highly divergent clades, robustly sister to the bivalve parasite Perkinsus. We formally describe these as Gadixcellia and Xcellia and provide a phylogenetic context to catalyze future research. We also screened Atlantic cod populations for xenomas and residual pathologies and show that X-cell infections are more prevalent and widespread than previously known. Coordination of growth between individual organs and the whole body is essential during development to produce adults with appropriate size and proportions [1, 2]. How local organ-intrinsic signals and nutrient-dependent systemic factors are integrated to generate correctly proportioned organisms under different environmental conditions is poorly understood. In Drosophila, Hippo/Warts signaling functions intrinsically to regulate tissue growth and organ size [3, 4], whereas systemic growth is controlled via antagonistic interactions of the steroid hormone ecdysone and nutrient-dependent insulin/insulin-like growth factor (IGF) (insulin) signaling [2, 5]. The interplay between insulin and ecdysone signaling regulates systemic growth and controls organismal size. Here, we show that Warts (Wts; LATS1/2) signaling regulates systemic growth in Drosophila by activating basal ecdysone production, which negatively regulates body growth. 
Further, we provide evidence that Wts mediates effects of insulin and the neuropeptide prothoracicotropic hormone (PTTH) on regulation of ecdysone production through Yorkie (Yki; YAP/TAZ) and the microRNA bantam (ban). Thus, Wts couples insulin signaling with ecdysone production to adjust systemic growth in response to nutritional conditions during development. Inhibition of Wts activity in the ecdysone-producing cells non-autonomously slows the growth of the developing imaginal-disc tissues while simultaneously leading to overgrowth of the animal. This indicates that ecdysone, while restricting overall body growth, is limiting for growth of certain organs. Our data show that, in addition to its well-known intrinsic role in restricting organ growth, Wts/Yki/ban signaling also controls growth systemically by regulating ecdysone production, a mechanism that we propose controls growth between tissues and organismal size in response to nutrient availability. Half a century ago, MacArthur and Wilson proposed that the number of species on islands tends toward a dynamic equilibrium diversity around which species richness fluctuates [1]. The current prevailing view in island biogeography accepts the fundamentals of MacArthur and Wilson's theory [2] but questions whether their prediction of equilibrium can be fulfilled over evolutionary time-scales, given the unpredictable and ever-changing nature of island geological and biotic features [3-7]. Here we conduct a complete molecular phylogenetic survey of the terrestrial bird species from four oceanic archipelagos that make up the diverse Macaronesian bioregion-the Azores, the Canary Islands, Cape Verde, and Madeira [8, 9]. We estimate the times at which birds colonized and speciated in the four archipelagos, including many previously unsampled endemic and non-endemic taxa and their closest continental relatives. 
We develop and fit a new multi-archipelago dynamic stochastic model to these data, explicitly incorporating information from 91 taxa, both extant and extinct. Remarkably, we find that all four archipelagos have independently achieved and maintained a dynamic equilibrium over millions of years. Biogeographical rates are homogeneous across archipelagos, except for the Canary Islands, which exhibit higher speciation and colonization. Our finding that the avian communities of the four Macaronesian archipelagos display an equilibrium diversity pattern indicates that a diversity plateau may be rapidly achieved on islands where rates of in situ radiation are low and extinction is high. This study reveals that equilibrium processes may be more prevalent than recently proposed, supporting MacArthur and Wilson's 50-year-old theory. Plesiosaurs were the longest-surviving group of secondarily marine tetrapods, comparable in diversity to today's cetaceans. During their long evolutionary history, which spanned the Jurassic and the Cretaceous (201 to 66 Ma), plesiosaurs repeatedly evolved long- and short-necked body plans [1, 2]. Despite this postcranial plasticity, short-necked plesiosaur clades have traditionally been regarded as being highly constrained to persistent and clearly distinct ecological niches: advanced members of Pliosauridae (ranging from the Middle Jurassic to the early Late Cretaceous) have been characterized as apex predators [2-5], whereas members of the distantly related clade Polycotylidae (middle to Late Cretaceous) were thought to have been fast-swimming piscivores [1, 5-7]. We report a new, highly unusual pliosaurid from the Early Cretaceous of Russia that shows close convergence with the cranial structure of polycotylids: Luskhan itilensis gen. et sp. nov. Using novel cladistic and ecomorphological data, we show that pliosaurids iteratively evolved polycotylid-like cranial morphologies from the Early Jurassic until the Early Cretaceous. 
This underscores the ecological diversity of derived pliosaurids and reveals a more complex evolutionary history than their iconic representation as gigantic apex predators of Mesozoic marine ecosystems suggests. Collectively, these data demonstrate an even higher degree of morphological plasticity and convergence in the evolution of plesiosaurs than previously thought and suggest the existence of an optimal ecomorphology for short-necked piscivorous plesiosaurs through time and across phylogeny. Red algal plastid genomes are often considered ancestral and evolutionarily stable, and thus more closely resembling the last common ancestral plastid genome of all photosynthetic eukaryotes [1, 2]. However, sampling of red algal diversity is still quite limited (e.g., [2-5]). We aimed to remedy this problem. To this end, we sequenced six new plastid genomes from four undersampled and phylogenetically disparate red algal classes (Porphyridiophyceae, Stylonematophyceae, Compsopogonophyceae, and Rhodellophyceae) and discovered an unprecedented degree of genomic diversity among them. These genomes are rich in introns, enlarged intergenic regions, and transposable elements (in the rhodellophycean Bulboplastis apyrenoidosa), and include the largest and most intron-rich plastid genomes ever sequenced (that of the rhodellophycean Corynoplastis japonica; 1.13 Mbp). Sophisticated phylogenetic analyses accounting for compositional heterogeneity show that these four "basal" red algal classes form a larger monophyletic group, Proteorhodophytina subphylum nov., and confidently resolve the large-scale relationships in the Rhodophyta. Our analyses also suggest that secondary red plastids originated before the diversification of all mesophilic red algae. Our genomic survey has challenged the current paradigmatic view of red algal plastid genomes as "living fossils" [1, 2, 6] by revealing an astonishing degree of divergence in size, organization, and non-coding DNA content. 
A closer look at red algae shows that they comprise the most ancestral (e.g., [2, 7, 8]) as well as some of the most divergent plastid genomes known. Understanding both the organization of the human cortex and its relation to the performance of distinct functions is fundamental in neuroscience. The primary sensory cortices display topographic organization, whereby receptive fields follow a characteristic pattern, from tonotopy to retinotopy to somatotopy [1]. GABAergic signaling is vital to the maintenance of cortical receptive fields [2]; however, it is unclear how this fine-grain inhibition relates to measurable patterns of perception [3, 4]. Based on perceptual changes following perturbation of the GABAergic system, it is conceivable that the resting level of cortical GABAergic tone directly relates to the spatial specificity of activation in response to a given input [5-7]. The specificity of cortical activation can be considered in terms of cortical tuning: greater cortical tuning yields more localized recruitment of cortical territory in response to a given input. We applied a combination of fMRI, MR spectroscopy, and psychophysics to substantiate the link between the cortical neurochemical milieu, the tuning of cortical activity, and variability in perceptual acuity, using human somatosensory cortex as a model. We provide data that explain human perceptual acuity in terms of both the underlying cellular and metabolic processes. Specifically, higher concentrations of sensorimotor GABA are associated with more selective cortical tuning, which in turn is associated with enhanced perception. These results show anatomical and neurochemical specificity and are replicated in an independent cohort. The mechanistic link from neurochemistry to perception provides a vital step in understanding population variability in sensory behavior, informing metabolic therapeutic interventions to restore perceptual abilities clinically. 
The kinetochore links chromosomes to dynamic spindle microtubules and drives both chromosome congression and segregation. To do so, the kinetochore must hold on to depolymerizing and polymerizing microtubules. At metaphase, one sister kinetochore couples to depolymerizing microtubules, pulling its sister along polymerizing microtubules [1, 2]. Distinct kinetochore-microtubule interfaces mediate these behaviors: active interfaces transduce microtubule depolymerization into mechanical work, and passive interfaces generate friction as the kinetochore moves along microtubules [3, 4]. Despite a growing understanding of the molecular components that mediate kinetochore binding [5-7], we do not know how kinetochores physically interact with polymerizing versus depolymerizing microtubule bundles, and whether they use the same mechanisms and regulation to do so. To address this question, we focus on the mechanical role of the essential load-bearing protein Hec1 [8-11] in mammalian cells. Hec1's affinity for microtubules is regulated by Aurora B phosphorylation on its N-terminal tail [12-15], but its role at the interface with polymerizing versus depolymerizing microtubules remains unclear. Here we use laser ablation to trigger cellular pulling on mutant kinetochores and decouple sisters in vivo, and thereby separately probe Hec1's role on polymerizing versus depolymerizing microtubules. We show that Hec1 tail phosphorylation tunes friction along polymerizing microtubules and yet does not compromise the kinetochore's ability to grip depolymerizing microtubules. Together, the data suggest that kinetochore regulation has differential effects on engagement with growing and shrinking microtubules. Through this mechanism, the kinetochore can modulate its grip on microtubules over mitosis and yet retain its ability to couple to microtubules powering chromosome movement. 
When we recall an experience, we rely upon the associations that we formed during the experience, such as those among objects, time, and place [1]. These associations are better remembered when they are familiar and draw upon generalized knowledge, suggesting that we use semantic memory in the service of episodic memory [2, 3]. Moreover, converging evidence suggests that episodic memory retrieval involves the reinstatement of neural activity that was present when we first experienced the event. Therefore, we hypothesized that retrieving associations should also reinstate the neural activity responsible for semantic processing. Indeed, previous studies have suggested that verbal memory retrieval leads to the reinstatement of activity across regions of the brain that include the distributed semantic processing network [4-6], but it is unknown whether and how individual neurons in the human cortex participate in the reinstatement of semantic representations. Recent advances using high-density microelectrode arrays (MEAs) have allowed clinicians to record from populations of neurons in the human cortex [7, 8]. Here we used MEAs to record neuronal spiking activity in the human middle temporal gyrus (MTG), a cortical region supporting the semantic representation of words [9-11], as participants performed a verbal paired-associates task. We provide novel evidence that population spiking activity in the MTG forms distinct representations of semantic concepts and that these representations are reinstated during the retrieval of those words. Background: Deregulation of long non-coding RNAs (lncRNAs) has been implicated in cancer initiation and progression. Current methods can only capture differential expression of lncRNAs at the population level and ignore the heterogeneous expression of lncRNAs in individual patients. 
Methods: We propose a method (LncRIndiv) to identify differentially expressed (DE) lncRNAs in individual cancer patients by exploiting the disrupted ordering of expression levels of lncRNAs in each disease sample in comparison with the stable normal ordering. LncRIndiv was applied to lncRNA expression profiles of lung adenocarcinoma (LUAD). Based on the expression profile of LUAD individual-level DE lncRNAs, we used a forward selection procedure to identify a prognostic signature for stage I-II LUAD patients without adjuvant therapy. Results: In both simulated data and real pair-wise cancer and normal sample data, the LncRIndiv method showed good performance. Based on the individual-level DE lncRNAs, we developed a robust prognostic signature consisting of two lncRNAs (C1orf132 and TMPO-AS1) for stage I-II LUAD patients without adjuvant therapy (P = 3.06 x 10(-6), log-rank test), which was confirmed in two independent datasets, GSE50081 (P = 1.82 x 10(-2), log-rank test) and GSE31210 (P = 7.43 x 10(-4), log-rank test), after adjusting for other clinical factors such as smoking status and stage. Pathway analysis showed that TMPO-AS1 and C1orf132 could affect the prognosis of LUAD patients through regulating cell cycle and cell adhesion. Conclusions: LncRIndiv can successfully detect DE lncRNAs in individuals and can be applied to identify prognostic signatures for LUAD patients. Background: Cancer immunotherapy offers a promising approach in cancer treatment. The adenosine A2A receptor (A2AR) can protect cancerous tissues from immune clearance by inhibiting the T cell response. To date, the role of A2AR in head and neck squamous cell carcinoma (HNSCC) has not been investigated. Here, we sought to explore the expression and immunotherapeutic value of A2AR blockade in HNSCC. Methods: The expression of A2AR was evaluated by immunostaining in 43 normal mucosae, 48 dysplasia and 165 primary HNSCC tissues. 
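The within-sample rank-comparison idea behind LncRIndiv can be illustrated with a simplified sketch (a hypothetical toy version, not the published algorithm; the gene names, thresholds and reversal criterion here are illustrative assumptions): find gene pairs whose relative ordering is stable across normal samples, then flag genes whose orderings are widely reversed in an individual disease sample.

```python
from itertools import combinations

def stable_pairs(normal, threshold=0.95):
    """Find gene pairs (lo, hi) with expr[lo] < expr[hi] in >= threshold of
    normal samples. `normal` is a list of per-sample {gene: expression} dicts."""
    genes = sorted(normal[0])
    pairs = []
    for a, b in combinations(genes, 2):
        frac = sum(s[a] < s[b] for s in normal) / len(normal)
        if frac >= threshold:
            pairs.append((a, b))        # stable ordering: a below b
        elif frac <= 1 - threshold:
            pairs.append((b, a))        # stable ordering: b below a
    return pairs

def disrupted_genes(sample, pairs, min_reversals=2):
    """Flag genes participating in many reversed orderings in one disease sample."""
    reversals = {}
    for lo, hi in pairs:
        if sample[lo] > sample[hi]:     # normal ordering reversed in this individual
            reversals[lo] = reversals.get(lo, 0) + 1
            reversals[hi] = reversals.get(hi, 0) + 1
    return {g for g, n in reversals.items() if n >= min_reversals}
```

A gene that sits below (or above) its stable partners in most normals but flips position in one tumor sample would be reported as individually differentially expressed for that patient.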
The immunotherapeutic value of A2AR blockade was assessed in vivo in a genetically defined, immunocompetent HNSCC mouse model. Results: Immunostaining of HNSCC tissue samples revealed that increased expression of A2AR on tumor-infiltrating immune cells correlated with advanced pathological grade, larger tumor size and positive lymph node status. Elevated A2AR expression was also detected in recurrent HNSCC and in HNSCC tissues after induction chemotherapy. The expression of A2AR was found to be significantly correlated with HIF-1 alpha, CD73, CD8 and Foxp3. Furthermore, an increased population of CD4(+)Foxp3(+) regulatory T cells (Tregs), which partially expressed A2AR, was observed in an immunocompetent mouse model that spontaneously develops HNSCC. Pharmacological blockade of A2AR by SCH58261 delayed tumor growth in the HNSCC mouse model. Meanwhile, A2AR blockade significantly reduced the population of CD4(+)Foxp3(+) Tregs and enhanced the anti-tumor response of CD8(+) T cells. Conclusions: These results offer preclinical proof of concept for administering an A2AR inhibitor as prophylactic experimental therapy of HNSCC and suggest that A2AR blockade may be a novel strategy for HNSCC immunotherapy. Uterine smooth muscle tumors range from benign leiomyomas to malignant leiomyosarcomas. Based on numerous molecular studies, leiomyomas and leiomyosarcomas mostly lack shared mutations, and the majority of tumors are believed to develop through distinct mechanisms. To further characterize the molecular variability among uterine smooth muscle tumors, and simultaneously investigate their potential malignant progression, we examined the frequency of known genetic leiomyoma driver alterations (MED12 mutations, HMGA2 overexpression, biallelic FH inactivation) in 65 conventional leiomyomas, 94 histopathological leiomyoma variants (18 leiomyomas with bizarre nuclei, 22 cellular, 29 highly cellular, and 25 mitotically active leiomyomas), and 51 leiomyosarcomas. 
Of the 210 tumors analyzed, 107 had mutations in one of the three driver genes. No tumor had more than one mutation, confirming that these alterations are mutually exclusive. MED12 mutations were the most common alterations in conventional and mitotically active leiomyomas and leiomyosarcomas, while leiomyomas with bizarre nuclei were most often FH deficient and cellular tumors showed frequent HMGA2 overexpression. Highly cellular leiomyomas displayed the fewest alterations, leaving the majority of these tumors with no known driver aberration. Our results indicate that, based on their molecular background, histopathological leiomyoma subtypes differ not only from conventional leiomyomas but also from each other. The presence of leiomyoma driver alterations in nearly one third of leiomyosarcomas suggests that some tumors arise through a leiomyoma precursor lesion or that these mutations also provide a growth advantage to highly aggressive cancers. It is clinically relevant to understand the molecular background of the various smooth muscle tumor subtypes, as it may lead to improved diagnosis and personalized treatments in the future. Biallelic loss-of-function mutations in the RNA-binding protein EIF4A3 cause Richieri-Costa-Pereira syndrome (RCPS), an autosomal recessive condition mainly characterized by craniofacial and limb malformations. However, the pathogenic cellular mechanisms responsible for this syndrome are entirely unknown. Here, we used two complementary approaches, patient-derived induced pluripotent stem cells (iPSCs) and conditional Eif4a3 mouse models, to demonstrate that defective neural crest cell (NCC) development explains RCPS craniofacial abnormalities. RCPS iNCCs have decreased migratory capacity, a distinct phenotype relative to other craniofacial disorders. Eif4a3 haploinsufficient embryos presented altered mandibular process fusion and micrognathia, thus recapitulating the most penetrant phenotypes of the syndrome. 
These defects were evident in both ubiquitous and NCC-specific Eif4a3 haploinsufficient animals, demonstrating a cell-autonomous requirement for Eif4a3 in NCCs. Notably, RCPS NCC-derived mesenchymal stem-like cells (nMSCs) showed premature bone differentiation, a phenotype paralleled by premature clavicle ossification in Eif4a3 haploinsufficient embryos. Likewise, nMSCs presented compromised in vitro chondrogenesis, and Meckel's cartilage was underdeveloped in vivo. These findings indicate novel and essential requirements of EIF4A3 for NCC migration and osteochondrogenic differentiation during craniofacial development. Altogether, the complementary use of iPSCs and mouse models pinpoints unique cellular mechanisms by which EIF4A3 mutation causes RCPS, and provides a paradigm to study craniofacial disorders. Myotonic dystrophy type 1 (DM1) is caused by an expansion of CUG repeats in DMPK mRNAs. This mutation affects alternative splicing through misregulation of RNA-binding proteins. Among the pre-mRNAs that are mis-spliced, several code for proteins involved in calcium homeostasis, suggesting that calcium handling and signaling are perturbed in DM1. Here, we analyzed the expression of such proteins in DM1 mouse muscle. We found that the levels of several sarcoplasmic reticulum proteins (SERCA1, sarcolipin and calsequestrin) are altered, likely contributing to an imbalance in calcium homeostasis. We also observed that calcineurin (CnA) signaling is hyperactivated in DM1 muscle. Indeed, CnA expression and phosphatase activity are both markedly increased in DM1 muscle. Consistent with this, we found that activators of the CnA pathway (MLP, FHL1) are also elevated. Consequently, NFATc1 expression is increased in DM1 muscle and NFATc1 becomes relocalized to myonuclei, together with an up-regulation of its transcriptional targets (RCAN1.4 and myoglobin). Accordingly, DM1 mouse muscles display an increase in oxidative metabolism and fiber hypertrophy. 
To determine the functional consequences of this CnA hyperactivation, we administered cyclosporine A, an inhibitor of CnA, to DM1 mice. Muscles of treated DM1 mice showed an increase in CUGBP1 levels and an exacerbation of key alternative splicing events associated with DM1. Finally, inhibition of CnA in cultured human DM1 myoblasts also exacerbated mis-splicing of the insulin receptor. Together, these findings show for the first time that calcium-CnA signaling is hyperactivated in DM1 muscle and that such hyperactivation represents a beneficial compensatory adaptation to the disease. Collagen prolyl 4-hydroxylases (C-P4Hs) play a central role in the formation and stabilization of the triple helical domain of collagens. P4HA1 encodes the catalytic alpha(I) subunit of the main C-P4H isoenzyme (C-P4H-I). We now report human bi-allelic P4HA1 mutations in a family with a congenital-onset disorder of connective tissue, manifesting as early-onset joint hypermobility, joint contractures, muscle weakness and bone dysplasia as well as high myopia, with evidence of clinical improvement of motor function over time in the surviving patient. Similar to P4ha1 null mice, which die prenatally, the muscle tissue from P1 and P2 was found to have reduced collagen IV immunoreactivity at the muscle basement membrane. The patients were compound heterozygous for frameshift and splice site mutations leading to reduced, but not absent, P4HA1 protein levels and C-P4H activity in dermal fibroblasts compared to age-matched control samples. Differential scanning calorimetry revealed reduced thermal stability of collagen in patient-derived dermal fibroblasts versus age-matched control samples. Mutations affecting the family of C-P4Hs, and in particular C-P4H-I, should be considered in patients presenting with congenital connective tissue/myopathy overlap disorders with joint hypermobility, contractures, mild skeletal dysplasia and high myopia. 
In retinal photoreceptors, vectorial transport of cargo is critical for the transduction of visual signals, and defects in intracellular trafficking can lead to photoreceptor degeneration and vision impairment. Molecular signatures associated with the routing of transport vesicles in photoreceptors are poorly understood. We previously reported the identification of a novel rod photoreceptor-specific isoform of Receptor Expression Enhancing Protein (REEP) 6, which belongs to a family of proteins involved in the intracellular transport of receptors to the plasma membrane. Here we show that loss of REEP6 in mice (Reep6(-/-)) results in progressive retinal degeneration. Rod photoreceptor dysfunction is observed in Reep6(-/-) mice as early as one month of age and is associated with aberrant accumulation of vacuole-like structures at the apical inner segment and a reduction in selected rod phototransduction proteins. We demonstrate that REEP6 is detected in a subset of clathrin-coated vesicles and interacts with the t-SNARE Syntaxin3. In concordance with the rod degeneration phenotype in Reep6(-/-) mice, whole exome sequencing identified a homozygous REEP6-E75K mutation in two retinitis pigmentosa families of different ethnicities. Alpha-synuclein (aSyn) is considered a major culprit in Parkinson's disease (PD) pathophysiology. However, the precise molecular function of the protein remains elusive. Recent evidence suggests that aSyn may play a role in transcriptional regulation, possibly by modulating the acetylation status of histones. Our study aimed at evaluating the impact of wild-type (WT) and mutant A30P aSyn on gene expression, in a dopaminergic neuronal cell model, and at deciphering potential mechanisms underlying aSyn-mediated transcriptional deregulation. We performed gene expression analysis using RNA-sequencing in Lund Human Mesencephalic (LUHMES) cells expressing endogenous (control) or increased levels of WT or A30P aSyn. 
Compared to control cells, cells expressing both aSyn variants exhibited robust changes in the expression of several genes, including downregulation of major genes involved in DNA repair. WT aSyn, unlike A30P aSyn, promoted DNA damage and increased levels of phosphorylated p53. In dopaminergic neuronal cells, increased aSyn expression led to reduced levels of acetylated histone 3. Importantly, treatment with sodium butyrate, a histone deacetylase inhibitor (HDACi), rescued WT aSyn-induced DNA damage, possibly via upregulation of genes involved in DNA repair. Overall, our findings provide novel and compelling insight into the mechanisms associated with aSyn neurotoxicity in dopaminergic cells, which could be ameliorated with an HDACi. Future studies will be crucial to further validate these findings and to define novel possible targets for intervention in PD. Myotonic dystrophy type 1 (DM1) is caused by an expansion of CTG repeats in the 3' untranslated region (UTR) of the dystrophia myotonica protein kinase (DMPK) gene. Cognitive impairment associated with structural change in the brain is prevalent in DM1. How this histopathological abnormality develops during disease progression remains elusive. Nuclear accumulation of mutant DMPK mRNA containing expanded CUG repeats, which disrupts the cytoplasmic and nuclear activities of muscleblind-like (MBNL) proteins, has been implicated in DM1 neural pathogenesis. However, the association between MBNL dysfunction and morphological changes has not been investigated. We generated a mouse model for postnatal expression of expanded CUG RNA in the brain that recapitulates the features of the DM1 brain, including the formation of nuclear RNA and MBNL foci, learning disability, brain atrophy and misregulated alternative splicing. 
Characterization of the pathological abnormalities in a time-course study revealed that hippocampus-related learning and synaptic potentiation were impaired before structural changes in the brain, followed by brain atrophy associated with a progressive reduction of axon and dendrite integrity. Moreover, cytoplasmic MBNL1 distribution on dendrites decreased before dendrite degeneration, whereas reduced MBNL2 expression and altered MBNL-regulated alternative splicing were evident after degeneration. These results suggest that the expression of expanded CUG RNA in the DM1 brain results in neurodegenerative processes, with reduced cytoplasmic MBNL1 as an early response to expanded CUG RNA. Nesprins-1 and -2 are highly expressed in skeletal and cardiac muscle and, together with SUN (Sad1p/UNC84)-domain containing proteins and lamin A/C, form the LInker of Nucleoskeleton-and-Cytoskeleton (LINC) bridging complex at the nuclear envelope (NE). Mutations in nesprin-1/2 have previously been found in patients with autosomal dominant Emery-Dreifuss muscular dystrophy (EDMD) as well as dilated cardiomyopathy (DCM). In this study, three novel rare variants (R8272Q, S8381C and N8406K) in the C-terminus of the SYNE1 gene (nesprin-1) were identified in seven DCM patients by mutation screening. Expression of these mutants caused nuclear morphology defects and reduced lamin A/C and SUN2 staining at the NE. GST pull-down indicated that nesprin-1/lamin/SUN interactions were disrupted. Nesprin-1 mutations were also associated with augmented activation of the ERK pathway in vitro and in hearts in vivo. During C2C12 muscle cell differentiation, nesprin-1 levels are increased concomitantly with those of kinesin light chain (KLC-1/2), and immunoprecipitation and GST pull-down showed that these proteins interacted via a recently identified LEWD domain in the C-terminus of nesprin-1. 
Expression of nesprin-1 mutants in C2C12 cells caused defects in myoblast differentiation and fusion, associated with dysregulation of myogenic transcription factors and disruption of the nesprin-1 and KLC-1/2 interaction at the outer nuclear membrane. Expression of nesprin-1 alpha(2) WT and mutants in zebrafish embryos caused heart developmental defects that varied in severity. These findings support a role for nesprin-1 in myogenesis and muscle disease, and uncover a novel mechanism whereby disruption of the LINC complex may contribute to the pathogenesis of DCM. Niemann-Pick type C1 (NPC1) disease is a neurodegenerative lysosomal storage disorder caused by mutations in the NPC1 gene, encoding a transmembrane protein related to the Sonic hedgehog (Shh) receptor, Patched, and involved in the intracellular trafficking of cholesterol. We have recently found that the proliferation of cerebellar granule neuron precursors is significantly reduced in Npc1(-/-) mice due to the downregulation of Shh expression. This finding prompted us to analyze the formation of the primary cilium, a non-motile organelle that is specialized for Shh signal transduction and responsible, when defective, for several human genetic disorders. In this study, we show that the expression and subcellular localization of Shh effectors and ciliary proteins are severely disturbed in Npc1-deficient mice. The dysregulation of Shh signaling is associated with a shortening of primary cilium length and with a reduction of the fraction of ciliated cells in Npc1-deficient mouse brains and in human fibroblasts from NPC1 patients. These defects are prevented by treatment with 2-hydroxypropyl-beta-cyclodextrin, a promising therapy currently under clinical investigation. Our findings indicate that defective Shh signaling is responsible for the abnormal morphogenesis of the cerebellum of Npc1-deficient mice and show, for the first time, that the formation of the primary cilium is altered in NPC1 disease. 
CDKL5 disorder is a neurodevelopmental disorder that is still without a cure. Murine models of CDKL5 disorder have recently been generated, raising the possibility of preclinical testing of treatments. However, unbiased, quantitative biomarkers of high translational value to monitor brain function are still missing. Moreover, the analysis of treatment is hindered by the challenge of repeatedly and non-invasively testing neuronal function. We analyzed the development of visual responses in a mouse model of CDKL5 disorder to introduce visually evoked responses as a quantitative method to assess cortical circuit function. Cortical visual responses were assessed in CDKL5 null male mice, heterozygous females, and their respective control wild-type littermates by repeated transcranial optical imaging from P27 until P32. No difference between wild-type and mutant mice was present at P25-P26, whereas defective responses appeared from P27-P28 in both heterozygous and homozygous CDKL5 mutant mice. These results were confirmed by visually evoked potentials (VEPs) recorded from the visual cortex of a different cohort. The previously imaged mice were also analyzed at P60-80 using VEPs, revealing a persistent reduction of response amplitude, reduced visual acuity and a defective contrast function. The level of adult impairment was significantly correlated with the reduction in visual responses observed during development. A support vector machine showed that multidimensional visual assessment can be used to automatically classify mutant and wild-type mice with high reliability. Thus, monitoring visual responses represents a promising biomarker for preclinical and clinical studies of CDKL5 disorder. Cyclic GMP is a second messenger in phototransduction, a G-protein signaling cascade that conveys photon absorption by rhodopsin to a change in current at the rod photoreceptor outer segment plasma membrane. 
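The kind of support vector machine classification used above to separate mutant from wild-type mice can be sketched with a minimal linear SVM trained by stochastic subgradient descent (a generic Pegasos-style illustration on synthetic feature vectors, not the study's actual analysis or features):

```python
import random

def train_linear_svm(xs, ys, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM by stochastic subgradient descent (Pegasos-style).
    xs: list of feature vectors; ys: labels in {-1, +1}."""
    rng = random.Random(seed)
    dim = len(xs[0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(xs)), len(xs)):   # shuffled pass over data
            t += 1
            eta = 1.0 / (lam * t)                       # decaying step size
            margin = ys[i] * (sum(wj * xj for wj, xj in zip(w, xs[i])) + b)
            w = [wj * (1 - eta * lam) for wj in w]      # regularization shrink
            if margin < 1:                              # hinge-loss subgradient step
                w = [wj + eta * ys[i] * xj for wj, xj in zip(w, xs[i])]
                b += eta * ys[i]
    return w, b

def predict(w, b, x):
    """Classify a feature vector by the sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

In a setting like the one described, each mouse would contribute one feature vector (e.g., response amplitude, acuity, contrast measures) and one genotype label; held-out accuracy would then estimate classification reliability.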
The basal cGMP level is strictly controlled by the opposing actions of phosphodiesterase (PDE6) and retinal guanylyl cyclases (GCs), and mutations in genes that disrupt cGMP homeostasis lead to retinal degeneration in humans through mechanisms that are incompletely understood. The purpose of this study is to examine two distinct cellular targets of cGMP, the cGMP-gated (CNG) channels and protein kinase G (PRKG), and how each may contribute to rod cell death. Using a mouse genetic approach, we found that abolishing expression of the CNG channels prolongs the survival of rods with elevated cGMP in a PDE6 mutant mouse model. This observation supports the use of channel blockers to delay rod death, which is expected to prolong useful vision through enhanced cone survival. However, the absence of CNG channels alone also caused abnormal cGMP accumulation. In a mouse model of CNG channel loss-of-function, abolishing PRKG1 expression had a long-lasting effect in promoting rod cell survival. Our data strongly implicate two distinct cGMP-mediated cell death pathways, and suggest that therapeutic designs targeting both pathways will be more effective at slowing photoreceptor cell death caused by elevated cGMP. Scribble1 (Scrib1) is a tumor suppressor gene that has long been established as an essential component of apicobasal polarity (ABP). In mouse models, mutations in Scrib1 cause a severe form of neural tube defects (NTDs) as a result of defective planar cell polarity (PCP) signaling. In this study, we dissected the role of Scrib1 in the pathogenesis of NTDs in its mouse mutant Circletail (Crc), in cell lines and in a human NTD cohort. While there were no obvious defects in ABP in Scrib1(Crc/Crc) neuroepithelial cells, we identified an abnormal localization of the apical protein Par-3 and of the PCP protein Vangl2. These results were concordant with those obtained following a partial knockdown of Scrib1 in MDCK II cells. 
Par-3 was able to rescue the localization defect of Vangl1 (a paralog of Vangl2) caused by partial knockdown of Scrib1, suggesting that Scrib1 exerts its effect on Vangl1 localization indirectly, through Par-3. This conclusion is supported by our finding of an apical enrichment of Vangl1 following a partial knockdown of Par-3. Re-sequencing analysis of SCRIB1 in 473 NTD patients led to the identification of 5 rare heterozygous missense mutations that were predicted to be pathogenic. Two of these mutations, p.Gly263Ser and p.Gln808His, and 2 mouse NTD mutations, p.Ile285Lys and p.Glu814Gly, affected Scrib1 membrane localization and its modulation of Par-3 and Vangl1 localization. Our study demonstrates an important role of Scrib1 in the pathogenesis of NTDs through its mediation of Par-3 and Vangl1/2 localization, most likely independently of ABP. Mutations in various genes cause hereditary spastic paraplegia (HSP), a neurological disease involving dying-back degeneration of upper motor neurons. Of these, mutations in the SPAST gene encoding the microtubule-severing protein spastin account for most HSP cases. Cumulative genetic and experimental evidence suggests that alterations in various intracellular trafficking events, including fast axonal transport (FAT), may contribute to HSP pathogenesis. However, the mechanisms linking SPAST mutations to such deficits remain largely unknown. Experiments presented here using isolated squid axoplasm reveal inhibition of FAT as a common toxic effect elicited by spastin proteins with different HSP mutations, independent of microtubule-binding or severing activity. Mutant spastin proteins produce this toxic effect only when presented as the tissue-specific M1 isoform, not when presented as the ubiquitously expressed shorter M87 isoform. Biochemical and pharmacological experiments further indicate that the toxic effects of mutant M1 spastins on FAT involve casein kinase 2 (CK2) activation. 
In mammalian cells, expression of mutant M1 spastins, but not their mutant M87 counterparts, promotes abnormalities in the distribution of intracellular organelles that are correctable by pharmacological CK2 inhibition. Collectively, these results demonstrate isoform-specific toxic effects of mutant M1 spastin on FAT, and identify CK2 as a critical mediator of these effects. In humans, CERKL mutations cause widespread retinal degeneration: early dysfunction and loss of rod and cone photoreceptors in the outer retina and, progressively, death of cells in the inner retina. Despite intensive efforts, the function of CERKL remains obscure, and studies in animal models have failed to clarify the disease mechanism of CERKL mutations. To address this gap in knowledge, we generated a stable CERKL knockout zebrafish model by TALEN technology, carrying a 7 bp deletion in CERKL cDNA that causes the premature termination of CERKL. These CERKL-/- animals showed progressive degeneration of photoreceptor outer segments (OSs) and increased apoptosis of retinal cells, including those in the outer and inner retinal layers. Additionally, we confirmed by immunofluorescence and western blot that rod degeneration in CERKL-/- zebrafish occurred earlier and was more pronounced than that in cone cells. Accumulation of shed OSs in the interphotoreceptor matrix was observed by transmission electron microscopy (TEM). This suggested that CERKL may regulate the phagocytosis of OSs by the retinal pigment epithelium (RPE). We further found that the phagocytosis-associated protein MERTK was significantly reduced in CERKL-/- zebrafish. Additionally, in ARPE-19 cell lines, knockdown of CERKL also decreased the mRNA and protein levels of MERTK, as well as ox-POS phagocytosis. We conclude that CERKL deficiency in zebrafish may cause rod-cone dystrophy, but not cone-rod dystrophy, while interfering with the phagocytosis function of the RPE associated with down-regulation of the expression of MERTK. 
Resting heart rate is a heritable trait, and an increase in heart rate is associated with increased mortality risk. Genome-wide association study analyses have found loci associated with resting heart rate; at the time of our study, these loci explained 0.9% of the variation. This study aims to discover new genetic loci associated with heart rate from Exome Chip meta-analyses. Heart rate was measured from either electrocardiograms or pulse recordings. We meta-analysed heart rate association results from 104 452 European-ancestry individuals from 30 cohorts, genotyped using the Exome Chip. Twenty-four variants were selected for follow-up in an independent dataset (UK Biobank, N = 134 251). Conditional and gene-based testing was undertaken, and variants were investigated with bioinformatics methods. We discovered five novel heart rate loci, and one new independent low-frequency non-synonymous variant in an established heart rate locus (KIAA1755). Lead variants in four of the novel loci are non-synonymous variants in the genes C10orf71, DALRD3, TESK2 and SEC31B. The variant at SEC31B is significantly associated with SEC31B expression in heart and tibial nerve tissue. Further candidate genes were detected from long-range regulatory chromatin interactions in heart tissue (SCD, SLF2 and MAPK8). We observed significant enrichment in DNase I hypersensitive sites in fetal heart and lung. Moreover, by including our novel variants, enrichment was seen for the first time in human neuronal progenitor cells (derived from embryonic stem cells) and fetal muscle samples. Our findings advance knowledge of the genetic architecture of heart rate, and indicate new candidate genes for follow-up functional studies. Common sequence variants at the haptoglobin gene (HP) have been associated with blood lipid levels. Through whole-genome sequencing of 8,453 Icelanders, we discovered a splice donor founder mutation in HP (NM_001126102.1:c.190 + 1G > C, minor allele frequency = 0.56%). 
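Cohort-level association results, such as those combined across 30 cohorts in the Exome Chip analysis above, are commonly pooled by fixed-effect inverse-variance meta-analysis. A generic sketch of that calculation (an illustration of the standard method, not the study's actual pipeline):

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effect inverse-variance meta-analysis of per-cohort effect sizes.
    betas: per-cohort effect estimates; ses: their standard errors."""
    weights = [1.0 / se ** 2 for se in ses]               # precision weights
    total = sum(weights)
    beta = sum(w * b for w, b in zip(weights, betas)) / total   # pooled effect
    se = math.sqrt(1.0 / total)                            # pooled standard error
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2.0))                 # two-sided normal p-value
    return beta, se, z, p
```

Cohorts with smaller standard errors (typically larger samples) receive proportionally more weight, and the pooled standard error shrinks as cohorts are added.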
This mutation occurs on the HP1 allele of the common copy number variant in HP and leads to a loss of function of HP1. It associates with lower levels of haptoglobin (P = 2.1 x 10(-54)), higher levels of non-high density lipoprotein cholesterol (beta = 0.26 mmol/l, P = 2.6 x 10(-9)) and greater risk of coronary artery disease (odds ratio = 1.30, 95% confidence interval: 1.10-1.54, P = 0.0024). Through haplotype analysis and with RNA sequencing, we provide evidence of a causal relationship between one of the two haptoglobin isoforms, namely Hp1, and lower levels of non-HDL cholesterol. Furthermore, we show that the HP1 allele associates with various other quantitative biological traits. Organizational research has long been interested in crises and crisis management. Whether focused on crisis antecedents, outcomes, or managing a crisis, research has revealed a number of important findings. However, research in this space remains fragmented, making it difficult for scholars to understand the literature's core conclusions, recognize unsolved problems, and navigate paths forward. To address these issues, we propose an integrative framework of crises and crisis management that draws from research in strategy, organizational theory, and organizational behavior as well as from research in public relations and corporate communication. We identify two primary perspectives in the literature, one focused on the internal dynamics of a crisis and one focused on managing external stakeholders. We review core concepts from each perspective and highlight the commonalities that exist between them. Finally, we use our integrative framework to propose future research directions for scholars interested in crises and crisis management. 
This review incorporates strategic planning research conducted over more than 30 years and ranges from the classical model of strategic planning to recent empirical work on intermediate outcomes, such as the reduction of managers' position bias and the coordination of subunit activity. Prior reviews have not had the benefit of more socialized perspectives that developed in response to Mintzberg's critique of planning, including research on planned emergence and strategy-as-practice approaches. To stimulate a resurgence of research interest on strategic planning, this review therefore draws on a diverse body of theory beyond the rational design and contingency approaches that characterized research in this domain until the mid-1990s. We develop a broad conceptualization of strategic planning and identify future research opportunities for improving our understanding of how strategic planning influences organizational outcomes. Our framework incorporates the role of strategic planning practitioners; the underlying routines, norms, and procedures of strategic planning (practices); and the concrete activities of planners (praxis). Workplace relationships are a cornerstone of management research. At the same time, there remain pressing calls for work relationships to be front and center in management literature, demanding an organizationally specific "relationship science." This article addresses these calls by unifying multiple scholarly fields of interest to develop a comprehensive understanding of interpersonal workplace relationships. Specifically, in this review, we move beyond the tendency to pit positive and negative relationships against each other and, instead, spotlight theory and research associated with ambivalent and indifferent relationships, which are prevalent and impactful yet persistently understudied. We organize our review into four streams: sources, outcomes, dynamics, and measurement. 
We then advance existing workplace relationships literature by integrating the social functions of emotions perspective. In doing so, we move beyond the positive-negative dichotomy by implicating discrete emotions and their interpersonal functions for workplace relationships. We conclude by offering an agenda for future scholarship. A widely accepted orthodoxy is that it is impossible to do replication studies within qualitative research paradigms. Ontologically and epistemologically speaking, such a view is largely correct. However, in this paper, I propose that what I call comparative re-production research (that is, the empirical study of qualitative phenomena that occur in one context, which are then shown also to obtain in another) is a well-attested practice in ethnomethodological conversation analysis (CA). By extension, I further argue that researchers who do research on second and foreign language (L2) classrooms inspired by the conversation analysis-for-second-language acquisition movement should engage in comparative re-production research in order to make broad statements about the generality or prototypicality of the qualitative organization of particular practices across languages, cultures and institutional contexts. The field of language teaching and learning is in dire need of replications of vocabulary and comprehension research with diverse populations of learners. We propose for replication one large-scale vocabulary intervention carried out successfully in a middle school with monolingual and multilingual students. This study was carried out several years ago, was published in the Reading Research Quarterly, and has been generously cited since then. The findings and the instruments from this study have been leveraged in subsequent extension studies by the same group of researchers, but have not been replicated in different contexts. 
We offer multiple reasons and methods of replicating this study in high school and adult contexts in which there is a serious need for learners to comprehend technical or academic materials using deep, nuanced foundations of vocabulary. Learning a new sound system poses challenges of a social, psychological, and cognitive nature, but the learner's decisions are key to ultimate attainment. This presentation focuses on two essential concepts: choice, or how one wants to sound in the target language; and limits, or various challenges to one's goals vis-a-vis accent. Qualitative and quantitative data underscore the relevance of autonomy as a guiding principle from which to explore related constructs such as self-determination, motivation, decision-making and self-concept. I also review several prominent limits on phonological attainment to counterbalance and contextualize the aspect of choice. Suggestions are given for both teaching and research that prioritize autonomy with reference to a complexity perspective. This study examined the extent to which the association between increased student absence and lower achievement outcomes varied by student and school-level socioeconomic characteristics. Analyses were based on the enrolment, absence and achievement records of 89,365 Year 5, 7 and 9 students attending government schools in Western Australia between 2008 and 2012. Multivariate multi-level modelling methods were used to estimate numeracy, writing and reading outcomes based on school absence, and interactions between levels of absence and school socioeconomic index (SEI), prior achievement, gender, ethnicity, language background, parent education and occupation status. While the effects of absence on achievement were greater for previously high-achieving students, there were few significant interactions between absence and any of the socioeconomic measures on achievement outcomes. 
The results of first-difference regression models indicated that the negative effect of an increase in absence was marginally larger for students attending more advantaged schools, though most effects were very small. While students from disadvantaged schools have, on average, more absences than their advantaged peers, there is very little evidence to suggest that the effects of absence are greater for those attending lower-SEI schools. School attendance should therefore be a priority for all schools, and not just those with high rates of absence or low average achievement. Meritocratic ideals, which emphasise individual responsibility and self-motivation, have featured prominently in discourses about Australia's international competitiveness in academic achievement. Young people are often encouraged to attribute academic success and failure to individual factors such as hard work and talent, and to downplay extrinsic factors such as luck, task difficulty, or broader structural advantages and disadvantages. Using longitudinal data on a large, single-age cohort (n=2,145) of young Australians participating in the Social Futures and Life Pathways ('Our Lives') project, we analyse young people's attributions for academic success across their final years of secondary schooling and how these related to educational attainment at the end of school. The belief that hard work would lead to academic success was widespread within the sample and positively associated with subsequent educational performance. Most students also emphasised the importance of having a supportive family, despite this being negatively associated with performance. Consistent with claims about a 'social inequality of motivation', the findings suggest that emphasising meritocracy may compound the disadvantages of young people from less educated or vocational backgrounds, and those living in rural and regional Australia, whilst impacting unevenly on males' and females' academic performance. 
Children with special educational needs (SEN) are known to experience lower average educational attainment than other children during their school years. But we have less insight into how far their poorer educational outcomes stem from their original starting points or from failure to progress during school. The extent to which early identification with SEN delivers support that enables children who are struggling academically to make appropriate progress is subject to debate. This is complicated by the fact that children with SEN are more likely to be growing up in disadvantaged families and face greater levels of behavioural and peer problems, factors which themselves impact attainment and progress through school. In this paper, we evaluate the academic progress of children with SEN in England, drawing on a large-scale nationally representative longitudinal UK study, the Millennium Cohort Study, linked to administrative records of pupil attainment. Controlling for key child, family and environmental factors, and using the SEN categories employed at the time of data collection, we first establish that children identified with SEN in 2008, when they were age 7, had been assessed with lower academic competence when they started school. We evaluate their progress between ages 5-7 and 7-11. We found that children identified with SEN at age 7 tended to be those who had made less progress between ages 5 and 7 than their comparable peers. However, children with SEN continued to make less progress than their similarly able peers between ages 7 and 11. Implications are discussed. Recent research has acknowledged the importance of the relationships of school principals with beginning teachers. However, little is known about how emotions inform these relationships from the beginning teacher's side. 
Applying the concept of emotional geographies, this paper explores the kinds of storied emotional distances that appear in the relationships between beginning teachers and their principals. Based on interviews with beginning Japanese teachers, the results indicate that such relationships may be: (1) very direct and personal; (2) acted out indirectly by the principal as personal facilitator 'behind the scenes' or as public gatekeeper; or (3) mediated by the teacher community. The analysis reveals beginning teachers' personal experiences of these relationships, as well as how such relationships are influenced by organisational and cultural context. Although principals are described as distant figures within the school organisation, they are seen to play an important role in facilitating beginning teachers' work by connecting with them at a personal level and providing good working conditions by influencing the emotional atmosphere of the teacher community or by sheltering them from parental pressure. The statutory 'phonics screening check' was introduced in 2012 and reflects the current emphasis in England on teaching early reading through systematic synthetic phonics. The check is intended to assess children's phonic abilities and their knowledge of 85 grapheme-phoneme correspondences (GPCs) through decoding 20 real words and 20 pseudo words. Since the national rollout, little attention has been devoted to the content of the checks. The current paper, therefore, reviews the first three years of the check between 2012 and 2014 to examine how the 85 specified GPCs have been assessed and whether children are only using decoding skills to read the words. The analysis found that out of the 85 GPCs considered testable by the check, just 15 GPCs accounted for 67% of all GPC occurrences, with 27 of the 85 specified GPCs (31.8%) not appearing at all. 
Where a grapheme represented more than one phoneme, the most frequently occurring pronunciation was assessed in 72.2% of cases, with vocabulary knowledge being required to determine the correct pronunciation within real words where multiple pronunciations were possible. The GPCs assessed, therefore, do not reflect the full range of GPCs that it is expected will be taught within a systematic synthetic phonics approach. Furthermore, children's ability to decode real words is dependent on their vocabulary knowledge, not just their phonic skills. These results question the purpose and validity of the phonics screening check and the role of synthetic phonics for teaching early reading. The relative lack of students studying post-compulsory STEM (Science, Technology, Engineering and Mathematics) subjects is a key policy concern. A particular issue is the disparities in uptake by students' family background, gender and ethnicity. It remains unclear whether the relationship between student characteristics and choice can be explained by academic disparities, and whether students' background, gender and ethnicity interact in determining university subject choices, rather than simply having additive effects. I use data from more than 4000 students in England from 'Next Steps' (previously the LSYPE) and logistic regression methods to explore the interacting relationships between student characteristics and subject choice. There are four main findings of this study. Firstly, disparities by students' ethnicity are shown to increase when controlling for prior attainment. Secondly, family background indicators are differentially related to uptake for male and female students, with parents' social class and education being larger predictors of choice than financial resources. Thirdly, gender, ethnicity and family background interact in determining choices. Particularly, as socio-economic position increases, young women are more likely to choose STEM over other high-return subjects. 
Finally, associations between student characteristics and subject choices, including interactions, largely persisted when accounting for A-level choices. Implications for policy and future research are discussed. Low schooling, high non-attendance and school dropout rates are critical phenomena within disadvantaged groups, especially among the Gypsy community. For example, in the UK, 10%-25% of Gypsy children do not attend school regularly and have significantly higher levels of overall absence from school (percentage of half-day sessions missed) than pupils from other ethnic groups. In Portugal, available data on Gypsy children is sparse, yet data from one geographic region of the country reports high school failure (45%) and dropout rates (15%) among this population. The present study assessed the efficacy of a four-year intervention to promote Gypsy children's behavioural engagement and school success. Gypsy communities were contacted and 30 children participating in the four waves were randomly distributed into control and experimental groups. Every school day throughout four years, 16 children in the experimental group were called at home and invited to go to school. The effectiveness of the intervention was evaluated in four waves (at the end of each of the four school years), assessing behavioural engagement (i.e. school non-attendance, classroom behaviour) and school achievement (i.e. mathematics achievement, student progression). Findings show the efficacy of the intervention in promoting behavioural engagement and academic success without devaluing Gypsy people's culture. Gaps in education attainment between high and low achieving children in the primary school years are frequently evidenced in educational reports. Linked to social disadvantage, these gaps have detrimental long-term effects on learning. There is a need to close the gap in attainment by addressing barriers to learning and offering alternative contexts for education. 
There is increasing evidence for beneficial impacts of education delivered outdoors, yet most programmes are unstructured, and evidence is anecdotal and lacks experimental rigour. In addition, there is a wealth of social-emotional outcomes reported yet little in the way of educational attainment outcomes. The current study explores the educational impact of a structured curriculum-based outdoor learning programme for primary school children: 'Wilderness Schooling'. A matched-groups design (Wilderness Schooling, n=223; conventional schooling, n=217) was used to compare attainment data in English reading, English writing and maths, collected at three time-points: pre-intervention (T1), post-intervention (T2) and at a 6-week follow-up (T3). Data show that children in the Wilderness Schooling group significantly improved their attainment in all three subjects compared to controls. Trajectories of impact indicated attainment continued to increase from baseline in the following weeks after the intervention concluded. These results allow the case to be made for the core curriculum to be conducted outdoors to improve children's learning. However, it is important to consider that there are likely to be various components of the intervention that could form a theory of change essential to reported outcomes. Despite a recent world-wide upsurge of academic interest in moral and character education, little is known about pupils' character development in schools, especially in the UK context. The authors used a version of the Intermediate Concept Measure for Adolescents, involving dilemmas, to assess an important component of character, moral judgement, among 4053 pupils aged 14-15. Data were generated in 33 UK schools of varying types between February 2013 and June 2014. Results showed that compared with US samples, the pupils' scores were, on average, low, suggestive of tendencies towards 'self-interest', 'not getting involved' and 'conformity/loyalty to friends'. 
Judgements varied by subscales assessing 'action' and 'justification' choices; pupils more successfully identified good actions than good justifications, but generally struggled more to identify poor actions and poor justifications. Highest scores were for a dilemma emphasising 'self-discipline' and lowest for 'honesty', with 'courage' in between. Overall average results were significantly and positively associated with being female, having (and practising) a religion and doing specific extra-curricular activities. Differences in schools were also noted, although the kinds of school (e.g. public/private, religious/secular) were unrelated to student scores. In recent decades, the belief has emerged that data use contributes to more thought-out decisions in schools. The literature has suggested that fruitful data use is often the result of interactions among team members. However, up until now, most of the available research on data use has used 'collaboration' as an umbrella concept to describe very different types of interaction, without specifying the nature of collaboration or the degree of interdependency that takes place in interactions. Therefore, the current study investigates and describes Flemish teachers' individual, co-operative and collaborative data use. In doing so, the level of interdependency of teachers' interactive activities (storytelling, helping, sharing, joint work) is taken into account. The results of a qualitative study with semi-structured interviews show that teachers' data use is predominantly of an individual nature and that felt interdependencies among teachers are few. The study enhances knowledge and opens the conceptual debate about teachers' interactions in the context of data use. Allowing correlation to be local, ie, state-dependent, in multi-asset models allows better hedging by incorporating correlation moves in the Delta. 
When options on a basket - be it a stock index, a cross-foreign exchange rate or an interest rate spread - are liquidly traded, one may want to calibrate a local correlation to these option prices. Only two particular solutions have been suggested so far in the literature. Both impose a particular dependency of the correlation matrix on the asset values that one has no reason to assume. They may also fail to be admissible, ie, positive semi-definite. We explain how, by combining the particle method presented in "The smile calibration problem solved" by Guyon and Henry-Labordère (2011) with a simple affine transform, we can build all the calibrated local correlation models. The two existing models appear as special cases (if admissible). For the first time, one can now choose a calibrated local correlation in order to fit a view on the correlation skew, or reproduce historical correlation, or match some exotic option prices, thus improving the pricing, hedging and risk-management of multi-asset derivatives. This technique is generalized to calibrate models that combine stochastic interest rates, stochastic dividend yield, local stochastic volatility and local correlation. Numerical results show the wide variety of calibrated local correlations and give insight into a difficult (still unsolved) problem: finding lower bounds/upper bounds on general multi-asset option prices given the whole surfaces of implied volatilities of a basket and its constituents. We provide a bound for the error committed when using a Fourier method to price European options, when the underlying follows an exponential Lévy dynamic. The price of the option is described by a partial integro-differential equation (PIDE). Applying a Fourier transformation to the PIDE yields an ordinary differential equation (ODE) that can be solved analytically in terms of the characteristic exponent of the Lévy process. Then, a numerical inverse Fourier transform allows us to obtain the option price. 
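The inversion step just described can be illustrated with a generic characteristic-function pricer. The sketch below uses the Carr-Madan damped-call representation, with the Black-Scholes characteristic function standing in for a general Lévy exponent; the function names, the damping parameter alpha and the quadrature settings are illustrative choices, not taken from the paper.

```python
import math
import cmath

def bs_char_fn(u, s0, r, sigma, t):
    """Characteristic function of log S_T under Black-Scholes,
    used here as a simple stand-in for a general Levy exponent."""
    mu = math.log(s0) + (r - 0.5 * sigma ** 2) * t
    return cmath.exp(1j * u * mu - 0.5 * sigma ** 2 * u ** 2 * t)

def call_price_fourier(s0, k, r, sigma, t, alpha=1.5, v_max=200.0, n=4000):
    """European call via the Carr-Madan damped-call representation,
    integrated with the trapezoidal rule instead of an FFT for clarity."""
    logk = math.log(k)
    dv = v_max / n
    total = 0.0
    for i in range(n + 1):
        v = i * dv
        # characteristic function evaluated at the shifted argument
        phi = bs_char_fn(v - (alpha + 1) * 1j, s0, r, sigma, t)
        psi = cmath.exp(-r * t) * phi / (
            alpha ** 2 + alpha - v ** 2 + 1j * (2 * alpha + 1) * v
        )
        term = (cmath.exp(-1j * v * logk) * psi).real
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end-point weights
        total += w * term * dv
    return math.exp(-alpha * logk) * total / math.pi
```

In practice the integral is usually evaluated with an FFT across many strikes at once; plain quadrature is used here only to keep the sketch short.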
We present a bound for the error and use this bound to set the parameters for the numerical method. We analyze the properties of the bound and demonstrate the minimization of the bound to select parameters for a numerical Fourier transformation method in order to solve the option price efficiently. According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method for the estimation of CVA and its sensitivities for a portfolio of financial derivatives. We use the finite difference Monte Carlo (FDMC) method to measure exposure profiles and consider the computationally challenging case of foreign exchange barrier options in the context of the Black-Scholes as well as the Heston stochastic volatility model, with and without stochastic domestic interest rate, for a wide range of parameters. In the case of a fixed domestic interest rate, our results show that FDMC is an accurate method compared with the semi-analytic COS method and, advantageously, can compute multiple options on one grid. In the more general case of a stochastic domestic interest rate, we show that we can accurately compute exposures of discontinuous one-touch options by using a linear interpolation technique as well as sensitivities with respect to initial interest rate and variance. This paves the way for real portfolio level risk analysis. We propose an affine extension of the linear Gaussian term structure model (LGM) such that the instantaneous covariation of the factors is given by an affine process on semidefinite positive matrixes. We begin by setting up the model and presenting some important properties concerning the Laplace transform of the factors and the ergodicity of the model. Then, we present two main numerical tools for implementing the model in practice. 
First, we obtain an expansion of caplet and swaption prices around the LGM. Such a fast and accurate approximation is useful in assessing model behavior on the implied volatility smile. Second, we provide a second-order scheme for the weak error, which enables us to calculate exotic options by a Monte Carlo algorithm. These two pricing methods are compared with the standard method based on Fourier inversion. This paper proposes a model for forecasting scenarios from the perspective of a reverse stress test using interest rate (JGB10Y yield), equity (Nikkei 225) and foreign exchange (US$/¥) data. The model consists of (i) a constraint with error terms of dynamic conditional correlation-generalized autoregressive conditional heteroscedasticity (DCC-GARCH) for expressing risk factors (RFs) located in an acceptable range, where the acceptable range is determined by the Mahalanobis distance, which consists of error terms of DCC-GARCH and the correlation between one RF and another RF; and (ii) maximization of the objective function, which is the loss of the market portfolio (ie, minimization of the change in value of the market portfolio). I also show the following. (i) The forecasting scenarios identified by this model are valid in terms of expressing very stressful data, which suggests that some financial institutions may be in default, while the scenarios remain approximately multivariate normally distributed. (ii) This model can be solved by formulating second-order cone programming, which is standard in the field of mathematical optimization programming. I expect this paper will be of interest to researchers and practitioners in the fields of stress testing and risk management. Discrete choice models of the probability of default (PD) have several applications in finance. In some applications, such as credit scoring, their value is in ranking applicants or customers by PD. 
Other applications, such as estimating losses as part of determining capital requirements under Basel, require accuracy for the estimated PD. There is a well-developed set of tests to assess models' ability to rank-order, and these are sometimes relied upon to assess the accuracy of models' probability estimates. This paper demonstrates that the rank-order tests are unreliable for assessing models to be used to predict probabilities. This is true even when estimated probabilities will only be used to assign observations to segments. There are other tests, such as the Hosmer-Lemeshow test, which assess magnitude as well as order. While there are some practical difficulties in applying these alternative tests to the data sets typically used for default model estimation, they can provide a better assessment of how well a model fits the data. Firm failure prediction is playing an increasingly important role in financial decision making. Ensemble methods have recently shown better classification performance than a single classifier, but the tree-based ensemble method for firm failure prediction has not been fully studied and remains to be further validated. Compared with other machine learning methods, it is more easily interpreted and requires little data preprocessing. In this paper, we employ a gradient-boosting decision-tree (GBDT) method to improve firm failure prediction and explain how to better analyze the relative importance of each financial variable. Because the GBDT deliberately adds new trees in order to correct errors made in previous steps, it has the potential to improve firm failure predictive performance. The influences of different parameters on model performance are analyzed in detail. Moreover, our proposed model is compared with four other popular ensemble methods. Our experimental results show that the GBDT outperforms these other methods in accuracy, precision, F-score and area under the curve. 
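The four evaluation measures named in the GBDT comparison (accuracy, precision, F-score and area under the curve) can be computed directly from labels and scores. The following is a minimal pure-Python sketch with hypothetical inputs, not the authors' experimental pipeline:

```python
def binary_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, precision, F1 and ROC AUC for binary labels and scores."""
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    # AUC via the Mann-Whitney U statistic: the probability that a random
    # positive is scored above a random negative (ties count one half).
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg)) if pos and neg else float("nan")
    return accuracy, precision, f1, auc
```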
We therefore provide a full validation of GBDT, and believe that it is useful in controlling risk in financial risk management. In contrast to most of the existing literature that has concentrated empirically on the relationship between real estate investment trust (REIT) prices and the stock market, this paper directly models the effects of stock jumps on REIT returns associated with an alternative dynamic process. Three key features of the model are that it decomposes the impacts of stock jumps into two parts (the effect of jumps in the expected returns of REITs, and the influence of jumps in REIT volatility); it can efficiently describe the effect of stock returns on REITs according to Christoffersen's independence test during pre- and post-crisis periods; and the empirical results show that the magnitudes of jumps in expected returns and volatility are sensitive to the value-at-risk of REITs. Therefore, the effects of stock returns on expected returns and volatility of REITs cannot be neglected. It is well known that laypersons and practitioners often resist using complex mathematical models such as those proposed by economics or finance, and instead use fast and frugal strategies to make decisions. We study one such strategy: the recognition heuristic. This states that people infer that an object they recognize has a higher value of a criterion of interest than an object they do not recognize. We extend previous studies by including a general model of the recognition heuristic that considers probabilistic recognition, and carry out a mathematical analysis. We derive general closed-form expressions for all the parameters of this general model and show the similarities and differences between our proposal and the original deterministic model. We provide a formula for the expected accuracy rate by making decisions according to this heuristic and analyze whether or not its prediction exceeds the expected accuracy rate of random inference. 
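For the original deterministic model that this paper generalizes, the expected accuracy rate over all paired comparisons has a standard closed form (Goldstein and Gigerenzer's formula). A sketch, with parameter names as commonly used in that literature rather than taken from this paper:

```python
def rh_expected_accuracy(n, N, alpha, beta):
    """Expected proportion of correct paired comparisons under the
    deterministic recognition heuristic (Goldstein & Gigerenzer, 2002).

    n     -- number of recognized objects out of N
    alpha -- recognition validity: P(correct | exactly one object recognized)
    beta  -- knowledge validity:   P(correct | both objects recognized)
    Pairs in which neither object is recognized are decided by guessing.
    """
    pairs = N * (N - 1)                      # normalizing pair count
    one = 2 * n * (N - n) / pairs            # exactly one object recognized
    both = n * (n - 1) / pairs               # both objects recognized
    neither = (N - n) * (N - n - 1) / pairs  # neither object recognized
    return one * alpha + both * beta + neither * 0.5
```

Note the three mutually exclusive pair types sum to one, so the formula is a weighted average of the three conditional accuracies.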
Finally, we discuss whether having less information could be convenient for making more accurate decisions. This study shows that time-varying coefficients in the term structure of interest rates equation are correlated with the time-varying term premiums (TVTP) and expectation error (EE). Consistent with Froot (J Finance 44:283-305, 1989), TVTP and EE are the main factors that cause variations in the expectations hypothesis. Once the TVTP and the EE are appropriately incorporated into the model, the GARCH-M evidence fades away. This study documents that investors' sentiment and macroeconomic surprises are the main driving forces behind the TVTP and EE. Evidence of significant sentiment and its interaction with macroeconomic surprises sheds some light on the bias due to behavioral variations. This study applies a new matching method to examine the old yet debatable idea that high corporate social responsibility (CSR) is associated with improved bank financial performance (FP). The conventional matching method focuses on one treatment effect. Thus, the old method is considered inappropriate when banks exhibit various degrees of CSR. To address this problem, we first apply the new multi-level matching method to multi-degree CSR and therefore contribute to studies that consider multi-degree CSR without adopting a multi-level matching method. We propose that "the more CSR, the better the FP," which implies that banks engaged in more CSR exhibit better FP. Results before and after the matching significantly differ. CSR insignificantly influences bank FP before matching, but CSR has a strongly positive influence on bank FP after matching. This effect on bank FP is further strengthened when banks increase their CSR activities, which supports our argument. 
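As a miniature illustration of the matching idea in the CSR study just summarized (one-to-one nearest-neighbour matching on a single covariate, not the multi-level matching estimator the study actually applies), one could write:

```python
def att_nearest_neighbour(treated, controls):
    """Average treatment effect on the treated (ATT) via one-nearest-
    neighbour matching on a single covariate, with replacement.

    treated, controls -- lists of (covariate, outcome) pairs
    """
    effects = []
    for x_t, y_t in treated:
        # match each treated unit to the closest control on the covariate
        x_c, y_c = min(controls, key=lambda c: abs(c[0] - x_t))
        effects.append(y_t - y_c)
    return sum(effects) / len(effects)
```

Multi-level (multi-degree treatment) matching generalizes this by comparing each treatment level against matched units from the adjacent levels rather than a single treated/control split.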
This paper documents that sellers who employ brokerage offices that list a large number of properties ("active brokerages") obtain higher selling prices, smaller negotiated discounts from the corresponding list prices, and shorter times on the market for their listed properties. Sellers who employ active brokerages list their properties at prices that are closer to our hedonic model's predicted prices. Interestingly, properties that are listed at discounts relative to their predicted prices are snapped up more quickly only if they are associated with brokerages that list a relatively small number of properties. In addition, properties listed by active brokerages are less likely to be listed "as is" and are more likely to have their defects repaired prior to being listed. Moreover, because the efficacy of brokerage services varies across brokerage offices, the results also suggest that the use of an indicator variable for the use of brokerage services is not sufficient to capture the complete impact of the use of a real estate broker on transaction outcomes. In addition, the Appendix discusses the concern for potential endogeneities between the number of brokerage listings and transaction outcomes. It documents that the Durbin-Wu-Hausman test indicates that exogeneity cannot be rejected. This paper examines the investment performance of US ethical equity mutual funds relative to the market and their traditional counterparts using a survivorship-bias-free database. We detect selectivity and market timing performance of fund managers using two models. First, we use Treynor and Mazuy's (Harv Bus Rev 44:131-136, 1966) model to determine these performances from a quadratic regression of fund returns on market returns. Second, we use a comprehensive and integrated model derived by Bhattacharya and Pfleiderer (A note on performance evaluation. 
Technical Report 714, Stanford, California, Stanford University, Graduate School of Business, 1983) and Lee and Rahman (J Bus 63:261-278, 1990) to simultaneously capture stock selection and market timing skill of fund managers. This model extracts timing skill from the relationship between managers' forecast and realized market return. In addition, the R-squared approach developed by Amihud and Goyenko (Rev Financ Stud 26:667-694, 2013) for evaluating selectivity is used in this paper. Our empirical results indicate that ethical funds perform no worse than their traditional counterparts, although ethical and traditional funds do not outperform the market. We find some evidence of superior security selection and/or market timing skill among a very small number of ethical and traditional funds. It appears that matching traditional funds have slightly more abnormal (superior as well as inferior) performance than ethical funds in our sample. Motivated by recent productivity-based theories of diversification, we argue that only conglomerates with an optimal degree of diversification can utilize their comparative advantages across various industries and achieve economies of scope by eliminating redundancies. Evidence from both corporate bond and equity markets suggests that optimally diversified conglomerates consist of either (1) approximately five equally weighted divisions, or (2) one large core business segment that roughly accounts for 75% of sales. Moreover, the relative size of divisions has a critical impact on how diversification affects credit spreads and excess values. Nonparity among divisions correlates with greater costs that increase with the number of divisions. We provide evidence consistent with the notion that prudent use of financial derivatives improves firms' information environment. 
We show that firms with sophisticated and comprehensive derivatives use policies display lower levels of uncertainty about future cash flows, volatility of future income and sales growth, and equity mispricing than those that do not use derivatives. However, we also show that policies that consist of large positions in a single type of derivative contract are not likely to produce similar benefits. These results remain intact even after accounting for the endogenous nature of derivatives use policy and information risk and mispricing. This study develops a methodology to incorporate industry and firm-specific factors into the Ohlson (Contemp Account Res 11:661-687, 1995) residual income valuation model (RIM) and applies a time-series approach instead of the cross-sectional regression models used by existing studies. This method provides neglected valuation information when analysts' earnings forecasts are used. The results suggest that it improves the accuracy of stock value forecasting. The inclusion of the two factors increases the forecasting ability of RIM. Furthermore, the relative importance of the two factors varies across industries. Firm-specific factors are relevant to the accuracy of stock value forecasting for three large industries (finance and insurance, electronics, building, construction and materials), whereas industry factors play a dominant role in determining the accuracy for small industries (automobile and paper). These results imply that either industry or firm-specific factors serve as crucial determinants in stock value forecasting for the five industries. In practice, investors can consider one or both of the two factors when implementing RIM to forecast stock value. We examine the relationship between institutional ownership stability and real earnings management. Our findings indicate that firms held by more stable institutional owners experience lower real activities manipulation by limiting overproduction. 
We further examine how the stability in the shareholdings of pressure-sensitive and insensitive institutional investors affects target firms' use of real earnings management, respectively. Unlike pressure-sensitive institutional investors, the stability in the share ownership of pressure-insensitive institutional investors (i.e., investment advisors, pension funds and endowments) mitigates target firms' use of real earnings management. Overall, our results are consistent with the view that institutional investors' presence acts as a monitor on target firms' use of real earnings manipulation activities. Health status is an important factor in household portfolio decision-making. We develop a theoretical framework to model how households make optimal asset allocation decisions in response to health risks. Our two- and three-asset models both suggest that the maximum utility is derived when households allocate a majority of their assets to human capital. When households experience acute illness shocks, their welfare and portfolio values reduce, and they need to increase their investment in human capital. When an expensive health catastrophe befalls member(s) of households, the optimal decision for asset-rich households is to undertake medical treatment, whereas for asset-poor households it is to forgo treatment. Asset-poor households in particular require public financial assistance to enable them to invest in human capital. I investigate the effect of requiring the audit engagement partner (EP) to sign the audit report on financial analysts' information environment in the United Kingdom (U.K.). To control for the effect of confounding concurrent events, I use a control sample of firms listed in France, Germany, and the Netherlands that had already implemented the EP signature requirement at least 2 years prior to the adoption date in the U.K. I find that, relative to this control sample, U.K. 
firms experienced a significant increase in analyst following and a significant decrease in analysts' absolute forecast errors and forecast dispersion from a 2-year pre- to 2-year post-signature period. The results are robust to a battery of sensitivity tests such as varying both pre- and post-signature windows and using different measurements of outcome variables. In general, my findings indicate that adopting the EP signature requirement leads to an improvement in analysts' information environment in the U.K., and this improvement is partially influenced by an improvement in audit quality. These results provide timely ex-ante empirical evidence for the ongoing debate over whether passing a similar requirement in the U.S., proposed by the Public Company Accounting Oversight Board, benefits investors and other financial statement users. Increased protein translation in cells and various factors in the tumor microenvironment can induce endoplasmic reticulum (ER) stress, which initiates the unfolded protein response (UPR). We have previously reported that factors released from cancer cells mounting a UPR induce a de novo UPR in bone marrow-derived myeloid cells, macrophages, and dendritic cells that facilitates protumorigenic characteristics in culture and tumor growth in vivo. We investigated whether this intercellular signaling, which we have termed transmissible ER stress (TERS), also operates between cancer cells and what its functional consequences are within the tumor. We found that TERS signaling induced a UPR in recipient human prostate cancer cells that included the cell surface expression of the chaperone GRP78. TERS also activated Wnt signaling in recipient cancer cells and enhanced resistance to nutrient starvation and common chemotherapies such as the proteasome inhibitor bortezomib and the microtubule inhibitor paclitaxel. TERS-induced activation of Wnt signaling required the UPR kinase and endonuclease IRE1. 
However, TERS-induced enhancement of cell survival was predominantly mediated by the UPR kinase PERK and a reduction in the abundance of the transcription factor ATF4, which prevented the activation of the transcription factor CHOP and, consequently, the induction of apoptosis. When implanted in mice, TERS-primed cancer cells gave rise to faster growing tumors than did vehicle-primed cancer cells. Collectively, our data demonstrate that TERS is a mechanism of intercellular communication through which tumor cells can adapt to stressful environments. The unfolded protein response (UPR) is an ancient cellular pathway that detects and alleviates protein-folding stresses. The UPR components X-box binding protein 1 (XBP1) and inositol-requiring enzyme 1 alpha (IRE1 alpha) promote type I interferon (IFN) responses. We found that Xbp1-deficient mouse embryonic fibroblasts and macrophages had impaired antiviral resistance. However, this was not because of a defect in type I IFN responses but rather an inability of Xbp1-deficient cells to undergo viral-induced apoptosis. The ability to undergo apoptosis limited infection in wild-type cells. Xbp1-deficient cells were generally resistant to the intrinsic pathway of apoptosis through an indirect mechanism involving activation of the nuclease IRE1 alpha. We observed an IRE1 alpha-dependent reduction in the abundance of the proapoptotic microRNA miR-125a and a corresponding increase in the amounts of the members of the antiapoptotic Bcl-2 family. The activation of IRE1 alpha by the hepatitis C virus (HCV) protein NS4B in XBP1-proficient cells also conferred apoptosis resistance and promoted viral replication. Furthermore, we found evidence of IRE1 alpha activation and decreased miR-125a abundance in liver biopsies from patients infected with HCV compared to those in the livers of healthy controls. 
Our results reveal a prosurvival role for IRE1 alpha in virally infected cells and suggest a possible target for IFN-independent antiviral therapy. Store-operated Ca2+ entry (SOCE) is critical for salivary gland fluid secretion. We report that radiation treatment caused persistent salivary gland dysfunction by activating a TRPM2-dependent mitochondrial pathway, leading to caspase-3-mediated cleavage of stromal interaction molecule 1 (STIM1) and loss of SOCE. After irradiation, acinar cells from the submandibular glands of TRPM2(+/+), but not those from TRPM2(-/-) mice, displayed an increase in the concentrations of mitochondrial Ca2+ and reactive oxygen species, a decrease in mitochondrial membrane potential, and activation of caspase-3, which was associated with a sustained decrease in STIM1 abundance and attenuation of SOCE. In a salivary gland cell line, silencing the mitochondrial Ca2+ uniporter or caspase-3 or treatment with inhibitors of TRPM2 or caspase-3 prevented irradiation-induced loss of STIM1 and SOCE. Expression of exogenous STIM1 in the salivary glands of irradiated mice increased SOCE and fluid secretion. We suggest that targeting the mechanisms underlying the loss of STIM1 would be a potentially useful approach for preserving salivary gland function after radiation therapy. Loss of nitric oxide (NO) bioavailability underlies the development of hypertensive heart disease. We investigated the effects of dietary nitrite on N-G-nitro-L-arginine methyl ester (L-NAME)-induced hypertension. Sprague-Dawley rats were divided into five groups: an untreated control group, an L-NAME-treated group, and three other L-NAME-treated groups supplemented with 10 mg/L or 100 mg/L of nitrite or 100 mg/L of captopril in drinking water. 
After the 8-week experimental period, mean arterial blood pressure was measured, followed by sampling of blood and heart tissue for assessment of nitrite/nitrate levels in the plasma and heart, the plasma level of angiotensin II (AT II), and the heart transcriptional levels of AT II type 1 receptor (AT(1)R), transforming growth factor-beta (TGF-beta 1), and connective tissue proteins such as type 1 collagen and fibronectin. Heart tissue was analyzed by histopathological morphometry, including assessments of ventricular and coronary vascular hypertrophy and fibrosis, as well as immunohistochemistry analyses of myocardial expression of AT(1)R. L-NAME treatment reduced the plasma nitrate level and led to the development of hypertension, with increased plasma levels of AT II and increased heart transcriptional levels of AT(1)R and TGF-beta 1-mediated connective tissue proteins, showing myocardial and coronary arteriolar hypertrophy and fibrosis. However, dietary nitrite supplementation inhibited TGF-beta 1-mediated cardiac remodeling by suppressing AT II and AT(1)R. These results suggest that dietary nitrite levels achievable via a daily high-vegetable diet could improve hypertensive heart disease by inhibiting AT II-AT(1)R-mediated cardiac remodeling. (C) 2017 Elsevier Inc. All rights reserved. Takotsubo cardiomyopathy (TCM) is characterized by transient left ventricular apical ballooning in the absence of coronary occlusion and is an acute cardiac syndrome with substantial morbidity and mortality. It has been reported that reduced endogenous hydrogen sulfide (H2S) levels may be related to various heart diseases. The present study investigated the mechanism by which H2S administration modulates and protects cardiac function in TCM rats. In order to establish a TCM model, Sprague Dawley (SD) rats were injected with a single dose of the beta-adrenergic agonist isoprenaline (ISO). 
We found that ISO induced cardiac dysfunction, which was characterized by a significant decrease in left ventricular systolic pressure (LVSP), maximum contraction velocity (+dp/dtmax) and maximum relaxation velocity (-dp/dtmax), and increased left ventricular end-diastolic pressure (LVEDP). Accordingly, we found that plasma and heart tissue H2S levels in TCM rats decreased significantly, and cardiac cystathionine gamma-lyase (CSE) and 3-mercaptopyruvate sulfurtransferase (3-MST) expression was lower. Moreover, cardiac dysfunction in TCM was associated with an oxidative stress response and reactive oxygen species (ROS) formation. NADPH oxidase 4 (NOX4) and p67 protein expression significantly increased in TCM cardiac tissues. In addition, sodium hydrosulfide (NaHS) ameliorated ISO-induced cardiac dysfunction and reversed ISO-induced oxidative stress. This study revealed that H2S exerted cardioprotective effects by reducing NADPH oxidase, which reduced ROS formation and prevented oxidative stress. Our study provided novel evidence that H2S is protective in myocardial dysfunction in TCM rats and could be a therapeutic target for alleviating beta-adrenergic system overstimulation-induced cardiovascular dysfunction. (C) 2017 Elsevier Inc. All rights reserved. N-hydroxyamphetamine (AmphNHOH) is an oxidative metabolite of amphetamine and methamphetamine. It is known to form inhibitory complexes upon binding to heme proteins. However, its interactions with myoglobin (Mb) and hemoglobin (Hb) have not been reported. We demonstrate that the reactions of AmphNHOH with ferric Mb and Hb generate the respective heme-nitrosoamphetamine derivatives, characterized by UV-vis spectroscopy. We have determined the X-ray crystal structure of the H64A Mb-nitrosoamphetamine complex to 1.73 angstrom resolution. The structure reveals the N-binding of the nitroso-d-amphetamine isomer, with no significant H-bonding interactions between the ligand and the distal pocket amino acid residues. 
(C) 2017 Elsevier Inc. All rights reserved. A dual-color fluorescence imaging method for simultaneous monitoring of intra- and extracellular nitric oxide (NO) was developed. Assisted by a confocal laser scanning microscope, intra- and extracellular NO can be successfully visualized by using two selected probes, 4,4-difluoro-8-(3,4-diaminophenyl)-3,5-bis(4-methoxyphenyl)-4-bora-3a,4a-diaza-s-indacene (p-MOPS) and disodium 2,6-disulfonate-1,3-dimethyl-5-hexadecyl-8-(3,4-diaminophenyl)-4,4'-difluoro-4-bora-3a,4a-diaza-s-indacene (DSDMHDAB), which display distinct membrane permeability and show different colors of fluorescence after reaction with NO. Results indicated that intra- and extracellular NO could be fluorometrically detected without mutual interference. The applicability of the proposed method was validated by dual-color imaging of NO on both sides of the plasma membrane in RAW 264.7 murine macrophages and human vascular endothelial (ECV-304) cells. This multi-labeling approach using multi-laser excitation and multi-color fluorescence detection holds great promise for simultaneous analysis of NO as well as other gasotransmitters in living cells with subcellular resolution. (C) 2017 Elsevier Inc. All rights reserved. As part of our extensive structure-activity relationship study of anti-inflammatory heterocycles, a novel series of 67 polysubstituted 2-aminopyrimidines was prepared bearing one (at the C-4 position of the pyrimidine ring) or two (in the C-4 and C-6 positions) (hetero)aryl substituents attached directly through the C-C bond. The key synthetic steps involved either Suzuki-Miyaura or Stille cross-coupling reactions carried out on easily available 4,6-dichloropyrimidines. All prepared compounds, except one, were able to significantly inhibit immune-activated production of nitric oxide (NO). Moreover, several compounds were found to be low micromolar dual inhibitors of NO and prostaglandin E-2 (PGE(2)) production. 
Although the exact mode of action of the prepared compounds remains to be elucidated, non-toxic dual inhibitors of NO and PGE2 production may have great therapeutic benefit in the treatment of various inflammatory diseases and deserve further preclinical evaluation. (C) 2017 Elsevier Inc. All rights reserved. Nitric oxide (NO) contributes to the central control of cardiovascular activity. The rostral ventrolateral medulla (RVLM) has been recognized as a pivotal region for maintaining basal blood pressure (BP) and sympathetic tone. It has been reported that asymmetric dimethylarginine (ADMA), characterized as a cardiovascular risk marker, is an endogenous inhibitor of nitric oxide synthesis. The present study was designed to determine the role of ADMA in the RVLM in the central control of BP in hypertensive rats. In Sprague Dawley (SD) rats, microinjection of ADMA into the RVLM dose-dependently increased BP, heart rate (HR), and renal sympathetic nerve activity (RSNA), and reduced total NO production in the RVLM. In central angiotensin II (Ang II)-induced hypertensive rats and spontaneously hypertensive rats (SHR), the level of ADMA in the RVLM was increased and total NO production was decreased significantly, compared with SD rats treated with vehicle infusion and WKY rats, respectively. These hypertensive rats also showed an increased protein level of protein arginine methyltransferase 1 (PRMT1, which generates ADMA) and a decreased expression level of dimethylarginine dimethylaminohydrolase 1 (DDAH1, which degrades ADMA) in the RVLM. Furthermore, the increased ADMA content and PRMT1 expression, and the decreased levels of total NO production and DDAH1 expression, in the RVLM in SHR were blunted by intracisternal infusion of the angiotensin II type 1 receptor (AT1R) blocker losartan. The current data indicate that ADMA-mediated NO inhibition in the RVLM plays a critical role in the central regulation of BP in hypertension, which may be associated with increased Ang II. 
(C) 2017 Elsevier Inc. All rights reserved. Background: Myocardial infarction remains the single leading cause of death worldwide. Upon reperfusion of occluded arteries, deleterious cellular mediators, particularly located at the mitochondrial level, can be activated, thus limiting the outcome in patients. This may lead to the so-called ischemia/reperfusion (I/R) injury. Calpains are cysteine proteases and mediators of caspase-independent cell death. Recently, they have emerged as central transmitters of cellular injury in several cardiac pathologies, e.g. hypertrophy and acute I/R injury. Methods: Here we investigated the role of cardiac calpains in acute I/R in relation to mitochondrial integrity and whether calpains can be effectively inhibited by post-translational modification by S-nitrosation. Taking advantage of a cardiomyocyte cell line (HL1), we determined S-nitrosation by the biotin-switch approach, cell viability and intracellular calcium concentration after simulated ischemia and reoxygenation, all as a function of supplementation with nitrite, which is known as a 'hypoxic nitric oxide (NO) donor'. Likewise, calpain S-nitrosation, calpain activity and myocardial injury were characterized in an in vivo I/R model. Results: Nitrite administration resulted in increased S-nitrosation of calpains, and this was associated with improved cell survival. No impact was detected on calcium levels. In line with these in vitro experiments, nitrite initiated calpain S-nitrosation in vivo and caused an infarct-sparing effect in an in vivo myocardial I/R model. Using electron microscopy in combination with immuno-gold labeling, we determined that calpain 10 increased, while calpain 2 decreased, in the course of I/R. Nitrite, in turn, prevented an I/R-induced increase of calpain 10 at mitochondria and reduced levels of calpain 1. Conclusion: Lethal myocardial injury remains a key aspect of myocardial I/R. 
We show that calpains, as key players in caspase-independent apoptosis, increasingly locate at mitochondria following I/R. Inhibitory post-translational modification by S-nitrosation of calpains reduces deleterious calpain activity in murine cardiomyocytes and in vivo. (C) 2017 Elsevier Inc. All rights reserved. By fusing donepezil and curcumin, a novel series of compounds was obtained as multitarget-directed ligands against Alzheimer's disease. Among them, compound 11b displayed potent acetylcholinesterase (AChE) inhibition (IC50 = 187 nM) and the highest BuChE/AChE selectivity (66.3). Compound 11b also inhibited 45.3% of A beta(1-42) self-aggregation at 20 mu M and displayed remarkable antioxidant effects. The metal-chelating property of compound 11b was elucidated by determining the 1:1 stoichiometry for the 11b-Cu(II) complex. The excellent blood-brain barrier permeability of 11b also indicated the potential for the compound to penetrate the central nervous system. (C) 2017 Elsevier Ltd. All rights reserved. A new series of pyrazolo[3,4-d]pyrimidines tethered with a nitric oxide (NO)-producing functionality was designed and synthesized. A sulforhodamine B (SRB) protein assay revealed that the NO-releasing moiety in the synthesized compounds decreased cell growth significantly more than the des-NO analogues. Compounds 7c and 7g, possessing an N-para-substituted phenyl group, released the highest NO concentrations, 4.6% and 4.7%, respectively. Anti-proliferative activity of the synthesized compounds on the HepG2 cell line identified compounds 7h, 7p, 14a and 14b as the most cytotoxic compounds in the series, with IC50 values of 3, 5, 3 and 5 mu M, respectively, compared to erlotinib as a reference drug (IC50 = 25 mu M). Flow cytometry studies revealed that 7h arrested the cells in the G0/G1 phase of the cell cycle while 7p arrested the cells in S phase. 
Moreover, a docking study of the synthesized compounds on EGFR (PDB code: 1M17) and the cytotoxicity study indicated that N-1 phenyl para substitution, pyrazole C-3 alkyl substitution and tethering the nitrate moiety through a butyl group had a significant impact on the activity. (C) Published by Elsevier Ltd. C1 domain-containing proteins, such as protein kinase C (PKC), have a central role in cellular signal transduction. Their involvement in many diseases, including cancer, cardiovascular disease, and immunological and neurological disorders, has been extensively demonstrated and has prompted a search for small molecules to modulate their activity. By employing a diacylglycerol (DAG)-lactone template, we have been able to develop ultra-potent analogs of diacylglycerol with nanomolar binding affinities approaching those of complex natural products such as phorbol esters and bryostatins. One current challenge is the development of selective ligands capable of discriminating between different protein family members. Recently, structure-activity relationship studies have shown that the introduction of an indole ring as a DAG-lactone substituent yielded selective Ras guanine nucleotide-releasing protein (RasGRP1) activators when compared to PKC alpha and PKC epsilon. In the present work, we examine the effects on ligand selectivity of the orientation of the indole ring and the nature of the DAG-lactone template itself. Our results show that the indole ring must be attached to the lactone moiety through the sn-2 position in order to achieve RasGRP1 selectivity. (C) 2017 Elsevier Ltd. All rights reserved. As a hot topic of epigenetic studies, histone deacetylases (HDACs) are related to many diseases, especially cancer. Further research indicated that different HDAC isoforms play various roles in a wide range of tumor types. 
Herein a novel series of HDAC inhibitors with isatin-based caps and o-phenylenediamine-based zinc-binding groups has been designed and synthesized through a scaffold-hopping strategy. Among these compounds, the most potent compound, 9n, exhibited similar if not better HDAC inhibition and antiproliferative activities against multiple tumor cell lines compared with the positive control entinostat (MS-275). Additionally, compared with MS-275 (IC50 values for HDAC1, 2 and 3 were 0.163, 0.396 and 0.605 mu M, respectively), compound 9n, with IC50 values of 0.032, 0.256 and 0.311 mu M for HDAC1, 2 and 3, respectively, showed moderate HDAC1 selectivity. (C) 2017 Published by Elsevier Ltd. Triple-negative breast cancers (TNBCs) lack the signature targets of other breast tumors, such as HER2, estrogen receptor, and progesterone receptor. These aggressive basal-like tumors are driven by a complex array of signaling pathways that are activated by multiple driver mutations. Here we report the discovery of 6 (KIN-281), a small molecule that inhibits multiple kinases, including maternal embryonic leucine zipper kinase (MELK) and the non-receptor tyrosine kinase bone marrow X-linked (BMX), with single-digit micromolar IC(50)s. Several derivatives of 6 were synthesized to gain insight into the binding mode of the compound to the ATP binding pocket. Compound 6 was tested for its effect on anchorage-dependent and -independent growth of MDA-MB-231 and MDA-MB-468 breast cancer cells. The effect of 6 on BMX prompted us to evaluate its effect on STAT3 phosphorylation and DNA binding. The compound's inhibition of cell growth led to measurements of survivin, Bcl-X-L, p21(WAF1/CIP1), and cyclin A2 levels. Finally, LC3B-II levels were quantified following treatment of cells with 6 to determine whether the compound affected autophagy, a process that is known to be activated by STAT3. 
Compound 6 provides a starting point for the development of small molecules with polypharmacology that can suppress TNBC growth and metastasis. (C) 2017 Published by Elsevier Ltd. A new family of multitarget molecules able to interact with acetylcholinesterase (AChE) and butyrylcholinesterase (BuChE), as well as with monoamine oxidase (MAO) A and B, has been synthesized. Novel 3,4-dihydro-2(1H)-quinoline-O-alkylamine derivatives have been designed using a conjunctive approach that combines JMC49 and donepezil. The most promising compound, TM-33, showed potent and balanced inhibitory activities toward ChE and MAO (eeAChE, eqBuChE, hMAO-A and hMAO-B, with IC50 values of 0.56 mu M, 2.3 mu M, 0.3 mu M and 1.4 mu M, respectively) but low selectivity. Both kinetic analysis of AChE inhibition and a molecular modeling study suggested that TM-33 binds simultaneously to the catalytic active site and the peripheral anionic site of AChE. Furthermore, our investigation showed that TM-33 could cross the blood-brain barrier (BBB) in vitro and abided by Lipinski's rule of five. The results suggest that compound TM-33, an interesting multi-targeted active molecule, offers an attractive starting point for further lead optimization in the drug-discovery process against Alzheimer's disease. (C) 2017 Elsevier Ltd. All rights reserved. In this article, synthetic studies around a pyridylacrylamide-based hit compound (1), utilizing structure-based drug design guided by CDK8 docking models, are discussed. Modification of the pendant 4-fluorophenyl group to various heteroaromatic rings was conducted aiming at an interaction with the proximal amino acids, and then replacement of the morpholine ring was targeted to decrease the potential for time-dependent CYP3A4 inhibition. These efforts led to compound 4k, with enhanced CDK8 inhibitory activity and no apparent potential for time-dependent CYP3A4 inhibition (CDK8 IC50: 2.5 nM; CYP3A4 TDI: 99% compound remaining). 
Compound 4k was found to possess a highly selective kinase inhibition profile and also showed a favorable pharmacokinetic profile. Oral administration of 4k (15 mg/kg, b.i.d. for 2 weeks) suppressed tumor growth (T/C 29%) in an RPMI8226 mouse xenograft model. (C) 2017 Elsevier Ltd. All rights reserved. Here, we report the effect of new non-classical bioisosteres of miltefosine on Leishmania amazonensis. Fifteen compounds were synthesized, and the compound dhmtAc, containing an acetate anion, a side chain of 10 carbon atoms linked to N-1 and a methyl group linked to N-3, showed high and selective biological activity against L. amazonensis. On the intracellular amastigotes, the stages of the parasite related to human disease, the IC50 values were near or equal to 1.0 mu M (0.9, 0.8 and 1.0 mu M on L. amazonensis-WT and two transgenic L. amazonensis expressing GFP and RFP, respectively), being more active than miltefosine. Furthermore, dhmtAc did not show toxic effects on human erythrocytes and macrophages (CC50 = 115.91 mu M), being markedly more destructive to the intracellular parasites (selectivity index > 115). Promastigotes and intramacrophage amastigotes treated with dhmtAc showed low capacity for reversion of the effect of the compound. A study of the mechanism of action of this compound showed some features of metazoan apoptosis, including cell volume decrease, loss of mitochondrial membrane potential, ROS production, an increase in intracellular lipid bodies, in situ TUNEL labeling of DNA fragments and phosphatidylserine exposure on the outer leaflet of the plasma membrane. In addition, plasma membrane disruption, revealed by PI labeling, suggests cell death by necrosis. No increase in autophagic vacuole formation in treated promastigotes was observed. Taken together, the data indicate that the bioisostere of miltefosine, dhmtAc, has promising antileishmanial activity that is mediated via apoptosis and necrosis. (C) 2017 Elsevier Ltd. 
All rights reserved. We recently reported that 4-epi-jaspine B exhibits potent inhibitory activity towards sphingosine kinases (SphKs). In this study, we investigated the effects of modifying the 2-alkyl group, as well as the functional groups on the THF ring of 4-epi-jaspine B, using a diversity-oriented synthesis approach based on a late-stage cross metathesis reaction. The introduction of a p-phenylene tether to the alkyl group was favored in most cases, whereas the replacement of a carbon atom with an oxygen atom led to a decrease in the inhibitory activity. Furthermore, the introduction of a bulky alkyl group at the terminus led to a slight increase in the inhibitory activity of this series towards SphKs compared with 4-epi-jaspine B (the Q values of compound 13 for SphK1 and SphK2 were 0.2 and 0.4, respectively). Based on this study, we identified two isoform-selective inhibitors, including the m-phenylene derivative 4 [IC50 (SphK1) >= 30 mu M; IC50 (SphK2) = 2.2 mu M] and the methyl ether derivative 22 [IC50 (SphK1) = 4.0 mu M; IC50 (SphK2) >= 30 mu M]. (C) 2017 Elsevier Ltd. All rights reserved. The inhibition kinetics and stereospecificity of the chiral nerve agent derivative O,S-diethylphenylphosphonothioate (DEPP) were examined for two forms of acetylcholinesterase (human and eel) and equine butyrylcholinesterase. Both S- and R-DEPP are poor inhibitors of eel AChE (IC50 150 mu M), consistent with a large, nondiscriminatory binding interaction in the active site of this enzyme. However, S-DEPP is active against human AChE and equine BChE with low mu M IC(50)s. DEPP stereospecificities (S/R) toward these enzymes are moderate (20) relative to other cholinesterase-organophosphate (OP) systems. Pralidoxime, a common rescue agent, effects a modest recovery of both hAChE and eqBChE from treatment with S-DEPP. 
This result is consistent with expected chemical modification by DEPP and indicates that a measurable amount of the enzyme-phosphonate adduct does not undergo aging. Kinetic analysis of inhibition of both hAChE and eqBChE by S-DEPP yields K-1 values near 8 mu M and k(2) values of about 0.10 min(-1). In both cases, the reaction is practically irreversible. Second-order rate constants calculated from these values are similar to those obtained previously using other thio-substituted OPs with bulky groups. Since BChE has a more accommodating acyl pocket than AChE, the similar behavior of both enzymes toward S-DEPP is notable and likely reflects the weakened potency of DEPP relative to chemical warfare agents. (C) 2017 Elsevier Ltd. All rights reserved. A series of new butadiene derivatives was synthesized and evaluated as tubulin polymerization inhibitors for the treatment of cancer. The optimal butadiene derivative, 9a, exhibited IC50 values of 0.056-0.089 mu M against five human cancer cell lines. This paper includes a mechanistic study of the antiproliferative activity, including a tubulin polymerization assay, an examination of morphological alterations of cancer cells, an analysis of cell cycle arrest and an apoptosis assay. (C) 2017 Elsevier Ltd. All rights reserved. Aldose reductase (ALR2), a NADPH-dependent reductase, is the first and rate-limiting enzyme of the polyol pathway of glucose metabolism and is implicated in the pathogenesis of secondary diabetic complications. In recent decades, this enzyme has been targeted for inhibition, but despite the numerous efforts made to identify potent and safe ALR2 inhibitors, many clinical candidates have been failures. For this reason, the search for new ALR2 inhibitors that are highly effective and selective and have suitable pharmacokinetic properties is still of great interest. In this paper, some new N-(aroyl)-N-(arylmethyloxy)alanines have been synthesized and tested for their ability to inhibit ALR2. 
Some of the synthesized compounds exhibit IC50 values in the low micromolar range, and all have proved to be highly selective towards ALR2. The N-(aroyl)-N-(arylmethyloxy)-alpha-alanines are a promising starting point for the development of new ALR2-selective drugs with the aim of delaying the onset of diabetic complications. (C) 2017 Elsevier Ltd. All rights reserved. Histone acetylation is an extensively investigated post-translational modification that plays an important role as an epigenetic regulator. It is controlled by histone acetyltransferases (HATs) and histone deacetylases (HDACs). The overexpression of HDACs and consequent hypoacetylation of histones have been observed in a variety of different diseases, leading to a recent focus on HDACs as attractive drug targets. The natural product largazole is one of the most potent natural HDAC inhibitors discovered so far, and a number of largazole analogs have been prepared to define structural requirements for its HDAC inhibitory activity. However, previous structure-activity relationship studies have heavily investigated the macrocycle region of largazole, while there have been only limited efforts to probe the effect of various zinc-binding groups (ZBGs) on HDAC inhibition. Herein, we prepared a series of largazole analogs with various ZBGs and evaluated their HDAC inhibition and cytotoxicity. While none of the analogs tested were as potent or selective as largazole, the Zn2+-binding affinity of each ZBG correlated with HDAC inhibition and cytotoxicity. We expect that our findings will aid in building a deeper understanding of the role of ZBGs in HDAC inhibition as well as provide an important basis for the future development of new largazole analogs with non-thiol ZBGs as novel therapeutics for cancer. (C) 2017 Elsevier Ltd. All rights reserved.
Amplification of the gene encoding Myeloid cell leukemia-1 (Mcl-1) is one of the most common genetic aberrations in human cancer and is associated with high tumor grade and poor survival. Recently, we reported on the discovery of high-affinity Mcl-1 inhibitors that elicit mechanism-based cell activity. These inhibitors are lipophilic and contain an acidic functionality, a common chemical profile for compounds that bind to albumin in plasma. Indeed, these Mcl-1 inhibitors exhibited reduced in vitro cell activity in the presence of serum. Here we describe the structure of a lead Mcl-1 inhibitor when bound to Human Serum Albumin (HSA). Unlike many acidic lipophilic compounds that bind to drug site 1 or 2, we found that this Mcl-1 inhibitor binds predominantly to drug site 3. Site 3 of HSA may be able to accommodate larger, more rigid compounds that do not fit into the smaller drug site 1 or 2. Structural studies of molecules that bind to this third site may provide insight into how some higher molecular weight compounds bind to albumin and could be used to aid in the design of compounds with reduced albumin binding. (C) 2017 Published by Elsevier Ltd. A series of sixteen novel aromatic and heterocyclic bis-sulfonamide Schiff bases was prepared by conjugation of well-known aromatic and heterocyclic aminosulfonamide carbonic anhydrase (CA, EC 4.2.1.1) inhibitor pharmacophores with aromatic and heterocyclic bis-aldehydes. The obtained bis-sulfonamide Schiff bases were investigated as inhibitors of four selected human (h) CA isoforms, hCA I, hCA II, hCA VII and hCA IX. Most of the newly synthesized compounds showed a good inhibitory profile against isoforms hCA II and hCA IX, also showing moderate selectivity over hCA I and VII. Several efficient lead compounds were identified among these bis-sulfonamide Schiff bases, with low nanomolar to sub-nanomolar activity against hCA II (K(I)s ranging between 0.4 and 861.1 nM) and hCA IX (K(I)s between 0.5 and 933.6 nM).
Since hCA II and hCA IX are important drug targets (antiglaucoma and anti-tumor agents), these isoform-selective inhibitors may be considered of interest for various biomedical applications. (C) 2017 Elsevier Ltd. All rights reserved. G protein-coupled receptor 52 (GPR52) agonists are expected to improve the symptoms of psychiatric disorders. During exploration for a novel class of GPR52 agonists with good pharmacokinetic profiles, we synthesized 4-(3-(3-fluoro-5-(trifluoromethyl)benzyl)-5-methyl-1H-1,2,4-triazol-1-yl)-2-methylbenzamide (4u; half maximal effective concentration (EC50) = 75 nM, maximal response (E-max) = 122%) starting from a high-throughput screening hit 3 (EC50 = 470 nM, E-max = 56%). The structural features of a reported GPR52 agonist were applied to 3, leading to the design of 4-azolylbenzamides as novel GPR52 agonists. A structure-activity relationship study of the 4-azolylbenzamides resulted in the design of the 1,2,4-triazole derivative 4u, which demonstrated excellent bioavailability in rats (F = 53.8%). Oral administration of 4u (10 mg/kg) significantly suppressed methamphetamine-induced hyperlocomotion in mice. Thus, 4u is a promising lead compound for drug discovery research of GPR52 agonists. (C) 2017 Elsevier Ltd. All rights reserved. A new series of thirteen N-(carbobenzyloxy)-L-phenylalanine and N-(carbobenzyloxy)-L-aspartic acid-beta-benzyl ester compounds was synthesized and evaluated for antiproliferative activity against four different human cancer cell lines: cervical cancer (HeLa), lung cancer (A549), gastric cancer (MGC-803) and breast cancer (MCF-7), as well as for topoisomerase I and II alpha inhibitory activity. Compounds 5a, 5b, 5e, 8a and 8b showed significant antiproliferative activity with low IC50 values against the four cancer cell lines.
Equally, compounds 5a, 5b, 5e, 5f, 8a, 8d, 8e and 8f showed topoisomerase II alpha inhibitory activity at 100 mu M, with 5b, 5e and 8f exhibiting potential topoisomerase II alpha inhibitory activity compared to the positive control at 100 mu M and 20 mu M, respectively. Conversely, compounds 5e, 5f, 5g and 8a showed weaker topoisomerase I inhibitory activity compared to the positive control at 100 mu M. Compound 5b exhibited the most potent topoisomerase II alpha inhibitory activity at low concentration and better antiproliferative activity against the four human cancer cell lines. The molecular interactions between compounds 5a-5g, 8a-8f and topoisomerase II alpha (PDB ID: 1ZXM) were further investigated through molecular docking. The results indicated that these compounds could serve as promising leads for further optimization as novel antitumor agents. (C) 2017 Elsevier Ltd. All rights reserved. A growing number of studies have demonstrated that interleukin (IL)-6 plays pathological roles in the development of chronic inflammatory disease and autoimmune disease by activating innate immune cells and by stimulating adaptive inflammatory T cells. Thus, suppression of IL-6 function may be beneficial for the prevention and treatment of chronic inflammatory disease. This study reports that a series of synthetic derivatives of benzoxazole have suppressive effects on IL-6-mediated signaling. Among 16 synthetic derivatives of benzoxazole, compounds 4, 6, 11, 15, 17, and 19 showed strong suppressive activity against IL-6-induced phosphorylation of signal transducer and activator of transcription (STAT) 3, by 80-90%. While cell viability was strongly decreased by compounds 11, 17 and 19, compounds 4, 6, and 15 showed less cytotoxicity. We then examined the effects of the compounds on inflammatory cytokine production by CD4+ T cells.
CD4+ T cells were induced to differentiate into interferon (IFN)-gamma-, IL-17-, or IL-4-producing effector T cells in the presence of the compounds. Regarding cytokine production by effector T cells, the active compound 4 strongly suppressed the production of the inflammatory cytokines IFN-gamma and IL-17, and also inhibited the allergic inflammatory cytokines IL-4, IL-5, and IL-13 produced by effector Th2 cells. These results suggest that a benzoxazole derivative, compound 4, effectively suppresses IL-6-STAT3 signaling and inflammatory cytokine production by T cells and provides a beneficial effect for treating chronic inflammatory and autoimmune disease. (C) 2017 Published by Elsevier Ltd. The effects of ten natural cadinane sesquiterpenoids isolated from Heterotheca inuloides on the pathways of the NF-kappa B, Nrf2 and STAT3 transcription factors were studied for the first time. The main constituent in this species, 7-hydroxy-3,4-dihydrocadalene (1), showed anti-NF-kappa B activity and activated the antioxidant Nrf2 pathway, which may explain the properties reported for the traditional use of the plant. In addition to the main metabolite, a structurally similar compound, 7-hydroxy-cadalene (2), also displayed anti-NF-kappa B activity. Thus, both natural compounds were used as templates for the preparation of a novel semi-synthetic derivative set, including esters and carbamates, which were evaluated for their potential in vitro antiproliferative activities against six human cancer cell lines. Carbamate derivatives 32 and 33 were found to exhibit potent activity against human colorectal adenocarcinoma and showed important selectivity towards cancer cells. Among the ester derivatives, compound 13 was determined to be a more potent NF-kappa B inhibitor and Nrf2 activator than its parent, 7-hydroxy-3,4-dihydrocadalene (1). Furthermore, this compound decreases levels of phospho-I kappa B alpha, a protein involved in the NF-kappa B activation pathway.
Molecular simulations suggest that all active compounds interact with the activation loop of the IKK-beta subunit in the IKK complex, which is responsible for I kappa B alpha phosphorylation. Thus, we identified two natural, and one semi-synthetic, NF-kappa B and Nrf2 modulators and two new promising cytotoxic compounds. (C) 2017 Elsevier Ltd. All rights reserved. Two series of quinazoline derivatives bearing aryl semicarbazone scaffolds (9a-o and 10a-o) were designed, synthesized and evaluated for their IC50 values against four cancer cell lines (A549, HepG2, MCF-7 and PC-3). The selected compound 9o was further evaluated for inhibitory activity against EGFR kinase. Four of the compounds showed excellent cytotoxic activity and selectivity, with IC50 values in the single-digit micromolar to nanomolar range. Two of them were equally or more active than the positive control afatinib against one or more cell lines. The most promising compound 9o showed the best activity against the A549, HepG2, MCF-7 and PC-3 cancer cell lines and EGFR kinase, with IC50 values of 1.32 +/- 0.38 mu M, 0.07 +/- 0.01 mu M, 0.91 +/- 0.29 mu M and 4.89 +/- 0.69 mu M, equal to or more active than afatinib (1.40 +/- 0.83 mu M, 1.33 +/- 1.28 mu M, 2.63 +/- 1.06 mu M and 3.96 +/- 0.59 mu M), respectively. The activity of compound 9o (IC50 56 nM) against EGFR kinase was slightly lower than that of the positive control afatinib (IC50 1.6 nM) but higher than that of the reference staurosporine (IC50 238 nM). Flow cytometry indicated that compound 9o could induce remarkable apoptosis of A549 cells in a dose-dependent manner. Structure-activity relationships (SARs) and docking studies indicated that replacement of the cinnamamide group by aryl semicarbazone scaffolds slightly decreased the anti-tumor activity.
The results suggested that hydroxy substitution at C-4 had a significant impact on the activity and that replacement of the tetrahydrofuran group by a methyl moiety was not beneficial for the activity. (C) 2017 Elsevier Ltd. All rights reserved. The emerging significance of the recognition of cellular glycans by lectins for diverse aspects of pathophysiology is a strong incentive for considering the development of bioactive and non-hydrolyzable glycoside derivatives, for example by introducing S/Se atoms or the disulfide group instead of oxygen into the glycosidic linkage. We report the synthesis of 12 bivalent thio-, disulfido- and selenoglycosides attached to benzene/naphthalene cores. They present galactose, for blocking a plant toxin, or lactose, the canonical ligand of adhesion/growth-regulatory galectins. Modeling reveals unrestrained flexibility and inter-head-group distances too small to bridge two sites in the same lectin. Inhibitory activity was first detected by solid-phase assays using a surface-presented glycoprotein, with activity enhancements per sugar unit, relative to the free cognate sugar, of up to nearly 10-fold. Inhibitory activity was also seen on lectin binding to surfaces of human carcinoma cells. In order to characterize this capacity in the tissue context, monitoring of lectin binding in the presence of inhibitors was extended to sections of three types of murine organs as models. This procedure proved to be well suited to determine the relative activity levels of the glycocompounds in blocking binding of the toxin and different human galectins to natural glycoconjugates at different sites in sections. The results on the most effective inhibition, by two naphthalene-based disulfides and a selenide, raise the perspective of broad applicability of the histochemical assay in testing glycoclusters that target biomedically relevant lectins. (C) 2017 Elsevier Ltd. All rights reserved.
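The per-sugar comparison used in the glycocluster abstract above (fold-enhancement over the free cognate sugar, normalized by the number of sugar head groups) can be sketched as a small calculation. The function name and the IC50 values below are hypothetical illustrations, not data from the study:

```python
# Hedged sketch of valency-corrected relative inhibitory potency:
# (IC50 of the free sugar / IC50 of the glycocluster), divided by the
# number of sugar units the cluster carries. All numbers are placeholders.
def enhancement_per_sugar(ic50_free_sugar, ic50_cluster, n_sugars):
    """Fold-enhancement relative to the free sugar, per sugar unit."""
    return (ic50_free_sugar / ic50_cluster) / n_sugars

# e.g. a bivalent cluster 40x more potent than free lactose (hypothetical):
print(enhancement_per_sugar(ic50_free_sugar=2000.0,  # uM, hypothetical
                            ic50_cluster=50.0,       # uM, hypothetical
                            n_sugars=2))             # -> 20.0
```

A value above 1 indicates that multivalent presentation adds potency beyond the simple stoichiometric increase in sugar concentration.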
A series of dialkyl aryl phosphates and dialkyl arylalkyl phosphates was synthesized. Their inhibitory activities were evaluated against acetylcholinesterase (AChE) and butyrylcholinesterase (BChE). The di-n-butyl phosphate series consistently displayed selective inhibition of BChE over AChE. The most potent inhibitors of butyrylcholinesterase were di-n-butyl 3,5-dimethylphenyl phosphate (4b) [IC50 = 1.0 +/- 0.4 mu M] and di-n-butyl 2-naphthyl phosphate (5b) [IC50 = 1.9 +/- 0.4 mu M]. Molecular modeling was used to uncover three subsites within the active site gorge that accommodate the three substituents attached to the phosphate group. Phosphates 4b and 5b were found to bind to these three subsites in analogous fashion, with the aromatic groups in both analogs being accommodated by the "lower region," while the lone pairs on the P=O oxygen atoms were oriented towards the oxyanion hole. In contrast, di-n-butyl 3,4-dimethylphenyl phosphate (4a) [IC50 = 9 mu M], an isomer of 4b, was found to orient its aromatic group in the "upper left region" subsite, as placement of this group in the "lower region" resulted in significant steric hindrance by a ridge-like region in this subsite. Future studies will be designed to exploit these features in an effort to develop inhibitors of greater potency against butyrylcholinesterase. (C) 2017 Elsevier Ltd. All rights reserved. Non-substrate-like inhibitors of glycosyltransferases are sought after as chemical tools and potential lead compounds for medicinal chemistry, chemical biology and drug discovery. Here, we describe the discovery of a novel small-molecule inhibitor chemotype for LgtC, a retaining alpha-1,4-galactosyltransferase involved in bacterial lipooligosaccharide biosynthesis. The new inhibitors, which are structurally unrelated to both the donor and the acceptor of LgtC, have low micromolar inhibitory activity, comparable to the best substrate-based inhibitors.
We provide experimental evidence that these inhibitors react covalently with LgtC. Results from detailed enzymological experiments with wild-type and mutant LgtC suggest the non-catalytic active site residue Cys246 as a likely target residue for these inhibitors. Analysis of available sequence and structural data reveals that non-catalytic cysteines are a common motif in the active site of many bacterial glycosyltransferases. Our results can therefore serve as a blueprint for the rational design of non-substrate-like, covalent inhibitors against a broad range of other bacterial glycosyltransferases. (C) 2017 Elsevier Ltd. All rights reserved. In this study, a series of novel pyridine- and pyrimidine-containing derivatives was designed, synthesized and biologically evaluated for c-Met inhibitory activity. In the biological evaluation, half of the target compounds exhibited moderate to potent c-Met inhibitory activities. Among them, it is noteworthy that compound 13d not only showed the most potent c-Met inhibitory potency but also displayed excellent anti-proliferative activity (IC50 = 127 nM against the EBC-1 cell line) as well as an acceptable kinase selectivity profile. Moreover, a western blot assay indicated that 13d inhibited c-Met phosphorylation in EBC-1 cells in a dose-dependent manner, with complete abolishment at 0.1 mM. All these experimental results suggested that 13d could serve as a promising lead compound for the development of anticancer agents. (C) 2017 Elsevier Ltd. All rights reserved. New microtubule-depolymerizing agents with potent cytotoxic activities have been prepared with a 5-cyano or 5-oximino group attached to a pyrrole core. The utilization of ortho activation of a bromopyrrole ester to facilitate successful Suzuki-Miyaura cross-coupling reactions was a key aspect of the synthetic methodology.
This strategy allows for control of regiochemistry, with the attachment of four completely different groups at the 2, 3, 4 and 5 positions of the pyrrole scaffold. Biological evaluations and molecular modeling studies are reported for these examples. (C) 2017 Elsevier Ltd. All rights reserved. African trypanosomiasis is still a threat to human health due to the severe side-effects of current drugs. We previously identified selective tubulin inhibitors that showed promise for the treatment of this disease, based on structural differences between the tubulin proteins of mammalian and trypanosome cells. Further lead optimization was performed in the current study to improve the efficiency of the drug candidates. We used Trypanosoma brucei brucei cells as the parasite model, and human normal kidney cells and mouse macrophage cells as the host model, to evaluate the compounds. One new analog showed great potency, inhibiting the growth of trypanosome cells with an IC50 of 70 nM, and did not affect the viability of mammalian cells. Western blot analyses reveal that the compound decreased tubulin polymerization in T. brucei cells. A detailed structure-activity relationship (SAR) was summarized that will be used to guide future lead optimization. Published by Elsevier Ltd. A series of dimeric isoxazolyl-1,4-dihydropyridines (IDHPs) was prepared by click chemistry and examined for the ability to bind the multi-drug resistance transporter (MDR-1), a member of the ATP-binding cassette (ABC) superfamily. Eight compounds in the present study exhibited single-digit micromolar binding to this efflux transporter. One monomeric IDHP, m-Br-1c, possessed submicromolar binding of 510 nM at MDR-1. Three of the dimeric IDHPs possessed <1.5 mu M activity, and 4b and 4c were observed to have superior binding selectivity compared to their corresponding monomers versus the voltage-gated calcium channel (VGCC).
The dimer with the best combination of activity and selectivity for MDR-1 was analog 4c, containing an m-Br phenyl moiety in the 3-position of the isoxazole and a tether with five ethyleneoxy units, referred to herein as Isoxaquidar. Two important controls, mono-triazole 5 and pyridine 6, were also examined, indicating that the triazole - incorporated as part of the click assembly as a spacer - contributes to MDR-1 binding. Compounds were also assayed at the allosteric site of the mGluR5 receptor, as a GPCR 7TM control, indicating that the p-Br IDHPs 4d, 4e and 4f, with tethers of n = 2 to 5 ethylenedioxy units, had sub-micromolar affinities, with 4d being the most efficacious at 193 nM at mGluR5. The results are interpreted through a docking study with a human ABC transporter as our current working hypothesis, and suggest that the distinct SARs emerging for these three divergent classes of biomolecular targets may be tunable and amenable to the development of further selectivity. (C) 2017 Elsevier Ltd. All rights reserved. Neurodegenerative disorders, such as Parkinson's disease and Alzheimer's disease, threaten the lives of millions of people, and the number of affected patients is constantly growing as the population ages. Small-molecule neurotrophic agents represent promising therapeutics for the pharmacological management of neurodegenerative diseases. In this study, a series of caffeic acid amide analogues with variable alkyl chain lengths, including ACAF3 (C3), ACAF4 (C4), ACAF6 (C6), ACAF8 (C8) and ACAF12 (C12), were synthesized and their neurotrophic activity was examined by different methods in PC12 neuronal cells. We found that all caffeic acid amide derivatives at 25 mu M significantly increased survival of PC12 neuronal cells under serum-deprived conditions, as measured by the MTT assay.
ACAF4, ACAF6 and ACAF8 at 5 mu M also significantly enhanced the effect of nerve growth factor (NGF) in inducing neurite outgrowth, a sign of neuronal differentiation. The neurotrophic effects of the amide derivatives did not seem to be mediated by direct activation of the tropomyosin receptor kinase A (TrkA) receptor, since K252a, a potent TrkA antagonist, did not block the neuronal survival enhancement effect. Similarly, the active compounds did not activate TrkA as measured by immunoblotting with an anti-phospho-TrkA antibody. We also examined the effect of the amide derivatives on signaling pathways involved in survival and differentiation by immunoblotting. ACAF4 and ACAF12 induced ERK1/2 phosphorylation in PC12 cells at 5 and 25 mu M, while ACAF12 was also able to significantly increase AKT phosphorylation at 5 and 25 mu M. Molecular docking studies indicated that, compared to the parental compound caffeic acid, ACAF12 exhibited higher binding energy with phosphoinositide 3-kinase (PI3K) as a putative molecular target. Based on Lipinski's rule of five, all of the compounds obeyed three molecular descriptors (HBD, HBA and MM) in the drug-likeness test. Taken together, these findings show for the first time that caffeic amides possess strong neurotrophic effects, exerted via modulation of the ERK1/2 and AKT signaling pathways, presumably by activation of PI3K, and thus represent promising agents for the discovery of neurotrophic compounds for the management of neurodegenerative diseases. (C) 2017 Elsevier Ltd. All rights reserved. 7-Ethyl-10-hydroxycamptothecin (SN38), a highly active topoisomerase I inhibitor, is 200-2000-fold more cytotoxic than irinotecan (CPT-11), commercially available as Camptosar. However, poor solubility and low stability have severely restricted its clinical utility. In this report, dual SN38 phospholipid conjugate (Di-SN38-PC) prodrug-based liposomes were developed in order to circumvent these drawbacks.
The Di-SN38-PC prodrug was first synthesized by conjugation of two SN38-20-O-succinic acid molecules with L-alpha-glycerophosphorylcholine (GPC). The assembly of the prodrug was carried out without any excipient by using the thin-film method. Dynamic light scattering (DLS), transmission electron microscopy (TEM) and cryogenic transmission electron microscopy (cryo-TEM) characterization indicated that Di-SN38-PC can form spherical liposomes with a narrow particle size (<200 nm) and a negatively charged surface (-21.6 +/- 3.5 mV). By a simple calculation, the loading efficiency of SN38 is 65.2 wt.%. An in vitro release test was further performed in detail. The results demonstrated that Di-SN38-PC liposomes were stable in a neutral environment but degraded in a weakly acidic condition, thereby releasing the parent drug SN38 effectively. Cellular uptake studies showed that the liposomes could be internalized into cells more significantly than SN38. In vitro antitumor activities were finally evaluated by MTT assay, colony formation assay, flow cytometry, RT-PCR analysis and western blot. The results showed that Di-SN38-PC liposomes had cytotoxicity comparable to SN38 against MCF-7 and HBL-100 cells, and a selective promotion of apoptosis of tumor cells. Furthermore, a pharmacokinetics test showed that Di-SN38-PC liposomes had a longer circulating time in blood compared with the parent drug. All the results indicate that Di-SN38-PC liposomes are an effective delivery system for SN38. (C) 2017 Elsevier Ltd. All rights reserved. This article discusses Plomet v Worgan, a case from a thirteenth-century legal record, concerning a medical man's use of a drug (dwoledreng) to obtain sex from a female patient. Issues which arise include: the nature of the drug in question; the nature of surgical practice in this early, provincial setting; ideas about sexual consent and incapacity; and the response of the legal system to such medical misconduct.
The case shows the flexibility and complexity of ideas about sexual misbehaviour current in thirteenth-century law and society. It provides valuable material on medieval English medical practice and offers insights into the treatment of medical misconduct before the better-known development of the 'medical negligence' jurisdiction of actions on the case in the second half of the fourteenth century and the growth of professional regulation. This article compares the representations of jealousy in popular culture, medical and legal literature, and in the trials and diagnoses of men who murdered or attempted to murder their wives or sweethearts before being found insane and committed into Broadmoor Criminal Lunatic Asylum between 1864 and 1900. It is shown that jealousy was entrenched in Victorian culture, but marginalised in medical and legal discourse and in the courtroom until the end of the period, and was seemingly cast aside at Broadmoor. As well as providing a detailed examination of varied representations of male jealousy in late-Victorian Britain, the article contributes to understandings of the emotional lives of the working-class, and the causes and representations of working-class male madness. This paper analyses the medical activities of Hu Tingguang, an early nineteenth-century Chinese healer who specialized in treating traumatic injuries. Hu aimed to improve the state of medical knowledge about injuries by writing a comprehensive treatise titled Compilation of Teachings on Traumatology, completed in 1815. This work notably included a set of medical cases describing the experiences of Hu and his father, which Hu used to teach readers how to employ and adapt different therapies: bone setting, petty surgery, and drugs. By examining how Hu dealt with different forms of damage to the body's material form, this paper shows how manual therapies could be a focus of medical creativity and innovation. 
It also contributes to a growing corpus of scholarship exploring the way that awareness of and concern with the structure of the body historically shaped Chinese medical thought and practice. In 1917, physician Arthur F. Hurst began filming the peculiar tics and hysterical gaits of 'shell-shocked' soldiers under his care. Editions of Hurst's films from 1918 and 1940 survive. Cultural products of their time, I argue, the films engaged with contemporary ideas of class, gender and nation. The 1918 version reinforced class-based notions of disease and degeneracy while validating personal and national trauma and bolstering conceptions of masculinity and the nation that were critical to wartime morale and recovery efforts. The 1940 re-edit of the film engaged with the memory of the First World War by constructing a restorative narrative and by erasing the troubled years of gender crisis, 'shell shock' culture and class struggle to reassert masculine virtue and martial strength, essential for the prosecution of the Second World War. The traditional assumption that 'war is good for medicine' has generally been debated by examining specific medical innovations of the war years 1914-18. This paper focuses rather on the ways in which war affected the medical careers of those working in British microbiology before and after the Great War. Using a survey of the lives of medical scientists associated with The Lister Institute for Preventive Medicine, the British Pathological Society, and the Pathological Laboratory of St Bartholomew's Hospital, this paper argues the case for war-related medical research of 1914-19 as a driver both for the creation of a knowledge base for future research and for changes in career trajectory of a number of individuals who were subsequently important for the scientific development of different areas of epidemiology, microbiology, and nutrition science after 1920. 
This article uses a notorious criminal trial, that of John Donald Merrett for the murder of his mother, as a case study to explore forensic medicine's treatment of gunshot wounding in prewar Scotland. This topic, which has hitherto received little attention from historians, provides insight into two issues facing the discipline at this time. First, the competing attempts by prosecution and defence expert witnesses to recreate the wound in a laboratory setting, in order to determine the distance from which the shot had been fired, exposed the uncertainties surrounding the application of a well-known laboratory technique for which no fully agreed-upon protocol existed. Secondly, the case allows the examination of the working relationship of a medical expert and a gunsmith, in which disciplinary boundaries became indistinct and the wound a shared site of analysis, in a period before the separate profession of forensic science became institutionally grounded in Scotland. Since the 1940s, men's presence at childbirth has changed from being out of the question to not only very common but often presented as highly valuable. This article examines this shift, charting how many men were present at their children's births over recent decades, considering how medical practitioners influenced men's participation, and analysing what meanings parents gave to this experience. It suggests a number of factors led to the relatively rapid move towards the acceptance of men's presence in the delivery room, but highlights this was not a simple transformation as a first glance at the figures would suggest. It argues that men's involvement in home births became more usual before hospitals changed their policies about men's presence, and considers how the role of fathers related to the increasingly medicalised nature of childbirth as this period progressed. It also considers whether men's involvement is always positive or welcome for those involved. 
This paper explores the African American response to an interracial heart transplant in 1968 through a close reading of the black newspaper press. This methodological approach provides a window into African American perceptions of physiological difference between the races, or lack thereof, as it pertained to both personal identity and race politics. Coverage of the first interracial heart transplant, which occurred in apartheid South Africa, was multifaceted. Newspapers lauded the transplant as evidence of physiological race equality while simultaneously mobilising the language of differing 'black' and 'white' hearts to critique racist politics through the metaphor of a 'change of heart'. While interracial transplant created the opportunity for such political commentary, its material reality-potential exploitation of black bodies for white gain-was increasingly a cause for concern, especially after a contentious heart transplant from a black to a white man in May 1968 in the American South. The Vaccine Damage Payments Act 1979 provided a lump-sum social security benefit to children who had become severely disabled as a result of vaccination. It came in the wake of a scare over the safety of the whooping cough (pertussis) vaccine. Yet very little has been written about it. Existing literature focuses more on the public health and medical aspects of both the Act and the scare. This article uses material from the archives of disability organisations and official documents to show that this Act should be seen as part of the history of post-war British disability policy. By framing it thus, we can learn more about why the government responded in the specific way that it did, as well as shed new light on public attitudes towards vaccination and disability. BldD-(c-di-GMP) sits on top of the regulatory network that controls differentiation in Streptomyces, repressing a large regulon of developmental genes when the bacteria are growing vegetatively. 
In this way, BldD functions as an inhibitor that blocks the initiation of sporulation. Here, we report the identification and characterisation of BldO, an additional developmental repressor that acts to sustain vegetative growth and prevent entry into sporulation. However, unlike the pleiotropic regulator BldD, we show that BldO functions as the dedicated repressor of a single key target gene, whiB, and that deletion of bldO or constitutive expression of whiB is sufficient to induce precocious hypersporulation. Background: Most melanoma patients with BRAF(V600E)-positive tumors respond well to a combination of BRAF kinase and MEK inhibitors. However, some patients are intrinsically resistant, while the majority of patients eventually develop drug resistance to the treatment. For patients insufficiently responding to BRAF and MEK inhibitors, there is an ongoing need for new treatment targets. Cellular metabolism is one such promising target: mutant BRAF(V600E) has been shown to alter cellular metabolism. Methods: Time course experiments and a series of western blots were performed in a panel of BRAF(V600E) and BRAF(WT)/NRAS(mut) human melanoma cells, which were incubated with BRAF and MEK1 kinase inhibitors. siRNA approaches were used to investigate the metabolic players involved. Reactive oxygen species (ROS) were measured by confocal microscopy, and AZD7545, an inhibitor targeting pyruvate dehydrogenase kinases (PDKs), was tested. Results: We show that inhibition of the RAS/RAF/MEK/ERK pathway induces phosphorylation of the pyruvate dehydrogenase PDH-E1 alpha subunit in BRAF(V600E) and in BRAF(WT)/NRAS(mut) harboring cells. Inhibition of BRAF or MEK1, as well as siRNA knock-down of ERK1/2, induced phosphorylation of PDH. siRNA-mediated knock-down of all PDKs or the use of DCA (a pan-PDK inhibitor) abolished PDH-E1 alpha phosphorylation. BRAF inhibitor treatment also induced the upregulation of ROS, concomitantly with the induction of PDH phosphorylation. 
Suppression of ROS by MitoQ suppressed PDH-E1 alpha phosphorylation, strongly suggesting that ROS mediate the activation of PDKs. Interestingly, the inhibition of PDK1 with AZD7545 specifically suppressed growth of BRAF-mutant and BRAF-inhibitor-resistant melanoma cells. Conclusions: In BRAF(V600E) and BRAF(WT)/NRAS(mut) melanoma cells, the increased production of ROS upon inhibition of the RAS/RAF/MEK/ERK pathway is responsible for activating PDKs, which in turn phosphorylate and inactivate PDH. As part of a possible salvage pathway, the tricarboxylic acid cycle is inhibited, leading to reduced oxidative metabolism and reduced ROS levels. We show that inhibition of PDKs by AZD7545 leads to growth suppression of BRAF-mutated and -inhibitor-resistant melanoma cells. Thus, small-molecule PDK inhibitors such as AZD7545 might be promising drugs for combination treatment in melanoma patients with activating RAS/RAF/MEK/ERK pathway mutations (50% BRAF, 25% NRAS(mut), 11.9% NF1(mut)). Direct analysis of circulating tumor cells (CTCs) can inform on molecular mechanisms underlying systemic spread. Here we investigated promoter methylation of three genes regulating epithelial-to-mesenchymal transition (EMT), a key mechanism enabling epithelial tumor cells to disseminate and metastasize. For this, we developed a single-cell protocol based on agarose-embedded bisulfite treatment, which allows investigation of DNA methylation at multiple loci via a multiplex PCR (multiplexed scAEBS). We established our assay for the simultaneous analysis of three EMT-associated genes, miR-200c/141, miR-200b/a/429 and CDH1, in single cells. The assay was validated in solitary cells of GM14667, MDA-MB-231 and MCF-7 cell lines, achieving a DNA amplification efficiency of 70% with methylation patterns identical to the respective bulk DNA. 
Then we applied multiplexed scAEBS to 159 single CTCs from 11 patients with metastatic breast cancer and six with metastatic castration-resistant prostate cancer, isolated via CellSearch (EpCAM(pos)/CK(pos)/CD45(neg)/DAPI(pos)) and subsequent FACS sorting. In contrast to CD45(pos) white blood cells isolated and processed by the identical approach, we observed in the isolated CTCs methylation patterns resembling more those of epithelial-like cells. Methylation at the promoters of the microRNA-200 family was significantly higher in prostate CTCs. Data from our single-cell analysis revealed epigenetic heterogeneity among CTCs and indicate tumor-specific active epigenetic regulation of EMT-associated genes during blood-borne dissemination. Tumor spread along nerves, a phenomenon known as perineurial invasion, is common in various cancers including pancreatic ductal adenocarcinoma (PDAC). Neural invasion is associated with poor outcome, yet its mechanism remains unclear. Using the transgenic Pdx1-Cre/KrasG12D/p53R172H (KPC) mouse model, we investigated the mechanism of neural invasion in PDAC. To detect tissue-specific factors that influence neural invasion by cancer cells, we characterized the perineurial microenvironment using a series of bone marrow transplantation (BMT) experiments in transgenic mice expressing single mutations in the Cx3cr1, GDNF and CCR2 genes. Immunolabeling of tumors in KPC mice of different ages and analysis of human cancer specimens revealed that RET expression is upregulated during PDAC tumorigenesis. BMT experiments revealed that BM-derived macrophages expressing the RET ligand GDNF are highly abundant around nerves invaded by cancer. Inhibition of perineurial macrophage recruitment, using the CSF-1R antagonist GW2580 or BMT from CCR2-deficient donors, reduced perineurial invasion. Deletion of GDNF expression by perineurial macrophages, or inhibition of RET with shRNA or a small-molecule inhibitor, reduced perineurial invasion in KPC mice with PDAC. 
Taken together, our findings show that RET is upregulated during pancreas tumorigenesis and its activation induces cancer perineurial invasion. Trafficking of BM-derived macrophages to the perineurial microenvironment and secretion of GDNF are essential for pancreatic cancer neural spread. Chronic inflammation is believed to have a crucial role in colon cancer development. MicroRNA (miRNA) deregulation is common in human colorectal cancers, but little is known regarding whether miRNA drives tumor progression by regulating inflammation. Here, we showed that miR-19a can promote colitis and colitis-associated colon cancer (CAC) development using a CAC mouse model and an acute colitis mouse model. Tumor necrosis factor-alpha (TNF-alpha) stimulation can increase miR-19a expression, and upregulated miR-19a can in turn activate nuclear factor (NF)-kappa B signaling and TNF-alpha production by targeting TNF-alpha-induced protein 3 (TNFAIP3). miR-19a inhibition can also alleviate CAC in vivo. Moreover, the regulatory effects of miR-19a on TNFAIP3 and NF-kappa B signaling were confirmed using tumor samples from patients with colon cancer. These new findings demonstrate that miR-19a has a direct role in upregulating NF-kappa B signaling and contributes to inflammation and CAC. The cyclic AMP (cAMP) signaling pathway is critical in melanocyte biology for regulating differentiation. It is downregulated by phosphodiesterase (PDE) enzymes, which degrade cAMP. In melanoma, evidence suggests that inhibition of the cAMP pathway by PDE type 4 (PDE4) favors tumor progression. For example, in melanomas harboring RAS mutations, the overexpression of PDE4 is crucial for MAPK pathway activation and proliferation induced by oncogenic RAS. Here we showed that PDE4D is overexpressed in BRAF-mutated melanoma cell lines, constitutively disrupting cAMP pathway activation. 
PDE4D promoted melanoma invasion by interacting with focal adhesion kinase (FAK) through the scaffolding protein RACK1. Inhibition of PDE4 activity or inhibition of PDE4D interaction with FAK reduced invasion. PDE4D expression is increased in patients with advanced melanoma, and PDE4D-FAK interaction is detectable in situ in metastatic melanoma. Our study establishes the role of PDE4D in BRAF-mutated melanoma as a regulator of cell invasion, and suggests its potential as a target for preventing metastatic spread. Somatic mutations that lead to constitutive activation of NRAS and KRAS proto-oncogenes are among the most common in human cancer and frequently occur in acute myeloid leukemia (AML). An inducible NRAS(V12)-driven AML mouse model has established a critical role for continued NRAS(V12) expression in leukemia maintenance. In this model, genetic suppression of NRAS(V12) expression results in rapid leukemia remission, but some mice undergo spontaneous relapse with NRAS(V12)-independent (NRI) AMLs, providing an opportunity to identify mechanisms that bypass the requirement for Ras oncogene activity and drive leukemia relapse. We found that relapsed NRI AMLs are devoid of NRAS(V12) expression and signaling through the major oncogenic Ras effector pathways, phosphatidylinositol-3-kinase and mitogen-activated protein kinase, but express higher levels of an alternate Ras effector, RALB, and exhibit NRI phosphorylation of the RALB effector TBK1, implicating RALB signaling in AML relapse. Functional studies confirmed that inhibiting CDK5-mediated RALB activation with a clinically relevant experimental drug, dinaciclib, led to potent RALB-dependent antileukemic effects in human AML cell lines, induced apoptosis in patient-derived AML samples in vitro and led to a 2-log reduction in the leukemic burden in patient-derived xenograft mice. 
Furthermore, dinaciclib potently suppressed the clonogenic potential of relapsed NRI AMLs in vitro and prevented the development of relapsed AML in vivo. Our findings demonstrate that Ras oncogene-independent activation of RALB signaling is a therapeutically targetable mechanism of escape from NRAS oncogene addiction in AML. Melanoma tumors usually retain wild-type p53; however, its tumor-suppressor activity is functionally disabled, most commonly through an inactivating interaction with mouse double-minute 2 homolog (Mdm2), indicating p53 release from this complex as a potential therapeutic approach. p53 and the tumor promoter insulin-like growth factor type 1 receptor (IGF-1R) compete as substrates for the E3 ubiquitin ligase Mdm2, making their relative abundance intricately linked. Hence, we investigated the effects of pharmacological Mdm2 release from the Mdm2/p53 complex on the expression and function of the IGF-1R. Nutlin-3 treatment increased IGF-1R/Mdm2 association with enhanced IGF-1R ubiquitination and a dual functional outcome: receptor downregulation and selective downstream signaling activation confined to the mitogen-activated protein kinase/extracellular signal-regulated kinase pathway. This Nutlin-3 functional selectivity translated into IGF-1-mediated bioactivities with biphasic effects on the proliferative and metastatic phenotype: an early increase and late decrease in the number of proliferative and migratory cells, while invasiveness was completely inhibited following Nutlin-3 treatment through an impaired IGF-1-mediated matrix metalloproteinase-2 activation mechanism. Taken together, these experiments reveal the biased agonistic properties of Nutlin-3 for the mitogen-activated protein kinase pathway, mediated by Mdm2 through IGF-1R ubiquitination, and provide fundamental insights into destabilizing p53/Mdm2/IGF-1R circuitry that could be developed for therapeutic gain. 
The bifunctional enzyme 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase-4 (PFKFB4) controls metabolic flux through allosteric regulation of glycolysis. Here we show that p53 regulates the expression of PFKFB4 and that p53-deficient cancer cells are highly dependent on the function of this enzyme. We found that p53 downregulates PFKFB4 expression by binding to its promoter and mediating transcriptional repression via histone deacetylases. Depletion of PFKFB4 from p53-deficient cancer cells increased levels of the allosteric regulator fructose-2,6-bisphosphate, leading to increased glycolytic activity but decreased routing of metabolites through the oxidative arm of the pentose-phosphate pathway. PFKFB4 was also required to support the synthesis and regeneration of nicotinamide adenine dinucleotide phosphate (NADPH) in p53-deficient cancer cells. Moreover, depletion of PFKFB4 attenuated cellular biosynthetic activity and resulted in the accumulation of reactive oxygen species and cell death in the absence of p53. Finally, silencing of PFKFB4 induced apoptosis in p53-deficient cancer cells in vivo and interfered with tumour growth. These results demonstrate that PFKFB4 is essential to support anabolic metabolism in p53-deficient cancer cells and suggest that inhibition of PFKFB4 could be an effective strategy for cancer treatment. As leukemic transformation of myeloproliferative neoplasms (MPNs) worsens the clinical outcome, reducing the inherent risk of this critical event in MPN cases could be beneficial. Among the genetic alterations associated with this transformation, the most frequent is TP53 mutation. Here we show that retroviral overexpression of the Jak2 V617F mutant in wild-type p53 murine bone marrow cells induced polycythemia vera (PV) in the recipient mice, whereas mice receiving Jak2 V617F-transduced p53-null cells developed lethal leukemia after a preceding PV phase. 
The leukemic mice had severe anemia, hepatosplenomegaly, pulmonary hemorrhage and expansion of dysplastic erythroid progenitors. Primitive leukemia cells (c-kit+ Sca1+ Lin- (KSL) and CD34- CD16/32- c-kit+ Sca1- Lin- (megakaryocyte-erythroid progenitor; MEP)) and erythroid progenitors (CD71+) from Jak2 V617F-transduced p53-null leukemic mice had leukemia-initiating capacity; however, myeloid-differentiated populations (Mac-1+) could not recapitulate the disease. Interestingly, recipients transplanted with CD71+ cells rapidly developed erythroid leukemia, in sharp contrast to leukemic KSL cells, which caused lethal leukemia only after a polycythemic state. The leukemic CD71+ cells were more sensitive to INCB18424, a potent JAK inhibitor, than KSL cells. p53 restoration could ameliorate Jak2 V617F-transduced p53-null erythroleukemia. Taken together, our results show that p53 loss is sufficient for inducing leukemic transformation in Jak2 V617F-positive MPN. Kruppel-like factor 4 (KLF4, GKLF) is a zinc-finger transcription factor involved in a large variety of cellular processes, including apoptosis, cell cycle progression and stem cell renewal. KLF4 is critical for cell fate decisions and has an ambivalent role in tumorigenesis. Emerging data indicate that KLF4 dysregulation can either facilitate or impede tumor progression, making it important to clarify the regulatory network of KLF4. Like most transcription factors, KLF4 has a rather short half-life within the cell, and its turnover must be carefully orchestrated by ubiquitination and the ubiquitin-proteasome system. To better understand the mechanism of KLF4 ubiquitination, we performed a genome-wide screen of an E3 ligase small interfering RNA library, based on western blotting, and identified SCF-FBXO32 as a new E3 ligase responsible for KLF4 ubiquitination and degradation. The F-box domain is critical for FBXO32-dependent KLF4 ubiquitination and degradation. 
Furthermore, we demonstrated that FBXO32 physically interacts with the N-terminus (1-60 aa) of KLF4 via its C-terminus (228-355 aa) and directly targets KLF4 for ubiquitination and degradation. We also found that the p38 mitogen-activated protein kinase pathway may be implicated in FBXO32-mediated ubiquitination of KLF4, as a p38 kinase inhibitor abrogated both endogenous KLF4 ubiquitination and degradation and FBXO32-dependent ubiquitination and degradation of exogenous KLF4. Finally, FBXO32 inhibits colony formation in vitro and primary tumor initiation and growth in vivo by targeting KLF4 for degradation. Our findings thus further elucidate the tumor-suppressive function of FBXO32 in breast cancer. These results expand our understanding of the posttranslational modification of KLF4 and of its role in breast cancer development and provide a potential target for diagnosis and therapeutic treatment of breast cancer. Melanoma is the most lethal form of skin cancer, and treatment of metastatic melanoma remains challenging. BRAF/MEK inhibitors show only temporary benefit due to the emergence of resistance, and immunotherapy is effective only in a subset of patients. To improve patient survival, there is a need to better understand molecular mechanisms that drive melanoma growth and operate downstream of mitogen-activated protein kinase (MAPK) signaling. Kruppel-like factor 4 (KLF4) is a zinc-finger transcription factor that plays a critical role in embryonic development, stemness and cancer, where it can act either as an oncogene or a tumor suppressor. KLF4 is highly expressed in post-mitotic epidermal cells, but its role in melanoma remains unknown. Here, we address the function of KLF4 in melanoma and its interaction with the MAPK signaling pathway. We find that KLF4 is highly expressed in a subset of human melanomas. Ectopic expression of KLF4 enhances melanoma cell growth by decreasing apoptosis. 
Conversely, knock-down of KLF4 reduces melanoma cell proliferation and induces cell death. In addition, depletion of KLF4 reduces melanoma xenograft growth in vivo. We find that RAS/RAF/MEK/ERK signaling positively modulates KLF4 expression through the transcription factor E2F1, which directly binds to the KLF4 promoter. Overall, our data demonstrate the pro-tumorigenic role of KLF4 in melanoma and uncover a novel ERK1/2-E2F1-KLF4 axis. These findings identify KLF4 as a possible new molecular target for designing novel therapeutic treatments to control melanoma growth. Despite remarkable progress in cutaneous melanoma genomic profiling, the mutational landscape of primary mucosal melanomas (PMM) remains unclear. Forty-six PMMs underwent targeted exome sequencing of 111 cancer-associated genes. Seventy-six somatic nonsynonymous mutations in 42 genes were observed, and recurrent mutations were noted in eight genes, including TP53 (13%), NRAS (13%), SNX31 (9%), NF1 (9%), KIT (7%) and APC (7%). Mitogen-activated protein kinase (MAPK; 37%), cell cycle (20%) and phosphatidylinositol 3-kinase (PI3K)-mTOR (15%) pathways were frequently mutated. We biologically characterized a novel ZNF767-BRAF fusion found in a vemurafenib-refractory respiratory tract PMM, from which a cell line harboring the ZNF767-BRAF fusion was established for further molecular analyses. In an independent data set, an NFIC-BRAF fusion was identified in an oral PMM case, and TMEM178B-BRAF and DGKI-BRAF fusions were identified in two malignant melanomas with low mutational burdens (mutations per megabase: 0.8 and 4, respectively). Subsequent analyses revealed that the ZNF767-BRAF fusion protein promotes RAF dimerization and activation of the MAPK pathway. We next tested the in vitro and in vivo efficacy of vemurafenib, trametinib, BKM120 or LEE011 alone and in combination. 
Trametinib effectively inhibited tumor cell growth in vitro, but the combination of trametinib and BKM120 or LEE011 yielded more than additive anti-tumor effects both in vitro and in vivo in melanoma cells harboring a BRAF fusion. In conclusion, BRAF fusions define a new molecular subset of PMM that can be targeted therapeutically by the combination of a MEK inhibitor with PI3K or cyclin-dependent kinase 4/6 inhibitors. In 11q23 leukemias, the N-terminal part of the mixed lineage leukemia (MLL) gene is fused to more than 60 different partner genes. In order to define a core set of MLL-rearranged targets, we investigated the genome-wide binding of the MLL-AF9 and MLL-AF4 fusion proteins and associated epigenetic signatures in acute myeloid leukemia (AML) cell lines THP-1 and MV4-11. We uncovered both common as well as specific MLL-AF9 and MLL-AF4 target genes, which were all marked by H3K79me2, H3K27ac and H3K4me3. Apart from promoter binding, we also identified MLL-AF9 and MLL-AF4 binding at specific subsets of non-overlapping active distal regulatory elements. Despite this differential enhancer binding, MLL-AF9 and MLL-AF4 still direct a common gene program, which represents part of the RUNX1 gene program and consists of CD34(+) and monocyte-specific genes. Comparing these data sets identified several zinc finger transcription factors (TFs) as potential MLL-AF9 co-regulators. Together, these results suggest that MLL fusions collaborate with specific subsets of TFs to deregulate the RUNX1 gene program in 11q23 AMLs. Oncogene (2017). Space suits are a critical part of a human-rated space vehicle. Final Frontier Design is working to push the design and functionality of space suits for the commercial space market. Spacecraft are partially- or fully-closed environments that demand environmental monitoring to protect the health of the crew and the vehicle. This article will detail the history of air and water monitoring on U.S. 
spacecraft from early in the human spaceflight era and will use the lessons learned to project the needs for commercial crew vehicles and exploration missions. Many sources of air and water pollutants exist on spacecraft (e.g., humans, material off-gas products, system chemicals, and experiments), which are scrubbed by various Environmental Control and Life Support Systems (ECLSS). Environmental monitoring is important because the rate of removal seldom, if ever, equals or exceeds the rate of generation. As an example, carbon dioxide levels are controlled in spacecraft, but the levels are never close to zero. Much of the recycled water comes from spacecraft air condensate; therefore, contaminants in the air can also have a dramatic effect on potable water reclamation/purification and resulting water quality. In addition, potential system leaks and spills can contaminate the air and/or water. The experiences of humans working in closed environments (spacecraft and submarines) show that engineering controls are not sufficient to completely protect the crew from environmental hazards. This article will also highlight that the requirements for environmental monitors go beyond low power consumption and small size. How much monitoring is required, and the characteristics of the monitors, depend on the mission scenario, particularly mission durations and the ability of the crews to return to Earth or seek safe haven in the event of environmental contamination of their primary vehicle. The pressurized space suit is an essential enabler for human exploration and eventual settlement of space. A key problem is reducing resistance of the suit to movements of the human occupant. In 2007, NASA sponsored the Astronaut Glove Centennial Challenge which sought non-traditional competitors who could create better gloves than NASA's own state-of-the-art. The winning gloves were developed and built by the author on his dining room table. 
This and later developments followed a process that is different from most aerospace programs in that it emphasized iterative hands-on build-test-learn cycles over more traditional design and analysis cycles, yet produced significant innovations in glove flexibility, bending resistance compensation, and ability to fabricate custom-patterned (bespoke) gloves with less effort than standard-sized gloves. Gloves incorporating this technology have demonstrated twice the flexibility of current designs, which allows higher suit operating pressure and zero pre-breathe extravehicular activity. These developments are directly applicable to the larger joints of a space suit. The author is applying the knowledge learned to develop space suit pressure garments for the revolutionary rocket company SpaceX. Traditionally, the ecosystem of space was controlled by national activity and was a state-only playground. In the past several years, we have witnessed dramatic changes in global space activity toward greater involvement by the private sector. These changes come together under the overarching expression, New Space. New Space is drawing considerable attention from many in the space sector. Nevertheless, what does it mean? This question may be answered in many different ways: innovative technologies, entrepreneurial activity, new models for R&D, commercialization, financing, new frontiers and explorations, etc. Understanding the complexity of the changes in the ecosystem of space is important to better forecast its implications, opportunities, and challenges. This article provides an overview and analysis of the differences between the ecosystems of Old and New Space. The practice of launching multiple payloads on a single space launch vehicle is becoming increasingly popular, with the mean number of payloads per launch increasing from 1.45 payloads per launch in 2000 to 1.84 payloads per launch in 2013. 
A best-fit descending algorithm was used to reassign existing payloads to launch vehicles with the goal of reducing launch vehicle usage and wastage of payload capacity. Assigning the existing set of geosynchronous payloads to a minimum number of launch vehicles of a single existing type can reduce wastage to as low as 2% in some cases, compared with a current wastage of 15%. Extending this technique to a scenario with multiple types of available launch vehicles, with minimizing total cost as the objective, shows that savings of as much as 45% in cost per payload mass delivered to geosynchronous orbit are possible by rearranging current payloads and changing usage of current launch vehicles. In terms of the required energy and cost, reaching Earth orbit from the ground continues to be a fundamental obstacle hampering the use of space and its exploration at large. A strategic approach to overcoming this obstacle and making access to space affordable, both for orbital use and deep space exploration, is proposed and substantiated. The interlayer (IL) plays a vital role in hybrid white organic light-emitting diodes (WOLEDs); however, little attention has been given to n-type ILs. Herein, an n-type IL is demonstrated, for the first time, to achieve a favorable trade-off among high efficiency, high color rendering index (CRI), and low voltage. The device exhibits a maximum total efficiency of 41.5 lm W(-1), the highest among hybrid WOLEDs with n-type ILs. In addition, high CRIs (80-88) at practical luminances (>=1000 cd m(-2)) have been obtained, satisfying the demand for indoor lighting. Remarkably, a CRI of 88 is the highest among hybrid WOLEDs. Moreover, the device exhibits low voltages, with a turn-on voltage of only 2.5 V (>1 cd m(-2)), which is the lowest among hybrid WOLEDs. 
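The best-fit descending payload assignment described in the manifesting study above can be sketched as a classic bin-packing heuristic: sort payloads by mass, place each on the vehicle with the least remaining capacity that still fits, and manifest a new vehicle only when none fits. The payload masses and the 6,500 kg capacity below are hypothetical illustration values, not figures from the article:

```python
def best_fit_descending(payload_masses, vehicle_capacity):
    """Assign payload masses (kg) to identical launch vehicles using the
    best-fit descending bin-packing heuristic. Returns one manifest
    (list of masses) per vehicle used."""
    manifests, spare = [], []  # spare[i] = unused capacity on vehicle i
    for mass in sorted(payload_masses, reverse=True):
        if mass > vehicle_capacity:
            raise ValueError(f"payload of {mass} kg exceeds vehicle capacity")
        # Best fit: among vehicles that can still take this payload,
        # pick the one with the least spare capacity.
        candidates = [i for i, s in enumerate(spare) if s >= mass]
        if candidates:
            best = min(candidates, key=lambda i: spare[i])
            manifests[best].append(mass)
            spare[best] -= mass
        else:  # no existing vehicle fits: manifest a new one
            manifests.append([mass])
            spare.append(vehicle_capacity - mass)
    return manifests

def wastage(manifests, vehicle_capacity):
    """Fraction of total purchased lift capacity left unused."""
    total = vehicle_capacity * len(manifests)
    return 1 - sum(sum(m) for m in manifests) / total

# Hypothetical GTO payload masses (kg) and a hypothetical 6,500 kg vehicle
payloads = [5400, 4800, 3100, 2900, 2400, 1900, 1500, 600]
manifests = best_fit_descending(payloads, 6500)
print(len(manifests), f"{wastage(manifests, 6500):.0%}")  # 4 vehicles, 13% wastage
```

The descending sort matters: placing large payloads first leaves the small ones to fill residual capacity, which is what drives the wastage reduction reported in the study.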
The intrinsic working mechanism of the device has also been explored; in particular, the role of n-type ILs in regulating the distribution of charges and excitons has been unveiled. The findings demonstrate that the introduction of n-type ILs is effective in developing high-performance hybrid WOLEDs. In this study, the effect of reduced graphene oxide (rGO) on interconnected Co3O4 nanosheets and the resulting improvement in supercapacitive behavior is reported. By optimizing the experimental parameters, we achieved a specific capacitance of ~1016.4 F g(-1) for the Co3O4/rGO/NF (nickel foam) system at a current density of 1 A g(-1). However, the Co3O4/NF structure without rGO only delivers a specific capacitance of ~520.0 F g(-1) at the same current density. The stability test demonstrates that Co3O4/rGO/NF retains ~95.5% of the initial capacitance value even after 3000 charge-discharge cycles at a high current density of 7 A g(-1). Further investigation reveals that the capacitance improvement for the Co3O4/rGO/NF structure is mainly because of a higher specific surface area (~87.8 m(2) g(-1)) and a more optimal mesoporous size (4-15 nm) compared to the corresponding values of 67.1 m(2) g(-1) and 6-25 nm, respectively, for the Co3O4/NF structure. rGO and the thinner Co3O4 nanosheets facilitate strain relaxation during the charge and discharge processes, improving the cycling stability of Co3O4/rGO/NF. As a hole transport layer, PEDOT:PSS usually limits the stability and efficiency of perovskite solar cells (PSCs) due to its hygroscopic nature and inability to block electrons. Here, a graphene-oxide (GO)-modified PEDOT:PSS hole transport layer was fabricated by spin-coating a GO solution onto the PEDOT:PSS surface. PSCs fabricated on a GO-modified PEDOT:PSS layer exhibited a power conversion efficiency (PCE) of 15.34%, which is higher than the 11.90% of PSCs with an unmodified PEDOT:PSS layer. 
Furthermore, the stability of the PSCs was significantly improved, with the PCE remaining at 83.5% of the initial value after aging for 39 days in air. The hygroscopic PSS material at the PEDOT:PSS surface was partly removed during spin-coating with the GO solution, which improves the moisture resistance and decreases the contact barrier between the hole transport layer and perovskite layer. The scattered distribution of the GO at the PEDOT:PSS surface exhibits superior wettability, which helps to form a high-quality perovskite layer with better crystallinity and fewer pin holes. Furthermore, the hole extraction selectivity of the GO further inhibits the carrier recombination at the interface between the perovskite and PEDOT:PSS layers. Therefore, the cooperative interactions of these factors greatly improve the light absorption of the perovskite layer, the carrier transport and collection abilities of the PSCs, and especially the stability of the cells. Safe, sustainable, and green production of hydrogen peroxide is an exciting proposition due to the role of hydrogen peroxide as a green oxidant and energy carrier for fuel cells. The current work reports the development of carbon dot-impregnated waterborne hyperbranched polyurethane as a heterogeneous photo-catalyst for solar-driven production of hydrogen peroxide. The results reveal that the carbon dots possess a suitable band-gap of 2.98 eV, which facilitates effective splitting of both water and ethanol under solar irradiation. Inclusion of the carbon dots within the eco-friendly polymeric material ensures their catalytic activity and also provides a facile route for easy catalyst separation, especially from a solubilizing medium. The overall process was performed in accordance with the principles of green chemistry using bio-based precursors and aqueous medium. This work highlights the potential of carbon dots as an effective photo-catalyst. 
The electrochemical performance of a battery is considered to be primarily dependent on the electrode material. However, engineering and optimization of electrodes also play a crucial role, and the same electrode material can be designed to offer significantly improved batteries. In this work, Si-Fe-Mn nanomaterial alloy (Si/alloy) and graphite composite electrodes were densified at different calendering conditions of 3, 5, and 8 tons, and the influence of densification on electrode porosity, electrolyte wettability, and long-term cycling was investigated. The active material loading was maintained very high (~2 mg cm(-2)) to implement electrode engineering close to commercial loading scales. The densification was optimized to balance between the electrode thickness and wettability to enable the best electrochemical properties of the Si/alloy anodes. In this case, engineering and optimizing the Si/alloy composite electrodes to 3 ton calendering (electrode densification from 0.39 to 0.48 g cm(-3)) showed enhanced cycling stability with a high capacity retention of ~100% over 100 cycles. Stretchable electronic sensing devices are defining the path toward wearable electronics. High-performance flexible strain sensors attached on clothing or human skin are required for potential applications in the entertainment, health monitoring, and medical care sectors. In this work, conducting copper electrodes were fabricated on polydimethylsiloxane as sensitive stretchable microsensors by integrating laser direct writing and transfer printing approaches. The copper electrode was reduced from copper salt using laser writing rather than the general approach of printing with pre-synthesized copper or copper oxide nanoparticles. An electrical resistivity of 96 μΩ cm was achieved on 40-μm-thick Cu electrodes on flexible substrates. The motion sensing functionality successfully demonstrated a high sensitivity and mechanical robustness. 
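As a quick sanity check on the reported copper-electrode figures, the resistivity and thickness together imply a sheet resistance via R_s = ρ/t. The comparison to bulk copper (~1.68 μΩ cm, a textbook value, not from the article) is added here for context:

```python
# Sheet resistance implied by the reported laser-written Cu electrode:
# rho = 96 uOhm*cm (reported), t = 40 um (reported), R_s = rho / t.
rho = 96e-6 * 1e-2          # 96 uOhm*cm converted to Ohm*m
t = 40e-6                   # 40 um electrode thickness, in m
sheet_resistance = rho / t  # in Ohm per square
print(f"{sheet_resistance * 1e3:.0f} mOhm/sq")  # prints "24 mOhm/sq"

# For context: bulk Cu resistivity is ~1.68 uOhm*cm (textbook value),
# so the laser-reduced film is roughly 57x more resistive than bulk,
# which is typical of printed or chemically reduced metal traces.
ratio_to_bulk = 96 / 1.68
```

A 24 mΩ/sq sheet resistance is low enough for the interconnect and strain-gauge roles described, which is consistent with the sensing performance the abstract reports.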
This in situ fabrication method leads to a path toward electronic devices on flexible substrates. Metal-organic frameworks (MOFs) are of great interest as potential electrochemically active materials. However, few studies have examined whether control of the shape and components of MOFs can optimize their electrochemical performance, because the rational realization of specific shapes and component control of MOFs remains a significant challenge. Herein, we demonstrate a solvothermal method to realize nanostructure engineering of 2D nanoflake MOFs. Hollow structures with Ni/Co- and Ni-MOF (denoted as Ni/Co-MOF nanoflakes and Ni-MOF nanoflakes) were assembled to optimize their electrochemical performance in supercapacitors and in the oxygen reduction reaction (ORR). As a result, the Ni/Co-MOF nanoflakes exhibited remarkably enhanced performance with a specific capacitance of 530.4 F g(-1) at 0.5 A g(-1) in 1 M LiOH aqueous solution, much higher than that of Ni-MOF (306.8 F g(-1)) and ZIF-67 (168.3 F g(-1)), a good rate capability, and a robust cycling performance with no capacity fading after 2000 cycles. Ni/Co-MOF nanoflakes also showed improved electrocatalytic performance for the ORR compared to Ni-MOF and ZIF-67. The present work highlights the significant role of tuning 2D nanoflake ensembles of Ni/Co-MOF in accelerating electron and charge transport for optimizing energy storage and conversion devices. Anisotropic materials, like carbon nanotubes (CNTs), are ideal substitutes to overcome the limitations of conventional metamaterials; however, the successful fabrication of CNT forest metamaterial structures is still very challenging. In this study, a new method utilizing a focused ion beam (FIB) with additional secondary etching is presented, which can obtain uniform and fine patterning of CNT forest nanostructures for metamaterials, ranging in size from hundreds of nanometers to several micrometers. 
The influence of the FIB processing parameters on the morphology of the catalyst surface and the growth of the CNT forest was investigated, including the removal of redeposited material, a decrease in the average surface roughness (from 0.45 to 0.15 nm), and a decrease in the thickness of the Fe catalyst. The results showed that the combination of FIB patterning and secondary etching enabled the growth of highly aligned, high-density CNT forest metamaterials. The improvement in the quality of single-walled CNTs (SWNTs), defined by the very high G/D peak intensity ratio of 10.47, demonstrated successful fine patterning of the CNT forest for the first time. With a FIB patterning depth of 10 nm and a secondary etching of 0.5 nm, a minimum size of 150 nm of CNT forest metamaterials was achieved. The development of the FIB secondary etching method enabled, for the first time, the fabrication of SWNT forest metamaterials for the optical and infrared regime, for future applications, e.g., in superlenses, antennas, or thermal metamaterials. One-dimensional (1D, wire- and fiber-shaped) supercapacitors have recently attracted interest due to their roll-up capability, micrometer size, and potential applications in portable or wearable electronics. Herein, a 1D wire-shaped electrode was developed based on Fe3O4 nanosheet arrays connected on an Fe wire, which was prepared via oxidation of the Fe wire in 0.1 M KCl solution (pH 3) in an O2-rich environment at 70 degrees C. The obtained Fe3O4 nanosheet arrays displayed a high specific capacitance (20.8 mF cm(-1) at 10 mV s(-1)) and a long cycling lifespan (91.7% retention after 2500 cycles). The excellent performance may be attributed to the connected nanosheet structure with abundant open spaces and the intimate contact between the Fe3O4 and the iron substrate. 
In addition, a wire-shaped asymmetric supercapacitor was fabricated and had excellent capacitive properties with a high energy density (9 mu Wh cm(-2)) at a power density of 532.7 mu W cm(-2) and remarkable long-term cycling performance (99% capacitance retention after 2000 cycles). Considering the low cost and earth abundance of the electrode material, as well as its outstanding electrochemical properties, the assembled supercapacitor possesses enormous potential for practical applications in portable electronic devices. This paper charts the history of the Rockefeller Foundation's participation in the collection and long-term preservation of genetic diversity in crop plants from the 1940s through the 1970s. In the decades following the launch of its agricultural program in Mexico in 1943, the Rockefeller Foundation figured prominently in the creation of world collections of key economic crops. Through the efforts of its administrators and staff, the foundation subsequently parlayed this experience into a leadership role in international efforts to conserve so-called plant genetic resources. Previous accounts of the Rockefeller Foundation's interventions in international agricultural development have focused on the outcomes prioritized by foundation staff and administrators as they launched assistance programs, and especially their characterization of the peoples and "problems" they encountered abroad. This paper highlights instead how foundation administrators and staff responded to a newly emergent international agricultural concern: the loss of crop genetic diversity. Charting the foundation's responses to this concern, which developed only after agricultural modernization had begun and was understood to be produced by the successes of the foundation's own agricultural assistance programs, allows for greater interrogation of how the foundation understood and projected its central position in international agricultural research activities by the 1970s. 
Few of Stephen Jay Gould's accomplishments in evolutionary biology have received more attention than his hierarchical theory of evolution, which postulates a causal discontinuity between micro- and macroevolutionary events. But Gould's hierarchical theory was his second attempt to supply a theoretical framework for macroevolutionary studies, and one he did not inaugurate until the mid-1970s. In this paper, I examine Gould's first attempt: a proposed fusion of theoretical morphology, multivariate biometry, and the experimental study of adaptation in fossils. This early "macroevolutionary synthesis" was predicated on the notion that parallelism and convergence dominate the history of higher taxa, and moreover, that they can be explained in terms of adaptation leading to mechanical improvement. In this paper, I explore the origins and contents of Gould's first macroevolutionary synthesis, as well as the reasons for its downfall. In addition, I consider how various developments during the mid-1970s led Gould to identify hierarchy and constraint as the leading themes of macroevolutionary studies, and adaptation as a macroevolutionary red herring. The generation of animals was a difficult phenomenon to explain in the seventeenth century, having long been a problem in natural philosophy, theology, and medicine. In this paper, I explore how generation, understood as epigenesis, was directly related to an idea of rational nature. I examine epigenesis, the idea that the embryo was constructed part by part, over time, in the work of two seemingly dissimilar English philosophers: William Harvey, an eclectic Aristotelian, and Margaret Cavendish, a radical materialist. I chart the ways that they understood and explained epigenesis, given their differences in philosophy and ontology. I argue for the importance of ideas of harmony and order in structuring their accounts of generation as a rational process. 
I link their experiences during the English Civil War to how they saw nature as a possible source for the rationality and concord sorely missing in human affairs. Historians and philosophers of twentieth-century life sciences have demonstrated that the choice of experimental organism can profoundly influence research fields, in ways that sometimes undermined the scientists' original intentions. The present paper aims to enrich and broaden the scope of this literature by analysing the career of unicellular green algae of the genus Chlorella. They were introduced for the study of photosynthesis in 1919 by the German cell physiologist Otto H. Warburg, and they became the favourite research objects in this field up to the 1960s. The paper argues that dealing with Chlorella's high metabolic flexibility was crucial for the emergence of a new conception of photosynthesis as a plastic, integrated system of pathways. At the same time, it led to new collaborations between physiologists and phycologists, both of whom started to re-orient their studies in ecologically informed directions. Following Chlorella's trail hence not only elucidates how experimental organisms forced scientists to change their conceptual approaches and techniques, but also provides insight into the interaction of different lines of research in mid-twentieth-century plant sciences. This paper builds on previous work that investigated anticancer drugs as 'informed materials', i.e., substances that undergo an informational enrichment that situates them in a dense relational web of qualifications and measurements generated by clinical experiments and clinical trials. The paper analyzes the recent transformation of anticancer drugs from 'informed' to 'informing material'. Briefly put: in the post-genomic era, anticancer drugs have become instruments for the production of new biological, pathological, and therapeutic insights into the underlying etiology and evolution of cancer. 
Genomic platforms characterize individual patients' tumors based on their mutational landscapes. As part of this new approach, drugs targeting specific mutations transcend informational enrichment to become tools for informing (and destabilizing) their targets, while also problematizing the very notion of a 'target'. In other words, they have become tools for the exploration of cancer pathways and mechanisms. While several studies in the philosophy and history of biomedicine have called attention to the heuristic relevance and experimental use of drugs, few have investigated concrete instances of this role of drugs in clinical research. Artificial insemination and other fertilization techniques are today considered central to the history of reproductive medicine. The medical treatment of infertile couples, however, constitutes just a small part of the whole story of artificial fertilization. Lazzaro Spallanzani (1729-1799), in particular, said to have been the inventor of artificial insemination, did not develop this method for medical purposes. He belonged to a generation of naturalists to whom artificial insemination was part of a heterogeneous series of investigations that were undertaken to explore the natural history of animal generation. Questions concerning conception, the role of the gametes, the definition of species, the production of hybrids, or livestock breeding were all included in these investigations. Thus, no one strain of thought, nor single set of ideas or interests, entirely shaped the development of artificial fertilization. Johann Friedrich Blumenbach (1752-1840) is widely known as the father of German vitalism, and his notion of Bildungstrieb, or nisus formativus, has been recognized as playing a key role in the debates about generation in German-speaking countries around 1800. 
On the other hand, Caspar Friedrich Wolff (1734-1794) was the first to employ a vitalist notion, namely that of vis essentialis, in the explanatory framework of epigenetic development. Is there a difference between Wolff's vis essentialis and Blumenbach's nisus formativus? How does this difference influence their overall understanding of the epigenetic process? The paper aims to provide an answer to these questions through the analysis of a little-known document, which helps to shed light on a crucial chapter of the German life sciences in the late eighteenth century, namely the decisive phase of the process that led to the formalization of biology as a unified field of inquiry at the beginning of the nineteenth century. Definitions of (care) pathways, their advantages and disadvantages, and the associated theories have been reported since the 1950s. Important in the definitions is that clinical care pathways are defined for a single examination process and that variances in clinical examination or clinical setting introduce variances in the clinical pathway. The objective of this study was to provide stochastic evidence necessary to establish a radiology care map that has the right people, doing the right things, in the right order, at the right time, in the right place, with the right outcome. Following a rigorous ethics approval process, data were collected from all consenting departments, radiographers (in their individual professional capacities), and a random sample of patients through document review, interview, and observational research approaches. The outcome of the study supports blurring the scope-of-practice boundaries and timely execution of radiography examinations. However, there remains a need for further research to map care pathways for other radiology procedures and for patients whose care is more variable and less standard. To be effective, nurse care coordination must be targeted at individuals who will use the service. 
The purpose of this study was to identify variables that predicted use of care coordination by primary care patients. Data on the potential predictor variables were obtained from patient interviews, the electronic health record, and an administrative database of 178 adults eligible for care coordination. Use of care coordination was obtained from an administrative database. A multivariable logistic regression model was developed using a bootstrap sampling approach. Variables predicting use of care coordination were dependence in both activities of daily living (ADL) and instrumental activities of daily living (IADL; odds ratio [OR] = 5.30, p = .002), independent for ADL but dependent for IADL (OR = 2.68, p = .01), and number of prescription medications (OR = 1.12, p = .002). Consideration of these variables may improve identification of patients to target for care coordination. This study examined the effects of an educative, self-regulation intervention on blood pressure self-efficacy, self-care outcomes, and blood pressure control in adults receiving hemodialysis. Simple randomization was done at the hemodialysis unit level. One hundred eighteen participants were randomized to usual care (n = 59) or intervention group (n = 59). The intervention group received blood pressure education sessions and 12 weeks of individual counseling on self-regulation of blood pressure, fluid, and salt intake. There was no significant increase in self-efficacy scores within (F = .55, p = .46) or between groups at 12 weeks (F = 2.76, p = .10). Although the intervention was not successful, results from the total sample (N = 118) revealed that self-efficacy was significantly related to a number of self-care outcomes including decreased salt intake, lower interdialytic weight gain, increased adherence to blood pressure medications, and fewer missed hemodialysis appointments. Increased blood pressure self-efficacy was also associated with lower diastolic blood pressure. 
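The bootstrap-based regression analysis in the care-coordination study above can be illustrated with a minimal sketch. Everything below is simulated for illustration only (the variable names, effect sizes, and data are invented, not the study's): patients are resampled with replacement and an odds ratio is recomputed on each resample to obtain a percentile confidence interval.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated cohort of 178 adults (matching the study's n, but the data
# themselves are invented): 'dependent' flags ADL/IADL dependence,
# 'used' flags use of care coordination.
n = 178
dependent = rng.integers(0, 2, size=n)
p_use = np.where(dependent == 1, 0.60, 0.25)  # assumed effect, illustration only
used = (rng.random(n) < p_use).astype(int)

def odds_ratio(x, y):
    """Odds ratio of y=1 for x=1 vs x=0, with Haldane correction."""
    a = np.sum((x == 1) & (y == 1)) + 0.5
    b = np.sum((x == 1) & (y == 0)) + 0.5
    c = np.sum((x == 0) & (y == 1)) + 0.5
    d = np.sum((x == 0) & (y == 0)) + 0.5
    return (a * d) / (b * c)

# Bootstrap: resample patients with replacement and recompute the OR.
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, size=n)
    boot[i] = odds_ratio(dependent[idx], used[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"OR = {odds_ratio(dependent, used):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The same resampling idea extends to a full multivariable logistic model, where each bootstrap replicate refits the regression and the coefficient distributions supply the interval estimates.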
The purpose of this study was to examine the effect of a self-efficacy intervention on primiparous mothers' breastfeeding behaviors. Participants were recruited from an antenatal clinic at a university-affiliated hospital. Seventy-five primiparous mothers were recruited from November 2013 to February 2014 for the control group, and 75 primiparous mothers were recruited from March to June 2014 for the intervention group. The intervention group participated in a 1-hr prenatal breastfeeding workshop and a 1-hr breastfeeding counseling session within 24 hr after delivery. The Breastfeeding Self-Efficacy Scale-Short Form and the infant feeding method were assessed at hospital discharge, as well as 4 and 8 weeks postpartum. The breastfeeding support program was found to be effective and beneficial to mothers. Nurses should incorporate breastfeeding self-efficacy interventions into their routine care to support new mothers and to increase their breastfeeding self-efficacy and the duration of their breastfeeding exclusivity. The purpose of this article is to describe how we adhere to the Patient-Centered Outcomes Research Institute's (PCORI) methodology standards relevant to the design and implementation of our PCORI-funded study, the PAINRelieveIt Trial. We present details of the PAINRelieveIt Trial organized by the PCORI methodology standards and components that are relevant to our study. The PAINRelieveIt Trial adheres to four PCORI standards and 21 subsumed components. The four standards include standards for formulating research questions, standards associated with patient centeredness, standards for data integrity and rigorous analyses, and standards for preventing and handling missing data. In the past 24 months, we screened 2,837 cancer patients and their caregivers; 874 dyads were eligible; 223.5 dyads consented and provided baseline data. Only 55 patients were lost to follow-up, a 25% attrition rate. 
The design and implementation of the PAINRelieveIt Trial adhered to PCORI's methodology standards for research rigor. Pediatric Early Warning Scores are advocated to assist health professionals to identify early signs of serious illness or deterioration in hospitalized children. Scores are derived from the weighting applied to recorded vital signs and clinical observations reflecting deviation from a predetermined "norm." Higher aggregate scores trigger an escalation in care aimed at preventing critical deterioration. Process errors made while recording these data, including plotting or calculation errors, have the potential to impede the reliability of the score. To test this hypothesis, we conducted a controlled study of documentation using five clinical vignettes. We measured the accuracy of vital sign recording, score calculation, and time taken to complete documentation using a handheld electronic physiological surveillance system, VitalPAC Pediatric, compared with traditional paper-based charts. We explored the user acceptability of both methods using a Web-based survey. Twenty-three staff participated in the controlled study. The electronic physiological surveillance system improved the accuracy of vital sign recording, 98.5% versus 85.6%, P<.02, Pediatric Early Warning Score calculation, 94.6% versus 55.7%, P<.02, and saved time, 68 versus 98 seconds, compared with paper-based documentation, P<.002. Twenty-nine staff completed the Web-based survey. They perceived that the electronic physiological surveillance system offered safety benefits by reducing human error while providing instant visibility of recorded data to the entire clinical team. Nurses comprise the largest segment of the healthcare workforce. As such, their perceptions of any new technology are important to understand, as it may ultimately mean the difference between acceptance and rejection of a product. 
The three-stage meaningful use program is intended to help improve and standardize data capture and advance clinical processes to improve patient and population outcomes in the US. With more than 471,000 healthcare providers having already received meaningful use incentive payments totaling more than $20 billion as of June 2015, it is critical to understand how these technologies are being viewed and utilized in practice. Understanding nurses' attitudes toward healthcare technology may help drive acceptance, as well as maximize the inherent potential of the new technologies toward improving patient care. Thus, the purpose of this integrative review is to highlight what is known about nurses' attitudes toward meaningful use technologies. Heart failure is a chronic condition where symptom recognition and between-visit communication with providers are critical. Patients are encouraged to track disease-specific data, such as weight and shortness of breath. Use of a Web-based tool that facilitates data display in graph form may help patients recognize exacerbations and more easily communicate out-of-range data to clinicians. The purposes of this study were to (1) design a Web-based tool to facilitate symptom monitoring and symptom recognition in patients with chronic heart failure and (2) conduct a usability evaluation of the Web site. Patient participants generally had a positive view of the Web site and indicated it would support recording their health status and communicating with their doctors. Clinician participants generally had a positive view of the Web site and indicated it would be a potentially useful adjunct to electronic health delivery systems. Participants expressed a need to incorporate decision support within the site and wanted to add other data, for example, blood pressure, and have the ability to adjust font size. A few expressed concerns about data privacy and security. 
Technologies require careful design and testing to ensure they are useful, usable, and safe for patients and do not add to the burden of busy providers. The purpose of this study was to develop a Web video designed to promote regular shoulder joint exercise on a continuous basis among patients with shoulder joint disease. This is a methodological study. A shoulder joint exercise video was developed through the five stages of the ADDIE model: analysis, design, development, implementation, and evaluation. The video demonstrates exercises that stretch and strengthen the joints and muscles of the shoulders. Stretching exercises include the pendulum, forward elevation, outer rotation, crossover arm stretch, inner rotation, and the sleeper; strengthening exercises include dumbbell exercises, a chair exercise, wall push-ups, and rowing. This Web exercise video can be used as an educational resource for preventing shoulder joint diseases by middle-aged and elderly people and those seeking to restore shoulder joint function damaged by shoulder joint diseases. Drug dosage calculation skill is critical for all nursing students to ensure patient safety, particularly during clinical practice. The study purpose was to evaluate the effectiveness of Web-based instruction on improving nursing students' arithmetical and drug dosage calculation skills using a pretest-posttest design. A total of 63 nursing students participated. Data were collected through the Demographic Information Form, and the Arithmetic Skill Test and Drug Dosage Calculation Skill Test were used as pre- and posttests. The pretest was conducted in the classroom. A Web site was then constructed, which included audio presentations of lectures, quizzes, and online posttests. Students had Web-based training for 8 weeks and then they completed the posttest. 
Pretest and posttest scores were compared using the Wilcoxon test, and correlation coefficients were used to identify the relationship between arithmetic and calculation skills scores. The results demonstrated that Web-based teaching improves students' arithmetic and drug dosage calculation skills. There was a positive correlation between the arithmetic skill and drug dosage calculation skill scores of students. Web-based teaching programs can be used to improve knowledge and skills at a cognitive level in nursing students. Purpose: The purpose of this paper is to report the results of a study carried out to examine the effects of hospital board governance and managerial competencies on accountability in the health care systems in Uganda. Design/methodology: This study is cross-sectional and correlational. This study utilizes multiple regression models based on a sample of 52 government hospitals. The study's units of inquiry are hospital directors and accountants. Findings: The correlation results indicate a significant positive relationship between managerial competencies and accountability. The study further finds that board governance is not significantly correlated with the accountability of government hospitals. In terms of hospital governance dimensions, board composition is positively and significantly related to accountability, unlike board structure and board independence. Research limitations/implications: The measurements used in all the predictor variables may not perfectly represent all the dimensions, although they have been defined as precisely as possible by drawing upon relevant literature. Therefore, further research on other factors that explain the variance in accountability in the health sector is needed. Originality/value: In this paper, we provide the effects of hospital board governance and managerial competencies on accountability in a single study. 
In this study, managerial competency has greater power in explaining accountability than board governance. The findings have important implications for effective accountability in the health sector. The maximum-downrange trajectory optimization problem with multiple phases and multiple constraints corresponding to the flight of a boost-glide vehicle is considered. The longitudinal motion model was built as a multiphase optimization problem under constraints. Legendre-Gauss-Radau collocation points were used to transcribe the optimization problem into a finite-dimensional nonlinear programming problem, and the maximum-downrange trajectory was obtained based on adaptive mesh refinement pseudospectral methods. However, sometimes it is difficult to find interior points without position constraints. A novel optimization strategy based on dynamic programming theory was proposed to search for the free interior points more accurately and quickly, which resulted in almost the same optimized trajectory while producing a smaller mesh. The results of numerical examples showed that the boost-glide vehicle trajectory optimization problem can be solved effectively using the adaptive mesh refinement pseudospectral methods. Artificial neural networks are an established technique for constructing non-linear models of multi-input-multi-output systems based on sets of observations. In terms of aerospace vehicle modeling, however, these are currently restricted to either unmanned applications or simulations, despite the fact that large amounts of flight data are typically recorded and kept for reasons of safety and maintenance. In this paper, a methodology for constructing practical models of aerospace vehicles based on available flight data recordings from the vehicles' operational use is proposed and applied to the Jetstream G-NFLA aircraft. This includes a data analysis procedure to assess the suitability of the available flight databases and a neural network-based approach for modeling. 
In this context, a database of recorded landings of the Jetstream G-NFLA, normally kept as part of a routine maintenance procedure, is used to form training datasets for two separate applications. A neural network-based longitudinal dynamic model and a gust identification system are constructed and tested against real flight data. Results indicate that, in both cases, the resulting models' predictions achieve a level of accuracy that allows them to be used as a basis for practical real-world applications. In impulsive orbital maneuvers, thrust vector misalignment from the center of mass is a serious source of disturbance torque. A high-capacity attitude control system is needed to compensate for this large exogenous disturbance. In this paper, a new retrofiring control method is proposed and studied, based on the combination of a 1DoF gimbaled thrust vector control and a spin-stabilization method. Spin-axis stabilization and disturbance rejection are considered as the two main attitude control objectives. The nonlinear two-body dynamics of a small spacecraft is derived, in which the dynamical interaction between the nozzle and the body is significant. A reaction control system is not used, and the only active control part is a 1DoF gimbal. The spacecraft design efficiency is very important; therefore, the H-infinity performance and the control gain norm are chosen as two conflicting cost functions in the Pareto-front multiobjective optimization. Many Pareto fronts are given for ranges of two favorable parameters: (1) spin rate and (2) spin-axis moment of inertia. The optimization variables are the closed-loop system poles. Moreover, a pole-region constraint is employed to obtain a well-damped transient response. From the perspective of performance and design efficiency, the optimization results give many attractive outcomes. The resulting system is an efficient design for a small spacecraft. 
Furthermore, numerical simulations are included to confirm the optimization results and illustrate the superiority of the proposed method compared to spin-stabilization alone. The influence of end-wall roughness on the performance of an axial compressor stage was investigated, with different values of roughness added to the hub and shroud surfaces of a transonic compressor stage, NASA Stage 35. First, the numerical code was validated against the experimental data, which were available from the open literature. Afterwards, the model was applied to simulate the effect of end-wall roughness with different amplitudes of dimensionless sand-grain roughness height. Numerical results indicated that the increase of end-wall roughness caused a deterioration of compressor stage performance. To understand the mechanism behind this, the distributions of loading and losses and the detailed flow near the end-wall region were analyzed and discussed. The results show that the overall performance drop is mainly due to the thickened end-wall boundary layer. Near the hub region, the rough hub surface induces a larger corner stall, and the shock wave moves upstream. Meanwhile, the casing roughness leads to a slight increase of tip leakage mass flow and an extension of the blockage in the circumferential direction. With the development of science and technology, the performance of an aero-engine has been given more rigorous requirements. The seal device is an important component of an aero-engine, and the improvement of its performance may be an efficient way to further improve the performance of an aero-engine. The finger seal is a flexible seal with a high performance-to-price ratio; therefore, it has attracted increasing attention and research recently. A transition from the noncontact state to the contact state occurs in every working cycle of a finger seal, which inevitably leads to the finger seal bearing the impact effect of the rotor. 
But so far, the influence of impact on finger seal performance has not been discussed and researched. To overcome this shortcoming, the stress-strain curves of C/C composite under different impact velocities are obtained with the Gleeble 3500 thermo-simulator system in this paper, and then the elastic modulus of the C/C composite in three directions is calculated from the experimental data. The effects of impact velocity and impact damping on the impact force are analyzed by means of impact theory. The new structural stiffness of the finger seal and the impact displacement excitation of the rotor are built through impact effect analysis. On this basis, the equivalent dynamic model of the C/C composite finger seal with distributed mass is established to evaluate the impact effect. With the model, the difference in calculated results with and without the impact effect is analyzed. The effects of impact velocity and coefficient of restitution on the dynamic performance of the finger seal are also analyzed considering the impact effect. The above results show that the impact effect has a significant influence on the leakage and wear of the finger seal; therefore, the impact effect must be considered when the performance of the finger seal is analyzed. A lunar roving vehicle is of great significance in manned lunar exploration for long-duration space exploration missions. The mechanical model of the lunar roving vehicle's wheel directly affects the mobility performance. However, the wheel of a lunar roving vehicle is a special one with a metal mesh surface, and the soil can penetrate through its surface. In this paper, the mechanical model of a rigid normal wheel is deduced, and four stress correction coefficients are introduced to obtain the mechanical model of the lunar roving vehicle's wheel. 
These four stress correction coefficients are identified from several groups of experiments on the lunar roving vehicle's wheel, and using the fitted coefficients gives the mechanical model of the lunar roving vehicle's wheel high precision. An experimental study was conducted to investigate the effects of upstream probe disturbance on compressor cascade performance. The experiments were carried out using a plane cascade test facility, in which the aerodynamic coupling mechanisms between the inner probe and the compressor were evaluated, particularly in terms of the complex adverse pressure gradient. Moreover, the influence of the probe installation position near the cascade leading edge on the downstream flow characteristics was evaluated in detail. Results show that the presence of a probe at the cascade leading edge can reduce the total pressure recovery coefficient, raise the wake loss at the cascade outlet, and deteriorate the periodicity of the conventional cascade flow field distribution. An optimum circumferential installation position upstream of the cascade is found with minimum disturbance, and it is not significantly affected by varying Mach numbers. A probe installed in the middle of the flow passage disturbs the downstream cascade performance less than one installed in front of the cascade leading edge. The electro-hydrostatic actuator is generally regarded as the preferred solution for more-electric aircraft actuation systems. It is important to optimize the weight, efficiency, and other key design parameters during the preliminary design phase. This paper describes a multi-objective optimization preliminary design method for the electro-hydrostatic actuator with the objectives of optimizing weight and efficiency. Models are developed to predict the weight and efficiency of the electro-hydrostatic actuator from the requirements of the control surface. 
The weight prediction models are built using scaling laws with collected data, and the efficiency is calculated with a static energy loss model. The multi-objective optimization approach is used to find the Pareto front of the objectives and the relevant design parameters. The proposed approach is able to explore the influence of the lever length of the linkage, the pump displacement, and the motor torque constant on the weight and efficiency of the electro-hydrostatic actuator, find the Pareto-front designs in the defined parameter space, and satisfy all relevant constraints. Using an electro-hydrostatic actuator for a control surface as a test case, the proposed methodology is demonstrated by comparing three different conditions. It is also envisaged that the proposed prediction models and multi-objective optimization preliminary design method can be applied to other components and systems. A lunar sampler plays a critical role and is of great importance in lunar exploration. In this paper, an error model was established for a novel flexible lunar sampler that has low weight, small volume, large workspace, and low power consumption. Based on its specific configuration, the forward and inverse kinematic models and the kinetostatic model are developed to formulate the volumetric error model of the mechanism. The error model is built by considering three classes of error sources: flexibility-induced errors, structural parameter-induced errors, and joint clearance errors. The flexibility-induced errors are compensated according to experimental data; the second class of errors is modeled based on complete differential-coefficient theory, while the third class is modeled based on a deterministic method. For the third class, contact modes are introduced to build the joint clearance-induced error model. The relationship between the different error sources and the output pose error of the sampling head is obtained. 
Finally, the error distribution in the workspace is evaluated. A rapid and decoupled three-degree-of-freedom trajectory planning approach is presented for maneuvering entry vehicles. A maneuver coefficient is defined to describe the lateral motion of a three-degree-of-freedom trajectory so that the designs of the longitudinal and lateral trajectories are decoupled. The longitudinal drag profile is planned according to the flight range while accounting for the maneuver coefficient. The lateral trajectory is controlled by an adjustable heading error corridor to achieve the required maneuver coefficient. The three-degree-of-freedom trajectory is finally generated from the drag profile and the adjustable corridor. The trajectory planning algorithm is tested on two entry vehicles. Results indicate that this algorithm is capable of planning three-degree-of-freedom trajectories for nominal and maneuvering missions. The feasible region of the maneuver coefficient is investigated for each vehicle. The wide region demonstrates that the proposed planning algorithm is applicable and insensitive to estimation errors of the maneuver coefficient. In this paper, a new approach based on block-oriented nonlinear models is proposed for the modeling and identification of aircraft nonlinear dynamics. Some block-oriented nonlinear models are regarded as flexible structures, which are suitable for the identification of widely applicable dynamic systems, and these models are able to approximate a wide range of system dynamics. In general, aircraft flight dynamics is considered a nonlinear, coupled system whose dynamics, in addition to pilot control inputs, depend on flight conditions such as Mach number and altitude, which cause the aircraft dynamics to have various operating points. 
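Block-oriented structures of the kind invoked here compose a static nonlinearity with linear dynamics. As a minimal illustrative sketch (the nonlinearity, filter coefficients, and input below are assumptions for demonstration, not the identified aircraft models), a discrete-time Hammerstein model passes the input through a static nonlinear block and then a linear filter:

```python
import math

def hammerstein_response(u, nonlinearity, b, a):
    """Simulate a discrete-time Hammerstein model: a static input
    nonlinearity w = f(u) followed by a linear IIR filter
        y[k] = sum_i b[i]*w[k-i] - sum_{j>=1} a[j]*y[k-j].
    All coefficients here are illustrative, not identified values."""
    w = [nonlinearity(uk) for uk in u]  # static nonlinear block
    y = []
    for k in range(len(w)):
        yk = sum(b[i] * w[k - i] for i in range(len(b)) if k - i >= 0)
        yk -= sum(a[j] * y[k - j] for j in range(1, len(a)) if k - j >= 0)
        y.append(yk)
    return y

# Step response: saturation-type nonlinearity feeding a first-order lag.
y = hammerstein_response([1.0] * 20, math.tanh, b=[0.1], a=[1.0, -0.9])
```

A Wiener model simply reverses the order (linear filter, then static nonlinearity), and a Hammerstein-Wiener model sandwiches the filter between two static nonlinearities; identification then amounts to fitting the nonlinearity and the coefficients b, a to input-output data.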
In this study, three types of block-oriented models, namely the Hammerstein, Wiener, and Hammerstein-Wiener models with different nonlinear functions, have been used and compared in order to identify and model the aircraft nonlinear dynamics. These models have been employed in three forms: single-input single-output, multi-input multi-output, and multi-input single-output (MISO); of these, the multi-input single-output form was found to have the smallest errors in aircraft nonlinear dynamics identification. Thus, it has been demonstrated that six separate multi-input single-output models (each with three inputs and one output), trained with experimental flight test data, can model the coupled nonlinear six-degree-of-freedom dynamics of a highly maneuverable aircraft. Rotor noise is one of the most important issues for helicopter designers, and high-speed impulsive noise is particularly intense among the various rotor noise sources due to compressibility. Based on Computational Fluid Dynamics/Ffowcs Williams and Hawkings equations with Penetrable Data Surface (CFD/FW-Hpds) methods and a hybrid optimization technique, a new optimization design procedure is established for rotor blade planforms with low high-speed impulsive noise characteristics. First, in order to accurately capture the unsteady aerodynamic characteristics of the rotor, a CFD simulation based on the moving-embedded grid methodology is developed by solving the compressible Reynolds-averaged Navier-Stokes equations with the Baldwin-Lomax turbulence model. The low-dissipation Roe scheme with Monotone Upstream-centered Schemes for Conservation Laws (MUSCL) reconstruction and the highly efficient implicit lower-upper symmetric Gauss-Seidel scheme are used for spatial and temporal discretization, respectively. Second, taking the CFD results as the sound pressure information input, the high-speed impulsive noise characteristics generated by the transonic rotor are analyzed through a robust numerical method based on the FW-Hpds equations. 
Third, a genetic algorithm and a surrogate model based on radial basis functions are combined as a hybrid optimization technique; during the optimization process, the blade grids are generated by a highly efficient parameterized method. Aiming to minimize the sound pressure level of the rotor in forward flight, parametric analyses of the effects of blade-tip shapes on transonic noise were conducted first. Then, optimization analyses based on the rotor blade with a double-swept and tapered tip were carried out with the aerodynamic performance as a constraint. Compared with the baseline blade, the sound pressure level of the rotor with the optimized blade-tip shape is significantly reduced at the present calculation condition owing to its weaker transonic delocalization in the blade-tip region. In the rotor plane, the absolute peak value of the sound pressure produced by the optimized blade planform is reduced by about 59.1% relative to the baseline, and the reduction in sound pressure level is up to 5.6dB. The Republic of Korea plans to launch a lunar orbiter and lander by 2020. There are several ways to enter lunar orbit: direct transfer, phasing loop transfer, weak stability boundary transfer, and spiral transfer trajectories. In this study, trajectory optimization is investigated for a lunar orbiter using a pattern search method that minimizes the required delta-V for direct lunar transfer. This method generates neighborhood points near the initial condition and then determines whether there is a new point that can reduce the value of the objective function. Classical methods require the gradient and acceleration of the objective function, but pattern search does not. Six poll methods and nine search methods are chosen; thus, 54 combinations of poll and search methods are available. 
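The gradient-free polling idea described above can be illustrated with a minimal compass-search sketch on a toy objective; the objective function, step sizes, and shrink rule below are illustrative assumptions, not the delta-V problem or any of the 54 poll/search variants studied:

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal pattern (compass) search: poll +/- step along each
    coordinate axis and move to the first improving point; if no poll
    point improves the objective, contract the step. No gradient or
    second-derivative information is used anywhere."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # mesh contraction
    return x, fx

# Toy objective with minimum at (3, -1).
xmin, fmin = compass_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                            [0.0, 0.0])
```

Fuller pattern-search implementations add a "search" step (e.g., sampling a surrogate or a wider stencil) before the poll step, which is what the poll/search method combinations above refer to.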
The pattern search method can reduce the required delta-V on average by a few meters per second for a time of flight of five days, and by more than 10m/s for a time of flight of four or six days, regardless of whether translunar injection is performed at the ascending or descending node. A pulse detonation engine with an inner diameter of 50mm and a total length of 1500mm was designed. Gasoline was used as the fuel and air as the oxidizer. A Shchelkin spiral was used as the deflagration-to-detonation transition accelerator. A direct-connected test of the pulse detonation engine was conducted to determine the detonation initiation and propulsion performance at several different operating frequencies. The experimental results indicated that detonation waves were fully initiated in the pulse detonation engine over the operating frequency range of 1-35Hz. As the operating frequency increased, the average thrust of the pulse detonation engine increased nearly linearly, while the optimum equivalence ratio decreased gradually. The volume-specific impulse and mixture-specific impulse showed a similar increasing trend with increasing operating frequency. As the operating frequency increased from 0 to 25Hz, the fuel-specific impulse increased dramatically while the specific fuel consumption decreased quickly. The fuel-specific impulse and specific fuel consumption changed only slightly when the operating frequency exceeded 25Hz. In addition, two groups of computed values of the mixture-specific impulse and fuel-specific impulse were obtained with the Wintenberger model and the Yan model, the latter considering the two-phase effect. Compared with the experimental results, the two computational models reproduced similar variation trends with operating frequency. However, the computed values of mixture-specific impulse and fuel-specific impulse from the Wintenberger model were much higher than the experimental values. 
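The mixture-, fuel-, and volume-specific impulses compared here follow from the standard single-cycle definitions (cycle impulse normalized by reactant weight or by tube volume); the numerical values in the example are purely illustrative assumptions, not the measurements reported above:

```python
G0 = 9.81  # standard gravity, m/s^2

def specific_impulses(impulse, m_mixture, m_fuel, tube_volume):
    """Single-cycle specific-impulse definitions for a detonation tube.
    impulse [N*s], masses [kg], tube_volume [m^3]; values illustrative."""
    isp_mixture = impulse / (m_mixture * G0)  # mixture-specific impulse [s]
    isp_fuel = impulse / (m_fuel * G0)        # fuel-specific impulse [s]
    isp_volume = impulse / tube_volume        # volume-specific impulse [N*s/m^3]
    return isp_mixture, isp_fuel, isp_volume

# Hypothetical single-cycle numbers for a small gasoline/air tube.
isp_mix, isp_f, isp_v = specific_impulses(
    impulse=5.0, m_mixture=3.5e-3, m_fuel=2.2e-4, tube_volume=2.95e-3)
```

Multiplying the cycle impulse by the operating frequency gives the average thrust, which is consistent with the nearly linear thrust growth with frequency noted above while the specific impulses level off.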
When the two-phase effect was considered, the computed values of mixture-specific impulse and fuel-specific impulse were close to the experimental values. The complex structure and small-batch nature of aircraft landing gear parts lead to complex machining error propagation mechanisms and high manufacturing complexity. An Extended Machining Error Propagation Network model is presented to quantitatively analyze the complex coupling relationships in the Small-batch Multistage Machining Process of aircraft landing gear parts. Firstly, to depict the coupling relationships quantitatively, Quality Features are defined to describe the machining precision information of Machining Form Features, and State Elements are defined to describe the running state information of Machining Elements. Then, Machining Form Features, Machining Elements, Quality Features, and State Elements are identified as different network nodes, and the coupling relationships (such as evolving, locating, machining, and attribute) among these nodes are mapped into network edges. Based on the in-process measuring and sensing data of each machining stage, the topological and physical metrics of the network are explored to analyze the error propagation characteristics. Finally, the machining process of an outer cylinder part from an aircraft landing gear is studied to verify the proposed methods. The synthesis, characterization, and pharmacological evaluation of new aryloxyaminopropanol compounds based on substituted (4-hydroxyphenyl)ethanone with alterations in the alkoxymethyl side chain in position 2 and with 2-methoxyphenylpiperazine in the basic part of the molecule are reported. For the in vitro pharmacological evaluation, isolated aorta and atria from normotensive Wistar rats were used. Compared to naftopidil, compounds with ethoxymethyl, propoxymethyl, butoxymethyl, and methoxyethoxymethyl substituents displayed similar alpha(1)-adrenolytic potency. 
Compounds with methoxymethyl, ethoxymethyl, and propoxymethyl substituents caused a significant decrease in both spontaneous and isoproterenol-induced beating of isolated rat atria. Naftopidil and the tested substances containing a butoxymethyl or methoxyethoxymethyl substituent had no effect on the spontaneous or isoproterenol-induced beating. The tested substance with the most pronounced effect was the compound with a propoxymethyl substituent. Its antihypertensive efficacy was investigated in vivo in spontaneously hypertensive rats (SHRs). The systolic blood pressure was found to be significantly lower in SHRs subjected to the treatment for 2 weeks than in untreated SHRs. Naftopidil had no significant effect. In an effort to develop new fluoroquinolones, we synthesized eight compounds and tested them against a panel of bacteria. The design of these compounds was guided by the introduction of the isothiazoloquinolone motif. The three most active compounds in this series, 8-10, demonstrated good antibacterial activity against methicillin-sensitive Staphylococcus aureus and healthcare-acquired methicillin-resistant Staphylococcus aureus (MIC 0.62-6.3 mu g/mL). Further, when these three active compounds were tested for their inhibitory effects on bacterial enzymes, compound 9 was the most effective agent, exhibiting IC50 values of 33.9 and 116.5 mu M in the S. aureus deoxyribonucleic acid (DNA) gyrase supercoiling and topoisomerase IV decatenation assays, respectively. (Arylalkyl)azoles are a class of antiepileptic compounds including nafimidone, denzimol, and loreclezole (LRZ). Nafimidone and denzimol are thought to inhibit voltage-gated sodium channels (VGSCs) and enhance the gamma-aminobutyric acid (GABA)-mediated response. LRZ, a positive allosteric modulator of A-type GABA receptors (GABA(A)Rs), was reported to be sensitive to Asn265 of the beta 2/beta 3 subunit. 
Here, we report new N-[1-(4-chlorophenyl)-2-(1H-imidazol-1-yl)ethylidene]hydroxylamine esters showing anticonvulsant activity in animal models, including the 6-Hz psychomotor seizure test, a model for therapy-resistant partial seizures. We performed molecular docking studies for our active compounds using GABA(A)R and VGSC homology models. These predicted high affinity for the benzodiazepine binding site of GABA(A)R, in line with the experimental results. The binding mode and interactions of LRZ in its putative allosteric binding site of GABA(A)R are also elucidated. Three series of imidazolidinium ligands (NHC precursors) substituted with 4-vinylbenzyl, 2-methyl-1,4-benzodioxane, and N-propylphthalimide groups were synthesized. The N-heterocyclic carbene (NHC) precursors were prepared from N-alkylimidazolines and alkyl halides. The novel NHC precursors were characterized by H-1 NMR, C-13 NMR, FTIR spectroscopy, and elemental analysis techniques. The enzyme inhibition activities of the NHC precursors were investigated against the cytosolic human carbonic anhydrase I and II isoenzymes (hCA I and II) and the acetylcholinesterase (AChE) enzyme. The inhibition parameters (IC50 and K-i values) were calculated by a spectrophotometric method. The inhibition constants (K-i) were found to be in the range of 166.65-635.38nM for hCA I, 78.79-246.17nM for hCA II, and 23.42-62.04nM for AChE. The inhibitory effects of the novel synthesized NHCs were also compared to acetazolamide, a clinical CA isoenzyme inhibitor, and tacrine, a clinical cholinergic enzyme inhibitor. A novel pregabalin derivative named pregsal ((S,E)-3-(((2-hydroxybenzylidene)amino)methyl)-5-methylhexanoic acid) was synthesized by a simple imination reaction between pregabalin and salicylaldehyde and was evaluated in in vivo testing paradigms. The compound was characterized by UV, IR, H-1 and C-13 NMR, HR ESI-MS, and elemental analysis. 
It was screened (30, 50, 75, and 100mg/kg) for antinociceptive, anti-inflammatory, and antipyretic activities in comparison with pregabalin. The synthesized compound significantly attenuated the tonic acetic acid-induced nociceptive pain (30mg/kg (P<0.05), 50mg/kg (P<0.01), 75 and 100mg/kg (P<0.001)) and thermal-induced hyperalgesia (P<0.001). These activities were significantly antagonized (P<0.05, P<0.01, P<0.001) by naloxone and pentylenetetrazole, implicating the involvement of opioidergic and GABAergic mechanisms. The compound also inhibited the temporal inflammatory response and alleviated yeast-induced pyrexia (P<0.05, P<0.01, and P<0.001). These findings suggest that the synthesized compound possesses prospective pain-, inflammation-, and pyrexia-relieving propensities and may therefore serve as a potential drug candidate for the therapeutic management of chronic pain conditions. Macrocyclic diterpenes were previously found to be able to modulate the efflux pump activity of Candida albicans multidrug transporters. Most of these compounds were jatrophanes, and only a small number of lathyrane-type diterpenes were evaluated. Therefore, the aim of this study was to evaluate the ability of nineteen structurally related lathyrane diterpenes (1-19) to overcome the drug-efflux activity of the Cdr1p and Mdr1p transporters of C. albicans, and to gain some insight into their structure-activity relationships. The transport assay was performed by monitoring Nile Red (NR) efflux in a Saccharomyces cerevisiae strain overexpressing the aforementioned efflux pumps from C. albicans. Moreover, a chemosensitization assay was performed in order to evaluate the type of interaction between the inhibitory compounds and the antifungal drug fluconazole. Compounds 1-13 were previously isolated from Euphorbia boetica or obtained by derivatization, and compounds 14-19 were prepared by chemical transformations of compound 4. 
In the transport assays, compounds 14-19 revealed the strongest inhibitory activity against the Cdr1p efflux pump, ranging from 65 to 85%. Concerning the Mdr1p efflux pump, the most active compounds were 1, 3, 6, 8, and 12 (75-85%). When used in combination with fluconazole, epoxyboetirane K (2) and euphoboetirane N (18) revealed synergistic effects in the AD-CDR1 yeast strain overexpressing the Cdr1p transporter, through their ability to reduce the effective concentration of the antifungal drug by 23- and 52-fold, respectively. (C) 2017 Elsevier Ltd. All rights reserved. 1,2,3-Triazolo-linked benzo[d]imidazo[2,1-b]thiazole conjugates (5a-v) were designed, synthesized, and evaluated for their cytotoxic potency against several human cancer cell lines: DU-145 (prostate), HeLa (cervical), MCF-7 (breast), HepG2 (liver), and A549 (lung). Preliminary results revealed that some of these conjugates, such as 5f and 5k, exhibited a significant antiproliferative effect against human breast cancer cells (MCF-7), with IC50 values of 0.60 and 0.78 mu M, respectively. Flow cytometric analysis of the cell cycle demonstrated an increase in the percentage of cells in the G(2)/M phase, which was further confirmed by elevation of cyclin B1 protein levels. Immunocytochemistry revealed loss of intact microtubule structure in cells treated with 5f and 5k, and western blot analysis revealed that these conjugates accumulated more tubulin in the soluble fraction. Moreover, the conjugates caused apoptosis of the cells, which was confirmed by mitochondrial membrane potential and Annexin V-FITC assays. Molecular docking studies indicated that these conjugates occupy the colchicine binding site of the tubulin protein. (C) 2017 Elsevier Ltd. All rights reserved. With the aim of finding a novel long-lasting potassium-competitive acid blocker (P-CAB) that would overcome the limitations of proton pump inhibitors (PPIs), we tried various approaches based on pyrrole derivative 1b as a lead compound. 
As part of a comprehensive approach to the identification of a new drug, we explored compounds with low lipophilicity by introducing a polar heteroaromatic group at position 5 of the pyrrole ring. Among the compounds synthesized, the fluoropyrrole derivative 37c, which has a 2-F-3-Py group at the fifth position, a lower pKa, and much lower Clog P and log D values than 1b does, showed potent gastric acid-suppressive action resulting from gastric H+,K+-ATPase inhibition in animal models. Its maximum intragastric pH elevation effect was strong in rats, and its duration of action was much longer than that of either lansoprazole or the lead compound 1b in dogs. Therefore, compound 37c can be considered a promising new P-CAB with a long duration of action. (C) 2017 Elsevier Ltd. All rights reserved. Phosphodiesterases are important enzymes regulating signal transduction mediated by the second messenger molecules cAMP and cGMP. PDE10A is a unique member of the PDE family because of its selective expression in medium spiny neurons. It is recognized as an antipsychotic drug target. Based on the structural similarity between our previous chemistry work on 8-aminoimidazo[1,2-a]pyrazines and the PDE10A inhibitors reported by Bartolome-Nebreda et al., we initiated a project to develop PDE10A inhibitors. After several rounds of optimization, we were able to obtain a few compounds with good PDE10A enzymatic activity. After further PDE enzymatic selectivity studies, metabolic stability assays, and in vivo pharmacological tests, we identified two inhibitors as interesting lead compounds with potential for further PDE10A lead optimization. (C) 2017 Elsevier Ltd. All rights reserved. We previously reported that the 4-(pyrrolidin-1-yl)benzonitrile derivative 1b was a selective androgen receptor modulator (SARM) that exhibited anabolic effects on organs such as muscles and the central nervous system (CNS), but neutral effects on the prostate. 
Through further modification, we identified that the 4-(5-oxopyrrolidin-1-yl)benzonitrile derivative 2a showed strong AR binding affinity with improved metabolic stability. Based on these results, we tried to enhance the AR agonistic activity by modifying the substituents of the 5-oxopyrrolidine ring. As a consequence, we found that 4-[(2S,3S)-2-ethyl-3-hydroxy-5-oxopyrrolidin-1-yl]-2-(trifluoromethyl)benzonitrile (2f) had an ideal SARM profile in the Hershberger assay and the sexual behavior induction assay. Furthermore, 2f showed good pharmacokinetic profiles in rats, dogs, and monkeys, excellent nuclear selectivity, and acceptable toxicological profiles. We also determined its binding mode by obtaining co-crystal structures with AR. (C) 2017 Elsevier Ltd. All rights reserved. A versatile conjugatable/bioreduction-responsive protecting group for phosphodiester moieties was designed, synthesized, and incorporated into oligonucleotide strands. Subsequently, controlled pore glass-supported oligonucleotides were conjugated to a variety of functional molecules using a copper-catalyzed azide-alkyne cycloaddition reaction. The functionalized protecting groups were removed by a nitroreductase/NADH reduction system to give "naked" oligonucleotides. This method allowed the synthesis of oligonucleotide prodrugs bearing the functionalized protecting group at the desired sites and desired residues on oligodeoxyribonucleotide (ODN) backbones. (C) 2017 Elsevier Ltd. All rights reserved. A series of new artemisinin-derived hybrids that incorporate cholic acid moieties have been synthesized and evaluated for their antileukemic activity against sensitive CCRF-CEM and multidrug-resistant CEM/ADR5000 cells. The new hybrids 20-28 showed IC50 values in the range of 0.019-0.192 mu M against CCRF-CEM cells and between 0.345 and 7.159 mu M against CEM/ADR5000 cells. 
The amide hybrid 25 proved the most active compound against both CCRF-CEM and CEM/ADR5000 cells, with IC50 values of 0.019 +/- 0.001 mu M and 0.345 +/- 0.031 mu M, respectively. A relatively low cross-resistance to hybrids 20-28, in the range of 5.7-fold to 46.1-fold, was measured. CEM/ADR5000 cells showed higher resistance than CCRF-CEM cells to all the tested compounds. Interestingly, the lowest cross-resistance was observed for 23 (5.7-fold), whereas hybrid 25 showed 18.2-fold cross-resistance in CEM/ADR5000 cells. Hybrid 25, which proved even more potent than the clinically used doxorubicin against CEM/ADR5000 cells, may serve as a promising antileukemic agent against both sensitive and multidrug-resistant cells. (C) 2017 Elsevier Ltd. All rights reserved. Extensive chromatographic separations performed on the basic (pH = 8-10) chloroform-soluble fraction of Aconitum heterophyllum resulted in the isolation of three new diterpenoid alkaloids, 6 beta-methoxy,9 beta-dihydroxylheteratisine (1), 1 alpha,11,13 beta-trihydroxylhetisine (2), and 6,15 beta-dihydroxylhetisine (3), along with the known compounds iso-atisine (4), heteratisine (5), hetisinone (6), 19-epi-isoatisine (7), and atidine (8). The structures of the isolated compounds were established by means of mass and NMR spectroscopy as well as single-crystal X-ray crystallography. Compounds 1-8 were screened for their antioxidant and enzyme inhibition activities, followed by in silico studies to find out the possible inhibitory mechanism of the tested compounds. This work is the first report demonstrating significant antioxidant and anticholinesterase potential of diterpenoid alkaloids isolated from a natural source. (C) 2017 Elsevier Ltd. All rights reserved. Alzheimer's disease (AD) destroys brain function, especially in the hippocampus, and is a social problem worldwide. A major pathogenesis of AD is related to the accumulation of amyloid beta (A beta) peptides, resulting in neuronal cell death in the brain. 
Here, we isolated four saponins (1-4) and elucidated their structures from 1D and 2D NMR and HRFABMS spectral data. The structures of 1 and 2 were determined to be new saponins with cochalic acid as the aglycon, and 3 was determined to be a new saponin with oleanolic acid as the aglycon. Compound 4 was confirmed to be the known saponin chikusetsusaponin V (=ginsenoside R-0). The isolated saponins (1-4) and six previously reported saponins (5-10) were tested for their inhibitory effects on A beta aggregation and their protective effects on SH-SY5Y cells against A beta-associated toxicity. As a result, compounds 3 and 4 showed an inhibitory effect on A beta aggregation, and compounds 5-8 exerted protective effects on SH-SY5Y cells against A beta-associated toxicity. (C) 2017 Elsevier Ltd. All rights reserved. In order to obtain enantiomerically pure sigma(1) receptor ligands with a 2-benzopyran scaffold, an Oxa-Pictet-Spengler reaction with the enantiomerically pure 2-phenylethanol derivatives (R)-4 and (S)-4 was envisaged. The kinetic resolution of the racemic alcohol (+/-)-4 using Amano Lipase PS-C II and isopropenyl acetate in tert-butyl methyl ether led to the (R)-configured alcohol (R)-4 in 42% yield with an enantiomeric excess of 99.6%. The (S)-configured alcohol (S)-4 was obtained by Amano Lipase PS-C II-catalyzed hydrolysis of the enantiomerically enriched acetate (S)-5 (76.9% ee) and provided (S)-4 in 26% yield and 99.7% ee. The absolute configuration of alcohol (R)-4 was determined by exciton-coupled CD spectroscopy of the bis(bromobenzoate) (R)-7. The next important step in the synthesis of 2-benzopyrans 2 and 3 was the Oxa-Pictet-Spengler reaction of the enantiomerically pure alcohols (R)-4 and (S)-4 with piperidone ketal 8 and chloropropionaldehyde acetal 12. The conformationally restricted spirocyclic 2-benzopyrans 2 revealed higher sigma(1) affinity than the more flexible aminoethyl derivatives 3. 
The (R)- and (R,R)-configured enantiomers (R)-2 and (R,R)-3 represent the eutomers of this class of compounds, with eudismic ratios of 4.8 (2b) and 4.5 (2c). High sigma(1)/sigma(2) selectivity (>49) was found for the most potent sigma(1) ligands (R)-2b, (R)-2c, (R)-2d, and (S)-2d (K-i(sigma(1)) 9-15 nM). (C) 2017 Elsevier Ltd. All rights reserved. The overproduction of nitric oxide (NO) plays an important role in a variety of pathophysiological processes, including inflammation. Therefore, the suppression of NO production is a promising target in the design of anti-inflammatory agents. In the present study, a series of phthalimide analogs was synthesized, and their anti-inflammatory activities were evaluated using lipopolysaccharide (LPS)-stimulated NO production in cultured murine macrophage RAW264.7 cells. A structure-activity relationship study showed that the free hydroxyl groups at C-4 and C-6 and the bulkiness of the N-substituted alkyl chain are associated with biological activity. Among the series of phthalimide derivatives, compound IIh exhibited potent inhibitory activity, with an IC50 value of 8.7 mu g/mL. Further study revealed that the inhibitory activity of compound IIh was correlated with the down-regulation of the mRNA and protein expression of LPS-stimulated inducible nitric oxide synthase (iNOS). Compound IIh also suppressed the induction of the pro-inflammatory cytokines tumor necrosis factor-alpha and interleukin-1 beta in LPS-stimulated RAW 264.7 cells. The anti-inflammatory activity of compound IIh was also found to be associated with suppression of the Toll-like receptor (TLR)4 signaling pathway through down-regulation of the activation of interferon regulatory factor 3 (IRF-3) and of interferon-beta and signal transducer expression. These findings demonstrate that novel phthalimides might be potential candidates for the development of anti-inflammatory agents. (C) 2017 Elsevier Ltd. All rights reserved. 
Herein we describe the design, synthesis, and evaluation of a novel series of benzo[d]thiazole derivatives toward an orally active EP1 antagonist. Lead generation studies provided the benzo[d]thiazole core from the four designed scaffolds. Optimization of this scaffold in terms of EP1 antagonist potency and ligand-lipophilicity efficiency (LLE; pIC(50)-clogP) led to a 1,2,3,6-tetrahydropyridyl-substituted benzo[d]thiazole derivative, 7r (IC50 1.1 nM; LLE 4.7), which showed a good pharmacological effect when administered intraduodenally in a 17-phenyl trinor-PGE2 (17-PTP)-induced overactive bladder model in rats. (C) 2017 Elsevier Ltd. All rights reserved. It was previously shown that water-soluble network polymers composed of polyhedral oligomeric silsesquioxane (POSS) have hydrophobic spaces inside the network because of the strong hydrophobicity of the cubic silica cage. In this study, water-soluble POSS network polymers connected with triphenylamine derivatives (TPA-POSS) were synthesized, and their function as a sensor for discriminating the geometric isomers of fatty acids was investigated. Accordingly, in the photoluminescence spectra, different time courses of the intensity and peak wavelengths of the emission bands were detected from the TPA-POSS-containing solution in the presence of cis- or trans-fatty acids during incubation. Furthermore, variable time-dependent changes were obtained by changing the coexisting ratio of the two geometric isomers. From the mechanistic investigation, it was inferred that these changes could originate from the difference in the degree of interaction between the POSS networks and each fatty acid. Our data could be applicable to constructing a sensing material for the generation and proportion of trans-fatty acids in oil. (C) 2017 Elsevier Ltd. All rights reserved. 
DNA and DNA-related enzymes are among the most effective and commonly used intracellular anticancer targets in clinical and laboratory studies; however, most DNA-targeting drugs suffer from toxic side effects. The development of new molecules with good antitumor activity and low side effects is therefore important. Based on computer-aided design and our previous studies, a series of novel azaacridine derivatives was synthesized as DNA- and topoisomerase-binding agents, among which compound 9 displayed the best antiproliferative activity, with an IC50 value of 0.57 mu M against U937 cells, slightly better than m-AMSA. In addition, compound 9 displayed low cytotoxicity against human normal liver cells (QSG-7701), the IC50 of which was more than 3 times lower than that of m-AMSA. Further study indicated that all the compounds displayed topoisomerase II inhibitory activity at 50 mu M. The representative compound 9 could bind with DNA and induce apoptosis in U937 cells through the exogenous pathway. (C) 2017 Elsevier Ltd. All rights reserved. With the aim of discovering a novel, excellent potassium-competitive acid blocker (P-CAB) that could fully overcome the limitations of proton pump inhibitors (PPIs), we tested various approaches based on pyrrole derivative 1 as a lead compound. As part of a comprehensive approach to identify a new effective drug, we tried to optimize the duration of action of the pyrrole derivative. Among the compounds synthesized, fluoropyrrole derivative 20j, which has a 2-F-3-Py group at position 5, a fluorine atom at position 4, and a 4-Me-2-Py sulfonyl group at position 1 of the pyrrole ring, showed potent gastric acid suppressive action and a moderate duration of action in animal models. 
On the basis of structural properties including a slightly larger C logP value (1.95), larger log D value (0.48) at pH 7.4, and fairly similar pKa value (8.73) compared to those of the previously optimized compound 2a, compound 20j was assumed to undergo rapid transfer to the stomach and to have a moderate retention time there after a single administration. Therefore, compound 20j was selected as a new promising P-CAB with a moderately long duration of action. (C) 2017 Elsevier Ltd. All rights reserved. N-Benzyl-N-(4-phenoxyphenyl)benzenesulfonamide derivatives were developed as a novel class of nonsteroidal glucocorticoid receptor (GR) modulators, which are promising drug candidates for treating immune-related disorders. Focusing on the similarity of the GR and progesterone receptor (PR) ligand-binding domain (LBD) structures, we adopted our recently developed PR antagonist 10 as a lead compound and synthesized a series of derivatives. We found that the N-(4-phenoxyphenyl)benzenesulfonamide skeleton serves as a versatile scaffold for GR antagonists. Among them, 4-cyano derivative 14m was the most potent, with an IC50 value of 1.43 mu M for GR. This compound showed good selectivity for GR; it retained relatively weak antagonistic activity toward PR (IC50 for PR: 8.00 mu M; 250-fold less potent than 10), but showed no activity toward AR, ER alpha or ER beta. Interestingly, the 4-amino derivative 15a exhibited transrepression activity toward NF-kappa B in addition to GR-antagonistic activity, whereas 14m did not. The structure-activity relationship for transrepression was different from that for GR-antagonistic activity. Computational docking simulations suggested that 15a might bind to the ligand-binding pocket of GR in a different manner from 14m. These findings open up new possibilities for developing novel nonsteroidal GR modulators with distinctive activity profiles. (C) 2017 Elsevier Ltd. All rights reserved. 
Recently we reported the discovery of a potent and selective CK2 alpha inhibitor, CAM4066. This compound inhibits CK2 activity by exploiting a pocket located outside the ATP binding site (alpha D pocket). Here we describe in detail the journey that led to the discovery of CAM4066 using the challenging fragment-linking strategy. Specifically, we aimed to develop inhibitors by linking a high-affinity fragment anchored in the alpha D site to a weakly binding warhead fragment occupying the ATP site. Moreover, we describe the remarkable impact that molecular modelling had on the development of this novel chemical tool. The work described herein shows potential for the development of a novel class of CK2 inhibitors. (C) 2017 Published by Elsevier Ltd. With the rising incidence of cancer, the quest for new metal-based anticancer drugs has led to extensive research in cancer biology. Zinc complexes of amino acid residue side chains are well recognized for hydrolyzing the phosphodiester bonds of DNA at a faster rate. In the present work, a Zn(II) complex of cyclen substituted with two L-tryptophan units, Zn(II)-Cyclen-(Trp)2, has been synthesized and evaluated for antiproliferative activity. Zn(II)-Cyclen-(Trp)2 was synthesized in 70% yield and its DNA binding potential was evaluated through a QM/MM study, which suggested good binding (G = 9.426) with B-DNA. The decrease in intensity of the positive and negative bands of CT-DNA at 278 nm and 240 nm, respectively, demonstrated an effective unwinding of the DNA helix with loss of helicity. The complex was identified as an antiproliferative agent against U-87 MG cells, with a 5-fold increase in apoptosis with respect to control (2 h post incubation, IC50 25 mu M). Electrophoresis and comet assay studies exhibited an increase in DNA breakage after treatment with the complex, while caspase-3/beta-actin cleavage established a caspase-3-dependent apoptosis pathway in U-87 MG cells after triggering DNA damage. 
In vivo tumor specificity of the developed ligand was validated after radiocomplexation with 99mTc (>98% radiochemical yield and specific activity of 2.56 GBq/mu mol). An avid tumor/muscle ratio of >6 was observed in biodistribution and SPECT imaging studies in U-87 MG xenograft nude mice. (C) 2017 Elsevier Ltd. All rights reserved. Dopamine D-3 receptor-mediated networks have been associated with a wide range of neuropsychiatric diseases, drug addiction and food-maintained behavior, which makes D-3 a highly promising biological target. The previously described dopamine D-3 receptor ligand FAUC 329 (1) showed protective effects against dopamine depletion in a MPTP mouse model of Parkinson's disease. We used the radioligand [F-18]2, a [F-18]fluoroethoxy-substituted analog of the lead compound 1, as a molecular tool for visualization of D-3-rich brain regions including the islands of Calleja. Furthermore, structural modifications are reported leading to the pyrimidylpiperazine derivatives 3 and 9, which display superior subtype selectivity and preference over serotonergic receptors. Evaluation of the lead compound 1 on cocaine-seeking behavior in non-human primates showed a substantial reduction in cocaine self-administration behavior and food intake. (C) 2017 Elsevier Ltd. All rights reserved. Utilizing a pharmacophore hybridization approach, a novel series of substituted indolin-2-one derivatives was designed, synthesized and evaluated for in vitro biological activities against p21-activated kinase 4 (PAK4). Compounds 11b, 12d and 12g exhibited the most potent inhibitory activity against PAK4 (IC50 = 22 nM, 16 nM and 27 nM, respectively). Among them, compound 12g showed the highest antiproliferative activity against A549 cells (IC50 = 0.83 mu M). Apoptosis analysis in A549 cells suggested that compound 12g delayed cell cycle progression by arresting cells in the G2/M phase, retarding cell growth. 
Further investigation demonstrated that compound 12g strongly inhibited migration and invasion of A549 cells. Western blot analysis indicated that compound 12g potently inhibited the PAK4/LIMK1/cofilin signalling pathway. Finally, the binding mode of compound 12g with PAK4 was proposed by molecular docking. A preliminary ADME profile of compound 12g was also drawn on the basis of QikProp predictions. (C) 2017 Elsevier Ltd. All rights reserved. During the screening of natural anti-inflammatory agents, we identified some C21-steroidal pregnane sapogenins or their derivatives that inhibit TLR2-, TLR3-, and TLR4-initiated inflammatory responses, respectively. Treatment with the active compounds 10, 2j and 3p failed to impact tumor necrosis factor-alpha (TNF-alpha)-induced nucleus translocation of the NF-kappa B p65 subunit. However, these compounds regulated distinct canonical or non-canonical NF-kappa B family members. Ectopic expression of TNF receptor associated factor 6 (TRAF6) abrogated the inhibitory activity of the compounds on the production of pro-inflammatory cytokines downstream of TLR4. These results suggested that compounds 10, 2j, and 3p suppressed TLR-initiated innate immunity through TRAF6 with differential regulation of NF-kappa B family proteins. (C) 2017 Elsevier Ltd. All rights reserved. We report the kinetic properties and sulfonamide inhibition profile of an alpha-carbonic anhydrase (CA, EC 4.2.1.1), named CruCA4, identified in the red coral Corallium rubrum. This isoform is involved in the biomineralization process leading to the formation of a calcium carbonate skeleton. Experiments performed on the recombinant protein show that the enzyme has a "moderate activity" level. Our results are discussed in comparison with values obtained for other CA isoforms involved in biomineralization. This is the first study describing the biochemical characterization of an octocoral CA. (C) 2017 Elsevier Ltd. All rights reserved. 
A strategy of integrating biological imaging into the early stages of the drug discovery process can improve our understanding of drug activity during preclinical and clinical study. In this article, we designed and synthesized coumarin-based nonsteroidal-type fluorescent ligands for drug-target binding imaging. Among the synthesized compounds, 3e, 3f and 3h showed potent ER binding affinity, and 3e (IC50 = 0.012 mu M) exhibited excellent ER alpha antagonistic activity; its antiproliferative potency in MCF-7 breast cancer cells is equipotent with the approved drug tamoxifen. The fluorescence of compounds 3e and 3f depended on the solvent properties and showed significant changes when mixed with ER alpha or ER beta in vitro. Furthermore, target molecule 3e could cross the cell membrane, localize, and image drug-target interaction in real time without cell washing. Thus, the coumarin-based platform represents a promising new ER-targeted delivery vehicle with potential imaging and therapeutic properties. (C) 2017 Elsevier Ltd. All rights reserved. Tumor cells switch glucose metabolism to aerobic glycolysis by expressing the pyruvate kinase M2 isoform (PKM2) in a low-activity form, providing glycolytic intermediates as building blocks for biosynthetic processes and thereby supporting cell proliferation. Activation of PKM2 should invert aerobic glycolysis to an oxidative metabolism and prevent cancer growth. Thus, PKM2 has gained attention as a promising cancer therapy target. To obtain novel PKM2 activators, we conducted a high-throughput screening (HTS). Among several hit compounds, a fragment-like hit compound with low potency but high ligand efficiency was identified. Two molecules of the hit compound bound at one activator binding site, and the molecules were linked based on the crystal structure. Since this linkage succeeded in maintaining the original position of the hit compound, the obtained compound exhibited greatly improved potency in an in vitro assay. 
The linked compound also showed PKM2-activating activity in a cell-based assay, and cellular growth inhibition of the A549 cancer cell line. Discovery of this novel scaffold and the binding mode of the linked compound provides a valuable platform for the structure-guided design of PKM2 activators. (C) 2017 Elsevier Ltd. All rights reserved. In recent years, inhibition of carbonic anhydrase (CA) has emerged as a promising approach for pharmacologic intervention in a variety of disorders such as glaucoma, epilepsy, obesity, and cancer. As a consequence, the design of CA inhibitors (CAIs) is a highly dynamic field of medicinal chemistry. Due to the therapeutic potential of thiadiazoles as CAIs, new 1,3,4-thiadiazole derivatives were synthesized and investigated for their inhibitory effects on hCA I and hCA II. Although the tested compounds do not carry a sulfonamide group, an important pharmacophore for CA inhibitory activity, it was a remarkable finding that most of them were more effective against hCAs than acetazolamide (AAZ), the reference agent. Among these compounds, N'-((5-(4-chlorophenyl)furan-2-yl)methylene)-2-((5-(phenylamino)-1,3,4-thiadiazol-2-yl)thio)acetohydrazide (3) was found to be the most effective compound against hCA I, with an IC50 value of 0.14 nM, whereas N'-((5-(2-chlorophenyl)furan-2-yl)methylene)-2-((5-(phenylamino)-1,3,4-thiadiazol-2-yl)thio)acetohydrazide (1) was found to be the most potent compound against hCA II, with an IC50 value of 0.15 nM. According to molecular docking studies, all compounds exhibited high affinity and good amino acid interactions, similar to AAZ, at both active sites of the hCA I and hCA II enzymes. (C) 2017 Elsevier Ltd. All rights reserved. A new beta-class carbonic anhydrase (CA, EC 4.2.1.1) encoded in the genome of the pathogenic bacterium Francisella tularensis, responsible for the febrile illness tularemia, has been cloned, purified and characterized. 
This enzyme, Ftu beta CA, showed a kcat of 9.8 x 10(5) s(-1) and a kcat/KM of 8.9 x 10(7) M-1 s(-1) for the physiological CO2 hydration reaction, being one of the most effective beta-CAs known to date, with a catalytic activity only 1.68-times lower than that of the human (h) isoform hCA II. A panel of 39 simple aromatic and heterocyclic sulfonamides, as well as clinically used drugs incorporating sulfonamide/sulfamate zinc-binding groups, was used to investigate the inhibition profile of Ftu beta CA with these classes of derivatives. The enzyme generally showed a weaker affinity for these inhibitors compared to other alpha- and beta-CAs investigated earlier, with only acetazolamide and its deacetylated precursor having inhibition constants <1 mu M. Indeed, acetazolamide (AAZ) and its deacetylated precursor 13 (K(I)s of 655-770 nM), as well as metanilamide and methazolamide (K(I)s of 2.53-2.92 mu M), were the best Ftu beta CA inhibitors detected so far. As the physiological role of bacterial beta-CAs in the virulence/life cycle of these pathogens is poorly understood, the present study may constitute a starting point for the design of effective pathogenic bacteria CA inhibitors with potential use as anti-infectives. (C) 2017 Elsevier Ltd. All rights reserved. With the aim of finding a better xanthine oxidase inhibitor with potential anti-gout properties, studies on the fruit of Stauntonia brachyanthera were carried out, which led to the isolation of 12 glycosides, including 4 new nor-oleanane triterpenoids. Their structures were determined by comprehensive spectroscopic (NMR and HR-MS) analysis. Two compounds (4 and 11) exhibited significant inhibitory activities on xanthine oxidase, with IC50 values of 5.22 and 1.60 mu M, respectively. Five other compounds (1, 2, 3, 8 and 10) showed moderate activities. 
The results suggested that the presence of nor-oleanane triterpenoids and flavonoids in the fruits is responsible for the inhibitory activity on xanthine oxidase, which could cut off the production of uric acid. Nor-oleanane triterpenoids, a new class of lead XO inhibitors, are worthy of further mechanistic studies at the molecular level. (C) 2017 Elsevier Ltd. All rights reserved. A novel series of acyl selenoureido benzenesulfonamides was evaluated as carbonic anhydrase (CA, EC 4.2.1.1) inhibitors against the human (h) isoforms hCA I, II, VII and IX, which are involved in a variety of diseases such as glaucoma, retinitis pigmentosa, epilepsy and tumors. These compounds showed excellent inhibitory activity for these isoforms, with several low nanomolar derivatives identified against all of them. Furthermore, the selenoureido group may confer antioxidant activity on these enzyme inhibitors. (C) 2017 Elsevier Ltd. All rights reserved. The photoinduced electron transfer (PeT)-based hybridization probe is a linear and quencher-free oligonucleotide (ON) probe for DNA or RNA detection. In this report, we designed and synthesized novel adenosine analogues for PeT-based hybridization probes. In particular, the analogue containing a piperazinomethyl moiety showed an effective quenching property under physiological conditions. When the probe containing the analogue was hybridized with a complementary DNA or RNA, the fluorescence increased 3- or 4-fold, respectively, compared to the single-stranded state. (C) 2017 Elsevier Ltd. All rights reserved. A series of N-substituted saccharins incorporating aryl, alkyl and alkynyl moieties, as well as some ring-opened derivatives, was prepared and investigated as inhibitors of the metalloenzyme carbonic anhydrase (CA, EC 4.2.1.1). 
The widespread cytosolic isoforms CA I and II were not inhibited by these sulfonamides, whereas the transmembrane, tumor-associated ones were effectively inhibited, with K(i)s in the range of 22.1-481 nM for CA IX and of 3.9-245 nM for hCA XII. Although the inhibition mechanism of these tertiary/secondary sulfonamides is unknown for the moment, their good efficacy and, especially, selectivity for the inhibition of the tumor-associated over the cytosolic, widespread isoforms make these derivatives of considerable interest as enzyme inhibitors with various pharmacologic applications. (C) 2017 Elsevier Ltd. All rights reserved. High-quality Ba2NdFeNb4O15-based nanocomposite films were grown on Pt/MgO(100) single crystalline substrates. The c-oriented epitaxial films have the unit cell rotated in the plane by +/- 18.5 degrees with respect to that of the substrate. The room temperature ferroelectric properties of the films were demonstrated at both the macroscopic and the microscopic scales, attesting that the macroscopic ferroelectricity was conserved down to the nanoscale. The presence of a small amount of nanocrystalline BaFe12O19 as a magnetic secondary phase was confirmed by measuring ferromagnetic hysteresis loops. The coexistence of ferroelectric and ferromagnetic phases attested the achievement of nanocomposite films exhibiting room temperature multiferroic properties. (C) 2017 Published by Elsevier Ltd on behalf of Acta Materialia Inc. Tailoring the fraction and stability of retained austenite in medium manganese steels has always been an issue of great industrial interest. A novel cyclic austenite reversion treatment (ART) is proposed to obtain a considerable amount of retained austenite in a Fe-0.21C-4.53Mn (wt%) steel, and it is found to be more efficient than the conventional ART. Transformation kinetics, alloying element partitioning and microstructure evolution during the cyclic and conventional austenite reversion processes are discussed. 
(C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. A correlative method based on electron back-scattered diffraction and a focused ion beam-digital image correlation slit milling technique was used to quantitatively determine spatially resolved stress profiles in the vicinity of grain boundaries in pure titanium. Measured local stress gradients were in good agreement with local average misorientation and experimentally calculated geometrically necessary dislocation densities. Stress profiles within a few hundred to a thousand nanometers of the grain boundary display a local minimum, followed by a typical Hall-Petch type variation of "one over square root of distance". The observed trends allude to local stress relaxation mechanisms active near grain boundaries. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. A new high-entropy alloy, AlCoCr1.5Fe1.5NiTi0.5, with a dual-phase body-centered cubic structure, exhibits excellent compression strength and large plasticity upon quasi-static and dynamic loadings. A positive strain-rate sensitivity of the yield strength is observed, and the work-hardening behavior weakens with increasing strain rate due to the adiabatic heating effect upon dynamic loading. Considering the adiabatic temperature rise converted from plastic deformation work at high strain rates, the flow behavior is characterized by employing the modified Johnson-Cook model. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Room temperature elastic constants of the non-modulated (NM) tetragonal martensite of a Mn-rich Ni-Mn-Ga alloy were determined by ultrasonic methods. The results are in good qualitative agreement with ab-initio predictions and confirm that NM martensite exhibits strong elastic anisotropy with a shear instability related to the soft acoustic phonons mediating the reverse transition. 
The geometric arrangement of the softest shearing modes is shown to be identical to that of the stress-induced fct martensite of the Fe-31.2 at%Pd alloy. However, it markedly differs from the arrangement of the soft shearing modes in the parent Ni-Mn-Ga austenite phase. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Amorphous FeCrNi/a-C:H coatings are deposited by pulsed magnetron sputtering of austenitic stainless steel in an argon/acetylene atmosphere. High-resolution transmission electron microscopy, electron energy loss spectroscopy and energy dispersive X-ray mapping reveal a pronounced nanotubular structure consisting of metallic cores that thread along the film growth direction and are encapsulated by amorphous carbon shells in a cream-roll fashion. The coatings exhibit excellent mechanical, tribological, and anti-corrosion properties. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. We describe herein a facile synthetic approach to CoFe2O4 (CFO)/lead lanthanum zirconate titanate (PLZT) multiferroic composites. The composites were prepared by synthesizing mesoporous CFO through a novel template-free strategy and subsequently generating PLZT inside the mesoporous CFO. The coexistence of both CFO and PLZT phases in the composites was confirmed by XRD. The porosity and pore structures were determined from N-2 physisorption analysis and TEM. The composites were found to possess multiferroicity at room temperature and a maximum magnetoelectric voltage coefficient of 36.7 mV/cm Oe. This work is anticipated to pave the way for enhancing the magnetoelectric coupling of multiferroic composite materials. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Density functional theory was used to predict the diffusion and precipitation of Cr in Cu. The energy barrier of Cr diffusion in Cu is comparable to that of Cr self-diffusion, and higher than that of Cu self-diffusion. 
The simulations support the experimentally measured hardness of 30 nm thick nanolaminate layers of Cr/Cu-3.4%Cr. As-deposited films had a hardness of 6.25 GPa; annealing at 373 K decreased the hardness to 5.95 GPa, while annealing at 573 K increased the hardness to 6.6 GPa. Transmission electron microscopy indicates there is significant local strain due to precipitation, in agreement with theoretical predictions. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The excellent mechanical properties of oxide-dispersion-strengthened (ODS) alloys arise from a high density of Y-Ti nano-oxides finely dispersed in the matrix. Characteristics of this precipitation can be strongly influenced by oxygen contamination during milling. We studied an as-received and an annealed (1 h at 1150 degrees C) oxygen-enriched ferritic ODS alloy by high-resolution transmission electron microscopy. The as-received sample has nano-oxides with an unidentified b.c.c. structure, while the annealed one has orthorhombic nanoparticles. We propose a phase relaxation during the annealing, confirmed by the observation of a particle where both cubic and orthorhombic structures coexist. The influence of oxygen on the particle structure is also discussed. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. We examined the microstructure and texture evolution during the recrystallization annealing of cold rolled Mg-1Y and Mg-0.2Zn-1Y alloys. At the early annealing stage, new grains are nucleated on 200-300 nm sub-units surrounded by dislocation walls within shear bands. The Zn and Y additions reduce the stacking fault energy, which results in the high activity of non-basal deformation mechanisms and the formation of segregation zones along the stacking faults. This causes the orientation of recrystallization nuclei to be effectively retained, so that the texture weakens during the recrystallization without reducing the basal pole split into the sheet transverse direction. 
(C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Phase transitions in alpha'-Ti martensite driven by high pressure torsion (HPT), as well as alpha' -> omega transformations in Ti-Fe alloys, were observed for the first time. The as-cast alloys transformed into alpha'-Ti martensite after annealing in the beta-(Ti,Fe) solid solution region and subsequent quenching. The lattice parameters of alpha'-Ti martensite decreased with increasing iron content, similar to the lattice parameter of beta-Ti. During HPT, alpha'-Ti martensite transformed partly into omega-Ti. At the same time, the lattice parameters of the remaining alpha'-Ti phase increased towards those of iron-free omega-Ti. These processes included an increased mass transfer of iron atoms out of alpha'-Ti. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Novel structures of TiC/TiAl2/carbon nanofiber (CNF) hybrid reinforcements with attached magnesium oxide (MgO) particles were successfully fabricated in the matrix of an Mg alloy AZ91 (Mg-9 wt% Al-1 wt% Zn) composite. This was done using a novel liquid pressing process, coupled with in situ reactions between titanium dioxide (TiO2)-coated CNFs and the Mg alloy. The homogeneously dispersed hybrid reinforcement, with strong interfacial bonding through TiC/TiAl2, led to a dramatic improvement in the mechanical properties of AZ91 alloys, with the assistance of an anchoring effect by MgO particles chemically attached to the coating layer. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Highly textured Ti2AlN ceramic was successfully fabricated by edge-free spark plasma sintering (EFSPS) of Ti2AlN discs synthesized by thermal explosion (TE). The orientation degree of the as-sintered Ti2AlN ceramic was evaluated by X-ray diffraction and electron backscattered diffraction. 
It was found that the preferred orientation of the Ti2AlN grains parallel to the SPS loading direction was along the c-axis, and the Lotgering orientation factor on the textured top surface was as high as f(001) = 0.80. Due to the highly textured microstructure, the obtained Ti2AlN ceramic showed anisotropic mechanical and physical properties. The results highlight the advantages of EFSPS combined with the TE technique. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The fatigue damage behavior of a novel nanotwinned stainless steel consisting of nanotwinned austenitic grains, nanograins and dislocation structures was investigated. It is found that the nanotwinned grains can effectively suppress the generation of persistent-slip-band-like shear bands and the nucleation of fatigue cracks in comparison with the other two deformed regions. The better fatigue damage resistance of the nanotwinned grains is mainly ascribed to their higher microstructural stability, without obvious structural coarsening. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The properties of nano-scale interstitial dislocation loops under the coupling effect of stress and temperature are studied using atomistic simulation methods and experiments. The decomposition of a loop by the emission of smaller loops is identified as one of the major mechanisms to release the localized stress induced by the coupling effect, which is validated by TEM observations. The classical conservation law of the Burgers vector cannot be applied during such a decomposition process. A dislocation network is formed from the decomposed loops, which may initiate irradiation creep much earlier than expected through the mechanism of climb-controlled glide of dislocations. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. 
It is common wisdom that metallic glasses (MGs) exhibit excellent oxidation resistance due to their disordered structure and lack of defects and grain boundaries. More interestingly, among common MGs, aluminiferous MGs exhibit an unexpectedly strong antioxidant ability. However, the mechanism of antioxidation of aluminiferous MGs at the very thin surface layer remains unknown. We report direct atomic-level observations of the dynamic process of Al2O3 formation at the surface of aluminiferous MGs by high-resolution transmission electron microscopy. The Al2O3 layer at the surface serves as a compact protective layer to slow down the further oxidation of the MGs. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. To increase the peak temperature during solution heat treatment of a single crystal superalloy over the solidus, a remelting solution heat treatment is investigated. It contains two parts: a) homogenizing at mushy zone temperatures, ignoring partial incipient melting; b) eliminating the incipient melting structure by a secondary solution heat treatment. After the remelting solution heat treatment, the refined gamma' particles are uniformly distributed and the residual segregation is less than that after the traditional solution heat treatment. Therefore, the rupture life at 1100 degrees C/150 MPa is improved from 105.3 h after the traditional solution heat treatment to 141.1 h. (C) 2017 Published by Elsevier Ltd on behalf of Acta Materialia Inc. Molecular dynamics simulations were conducted to study the formation and destruction of the stacking fault tetrahedron (SFT) in fcc metals. The stacking fault energy, the size of the vacancy cluster and the temperature were found to play a significant role in the formation of a perfect SFT. Also, it was found that compressive stress can unzip the perfect SFT to a truncated one, and can facilitate the destruction of the SFT by transforming the faulted Frank loop to an unfaulted full dislocation loop. 
We provided the atomic details of how the unfaulting occurs using the molecular dynamics method. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. A double-layer heterostructure with gallium antimonide (GaSb) nanocrystals (NCs) embedded into a single-crystal silicon matrix was grown. The NCs were formed by the solid phase epitaxy method using a 1.6-nm-thick Ga-Sb stoichiometric mixture and annealing in a temperature range of 200-500 degrees C. The embedded NCs have a concentration of about 5.4 x 10(10) cm(-2), a mean height of 8.6 nm and a mean lateral dimension of 19.2 nm. The stress induced inside the NCs owing to the lattice mismatch between Si and GaSb was fully relaxed by edge dislocations at the Si/GaSb interface. All the NCs have an identical epitaxial relationship: GaSb(111)parallel to Si(111), GaSb[1 (1) over bar0]parallel to Si[1 (1) over bar0]. (C) 2017 Published by Elsevier Ltd on behalf of Acta Materialia Inc. Cu and Ta are co-deformed at 673 K by the accumulative roll-bonding technique up to 8 passes. At an equivalent von Mises strain (epsilon(vm)) of similar to 2, the deformation is bifurcated into shear accommodated by the 'soft' Cu, and plane strain by the 'hard-but-ductile' Ta. This is attributed to transitions occurring collectively at epsilon(vm) similar to 2 in crystallographic texture, partitioning of recovery and recrystallization, and the nature of interfaces, as elucidated by orientation relations, Hall-Petch behavior and the manifestation of instabilities at higher strain. (C) Published by Elsevier Ltd on behalf of Acta Materialia Inc. In this study, the hot compressive deformation behavior of two newly developed Ni-Co based superalloys with different stacking fault energies (SFE) was investigated. Interestingly, nanograins (NGs) can be produced in superalloys deformed at high temperature and relatively low strain rate. 
With decreasing SFE and strain rate, the formation of NGs is promoted markedly, leading to a negative strain rate sensitivity of the compressive stress in these superalloys. The NGs, generated through the fragmentation of subgrains by microtwins, may be a candidate route for dynamically strengthening superalloys at high temperature. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Within this transmission electron microscopy study, we explore the atomic structure of Fe-Pd thin films with and without a supporting single-crystalline (001) MgO substrate. We observe a structural phase transition that depends strongly on spatially confined stresses. While edge dislocations form around the film-substrate interface to minimize stresses due to the lattice misfit between MgO and Fe-Pd, favoring the face-centered cubic phase, upper sample regions show twinning structures with the face- and body-centered tetragonal phases, respectively. In freestanding films, bct and bcc phases are observed. Compared to unprocessed lift-off films, the hierarchical structure was no longer present, owing to changed stress distributions. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The effects of electrochemical hydrogen charging on the stress relaxation and strain-aging of < 111 >- and < 001 >-oriented single crystals of Hadfield steel were studied under tension at room temperature. An orientation dependence of stress relaxation and strain-aging-assisted yield-drop phenomena was observed for hydrogen-free < 111 >- and < 001 >-oriented single crystals. Hydrogen charging for up to five hours increased the stress relaxation rate for slip-associated deformation in < 001 >-oriented single crystals and decreased it for twinning-assisted deformation in < 111 >-oriented specimens. The present results demonstrate that different mechanisms dominate the interaction of hydrogen with dislocations, stacking faults and twins. 
(C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Columnar-grained Cu-Al-Mn shape memory alloy (SMA) exhibits excellent superelastic fatigue properties at high strain amplitude, and its functional fatigue life reaches above 10(3) cycles at tensile strains of 4%-10%. With increasing loading-unloading cycle number, the transformation stress and superelastic recovery rate decrease. The decay coefficient of the recovery rate is about 1.4. Fatigue cycling for Cu-Al-Mn SMAs can be divided into four stages according to the significant changes in superelasticity: a plateau stage, an early rapid-attenuation stage, a late rapid-attenuation stage, and a functional-incapacitation stage. The columnar-grained Cu-Al-Mn SMA shows a larger range of stable superelasticity than other polycrystalline counterparts. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. A rapid solidification and spark plasma sintering technique was applied to fabricate Bi-Sb-Te bulk thermoelectric materials. Although the grain boundaries in the sintered samples were primarily large-angle boundaries, a large fraction of small-angle boundaries was found at a high sintering temperature. As the sintering temperature increased, the Seebeck coefficient was almost unchanged while the electrical resistivity gradually decreased. The thermal conductivity increased gently up to 420 degrees C, and then abruptly. As a result, the largest ZT value of 1.1 was achieved for the sample sintered at 400 degrees C, an increase of 44.7% compared with that prepared from mechanically alloyed powders. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The fracture behavior of the unpoled 0.94(Na1/2Bi1/2)TiO3-0.06BaTiO(3) relaxor ferroelectric was investigated. 
Previous studies indicated that a metastable ferroelectric long-range order can be induced by mechanical stresses, which could lead to a crack-tip process zone and increasing crack resistance during crack growth. Crack propagation in compact tension samples yielded a constant crack resistance as a function of crack length. This is consistent with ex situ x-ray diffraction experiments, in which a remanent induced process zone on the fracture surface could not be detected. We suggest that the very high transformation stress, determined via macroscopic stress-strain measurements, is responsible for the absence of toughening. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The use of high-strength Al-Zn-Mg-Cu alloys in automotive components is restricted by their low formability. Warm forming in the under-aged state improves formability, but induces a strain-dependent microstructure evolution due to dynamic precipitation. We present a new methodology to quantitatively evaluate the strain dependence of dynamic precipitation which is applicable at strain rates relevant to forming. The plastic strain is varied using a tapered tensile sample and the precipitation state is measured in terms of size and volume fraction using spatially resolved small-angle X-ray scattering, showing that its evolution during straining depends on the imposed strain level and rate. Crown Copyright (C) 2017 Published by Elsevier Ltd on behalf of Acta Materialia Inc. All rights reserved. The formation of martensite (epsilon and alpha') in metastable austenitic Fe-18Cr-(10-11.5)Ni alloys was investigated in situ during cooling. High-energy X-rays were used to study the bulk of the alloys. Both grain-averaged and single-grain data were acquired. Stacking faults played an important role in the formation of alpha', with an indistinguishable difference in the martensite start temperature. 
The single-grain data indicated that stacking faults appear as precursors to alpha'. An analogy can be made with deformation-induced martensitic transformation, where the generation of nucleation sites would significantly lower the driving force required to overcome the energy barrier in low stacking fault energy Fe-Cr-Ni alloys. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. A new recrystallization mechanism characterized by the formation of a coherent gamma shell around a primary gamma' particle has been reported in superalloys [Charpagne et al., J. Alloy Compd. 688 (Part B) (2016) 685-694]. In the present work, the thermodynamics of nucleation by this mechanism are investigated, considering the stored deformation energy, interfacial energy, and elastic misfit. It is demonstrated that under realistic conditions, growth of any nucleus formed on a primary gamma' particle is thermodynamically favorable regardless of size. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. The kinetics of the isothermal product formed below the M-s temperature in 0.32C-1.78Mn-0.64Si-1.75Al-1.20Co (wt%) steel have been analyzed. It has been shown that the diffusion-controlled growth rate of pearlite calculated using the Zener-Hillert model is too slow to explain the observed kinetics at such a low temperature. The isothermal product therefore cannot be identified as the eutectoid phases. However, an existing model of carbide precipitation from supersaturated ferrite correctly predicts the precipitation observed in the bainitic ferrite plates. The results indicate that bainitic ferrite is supersaturated with carbon when it first forms, therefore indicating a displacive mechanism of transformation. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Improving the efficiency of gas turbine engines requires the development of new materials capable of operating at higher temperatures and stresses. 
Here, we report on a new polycrystalline nickel-base superalloy that has exceptional strength and thermal stability. These properties have been achieved through a four-element composition that can form both gamma prime and gamma double prime precipitates in comparable volume fractions, creating an unusual dual-superlattice microstructure. Alloying studies have shown that further property improvements can be achieved, and that with development such alloys may be suitable for future engine applications. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. Zirconia-based ceramics are normally sintered at high temperatures to obtain dense compacts that can deliver good mechanical and/or electrical performance. We recently developed a cold sintering process to achieve dense ceramics at remarkably low temperatures, or improved densification after a second-step conventional sintering. Here we briefly review the current progress on its relevance to zirconia ceramics as exemplified by yttria-doped zirconia. We contrast the processing of compositions for a structural ceramic and an electroceramic. Together with the enhanced densification, the processing-structure-property relationship is outlined. The cold sintering technique can provide cost-effective and energy-saving processing for the fabrication of zirconia-based materials. (C) 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Well-shaped mesoporous silica-based nanotubes with different framework compositions, such as organic groups (ethylene, phenylene) and carbon/silica hybrids, could be controllably synthesized with inner diameters of less than 10 nm and surface areas of about 400-900 m(2) g(-1). Through an impregnation-reduction process, palladium (Pd) nanoparticles were uniformly dispersed in the channels of these different nanotubes, as confirmed by electron microscopy analysis. 
The catalytic performance of these Pd-nanoparticle-loaded silica-based nanotubes was evaluated by the aerobic oxidation of benzyl alcohol and the enantioselective hydrogenation of alpha,beta-unsaturated carboxylic acids, respectively. Owing to their one-dimensional nanotube structures and tunable hydrophilic/hydrophobic properties, the organosilica-nanotube-supported Pd nanoparticles afforded >99% conversion of benzyl alcohol and 89% selectivity toward benzaldehyde within 2 h. Importantly, carbon/silica hybrid nanotubes may have a positive effect on the reaction, giving about 95% selectivity toward benzaldehyde. The catalysts could be reused without an obvious decrease in either conversion or selectivity. Furthermore, the silica-based-nanotube-supported Pd nanoparticles were also efficient for the asymmetric hydrogenation of alpha,beta-unsaturated carboxylic acids, showing a highest conversion of 99% with 48% enantioselectivity. (C) 2017 Elsevier Inc. All rights reserved. Transition metal oxides containing different metal cations, also called mixed metal oxides (MMOs), have demonstrated improved electrochemical activities in comparison with single metal oxides (SMOs, containing a single metal cation). In this study, for the first time, we have synthesized hollow (Co0.62Fe1.38)FeO4/NiCo2O4 nanoboxes by a simple and cost-effective chemical precipitation method and investigated their lithium storage properties. The uniqueness of this composite material is its hollow nanostructure with a very thin porous shell, which has rarely been reported previously. The observed surface area of the nanoboxes is 21.8 m(2) g(-1) with an average pore size of 4 nm. As a result, the (Co0.62Fe1.38)FeO4/NiCo2O4 nanoboxes manifest high reversible capacities of around 835.5 and 676.2 mAh g(-1) over 350 cycles at current densities of 200 and 500 mA g(-1), respectively. 
The nanoscale dimensions and hollow structure not only benefit electron and Li-ion transport, but also provide a large electrode-electrolyte contact area. Furthermore, the high reversible capacity of the (Co0.62Fe1.38)FeO4/NiCo2O4 nanobox electrodes is most likely attributable to the synergistic electrochemical activity of both phases, (Co0.62Fe1.38)FeO4 and NiCo2O4. Hence, based on its high reversible capacity as well as outstanding rate performance, the (Co0.62Fe1.38)FeO4/NiCo2O4 nanobox electrode is a promising candidate for commercial application as an alternative lithium-ion battery anode material. (C) 2017 Elsevier Inc. All rights reserved. In this study, the isomerization of styrene oxide to phenyl acetaldehyde was investigated over a series of TS-1 catalysts with different crystal sizes and post-treatment methods under a solvent-free gas-phase atmosphere. The physicochemical properties of the samples were characterized by a combination of N-2 adsorption, XRD, NH3-TPD, UV-vis, FT-IR and SEM. Characterization of the catalysts and investigation of their catalytic performance indicated that nano-sized TS-1 exhibited better anti-coking ability and phenyl acetaldehyde selectivity than micro-sized TS-1. Additionally, TPAOH treatment led to the development of considerable mesoporosity without significant destruction of the intrinsic zeolite properties. The results highlighted that the well-developed hierarchical pore systems in TS-I-O could reduce the diffusion path length and enhance transport of phenyl acetaldehyde out of the zeolite crystals, thus markedly improving catalytic stability and selectivity. However, upon NaOH treatment, the micropore structures were irreversibly destroyed, accompanied by amorphization of the zeolite crystals. (C) 2017 Elsevier Inc. All rights reserved. 
Herein, a new antitumor-active polyoxometalate, (TBA)(4)H-3[GeW9V3O40], has been introduced as an inorganic drug with an enormous influence on brain cancer cells, and its outstanding results against U87 cells have been described. Post-functionalization of this polyoxometalate produces a drug delivery vehicle which comprises both inorganic and organic drugs. This system consists of mesoporous silica nanoparticles as the organic drug carrier, along with redox-responsive disulfide bonds for drug release inside the cell. Moreover, a fluorescent dye has been attached to the polyoxometalate that allows tracking of the drug in the cell environment. This dual-functionalized polyoxometalate was utilized in a designed multidrug delivery system, performed well against U87 cells, and knocked down nearly 70% of these cancer cells in 48 h. Due to its unique properties, this multidrug delivery vehicle has the potential to be developed and used for various applications in cancer therapy. (C) 2017 Elsevier Inc. All rights reserved. Hierarchically structured ZSM-5 zeolites (HSZ) were synthesized by a mesoporogen-free procedure and subsequently modified with varied amounts of phosphorus (1-3 wt%) through impregnation with phosphoric acid solution. Materials characterization using various techniques showed that the hierarchical structures of HSZ were well preserved after phosphorus modification; more interestingly, their hydrothermal stability was improved significantly and the main textural properties remained almost unchanged even after hydrothermal treatment at 750 degrees C for 4 h in 100% steam. The strong acid sites of HSZ were found to be gradually eliminated by the phosphorus-induced dealumination of tetrahedral framework aluminum (TFAL); however, the weak acid sites remained almost intact. 
In the 1-butene cracking reactions, benefitting from the auxiliary mesopores and phosphorus modification, P-modified HSZ showed remarkably improved selectivity (similar to 52%) and yield (similar to 43%) of propylene as well as superior anti-deactivation ability. All these properties make P-modified HSZ a promising catalyst for industrial application. (C) 2017 Elsevier Inc. All rights reserved. This work evaluates two different zeolite-based humidity sensors. In the first case, interdigital capacitive sensors (IDC-S) were fabricated on the surface of Al2O3 ceramic substrates, using electrode gaps of 20 mu m, and were coated with films of LTA-type (Linde Type A) zeolite with a Si/Al ratio of 1.28. Complex impedance spectroscopy (IS) was used to measure the sensor response, which was related to the change in capacitance of the interdigital electrodes and, in turn, to the electrical properties of the zeolites. The zeolite-based sensors were characterized in terms of the effects of changes in humidity and temperature on the sensor response. The results showed that this sensor provided detectable capacitance changes at very low water contents (down to 300 ppmv of H2O in N-2), at temperatures ranging from 25 to 100 degrees C, and was therefore suitable for moisture trace measurements. In the second part of the work, a humidity sensor based on ZSM-5 (Zeolite Socony Mobil - 5) zeolite was evaluated. Interdigital capacitive sensors were fabricated on the surface of Al2O3 ceramic substrates, with electrode gaps of 20 mu m, and were coated with films of ZSM-5. The results showed that the sensor was capable of good performance (detection limit of similar to 7.32% RH) and was suitable for use under a broader range of environmental conditions (similar to 39% RH - 96% RH), compared to sensors based on other materials such as polyimide (detection limit of 20% RH) and TiO2 (detection limits from 10% to 30% RH). (C) 2017 Elsevier Inc. All rights reserved. 
A liquid vapor deposition (LVD) method, in which vapor from a liquid silicon precursor, cyclopentasilane (CPS), is utilized, allows homogeneous incorporation of silicon into the mesopores of monodispersed starburst carbon spheres (MSCS). The amount of silicon incorporated can be precisely controlled simply by changing the amount of CPS. MSCS consist of carbon nanorods, and their surfaces are shown to be coated with silicon. The Si/MSCS composite was tested as an anode for lithium-ion batteries (LIBs). It is found to retain a capacity of more than 2000 mAh/g after 100 charging cycles when an appropriate amount of silicon is incorporated into the MSCS. (C) 2017 Elsevier Inc. All rights reserved. For the first time, a systematic, consistent domain exploration has been conducted on mono- and bicationic exchanged X and Y faujasites for xylene separation using a combinatorial approach. In total, a large, diverse library of 68 faujasites exchanged with alkali cations (Na+, K+, Cs+) and alkaline earth cations (Ca2+, Ba2+) was prepared and tested under three distinct mixture conditions of relevance to the separation process. From the measurements of the 68 x 3 breakthrough curves, we calculated separation performances for the three xylene isomers (p-, m-, o-X), EB, and PDEB. The set of performance properties generated for each adsorbent was analyzed by statistical methods, enabling sorting into different classes of selective adsorbents. A rational mapping of exchanged faujasites for xylene separation was thus performed. (C) 2017 Elsevier Inc. All rights reserved. In this study, the chemical composition and the structural and physical properties of expanded perlite (EP) were determined by particle size, apparent density, SEM-EDS and XRD measurements. The electrorheological (ER) properties of EP particles dispersed in silicone oil (SO) were fully investigated as a novel dry-based ER fluid. 
Thus, the ER response of the EP/SO dispersions was revealed as a function of electric field strength (E), volume fraction, shear rate, shear stress, frequency and temperature. As a result, the EP/SO ER fluid was observed to be sensitive to the external E, exhibiting typical shear-thinning non-Newtonian viscoelastic behavior. The correlation between the yield stress (tau(y)) and E deviated from the polarization model (m < 2). Further, the antisedimentation stabilities of the mesoporous EP particles in the SO medium were found to be well suited to potential ER applications at various temperatures. (C) 2017 Elsevier Inc. All rights reserved. A facile and sustainable approach for the preparation of multiple-shelled hollow periodic mesoporous organosilica (PMO) nanospheres with adjustable shell thickness has been established by a dual-template method without any extra additives. The triblock copolymer Pluronic P123 and a mixed cationic/anionic surfactant (CTAB/SDS) were employed as dual templates, and 1,4-bis(triethoxysilyl)benzene (BTEB) was used as the organosilica precursor. In the synthesis, single and multiple-shelled hollow nanospheres were synthesized at different CO2 pressures. Moreover, the shell thickness of the hollow PMO nanospheres could be tuned simply by adjusting the CO2 pressure. TEM, SAXS, N-2 adsorption-desorption, solid-state NMR, and FTIR were employed to characterize the structure and composition of the prepared PMOs. Owing to their unique multiple-shelled structure, the obtained PMO nanospheres were used as a drug carrier to demonstrate the high loading and controlled release of the anti-cancer drug doxorubicin. The versatility of this method was demonstrated by the preparation of hollow PMO nanospheres from other surfactants. (C) 2017 Elsevier Inc. All rights reserved. 
Size-controllable monodispersed carbon@silica core-shell microspheres and hollow silica microspheres were prepared in a simple homemade T-type mixer by polymerization of furfuryl alcohol (FA) and hydrolysis of TEOS in H2SO4 water-phase microdroplets to obtain polyfurfuryl alcohol (PFA)@silica microspheres, followed by carbonization and calcination. The FA and TEOS diffuse into the water phase from an oil phase. The flow rates of the oil and water phases were 4 and 2 ml h(-1), respectively. It was found that the concentration of FA has a more significant effect on the diameter of the carbon@silica core-shell microspheres than that of TEOS, due to the template effect of the PFA core. However, the diameter of the hollow silica microspheres was influenced more significantly by the concentration of TEOS. The obtained core-shell microspheres and hollow silica microspheres have large surface areas of 555 and 769 m(2) g(-1), respectively. The hollow silica microspheres have both microporous and mesoporous structure, and the percentage of mesopore volume was as high as 89%. In addition, based on these results, a rational formation process for the carbon@silica core-shell microspheres and hollow silica microspheres was proposed. (C) 2017 Elsevier Inc. All rights reserved. A novel heterojunction of mesoporous titanosilicate/graphitic carbon nitride (TSCN) inorganic-organic hybrid composite has been developed. The synthesized mesoporous TSCN nanocomposites were characterized by various analytical techniques for their structural and chemical properties. Finely distributed porous titanosilicate was observed on the surface of the g-C3N4 by FE-SEM and TEM analysis. More significantly, the TSCN nanocomposite exhibited enhanced photocatalytic activity in the degradation of Rhodamine B (RhB) under sunlight irradiation. The optimum photocatalytic activity, of TSCN10 at 10 wt% g-C3N4 under visible light, is almost 5- and 3-fold higher than that of pure titanosilicate (TS) and pure g-C3N4 (CN), respectively. 
The synthesized photocatalysts are highly stable even after five successive experimental runs. The improved photocatalytic performance of the TSCN10 hybrid nanocomposite photocatalyst under visible light irradiation was due to its high surface area and a synergistic effect. Therefore, the TSCN10 hybrid photocatalyst is a promising material for energy conversion and environmental remediation. (C) 2017 Elsevier Inc. All rights reserved. Understanding the mechanism of enzyme immobilization in designed porous matrices is an important issue in developing biosensors with high performance. Mesoporous carbon ceramic materials with conductivity and appropriate textural characteristics are promising candidates in this area. In this work, carbon ceramic materials were synthesized using the sol-gel method, with the experimental conditions planned to obtain materials with different pore sizes, from 7 to 21 nm in diameter. The influence of pore size on the biomacromolecule immobilization capacity was studied using the enzyme glucose oxidase as a probe. The influence of the textural characteristics of the material on the amount of enzyme immobilized, as well as on its performance as a biosensor, was studied. On the surface of the matrix with the largest pore size, it was possible to immobilize the largest amount of enzyme, resulting in a better electrochemical response. With this simple material, composed only of silica, graphite and enzyme, and improved in terms of the amount of immobilized enzyme through enlargement of the matrix pore size, it was possible to prepare an electrode to be applied as a biosensor for glucose determination. This electrode presents good reproducibility, sensitivities of 0.33 and 4.44 mu A mM(-1) cm(-2) and detection limits of 0.93 and 0.26 mmol L-1, in argon and oxygen atmospheres, respectively. Additionally, it can be easily reused by simply polishing its surface. (C) 2017 Elsevier Inc. All rights reserved. 
Nanocrystallite self-assembled hierarchical ZSM-5 zeolite microspheres (NSHZ) were prepared by a simple hydrothermal synthesis procedure in the presence of 3-glycidoxypropyltrimethoxysilane (KH-560). Moreover, HZSM-5 zeolite (commercial ZSM-5) and MSHZ zeolite (mesoporous HZSM-5 synthesized without the addition of KH-560) were also introduced as reference samples. The textural and acid properties of all fresh catalysts (HZSM-5, MSHZ, NSHZ) were characterized using XRD, SEM, TEM, ICP, N-2 adsorption-desorption, NH3-TPD, pyridine adsorption IR spectroscopy (Py-IR) and FT-IR techniques. The results showed that the uniform NSHZ zeolite microspheres possessed higher crystallinity, smaller crystal size, and higher BET surface area and pore volume. At the same time, the NSHZ zeolite also had more strong acid sites and a proper B/L ratio (the ratio of the amount of Bronsted acid sites to that of Lewis acid sites). Benefiting from these merits, the NSHZ zeolite exhibited a longer catalytic lifetime and higher selectivity toward light aromatics (benzene (B), toluene (T) and xylene (X)). Desorption measurements with isooctane showed that the NSHZ zeolite had superior diffusion performance, which could effectively promote the fast removal of heavier molecules. In addition, TG analysis of all used catalysts confirmed that the NSHZ zeolite had a higher coke capacity and a lower average rate of coke formation. (C) 2017 Elsevier Inc. All rights reserved. A simple synthetic method was developed to produce holey graphene with in-plane nanopores by a fast thermal expansion of graphene oxide (GO) in air and further thermal reduction in N-2 flow at 900 degrees C. The as-synthesized holey graphene nanosheets (HGN) show a meso-macroporous structure and a higher surface area than graphene oxide chemically reduced using hydrazine hydrate (CRGO). Catalysts of Pt nanoparticles supported on HGN-900 (Pt/HGN-900) were further prepared through in-situ chemical co-reduction and applied to the electro-oxidation of methanol. 
The electrocatalytic performance of the catalysts was investigated by cyclic voltammetry (CV) and chronoamperometry (CA). The results indicate that the catalytic activity of Pt/HGN-10min-900 (377.5 mA mg(pt)(-1)) is about 1.83 and 2.77 times higher than that of the Pt/CRGO-900 (206.1 mA mg(pt)(-1)) and Pt/XC-72 (136.2 mA mg(pt)(-1)) catalysts in 0.5 mol L-1 H2SO4 and 1.0 mol L-1 CH3OH, and the Pt/HGN-900 catalysts show a higher stable current density compared to the Pt/XC-72 and Pt/CRGO-900 catalysts. (C) 2017 Elsevier Inc. All rights reserved. In this study, bilayer mixed matrix membranes based on polyether block amide containing ZIF-8 metal-organic nanoparticles, as particles dispersed within the polymer matrix, were synthesized to separate carbon dioxide from methane. To prevent a drastic reduction in nanocomposite membrane selectivity at high loading, the particles were modified with APTMS and APTES. The modified nanoparticles were identified and examined using XRD, BET, DLS, and FTIR. Permeability tests were then performed with carbon dioxide and methane. A 40 mu m thick PES membrane was produced as the base, and the PEBA/ZIF-8 MMM, with a thickness of about 4 mu m, as the thin, selectively permeable layer. The APTES-modified ZIF-8 nanoparticles increased the forces between the particle surface and the polymer chains, leading to increased permeability without any significant change in selectivity. At a loading of 40 wt%, the permeability of carbon dioxide significantly improved to 6.7 x 10(-8) mol m(-2) s(-1) Pa(-1) and the selectivity remained about 16. (C) 2017 Elsevier Inc. All rights reserved. The influences of pore confinement and topology on ethylene dimerization over a series of zeolites, namely HMOR, HZSM-5, HBEA and HMCM-22, have been systematically studied by density functional theory including dispersion interactions (DFT-D). Both the stepwise and the concerted mechanisms are considered. 
Compared to the corresponding 8T models, the calculated activation energies are considerably reduced in the large zeolite models due to the pore confinement effect. Moreover, the manner of ethylene dimerization is found to be related to the pore structure of the zeolite catalyst. In the stepwise mechanism, HMCM-22 shows better activity, as suggested by the calculated activation energies. In the concerted mechanism, the energy barriers in HBEA and HMCM-22 are much lower than those in HMOR and HZSM-5, indicating better catalytic performance. By comparing the two mechanisms within the same zeolite, we suggest that the concerted mechanism is preferable in the large-pore zeolites HBEA, HMCM-22 and HMOR, while the two mechanisms compete with each other in HZSM-5 with its medium-sized pores. (C) 2017 Elsevier Inc. All rights reserved. Sn-zeolites are important solid Lewis acid catalysts with wide application to the conversion of various biomass-derived carbohydrates. As the catalytic center, the framework Sn of Sn-zeolites can catalyze glucose isomerization to fructose and epimerization to mannose. In practical use, the main obstacle to the application of Sn-zeolites is their lengthy crystallization. In the present work, we developed a rapid synthesis route to Sn-zeolites by incorporating Sn into the germanosilicate framework of BEC zeolite via a direct hydrothermal procedure. The synthesis time required for Sn-BEC is tenfold shorter than that for traditional Sn-zeolites such as Sn-Beta. The locations of the framework Sn atoms of Sn-BEC were investigated by F-19 MAS NMR and computational modeling, which indicate that the framework Sn sites of Sn-BEC adopt a uniquely homogeneous distribution at the T-1 sites. Sn-BEC exhibits high reaction activity and single isomerization selectivity in glucose conversion in methanol, while Sn-Beta shows both isomerization and epimerization selectivity. 
The single isomerization selectivity of Sn-BEC suggests the presence of a single type of catalytic center, which is probably caused by the homogeneous distribution of the framework Sn sites. (C) 2017 Elsevier Inc. All rights reserved. A well-known drawback of sol-gel materials is their tendency to crack because of the high capillary pressure sustained during drying. We have pioneered a facile and low-cost route to obtain monolithic xerogels from a silica precursor and a surfactant, mixed under ultrasonic agitation. This route presents a clear interest for practical application at industrial scale. In this paper, a model to explain the formation of silica monoliths in the presence of the surfactant is presented. It is demonstrated that a stable microemulsion of water in the silica oligomer medium is produced through the combined effect of the surfactant, producing inverse micelles, and ultrasonic agitation. The proposed model suggests that the water is encapsulated in the surfactant micelles, which act as nanoreactors, producing primary silica particles. The growth of these silica seeds continues outside the micelles until the formation of the constituent particles of the xerogel. Next, the particles pack, and mesopores are produced from the interparticle spaces. This mesoporosity prevents xerogel cracking because it reduces the capillary pressure during gel drying. An in-depth investigation of the structure of the xerogels revealed that they are effectively composed of silica nanoparticles of nearly uniform size that could match the size of the surfactant inverse micelles. Finally, it was demonstrated that the surfactant and water contents have a significant effect on the final structure of the xerogels. An increase in surfactant content produces a reduction in particle size, whereas an increase in water produces the opposite effect. (C) 2017 Elsevier Inc. All rights reserved. 
Three anthraquinone functionalized zeolite imidazolate framework-67 (ZIF-67) hybrids, EAQ@ZIF-67-6 (ZE-6), EAQ@ZIF-67-10 (ZE-10) and TBAQ@ZIF-67-10 (ZT-10) (EAQ = 2-ethylanthraquinone and TBAQ = 2-tert-butylanthraquinone), have been synthesized by a one-pot strategy. The successful encapsulation of the guests in the hybrids has been characterized and confirmed by PXRD, SEM, FT-IR, UV-vis and N-2 adsorption. Loading amounts of 11.87 wt% EAQ for ZE-6, 23.11 wt% EAQ for ZE-10, and 23.92 wt% TBAQ for ZT-10 were determined by UV-vis absorption spectroscopy. Detailed electrochemical studies were performed using cyclic voltammetry at hybrid-modified glassy carbon electrodes (GCE) and carbon paste electrodes (CPE). The cyclic voltammogram of each hybrid exhibits three pairs of separated redox peaks owing to the electrochemical properties of both the AQs and ZIF-67. Moreover, the ZE-10 functionalized GCE and CPE have a detection limit of 0.05 mM for H2O2 reduction. These hybrids represent the first attempt to embed electrochemically active molecules into a porous conductive framework. (C) 2017 Elsevier Inc. All rights reserved. Porous carbon can be widely applied in energy applications ranging from lithium-ion batteries to supercapacitors. Here we propose a novel hierarchical porous graphitic carbon (HPGC) monolith to replace conventional activated carbon and achieve excellent electrochemical performance. In this monolith structure, made from the cross-linking of lignosulfonate without any templating agent, the nanoscale core is composed of porous amorphous carbon, while the microscale shell is formed by graphitic carbon generated within the mesoporous walls of the HPGC. As evidenced by cyclic voltammetry, the abundant porosity and high surface area not only offer sufficient reaction sites to store electrical charge physically, but also accelerate the penetration of the liquid electrolyte into the electrode and the transport of ions to the reaction sites. 
Of special interest is the fact that the HPGC monolith maintains its mesoporosity without an extrinsic templating agent. (C) 2017 Elsevier Inc. All rights reserved. In the present study, a series of manganese (Mn) containing 3D cubic mesoporous KIT-6 materials with different Si/Mn ratios (100, 50, 25 and 10), having the Ia3d space group, were synthesized for the first time by a facile one-pot hydrothermal process. The synthesized materials were methodically characterized by various analytical techniques including XRD, N-2 sorption, HR-TEM, diffuse reflectance UV-Vis (DRS-UV-Vis), EPR and FT-IR. The well-ordered mesoporous nature of the Mn-incorporated KIT-6 materials is confirmed by XRD, the N-2 adsorption-desorption isotherms and HR-TEM analysis. The surface area, pore volume and pore size of Mn-KIT-6 with different Si/Mn ratios are in the ranges of 451-786 m(2)/g, 0.57-0.9 cm(3)/g and 4.5-4.9 nm, respectively. The incorporation of Mn3+ in the framework and the fine dispersion of Mn2+ species over the silica matrix of Mn-KIT-6 are confirmed by the UV-DRS, FT-IR and EPR results. Further, the presence of a larger amount of extra-framework Mn2+ species in Mn-KIT-6(10) is indicated by a high-amplitude EPR signal and by the contraction of the H1 hysteresis loop in the BET isotherm. The catalytic behavior of the Mn-KIT-6 material was evaluated for the epoxidation of styrene with various oxidants, of which tert-butyl hydroperoxide (TBHP) promotes the desired formation of styrene oxide under mild liquid-phase conditions. Mn-KIT-6 was shown to be highly stable and active, which mainly depends on the framework-substituted Mn3+ sites. (C) 2017 Elsevier Inc. All rights reserved. A hybrid material [VO(mpamp)]-Y (where mpamp = 2,2'-((1E,1'E)-((methylenebis-(4,1-phenylene))bis(azanylylidene))bis(methanylylidene))diphenol)) has been synthesized by the fabrication of the metal complex inside the nanopores of zeolite-Y via the flexible ligand approach. 
Elemental analysis, ICP-OES, BET, FT-IR, UV-Vis, TGA/DTG, SEM, and XRD confirm the immobilization and good distribution of the metal complex, as well as the preservation of the zeolite-Y framework after formation of the guest complex inside the cavities of the host. The hybrid material [VO(mpamp)]-Y has been tested as a heterogeneous catalyst in the Baeyer-Villiger (B-V) oxidation of cyclic ketones and demonstrates good catalytic performance, providing high TONs. Reaction parameters were also examined to maximize the catalytic performance of the oxidation. A probable mechanism for the cyclopentanone oxidation has been proposed with the help of in-situ IR spectroscopy. Moreover, over four consecutive reaction cycles, the catalyst recovery was in excess of 81%, while the substrate conversion decreased only slightly. (C) 2017 Elsevier Inc. All rights reserved. Previous clinical studies have demonstrated the antifungal effectiveness of Ageratina pichinchensis extracts when topically administered to patients with dermatomycosis. The objective of this study was to evaluate the effectiveness and tolerability of a 7% standardized extract of A. pichinchensis (intravaginal) in patients with vulvovaginal candidiasis. The extract was standardized in terms of its encecalin content and administered for 6 days to patients with Candida albicans-associated vulvovaginitis. The positive control group was treated with Clotrimazole (100 mg). On day 7 of the study, a partial evaluation was carried out; it demonstrated that 94.1% of the patients treated with Clotrimazole and 100% of those treated with the A. pichinchensis extract reported a decrease or absence of signs and symptoms consistent with vulvovaginal candidiasis. In the final evaluation, 2 weeks after concluding administration, 86.6% of the patients in the control group and 81.2% (p=0.65) of those treated with the A. pichinchensis extract demonstrated therapeutic success. 
Statistical analysis showed no significant differences between the two treatment groups. From the results obtained, it is possible to conclude that the standardized extract of A. pichinchensis, administered intravaginally, showed therapeutic and mycological effectiveness, as well as tolerability, in patients with vulvovaginal candidiasis, with no statistical differences from the patients treated with Clotrimazole. Copyright (c) 2017 John Wiley & Sons, Ltd. Although auraptene, a prenyloxy coumarin from Citrus species, is known to have anti-oxidant, anti-bacterial, anti-inflammatory, and anti-tumor activities, the underlying anti-tumor mechanism of auraptene in prostate cancer is not fully understood to date. Thus, in the present study, we investigated the anti-tumor mechanism of auraptene mainly in PC3 and DU145 prostate cancer cells, because auraptene suppressed the viability of androgen-independent PC3 and DU145 prostate cancer cells better than that of androgen-sensitive LNCaP cells. Auraptene also notably increased the sub-G1 cell population and the number of terminal deoxynucleotidyl transferase dUTP nick end labeling-positive cells, both features of apoptosis, in the two prostate cancer cell lines compared with untreated controls. Consistently, auraptene cleaved poly(ADP-ribose) polymerase, activated caspase-9 and caspase-3, suppressed the expression of anti-apoptotic proteins, including Bcl-2 and myeloid cell leukemia 1 (Mcl-1), and activated the pro-apoptotic protein Bax in both prostate cancer cell lines. However, Mcl-1 overexpression reversed the apoptotic effects of auraptene on the sub-G1 population and on caspase-9/3 activation in both cell lines. Taken together, the results provide scientific evidence that auraptene induces apoptosis in PC3 and DU145 prostate cancer cells via Mcl-1-mediated activation of caspases, supporting it as a potent chemopreventive agent for prostate cancer prevention and treatment. Copyright (c) 2017 John Wiley & Sons, Ltd. 
Migraine is a common neurological disorder with a serious impact on quality of life. The aim of this study was to explore the effect of baicalin on nitroglycerin-induced migraine in rats. We carried out behavioral research within 2 h post-nitroglycerin injection, and blood samples were drawn for measurements of nitric oxide (NO), calcitonin gene-related peptide (CGRP), and endothelin (ET) levels. Immunohistochemistry was adopted to detect the activation of C-fos immunoreactive neurons in the periaqueductal gray. The number, area size, and integrated optical density of C-fos positive cells were measured using Image-Pro Plus. As a result, baicalin administration (0.22 mm/kg) alleviated the pain responses of migraine rats. It profoundly decreased NO and CGRP levels, increased ET levels, and rebuilt the NO/ET balance in migraine rats. In addition, baicalin pretreatment significantly reduced the number, stained area size, and integrated optical density of C-fos positive cells. In brief, this paper supports the possibility of baicalin as a potential migraine pharmacotherapy. Copyright (c) 2017 John Wiley & Sons, Ltd. The multidrug resistance (MDR) phenotype is considered a major cause of failure in cancer chemotherapy. The acquisition of MDR is usually mediated by the overexpression of drug efflux pumps such as P-glycoprotein. The development of compounds that mitigate the MDR phenotype by modulating the activity of these transport proteins is an important yet elusive goal. Here, we screened the saponification and enzymatic degradation products of the mucilage of Salvia hispanica seeds to discover compounds that modulate acquired resistance to chemotherapeutics in breast cancer cells. Preparative-scale recycling HPLC was used to purify the hydrolysis degradation products. All compounds were tested in eight different cancer cell lines and in Vero cells. 
All compounds were noncytotoxic at the concentrations tested against the drug-sensitive and multidrug-resistant cells (IC50 > 29.2 µM). For all the products, a moderate vinblastine-enhancing activity, from 4.55-fold to 6.82-fold, was observed, which could be significant from a therapeutic perspective. Copyright (c) 2017 John Wiley & Sons, Ltd. One of the Brazilian medicinal plants most cited in ethnopharmacological surveys for the treatment of ulcers and gastric diseases was evaluated for its efficacy and toxicity. Maytenus ilicifolia leaf extract (MIE) was administered acutely and chronically (180 days) to rats, mice, and dogs. The acute tests covered the antiulcer effect and toxicological trials (observational pharmacological screening, LD50, motor coordination, sleeping time and motor activity). The chronic tests were the following: weight gain/loss and behavioral parameters in rats and mice; estrus cycle, effects on fertility, and teratogenic studies in rats; and mutagenic features in mice, in addition to the Ames and micronucleus tests. The following parameters were assessed in dogs: weight gain/loss, general physical condition, water/food consumption, and anatomopathological examination of the organs subsequent to the 180-day treatment. The results showed clear antiulcer activity for MIE from 70 mg/kg and an absence of toxicological effects in the three animal species, even when given at high doses or over a long period. These results confirm the antiulcer property of MIE and its lack of toxicological effects in three animal species, in line with its current popular medicinal use. Copyright (c) 2017 John Wiley & Sons, Ltd. Maytenus ilicifolia is a plant widely used in South American folk medicine as an effective anti-dyspeptic agent, and the aim of this study was to evaluate its clinical and toxicological effects in healthy volunteers in order to establish its maximum safe dose. 
We selected 24 volunteers (12 women and 12 men) between 20 and 40 years of age and put them through clinical/laboratory screening and testing to ascertain their psychomotor functions (simple visual reaction, speed and accuracy, and finger tapping tests). M. ilicifolia tablets were administered in increasing weekly dosages, from an initial dose of 100 mg to a final dose of 2000 mg. The volunteers' clinical and biochemical profiles and psychomotor functions were evaluated weekly, and they also completed a questionnaire about any adverse reactions. All subjects completed the study without significant changes in the evaluated parameters. The most cited adverse reactions were xerostomia (dry mouth) (16.7%) and polyuria (20.8%), and these symptoms reversed without any intervention during the study. This clinical Phase I study showed that the administration of up to 2000 mg of the extract was well tolerated, with few changes in biochemical, hematological or psychomotor function parameters and no significant adverse reactions. Copyright (c) 2017 John Wiley & Sons, Ltd. Aloe-emodin (1,8-dihydroxy-3-hydroxymethyl-anthraquinone) is one of the primary active compounds in the total rhubarb anthraquinones isolated from traditional medicinal plants such as Rheum palmatum L. and Cassia occidentalis, which induce hepatotoxicity in rats. Thus, the aim of this study was to determine the potential cytotoxic effects and the underlying mechanism of aloe-emodin on the normal human liver cell line HL-7702. CCK-8 assays demonstrated that aloe-emodin decreased the viability of HL-7702 cells in a dose- and time-dependent manner. Aloe-emodin induced S and G2/M phase cell cycle arrest in HL-7702 cells. Apoptosis was further investigated by flow cytometry, and nuclear morphological changes were examined by DAPI staining. Moreover, aloe-emodin provoked the production of intracellular reactive oxygen species and the depolarization of the mitochondrial membrane potential (MMP). 
Further studies by western blot indicated that aloe-emodin dose-dependently up-regulated the levels of Fas, p53, p21, the Bax/Bcl-2 ratio, and cleaved caspase-3, -8, and -9, with subsequent cleavage of poly(ADP-ribose) polymerase (PARP). Taken together, these results suggest that aloe-emodin inhibits the proliferation of HL-7702 cells and induces cell cycle arrest and caspase-dependent apoptosis via both the Fas death pathway and the mitochondrial pathway by generating reactive oxygen species, indicating that aloe-emodin should be taken into account in the risk assessment of human exposure. Copyright (c) 2017 John Wiley & Sons, Ltd. Harpagophytum procumbens has a long history of use for the treatment of inflammatory diseases. Considering both the anti-inflammatory effects of H. procumbens in multiple tissues and the stability of harpagoside in artificial intestinal fluid, the aim of the present study was to explore the possible protective role of a microwave-assisted aqueous Harpagophytum extract (1-1000 µg/mL) on mouse myoblast C2C12 and human colorectal adenocarcinoma HCT116 cell lines, and on isolated rat colon specimens challenged with lipopolysaccharide (LPS), a validated ex vivo model of acute ulcerative colitis. In this context, we evaluated the effects on C2C12 and HCT116 viability, and on the LPS-induced production of serotonin (5-HT), tumor necrosis factor (TNF)-α, prostaglandin (PG)E-2 and 8-iso-prostaglandin (8-iso-PG)F-2α. The Harpagophytum extract was well tolerated by C2C12 cells, while it reduced HCT116 colon cancer cell viability. On the other hand, the Harpagophytum extract reduced H2O2-induced (1 mM) reactive oxygen species (ROS) production in both cell lines, and inhibited the LPS-induced colon production of PGE(2), 8-iso-PGF(2α), 5-HT and TNF-α. 
In conclusion, we demonstrated the efficacy of a microwave-assisted aqueous Harpagophytum extract in modulating the inflammatory, oxidative stress and immune responses in an experimental model of inflammatory bowel disease (IBD), thus suggesting a rational use of Harpagophytum in the management and prevention of ulcerative colitis in humans. Copyright (c) 2017 John Wiley & Sons, Ltd. The link prediction problem has received extensive attention in fields such as sociology, anthropology, information science, and computer science. In many practical applications, we only need to predict the potential links involving the vertices of interest, instead of predicting all of the links in a complex network. In this paper, we propose a fast similarity-based approach for predicting the links related to a given node. We construct a set of paths connected to the given node by random walks. The similarity score is computed within the small sub-graph formed by this path set, which significantly reduces the computation time. By choosing an appropriate number of sampled paths, we can restrict the error of the estimated similarities to within a given threshold. Our experimental results on a number of real networks indicate that the proposed algorithm obtains accurate results in less time than existing methods. Multicast routing improves the efficiency of a network by effectively utilizing the available network bandwidth. In multichannel multiradio wireless mesh networks, the channel allocation strategy plays a vital role along with multicast tree construction. However, the multicast routing problem in multichannel multiradio wireless mesh networks is proven to be NP-hard. In this paper, we propose a Quality of Service Channel Assignment and multicast Routing (Q-CAR) algorithm. The proposed algorithm jointly solves the channel assignment and multicast tree construction problems using intelligent computational methods. 
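The random-walk procedure in the link-prediction abstract above (sampling short walks from the node of interest and scoring candidates only within the induced local sub-graph) can be sketched roughly as follows. The visit-frequency scoring rule, walk length, and toy graph are illustrative assumptions, not the authors' exact algorithm:

```python
import random
from collections import Counter

def local_link_scores(adj, source, num_walks=200, walk_len=4, seed=0):
    """Estimate similarity of `source` to nearby nodes by sampling short
    random walks and counting visit frequencies in the induced sub-graph
    (an illustrative scoring rule, not the paper's exact one)."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(num_walks):
        node = source
        for _ in range(walk_len):
            nbrs = adj[node]
            if not nbrs:
                break
            node = rng.choice(nbrs)
            if node != source:
                visits[node] += 1
    total = sum(visits.values()) or 1
    # Score only candidate links (source, v) where v is not yet a neighbor.
    return {v: c / total for v, c in visits.items() if v not in adj[source]}

# Toy graph: 0-1, 0-2, 1-3, 2-3 -> node 3 is a likely future neighbor of 0.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
scores = local_link_scores(adj, 0)
```

Increasing the number of sampled walks stabilizes the frequency estimates, which is the intuition behind bounding the estimation error by choosing the number of sampled paths.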
We use a slightly modified differential evolution approach for assigning channels to links. We design a genetic algorithm based multicast tree construction strategy which determines a delay- and jitter-bounded low-cost multicast tree. Moreover, we define a multi-objective fitness function for the tree construction algorithm which optimizes interference as well as tree cost. Finally, we compare the performance of Q-CAR with the QoS Multicast Routing and Channel Assignment (QoS-MRCA) and intelligent Quality of service multicast routing and Channel Assignment (i-QCA) algorithms in simulated multichannel multiradio wireless mesh network environments. Our experimental results clearly show the outstanding performance of the proposed algorithm. To accurately predict the price of tungsten with an optimal neural network (NN) architecture and better generalization performance, and to address the poor generalization and overfitting of NN-based predictors, this paper presents a new hybrid constructive neural network method (HCNNM) that treats impacting values in the original data in the same manner as the jumping points of a function and repairs them. A series of theorems is proven showing that a function with m jumping discontinuity points (or impacting points) can be approximated by the simplest NNs and by constructive decay radial basis function (RBF) NNs, and that a function with m jumping discontinuity points can be constructively approximated by hybrid constructive NNs. The hybrid networks have an optimal architecture and generalize well. 
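The approximation claim above (a function with jump discontinuities approximated by a hybrid of simple step units and decaying RBF units) can be illustrated numerically. The basis placement, widths, and plain least-squares fit below are illustrative assumptions, not the paper's constructive procedure:

```python
import numpy as np

# Target: a smooth curve with one jump at x = 0 (a "jumping discontinuity point").
x = np.linspace(-1, 1, 201)
y = np.sin(2 * x) + (x >= 0) * 1.5

# Design matrix: one step unit at the (assumed known) jump, plus Gaussian RBFs.
centers = np.linspace(-1, 1, 9)
width = 0.25
rbf = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
step = (x >= 0).astype(float)[:, None]
design = np.hstack([step, rbf])

# Linear least squares gives the output-layer weights of the hybrid network.
weights, *_ = np.linalg.lstsq(design, y, rcond=None)
fit = design @ weights
rmse = float(np.sqrt(np.mean((fit - y) ** 2)))
```

Because the step unit absorbs the discontinuity exactly, the smooth RBF part only has to fit the continuous residual, which is why such hybrid networks can approximate jumpy data without oscillation near the jump.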
Additionally, a practical problem regarding tungsten prices from 1900 to 2014, which contains impacting points, is used to approximate the sample data set more accurately and to forecast future prices with the HCNNM. Performance measures such as the training time, testing RMSE and number of neurons are compared with those of traditional algorithms (BP, SVM, ELM and deep learning) through many numerical experiments that verify the superiority, correctness and validity of the theory. The soaring popularity of deep learning in a wide variety of fields, ranging from computer vision and speech recognition to self-driving vehicles, has sparked a flurry of research interest from both academia and industry. In this paper, we propose a deep learning approach to 3D shape retrieval using a multi-level feature learning paradigm. Low-level features are first extracted from a 3D shape using spectral graph wavelets. Then, mid-level features are generated via the bag-of-features model by employing locality-constrained linear coding as the feature coding method, in conjunction with the biharmonic distance and intrinsic spatial pyramid matching in a bid to effectively measure the spatial relationship between each pair of bag-of-features descriptors. Finally, high-level shape features are learned by applying a deep auto-encoder to the mid-level features. Extensive experiments on the SHREC-2014 and SHREC-2015 datasets demonstrate the much better performance of the proposed framework in comparison with state-of-the-art methods. The current study proposes a novel cognitive architecture for a computational model of the limbic system, inspired by human brain activity, which improves interactions between a humanoid robot and preschool children using joint attention during turn-taking gameplay. 
In human-robot interaction (HRI), this framework may be useful for ameliorating problems related to attracting and maintaining the attention levels of children suffering from attention deficit hyperactivity disorder (ADHD). In the proposed framework, computational models of the amygdala, hypothalamus, hippocampus, and basal ganglia are used to simulate a range of cognitive processes such as emotional responses, episodic memory formation, and the selection of appropriate behavioral responses. In the proposed limbic system model, we applied reinforcement and unsupervised learning-based adaptation processes to a dynamic neural field model, resulting in a system that was capable of observing and controlling the physical and cognitive processes of a humanoid robot. Several interaction scenarios were tested to evaluate the performance of the model. Finally, we compared the results of our methodology with a neural mass model. Link prediction addresses the problem of finding potential links that may form in the future. Existing state-of-the-art techniques exploit network topology for computing the probability of future link formation. We are interested in using graphical models for link prediction. Graphical models use higher-order topological information underlying a graph for computing the co-occurrence probability of the nodes pertaining to missing links. Time information associated with links plays a major role in future link formation. There have been a few measures, such as Time-score, Link-score and T_Flow, which utilize temporal information for link prediction. In this work, Time-score is innovatively incorporated into the graphical model framework, yielding a novel measure called Temporal Co-occurrence Probability (TCOP) for link prediction. The new measure is evaluated on four standard benchmark datasets: the DBLP, Condmat, HiePh-collab and HiePh-cite networks. 
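Temporal measures such as Time-score weight link evidence by recency before it enters a predictor such as TCOP. A generic, hedged sketch of recency-weighted common-neighbour scoring follows; the geometric decay form and the data layout are illustrative assumptions, not the published definitions of Time-score or TCOP:

```python
def temporal_cn_score(edges, u, v, t_now, decay=0.5):
    """Recency-weighted common-neighbour score for candidate link (u, v).
    `edges` maps a frozenset node pair to the timestamp of its latest
    interaction; older links contribute less via geometric decay."""
    nbrs = lambda a: {b for pair in edges for b in pair if a in pair and b != a}
    score = 0.0
    for w in nbrs(u) & nbrs(v):
        # Combined age of the two links forming the path u-w-v.
        age = (t_now - edges[frozenset((u, w))]) + (t_now - edges[frozenset((v, w))])
        score += decay ** age
    return score

edges = {
    frozenset((0, 2)): 2015, frozenset((1, 2)): 2016,  # recent path 0-2-1
    frozenset((0, 3)): 2010, frozenset((1, 3)): 2011,  # old path 0-3-1
}
recent = temporal_cn_score(edges, 0, 1, t_now=2017)
```

In this toy example the recent two-hop path through node 2 dominates the score, while the equally structured but older path through node 3 contributes almost nothing, which is the core idea behind temporal link-prediction measures.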
In the case of the DBLP network, TCOP improves AUROC by 12% over neighborhood-based measures and by 5% over existing temporal measures. Further, when combined in a supervised framework, TCOP gives 93% accuracy. In the case of the three other networks, TCOP achieves a significant improvement of 5% on average over existing temporal measures and of 9% on average over neighborhood-based measures. We suggest an extension of the link prediction problem, called long-term link prediction, and carry out a preliminary investigation; we find that TCOP proves effective for long-term link prediction as well. Twin support vector machine (TWSVM) is an efficient supervised learning algorithm proposed for classification problems. Motivated by its success, we propose Tree-based localized fuzzy twin support vector clustering (Tree-TWSVC). Tree-TWSVC is a novel clustering algorithm that builds the cluster model as a binary tree, where each node comprises the proposed TWSVM-based classifier, termed localized fuzzy TWSVM (LF-TWSVM). Tree-TWSVC has an efficient learning time, achieved thanks to the tree structure and a formulation that reduces to solving a series of systems of linear equations. Tree-TWSVC delivers good clustering accuracy because of its square loss function and its nearest-neighbour-graph-based initialization method. The proposed algorithm restricts the cluster hyperplane from extending indefinitely by using a cluster prototype, which further improves its accuracy. It can efficiently handle large datasets and outperforms other TWSVM-based clustering methods. In this work, we propose two implementations of Tree-TWSVC: Binary Tree-TWSVC and One-against-all Tree-TWSVC. To prove the efficacy of the proposed method, experiments are performed on a number of benchmark UCI datasets. We also demonstrate the application of Tree-TWSVC as an image segmentation tool. 
In this paper, we present a novel algorithm for efficiently mining high average-utility itemsets (HAUIs) from incremental databases, whose volumes can expand dynamically. Previous algorithms are inefficient in that they must scan a given database multiple times in order to generate candidate itemsets and determine valid itemsets level by level, because they follow the basic framework of an Apriori-like approach. This drawback can cause critical problems when processing incremental databases, because scanning a database becomes increasingly expensive as its size grows. In contrast, the algorithm proposed in this paper builds a compact tree structure maintaining all the necessary information, in order to avoid such excessive database scanning during the mining process. Previous algorithms also suffer from the generation of huge numbers of unnecessary candidate itemsets at each level, a consequence of the naive combination-based candidate generation of an Apriori-like approach, which generates candidate itemsets of length k+1 by simply joining itemsets of length k. Our algorithm, on the other hand, employs the pattern growth approach, which allows it to generate only the essential candidate itemsets. In order for our algorithm to preserve the compactness of its tree structure throughout the entire incremental mining process, a restructuring technique is exploited. In the performance evaluation, we show that our algorithm is faster and consumes less memory than its competitors. This paper examines the generalized intuitionistic fuzzy soft set (GIFSS) model, an intuitively straightforward extension of the intuitionistic fuzzy soft set (IFSS) model. 
This concept, which arises from IFSSs, is generalized by including a moderator's opinion regarding the validity of the information at hand, making it highly suitable for use in decision-making problems that involve uncertain, vague and/or unreliable data. In this paper, we introduce tools that measure the distance, similarity and degree of fuzziness of GIFSSs. An axiomatic definition of the distance measure is introduced and subsequently used to define the similarity measure and the intuitionistic entropy induced by this distance measure. Some of the algebraic properties of these measures are also verified. The well-known Hamming, normalized Hamming, Euclidean and normalized Euclidean distances are generalized to make them compatible with the concept of GIFSSs. Subsequently, some relations among these information measures are proposed and verified. These results indicate how the measures are related and how they can be deduced from one another. Finally, we demonstrate the application of the information measures between GIFSSs by applying them to a case study related to the moderation of school-based assessment components of students in externally accredited academic programs. State-of-the-art image classification models, generally including feature coding and pooling, have been widely adopted to generate discriminative and robust image representations. However, the coding schemes available in these models preserve only salient features, which results in information loss in the process of generating final image representations. To address this issue, we propose a novel spatial locality-preserving feature coding strategy which selects representative codebook atoms based on their density distribution, to retain the structure of the features more completely and make the representations more descriptive. 
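Before being generalized to GIFSSs, the distance measures mentioned in the fuzzy-soft-set abstract above are defined on ordinary intuitionistic fuzzy sets. A minimal sketch of the classical normalized Hamming distance over (membership, non-membership) pairs follows; the moderator component that the GIFSS versions add is omitted here, so this is the base form only:

```python
def if_hamming(A, B):
    """Normalized Hamming distance between two intuitionistic fuzzy sets,
    each given as a list of (membership, non-membership) pairs over the same
    universe. The hesitation degree 1 - mu - nu is included per element."""
    n = len(A)
    return sum(abs(ma - mb) + abs(na - nb) + abs(ha - hb)
               for (ma, na), (mb, nb) in zip(A, B)
               for (ha, hb) in [(1 - ma - na, 1 - mb - nb)]) / (2 * n)

A = [(0.6, 0.2), (0.3, 0.5)]
B = [(0.5, 0.3), (0.3, 0.5)]
d = if_hamming(A, B)
```

The Euclidean variants replace the absolute differences with squared differences under a square root; both reduce to the ordinary fuzzy-set distances when every non-membership degree equals one minus the membership degree.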
In the codebook learning stage, we propose an effective approximated K-means with cluster closures to initialize the codebook and independently adjust the center of each cluster of the dense regions. Afterwards, in the coding stage, we first define the concept of "density" to describe the spatial relationship among the code atoms and the features. Then, the responses of local features are adaptively encoded. Finally, in the pooling stage, a locality-preserving pooling strategy is utilized to aggregate the encoded response vectors into a statistical vector representing the whole image or all the regions of interest. We carry out image classification experiments on three commonly used benchmark datasets, including 15-Scene, Caltech-101, and Caltech-256. The experimental results demonstrate that, compared with state-of-the-art Bag-of-Words (BoW) based methods, our approach achieves the best classification accuracy on these benchmark datasets. Existing work on Automated Negotiations commonly assumes that the negotiators' utility functions have explicit closed-form expressions and can be calculated quickly. In many real-world applications, however, the calculation of utility can be a complex, time-consuming problem, and utility functions cannot always be expressed in terms of simple formulas. The game of Diplomacy forms an ideal test bed for research on Automated Negotiations in such domains where utility is hard to calculate. Unfortunately, developing a full Diplomacy player is a hard task, which requires more than just the implementation of a negotiation algorithm. The performance of such a player may depend heavily on the underlying strategy rather than just its negotiation skills. Therefore, we introduce a new Diplomacy-playing agent, called D-Brane, which has won the first international Computer Diplomacy Challenge. 
It is built in a modular fashion, decoupling its negotiation algorithm from its game-playing strategy, to allow future researchers to build their own negotiation algorithms on top of its strategic module. This will allow them to easily compare the performance of different negotiation algorithms. We show that D-Brane strongly outplays a number of previously developed Diplomacy players, even when it does not negotiate. Furthermore, we explain the negotiation algorithm applied by D-Brane and present a number of additional tools, bundled together in the new BANDANA framework, that will make the development of Diplomacy-playing agents easier. Several neurological disorders, such as epilepsy, can be diagnosed by electroencephalogram (EEG). Data mining supported by machine learning (ML) techniques can be used to find patterns in the data and to build classifiers. To make this possible, the data should be represented in an appropriate format, e.g. an attribute-value table, which can be built by feature extraction approaches such as the cross-correlation (CC) method, which uses one signal as a reference and correlates it with the other signals. However, the reference is commonly selected at random and, to the best of our knowledge, no studies have evaluated whether this choice can affect the performance of ML methods. This work therefore aims to verify whether the choice of an epileptic EEG segment as the reference can affect the performance of classifiers built from the data. In addition, a CC with artificial reference (CCAR) method is proposed in order to reduce the possible consequences of randomly selecting a signal as the reference. Two experimental evaluations were conducted on a set of 200 EEG segments to induce classifiers using ML algorithms such as J48, 1NN, naive Bayes, BP-MLP, and SMO. In the first study, each epileptic EEG segment was selected in turn as the reference for applying the CC and ML methods. 
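The CC feature-extraction step just described can be sketched as follows. The artificial reference is taken here as the element-wise mean of all segments, which is one plausible CCAR-style construction rather than the paper's exact definition, and the summary features kept per segment are likewise illustrative:

```python
import numpy as np

def cc_features(segments, reference):
    """Cross-correlate each EEG segment with the reference signal and keep
    simple summary features (peak correlation and its lag), suitable as
    columns of an attribute-value table for ML classifiers."""
    feats = []
    for seg in segments:
        cc = np.correlate(seg - seg.mean(), reference - reference.mean(), mode="full")
        peak = int(np.argmax(cc))
        lag = peak - (len(seg) - 1)  # offset of the best alignment
        feats.append((float(cc[peak]), lag))
    return feats

rng = np.random.default_rng(0)
segments = [rng.standard_normal(64) for _ in range(5)]
artificial_ref = np.mean(segments, axis=0)  # averaged "artificial" reference
features = cc_features(segments, artificial_ref)
```

Averaging removes the dependence on any single randomly chosen segment, which is the motivation the abstract gives for replacing a random epileptic segment with an artificial reference.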
This evaluation found an extremely significant difference, evidencing that the choice of an EEG segment as the reference can influence the performance of ML methods. In the second study, the CCAR method was performed; here, statistical tests showed unfavorable results only in comparisons involving the SMO classifier. Entity matching is the task of mapping the records in a database to their corresponding entities. It is a well-known problem in the fields of databases and artificial intelligence. In digital libraries such as DBLP, ArnetMiner, Google Scholar, Scopus, Web of Science, AllMusic, IMDB, etc., some of the attributes may evolve over time, i.e., they change their values at different instants of time. For example, the affiliation and email address of an author may change in bibliographic databases that maintain publication details of various authors, like DBLP and ArnetMiner. A taxpayer can change his or her address over time. Sometimes people change their surnames due to marriage. When a database contains records of these natures and the number of records grows beyond a limit, it becomes genuinely challenging to identify which records belong to which entity, owing to the lack of a proper key. In the current paper, the problem of automatically partitioning records is posed as an optimization problem. A genetic algorithm based automatic technique is then proposed to solve the entity matching problem. The proposed approach is able to automatically determine the number of partitions present in a bibliographic dataset. A comparative analysis with two existing systems, DBLP and ArnetMiner, over sixteen bibliographic datasets proves the efficacy of the proposed approach. The differential evolution (DE) algorithm has been shown to be a very simple and effective evolutionary algorithm. Recently, DE has been used successfully for numerical optimization. 
In this paper, first, based on the fitness value of each individual, the population is partitioned into three subpopulations of different sizes. A dynamic adjustment method is then used to change the sizes of the three subpopulations based on the previous success rates of the different mutation strategies. Second, inspired by "DE/current-to-pbest/1", three mutation strategies, "DE/current-to-cbest/1", "DE/current-to-rbest/1" and "DE/current-to-fbest/1", are proposed to take responsibility for either exploitation or exploration. Finally, a novel and effective parameter adaptation method is designed to automatically tune the parameters F and CR of the DE algorithm. To validate the effectiveness of the resulting algorithm, MSDE, it is tested on ten benchmark functions chosen from the literature. Compared with several evolutionary algorithms from the literature, MSDE performs better on most of the benchmark problems. Fuzzy spatiotemporal data models have been used to support spatial and temporal knowledge representation and reasoning in the presence of fuzziness. Meanwhile, XML is expected to become the next-generation standard language for exchanging data over the Internet, making it a growing trend to represent fuzzy spatiotemporal data in XML. However, fuzzy spatiotemporal XML documents may contain inconsistencies that violate predefined spatial and temporal constraints, causing data inconsistency problems. Although consistency problems in XML documents have been widely studied, existing work only considers general data, and the consistency of fuzzy spatiotemporal data remains an open issue. In this paper we put forward solutions to the problems of inconsistencies in fuzzy spatiotemporal XML documents. We also analyze the inconsistent states, namely discontinuity, overlap, or cycle of the temporal labels of some incoming edges.
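The overlap and cycle states named above can be illustrated with two toy checks: temporal labels treated as [start, end) intervals, and edges as a directed graph. This is illustrative only; the paper's checks operate on fuzzy spatiotemporal XML edge labels rather than these simplified structures.

```python
from itertools import combinations

def find_overlaps(intervals):
    """Flag pairs of temporal labels [start, end) that overlap,
    one of the inconsistent states (illustrative simplification)."""
    bad = []
    for (i, (s1, e1)), (j, (s2, e2)) in combinations(enumerate(intervals), 2):
        if s1 < e2 and s2 < e1:        # the intervals share time points
            bad.append((i, j))
    return bad

def has_cycle(edges, n):
    """Detect a cycle among n nodes joined by directed (temporal) edges,
    using iterative depth-first search with white/grey/black colouring."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    color = [0] * n                     # 0 = white, 1 = grey, 2 = black
    for start in range(n):
        if color[start]:
            continue
        stack = [(start, iter(adj[start]))]
        color[start] = 1
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if color[nxt] == 1:
                    return True         # back edge to a grey node: cycle
                if color[nxt] == 0:
                    color[nxt] = 1
                    stack.append((nxt, iter(adj[nxt])))
                    break
            else:
                color[node] = 2
                stack.pop()
    return False
```

A fixing pass of the kind described in the abstract would then adjust or remove the flagged labels until neither check reports an inconsistency.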
Then, we put forward corresponding approaches to checking and fixing fuzzy spatiotemporal XML documents according to these inconsistent states. Finally, the experimental results show that our proposed algorithms can effectively fix inconsistencies in fuzzy spatiotemporal XML documents. Tannins, polyphenols found in medicinal plants, are divided into the two groups of hydrolysable and condensed tannins, including gallotannins, ellagitannins, and (-)-epigallocatechin-3-gallate (EGCG). Potent anticancer activities have been observed for tannins (especially EGCG) through multiple mechanisms, such as apoptosis, cell cycle arrest, and inhibition of invasion and metastasis. Furthermore, the combined effects of tannins and anticancer drugs are discussed in this review, including chemoprotective, chemosensitizing, and antagonizing effects accompanying their anticancer activity. However, the applications of tannins such as EGCG, gallic acid, and ellagic acid have been hindered by their poor liposolubility, low bioavailability, off-taste, and short half-life in the human body. To tackle these obstacles, novel drug delivery systems, such as gelatin nanoparticles, micelles, nanogold, and liposomes, have been employed to deliver tannins with the aim of improving their applicability. In this review, the chemical characteristics, anticancer properties, and drug delivery systems of tannins are discussed in an attempt to provide a systematic reference to promote the development of tannins as anticancer agents. (C) 2016 Wiley Periodicals, Inc. The chemical investigation of marine mollusks has led to the isolation of a wide variety of bioactive metabolites, which evolved in marine organisms as favorable adaptations to survive in different environments. Most of them are derived from food sources, but they can also be biosynthesized de novo by the mollusks themselves, or produced by symbionts.
Consequently, the isolated compounds cannot be strictly considered "chemotaxonomic markers" for the different molluscan species. However, the chemical investigation of this phylum has provided many compounds of interest as potential anticancer drugs, which assume particular importance in light of the growing literature on cancer biology and chemotherapy. The current review highlights the diversity of chemical structures, mechanisms of action, and, most importantly, the potential of mollusk-derived metabolites as anticancer agents, including those biosynthesized by mollusks and those of dietary origin. After the discussion of dolastatins and kahalalides, compounds previously studied in clinical trials, the review covers potentially promising anticancer agents, which are grouped by structural type and include terpenes, steroids, peptides, polyketides and nitrogen-containing compounds. The "promise" of a mollusk-derived natural product as an anticancer agent is evaluated on the basis of its ability to target biological characteristics of cancer cells responsible for poor treatment outcomes. These characteristics include high antiproliferative potency against cancer cells in vitro, preferential inhibition of the proliferation of cancer cells over normal ones, a mechanism of action via nonapoptotic signaling pathways, circumvention of the multidrug resistance phenotype, and high activity in vivo, among others. The review also includes sections on the targeted delivery of mollusk-derived anticancer agents and solutions to their procurement in quantity. (C) 2016 The Authors Medicinal Research Reviews Published by Wiley Periodicals, Inc. The efficacy of nonsteroidal anti-inflammatory drugs (NSAIDs) against inflammation, pain, and fever continues to support their worldwide use in the treatment of painful conditions and chronic inflammatory diseases.
However, long-term therapy with NSAIDs was soon associated with high incidences of adverse events in the gastrointestinal tract. Therefore, the search for novel drugs with improved safety began, with COX-2 selective inhibitors (coxibs) promptly developed and commercialized. Nevertheless, the excitement quickly turned to disappointment when several coxibs were withdrawn from the market due to cardiovascular toxicity. These events have once again triggered the emergence of different strategies to overcome NSAID toxicity. Here, an integrative review is provided to address the breakthroughs of two main approaches: (i) the association of NSAIDs with protective mediators and (ii) the design of novel compounds to target downstream and/or multiple enzymes of the arachidonic acid cascade. To date, just one phosphatidylcholine-associated NSAID has been approved for commercialization. Nevertheless, the preclinical and clinical data obtained so far indicate that both strategies may improve the safety of nonsteroidal anti-inflammatory therapy. (C) 2016 Wiley Periodicals, Inc. Polyglutamine (polyQ) diseases are a group of neurodegenerative disorders caused by the expansion of cytosine-adenine-guanine (CAG) trinucleotide repeats in the coding region of specific genes. This leads to the production of pathogenic proteins containing critically expanded tracts of glutamines. Although polyQ diseases are individually rare, the fact that these nine diseases are irreversibly progressive over 10 to 30 years, severely impairing and ultimately fatal, and usually require full-time patient support by a caregiver for long periods, makes their economic and social impact quite significant. This has led several researchers worldwide to investigate the pathogenic mechanism(s) of and therapeutic strategies for polyQ diseases.
Although research in the field has grown notably in the last decades, we are still far from having an effective treatment to offer patients, and the decision of which compounds should be translated to the clinic may be very challenging. In this review, we provide a comprehensive and critical overview of the most recent drug discovery efforts in the field of polyQ diseases, including the most relevant findings emerging from two different types of approaches: hypothesis-based candidate-molecule testing and hypothesis-free unbiased drug screenings. We summarize and reflect on the preclinical studies as well as all the clinical trials performed to date, aiming to provide a useful framework for increasingly successful future drug discovery and development efforts. (C) 2016 Wiley Periodicals, Inc. Vitiligo is the most frequent human pigmentary disorder, characterized by progressive autoimmune destruction of mature epidermal melanocytes. Of the current treatments offering partial and temporary relief, ultraviolet (UV) light is the most effective, coordinating an intricate network of keratinocyte and melanocyte factors that control numerous cellular and molecular signaling pathways. This UV-activated process is a classic example of regenerative medicine, inducing functional melanocyte stem cell populations in the hair follicle to divide, migrate, and differentiate into mature melanocytes that regenerate the epidermis through a complex process involving melanocytes and other cell lineages in the skin. Using an in-depth correlative analysis of multiple experimental and clinical data sets, we generated a modern molecular research platform that can be used as a working model for further research on vitiligo repigmentation.
Our analysis emphasizes the active participation of defined molecular pathways that regulate the balance between the stemness and differentiation states of melanocytes and keratinocytes: p53 and its downstream effectors, controlling melanogenesis; Wnt/beta-catenin, with proliferative, migratory, and differentiation roles in different pigmentation systems; integrins, cadherins, tetraspanins, and metalloproteinases, with promigratory effects on melanocytes; and TGF-beta and its effector PAX3, which control differentiation. Our long-term goal is to design pharmacological compounds that can specifically activate melanocyte precursors in the hair follicle in order to obtain faster, better, and more durable repigmentation. (C) 2016 Wiley Periodicals, Inc. Transient receptor potential vanilloid 1 (TRPV1) is an ion channel expressed on sensory neurons that triggers an influx of cations. TRPV1 receptors function as homotetramers responsive to heat, proinflammatory substances, lipoxygenase products, resiniferatoxin, endocannabinoids, protons, and peptide toxins. Its phosphorylation increases sensitivity to both chemical and thermal stimuli, while desensitization involves a calcium-dependent mechanism resulting in receptor dephosphorylation. TRPV1 functions as a sensor of noxious stimuli and may represent a target to avoid pain and injury. TRPV1 activation has been associated with chronic inflammatory pain and peripheral neuropathy. Its expression is also detected in nonneuronal areas such as the bladder, lungs, and cochlea, where TRPV1 activation is responsible for the development of pathologies such as cystitis, asthma, and hearing loss. This review offers a comprehensive overview of the TRPV1 receptor in the pathophysiology of chronic pain, epilepsy, cough, bladder disorders, diabetes, obesity, and hearing loss, highlighting how drug development targeting this channel could have clinical therapeutic potential.
Furthermore, it summarizes the advances of medicinal chemistry research leading to the identification of highly selective TRPV1 antagonists, together with an analysis of their structure-activity relationships (SARs), focusing on new strategies to target this channel. (C) 2016 Wiley Periodicals, Inc. An unusually thermostable G-quadruplex is formed by a sequence fragment of a naturally occurring ribozyme, the human CPEB3 ribozyme. Strong evidence is provided for the formation of a uniquely stable intermolecular G-quadruplex structure consisting of five tetrad layers, by using CD spectroscopy, UV melting curves, 2D NMR spectroscopy, and gel shift analysis. The cationic porphyrin TMPyP4 destabilizes the complex. The synthesis of potent inhibitors of GH93 arabinanases, as well as of a chromogenic substrate to measure GH93 arabinanase activity, is described. Insight into the reasons behind the potency of the inhibitors was gained through X-ray crystallographic analysis of the arabinanase Arb93A from Fusarium graminearum. These compounds lay a foundation for future inhibitor development as well as for the use of the chromogenic substrate in biochemical studies of GH93 arabinanases. More than a hundred distinct modified nucleosides have been identified in RNA, but little is known about their distribution across different organisms, their dynamic nature, and their response to cellular and environmental stress. Mass-spectrometry-based methods have been at the forefront of identifying and quantifying modified nucleosides. However, they often require synthetic reference standards, which do not exist for many modified nucleosides, and this impedes their analysis.
Here we use a metabolic labelling approach to achieve rapid generation of bio-isotopologues of the complete Caenorhabditis elegans transcriptome and its modifications, and use them as reference standards to characterise the RNA modification profile of this multicellular organism through an untargeted liquid-chromatography tandem high-resolution mass spectrometry (LC-HRMS) approach. We furthermore show that several of these RNA modifications have a dynamic response to environmental stress and that, in particular, changes in the tRNA wobble base modification 5-methoxycarbonylmethyl-2-thiouridine (mcm(5)s(2)U) lead to codon-biased gene-expression changes in starved animals. Many organisms contain head-to-head isoprenoid synthases; we investigated three such enzymes from the pathogens Neisseria meningitidis, Neisseria gonorrhoeae, and Enterococcus hirae. The E. hirae enzyme was found to produce dehydrosqualene, and we solved an inhibitor-bound structure that revealed a fold similar to that of CrtM from Staphylococcus aureus. In contrast, the homologous proteins from Neisseria spp. carried out only the first half of the reaction, yielding presqualene diphosphate (PSPP). Based on product analyses, bioinformatics, and mutagenesis, we concluded that the Neisseria proteins are HpnDs (PSPP synthases). The differences in chemical reactivity relative to CrtM were due, at least in part, to the presence of a PSPP-stabilizing arginine in the HpnDs, decreasing the rate of dehydrosqualene biosynthesis. These results show that not only S. aureus but also other bacterial pathogens contain head-to-head prenyl synthases, although their biological functions remain to be elucidated. A one-pot, two-step biocatalytic platform for the regiospecific C-methylation and C-ethylation of aromatic substrates is described.
The tandem process utilises SalL (Salinispora tropica) for in situ synthesis of S-adenosyl-L-methionine (SAM), followed by alkylation of aromatic substrates by the C-methyltransferase NovO (Streptomyces spheroides). The application of this methodology is demonstrated by the regiospecific labelling of aromatic substrates through the transfer of methyl, ethyl and isotopically labelled (CH3)-C-13, (CD3)-C-13 and CD3 groups from their corresponding SAM analogues formed in situ. The design of nanomaterials that are capable of specific and sensitive biomolecular recognition is an on-going challenge in the chemical and biochemical sciences. A number of sophisticated artificial systems have been designed to specifically recognize a variety of targets. However, methods based on natural biomolecular detection systems using antibodies are often superior. Besides greater affinity and selectivity, antibodies can be easily coupled to enzymatic systems that act as signal amplifiers, thus permitting impressively low detection limits. The possibility of translating this concept to artificial recognition systems remains limited due to design incompatibilities. Here we describe a synthetic nanomaterial capable of specific biomolecular detection by means of an internal biocatalytic colorimetric detection and amplification system. The design of this nanomaterial relies on the ability to accurately grow hybrid protein-organosilica layers at the surface of silica nanoparticles. The method allows for label-free detection and quantification of targets at picomolar concentrations. Catalytic promiscuity can facilitate the evolution of enzyme functions: a multifunctional catalyst may act as a springboard for efficient functional adaptation. We test the effect of single mutations on multiple activities in two groups of promiscuous AP superfamily members to probe this hypothesis.
We quantify the effect of site-saturating mutagenesis of an analogous, nucleophile-flanking residue in two superfamily members: an arylsulfatase (AS) and a phosphonate monoester hydrolase (PMH). Statistical analysis suggests that no single physicochemical characteristic alone explains the mutational effects. Instead, these effects appear to be dominated by their structural context. Likewise, the effect of changing the catalytic nucleophile itself is not reaction-type-specific. Mapping the fitness landscapes of four activities onto the possible variation of a chosen sequence position revealed tremendous potential for respecialization of AP superfamily members through single-point mutations, highlighting catalytic promiscuity as a powerful predictor of adaptive potential. Protein-based pharmaceuticals represent the fastest growing group of drugs in development in the pharmaceutical industry. One of the major challenges in the discovery, development, and distribution of biopharmaceuticals is the assessment of changes in their higher-order structure due to chemical modification. Here, we investigated the interactions of three different biochemical probes (Fabs) generated to detect conformational changes in a therapeutic IgG1 antibody (mAbX) by local hydrogen-deuterium exchange mass spectrometry (HDX-MS). We show that two of the probes target the Fc part of the antibody, whereas the third probe binds to the hinge region. Through HDX-ETD, we could distinguish specific binding patterns of the Fc-binding probes on mAbX at the amino-acid level. Preliminary surface plasmon resonance (SPR) experiments showed that these domain-selective Fab probes are sensitive to conformational changes in distinct regions of a full-length therapeutic antibody upon oxidation. Amine transaminase (ATA), which catalyzes the stereoselective amination of prochiral ketones, is an attractive alternative to transition metal catalysis.
As wild-type ATAs do not accept sterically hindered ketones, efforts to widen the substrate scope to more challenging targets are of general interest. We recently designed ATAs that accept aromatic, and thus planar, bulky amines, with a sequence-based motif that supports the identification of novel enzymes. However, these variants were not active against 2,2-dimethyl-1-phenyl-propan-1-one, which carries a bulky tert-butyl substituent adjacent to the carbonyl function. Here, we report a solution for this type of substrate. The evolved ATAs perform asymmetric synthesis of the respective (R)-amine with high conversions, using either alanine or isopropylamine as the amine donor. Within the endoplasmic reticulum, immature glycoproteins are sorted into secretion and degradation pathways through the sequential trimming of mannose residues from Man(9)GlcNAc(2) to Man(5)GlcNAc(2) by the combined actions of assorted alpha-1,2-mannosidases. It has been speculated that specific glycoforms encode signals for secretion and degradation. However, it is unclear whether the specific signal glycoforms are produced by random mannosidase action or are produced regioselectively, in a sequenced manner, by specific alpha-1,2-mannosidases. Here, we report the identification of a set of selective mannosidase inhibitors and the development of conditions for their use that enable production of distinct pools of Man(8)GlcNAc(2) isomers from a structurally defined synthetic Man(9)GlcNAc(2) substrate in an endoplasmic reticulum fraction. Glycan processing analysis with these inhibitors provides the first biochemical evidence for the selective production of the signal glycoforms contributing to traffic control in glycoprotein quality control. Lectin A (LecA) from Pseudomonas aeruginosa is an established virulence factor. Glycoclusters that target LecA and are able to compete with the human glycoconjugates present on epithelial cells are promising candidates to treat P. aeruginosa infection.
A family of 32 glycodendrimers of generations 0 and 1, based on a bifurcated bis-galactoside motif, has been designed to interact with LecA. The influences both of the central multivalent core and of the aglycon of these glycodendrimers on their affinity toward LecA have been evaluated by use of a microarray technique, both qualitatively for rapid screening of the binding properties and quantitatively (K-d). This has led to high-affinity LecA ligands with K-d values in the low nanomolar range (K-d = 22 nM for the best one). Proper chromosome separation in both mitosis and meiosis depends on the correct connection between the kinetochores of chromosomes and spindle microtubules. Kinetochore dysfunction can lead to unequal distribution of chromosomes during cell division and result in aneuploidy; thus, kinetochores are critical for the faithful segregation of chromosomes. Centromere protein A (CENP-A) is an important component of the inner kinetochore plate. Multiple studies in mitosis have found that deficiencies in CENP-A can result in structural and functional changes of kinetochores, leading to abnormal chromosome segregation, aneuploidy and apoptosis in cells. Here we report the expression and function of CENP-A during mouse oocyte meiosis. Our study found that microinjection of a CENP-A blocking antibody resulted in errors of homologous chromosome segregation and caused aneuploidy in eggs. Thus, our findings provide evidence that CENP-A is critical for faithful chromosome segregation during mammalian oocyte meiosis. Esophageal cancer is a common malignant tumor whose pathogenesis and prognostic factors are not fully understood. This study aimed to discover gene clusters that have similar functions and can be used to predict the prognosis of esophageal cancer.
The matched microarray and RNA sequencing data of 185 patients with esophageal cancer were downloaded from The Cancer Genome Atlas (TCGA), and gene co-expression networks were built without distinguishing between squamous carcinoma and adenocarcinoma. The results showed that 12 modules were associated with one or more survival measures, such as recurrence status, recurrence time, vital status or vital time. Furthermore, survival analysis showed that 5 of the 12 modules were related to progression-free survival (PFS) or overall survival (OS). As the most important module, the midnight blue module with 82 genes was related to PFS, in addition to patient age, tumor grade, primary treatment success, duration of smoking and tumor histological type. Gene ontology enrichment analysis revealed that "glycoprotein binding" was the top enriched function of midnight blue module genes. Additionally, the blue module was the only gene cluster related to OS. Platelet activating factor receptor (PTAFR) and feline Gardner-Rasheed (FGR) were the top hub genes in both the modeling datasets and the STRING protein interaction database. In conclusion, our study provides novel insights into prognosis-associated genes and screens out candidate biomarkers for esophageal cancer. This study investigated the changes in autonomic nerve function and hemodynamics in patients with vasovagal syncope (VVS) during head-up tilt-table testing (HUT). HUT was performed in 68 patients with unexplained syncope, with 18 healthy subjects serving as the control group. According to whether bradycardia, hypotension or both took place at the onset of syncope, the patients were divided into three subgroups: vasodepressor syncope (VD), cardioinhibitory syncope (CI) and mixed syncope (MX). Heart rate, blood pressure, heart rate variability (HRV), and deceleration capacity (DC) were continuously analyzed during HUT.
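Frequency-domain HRV indices of the kind analyzed here (LFn, HFn, LF/HF) are conventionally computed from the RR-interval series. The following is a generic sketch assuming scipy and the standard band limits (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz); the study's exact processing pipeline is not specified.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def lf_hf(rr_ms, fs=4.0):
    """Normalised LF/HF indices from an RR-interval series in ms.
    Generic frequency-domain HRV sketch (assumed bands: LF 0.04-0.15 Hz,
    HF 0.15-0.40 Hz); not the study's exact pipeline."""
    t = np.cumsum(rr_ms) / 1000.0                      # beat times, seconds
    # resample the irregular tachogram to a uniform grid for the PSD
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr_ms, kind="linear")(grid)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(grid)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])            # LF power
    hf = np.trapz(pxx[hf_band], f[hf_band])            # HF power
    total = lf + hf
    return {"LFn": lf / total, "HFn": hf / total, "LF/HF": lf / hf}
```

LFn is commonly read as a (partly) sympathetic marker and HFn as a vagal one, which is the interpretive frame the syncope findings below rely on.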
For all the subjects with positive responses, the normalized low frequency (LFn) and the LF/HF ratio markedly decreased, whereas the normalized high frequency (HFn) increased when syncope occurred. The syncopal period also produced a more significant increase in the power of DC in the positive groups. These changes were more exaggerated compared with controls. By the frequency-domain analysis of HRV, all the patients showed a sympathetic surge with withdrawal of vagal activity before syncope, and sympathetic inhibition with vagal predominance at the syncopal stage. With the measurements of DC, a decreased vagal tone before the syncope stage and vagal activation at the syncopal stage were observed. The vagal tone was higher in subjects showing cardioinhibitory responses at the syncopal stage. DC may provide an alternative method for understanding the autonomic profile of VVS patients. Studies have shown that the use of cyclic adenosine monophosphate (cAMP) substitutes or intracellular cAMP activators increases intracellular cAMP levels, causing anti-inflammatory effects. This study investigated the effects of pretreatment with meglumine cyclic adenylate (MCA), a compound of meglumine and cAMP, on systemic inflammation induced by lipopolysaccharide (LPS) in rats. Eighteen adult male Sprague-Dawley rats were randomly divided into 3 groups (n=6 each): a control group (NS group), an LPS group, and an LPS with MCA pretreatment group (MCA group). Systemic inflammation was induced with LPS 10 mg/kg injected via the femoral vein in the LPS and MCA groups. In the MCA group, MCA 2 mg/kg was injected via the femoral vein 20 min before LPS injection; an equal volume of normal saline was given in the NS and LPS groups at the same time. Three hours after LPS injection, blood samples were taken from the abdominal aorta for determination of plasma concentrations of TNF-alpha, IL-1, IL-6, IL-10 and cAMP by ELISA and of NF-kappa B p65 expression by Western blotting.
The experimental results showed that both inflammatory and anti-inflammatory indices were increased in the LPS group compared with the NS group; inflammatory indices declined and anti-inflammatory indices increased in the MCA group relative to the LPS group. Our study suggests that MCA pretreatment may attenuate LPS-induced systemic inflammation. This study determined the prevalence of diabetic peripheral neuropathy (DPN) and subclinical DPN (sDPN) in patients with type 2 diabetes mellitus (T2DM) using nerve conduction studies (NCS) as a diagnostic tool. We also investigated the factors associated with the development of sDPN and compared factors between sDPN and confirmed DPN (cDPN). This cross-sectional study involved 240 T2DM patients who were consecutively admitted to the endocrinology wards of Wuhan Union Hospital from January to December 2014. Data on medical history and physical and laboratory examinations were collected. DPN was diagnosed using NCS. One-way ANOVA with least significant difference (LSD) analysis or chi-square tests were used to compare parameters among DPN-free, sDPN and cDPN patients. Independent factors associated with sDPN were determined using logistic regression. The results showed that 50.8% of the participants had DPN, and among them, 17.1% had sDPN. sDPN showed significant independent associations with age, height, HbA1c, and the presence of atherosclerosis and diabetic retinopathy. Patients with DPN differed significantly from those without DPN with respect to age, duration of disease (DOD), HbA1c, and the presence of atherosclerosis, diabetic retinopathy, nephropathy and hypertension. Patients with cDPN, relative to those with sDPN, had a significantly longer DOD and a higher prevalence of peripheral artery disease (PAD) and coronary artery disease (CAD).
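The logistic-regression step used to identify independent correlates of sDPN can be sketched on synthetic stand-in data. All variable names, coefficients, and the generative model below are illustrative assumptions, not study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the clinical dataset (illustrative only):
# predictors age, height, HbA1c; binary outcome: subclinical DPN.
rng = np.random.default_rng(0)
n = 240
age = rng.normal(60, 10, n)
height = rng.normal(165, 8, n)
hba1c = rng.normal(8.0, 1.5, n)
# assumed generative model: risk rises with age, height and HbA1c
logit = 0.05 * (age - 60) + 0.04 * (height - 165) + 0.5 * (hba1c - 8.0) - 0.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, height, hba1c])
model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = np.exp(model.coef_[0])   # OR per unit increase of each factor
```

An odds ratio above 1 for a predictor indicates that it independently raises the odds of sDPN, which is the form in which such clinical associations are usually reported.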
Our study suggests that a significant number of T2DM patients are affected by sDPN, and the development of this condition is associated with advanced age, tall stature, poor glycaemic control, and the presence of diabetic retinopathy and atherosclerosis. On the other hand, patients with cDPN tend to have a longer DOD and are more likely to suffer from PAD and CAD. The sialyl Lewis X (SLe(x)) antigen encoded by the FUT7 gene is the ligand of endothelial selectin (E-selectin). The binding of the SLe(x) antigen to E-selectin represents an important route for malignant tumor metastasis. In the present study, the effect of an SLe(x)-binding DNA aptamer on the adhesion and metastasis of hepatocellular carcinoma HepG2 cells in vitro was investigated. Reverse transcription-polymerase chain reaction (RT-PCR) and immunofluorescence staining were conducted to detect the expression of FUT7 at the transcriptional and translational levels. SLe(x) expression in HepG2 cells treated with different concentrations of the SLe(x)-binding DNA aptamer was detected by flow cytometry. In addition, the adhesion, migration, and invasion of HepG2 cells were measured by a cell adhesion assay and Transwell migration and invasion assays. The results showed that FUT7 expression was up-regulated at both the mRNA and protein levels in HepG2 cells. The SLe(x)-binding DNA aptamer significantly decreased the expression of SLe(x) in HepG2 cells. The cell adhesion assay revealed that the SLe(x)-binding DNA aptamer effectively inhibited the interaction between E-selectin and SLe(x) in HepG2 cells. Additionally, the SLe(x)-binding DNA aptamer at 20 nmol/L was found to have an effect similar to that of the monoclonal antibody CSLEX-1. The Transwell migration and invasion assays revealed that the number of cells penetrating to the underside of the Transwell membrane was significantly lower for cells treated with 5, 10, or 20 nmol/L SLe(x)-binding DNA aptamer than in the negative control group (P < 0.01).
Our study demonstrated that the SLe(x)-binding DNA aptamer could significantly inhibit the in vitro adhesion, migration, and invasion of HepG2 cells, suggesting that the SLe(x)-binding DNA aptamer may be used as a potential molecular targeted drug against metastatic hepatocellular carcinoma. The role of hydrogen sulfide (H2S) in portal hypertension (PH)-induced esophagus-gastric junction vascular lesions in rabbits was investigated. Rabbit PH models were established, and the animals were randomly divided into the following groups: normal, PH, PH+sodium hydrosulfide (PH+S), and PH+propargylglycine (PH+PPG). The plasma H2S levels, apoptosis of esophageal-gastric junction vascular smooth muscle cells, and the expression of nuclear transcription factor-kappa B (NF-kappa B), p-AKT, I kappa B alpha and Bcl-2 were detected. Cystathionine gamma-lyase (CSE) in the junction vascular tissue was also measured. The results showed that the plasma H2S levels and the CSE expression levels differed significantly among the groups (P < 0.05). As compared with the PH group, plasma H2S levels were obviously decreased (11.9 +/- 4.2 vs. 20.6 +/- 4.5, P < 0.05), CSE expression levels in the junction vascular tissue were notably reduced (1.7 +/- 0.6 vs. 2.8 +/- 0.8, P < 0.05), the apoptosis rate of vascular smooth muscle cells per unit area was significantly decreased (0.10 +/- 0.15 vs. 0.24 +/- 0.07, P < 0.05), and the expression levels of p-AKT and NF-kappa B were significantly decreased (2.31 +/- 0.33 vs. 3.04 +/- 0.38, P < 0.05; 0.33 +/- 0.17 vs. 0.51 +/- 0.23, P < 0.05), whereas I kappa B alpha and Bcl-2 expression increased obviously (5.57 +/- 0.17 vs. 3.67 +/- 0.13, P < 0.05; 0.79 +/- 0.29 vs. 0.44 +/- 0.36, P < 0.05) in the PH+PPG group. As compared with the PH group, H2S levels were notably increased (32.7 +/- 7.3 vs. 20.6 +/- 4.5, P < 0.05), and the CSE levels in the junction vascular tissue were significantly increased (6.3 +/- 0.7 vs.
2.8 +/- 0.8, P < 0.05), the apoptosis rate of vascular smooth muscle cells per unit area was significantly increased (0.35 +/- 0.14 vs. 0.24 +/- 0.07, P < 0.05), and the expression levels of p-AKT and NF-kappa B were significantly increased (4.29 +/- 0.49 vs. 3.04 +/- 0.38, P < 0.05; 0.77 +/- 0.27 vs. 0.51 +/- 0.23, P < 0.05), whereas I kappa B alpha and Bcl-2 expression decreased significantly (3.23 +/- 0.24 vs. 3.67 +/- 0.13, P < 0.05; 0.31 +/- 0.23 vs. 0.48 +/- 0.34, P < 0.05) in the PH+S group. It is concluded that esophagus-gastric junction vascular lesions occur under PH, with decreased apoptosis of smooth muscle cells. H2S can activate NF-kappa B through the p-AKT pathway, leading to the down-regulation of Bcl-2 and eventually stimulating apoptosis of vascular smooth muscle cells, thereby easing PH. The H2S/CSE system may play an important role in the remission of PH via the AKT-NF-kappa B pathway. Although quality assessment is gaining increasing attention, there is still no consensus on how to define and grade postoperative complications. The absence of a definition and of a widely accepted ranking system to classify surgical complications has hampered proper interpretation of surgical outcomes. This study aimed to define and test a simple and reproducible classification of complications following hepatectomy based on two therapy-oriented severity grading systems: the Clavien-Dindo classification of surgical complications and the Accordion severity grading of postoperative complications. The two classifications were tested in a cohort of 2008 patients who underwent elective liver surgery at our institution between January 1986 and December 2005. Univariate and multivariate analyses were performed to link the respective complications with perioperative parameters, length of hospital stay and quality of life. A total of 1716 (85.46%) patients did not develop any complication, while 292 (14.54%) patients had at least one complication.
According to the Clavien-Dindo classification of surgical complications, grade I complications occurred in 150 patients (7.47%), grade II in 47 patients (2.34%), grade IIIa in 59 patients (2.94%), grade IIIb in 13 patients (0.65%), grade IVa in 7 patients (0.35%), grade IVb in 1 patient (0.05%), and grade V in 15 patients (0.75%). According to the Accordion severity grading of postoperative complications, mild complications occurred in 160 patients (7.97%), moderate complications in 48 patients (2.39%), severe complications (invasive procedure/no general anesthesia) in 48 patients (2.39%), severe complications (invasive procedure under general anesthesia or single organ system failure) in 20 patients (1.00%), severe complications (organ system failure and invasive procedure under general anesthesia or multisystem organ failure) in 1 patient (0.05%), and mortality was 0.75% (n=15). The complication severities of the Clavien-Dindo system and the Accordion system were both correlated with the length of hospital stay, the number of hepatic segments resected, blood transfusion and the Hospital Anxiety and Depression Scale-Anxiety (HADS-A). The Clavien-Dindo classification system and the Accordion classification system are simple ways of reporting all complications following liver surgery. Small intestinal obstruction is a common complication of primary gastrointestinal cancer or metastatic cancers. Patients with this condition are often poor candidates for surgical bypasses, and placement of a self-expanding metal stent (SEMS) can be technically challenging. In this study, we examined the feasibility of the combined application of a single-balloon enteroscope (SBE) and a colonoscope for SEMS placement in patients with malignant small intestinal obstruction. Thirty-four patients were enrolled in this study, among whom 22 patients received SEMS placement using SBE and colonoscope, while the other 12 patients received conservative medical treatment.
The patients were followed up for one year. Stent placement was technically feasible in 95.5% (21/22) of cases. Clinical improvement was achieved in 86.4% (19/22). For the 19 clinically successful cases, the average duration of benefit, defined as a gastric outlet obstruction scoring system (GOOSS) increase of >= 1, was 111.9 +/- 89.5 days. For the 12 patients receiving conservative medical treatment, no significant improvement in GOOSS score was observed. Moreover, a significant increase in Short-Form-36 health survey score was observed in the 19 patients 30 days after stent placement. By Kaplan-Meier analysis, a significant survival improvement was observed in patients with successful SEMS placement compared with patients receiving conservative medical treatment. Taken together, the combined use of SBE and colonoscope makes endoscopic stent placement feasible in patients with malignant small intestinal obstruction, and patients can benefit from it in terms of prolonged survival and improved quality of life. This study aimed to examine the biocompatibility of a calcium titanate (CaTiO3) coating prepared by a simplified technique in an attempt to assess the potential of CaTiO3 coating as an alternative to current implant coating materials. CaTiO3-coated titanium screws were implanted, together with hydroxyapatite (HA)-coated or uncoated titanium screws, into the medial and lateral femoral condyles of 48 New Zealand white rabbits. Imaging, histomorphometric and biomechanical analyses were employed to evaluate osseointegration and biocompatibility 12 weeks after implantation. Histology and scanning electron microscopy revealed that bone tissues surrounding the screws coated with CaTiO3 were fully regenerated and well integrated with the screws. An interfacial fibrous membrane layer, which was found in the HA coating group, was not noticeable between the bone tissues and the CaTiO3-coated screws.
X-ray imaging analysis showed that in the CaTiO3 coating group there was dense and tight binding between the implants and the bone tissues; no radiolucent zone was found surrounding the implants, and there was no detachment of the coating or femoral condyle fracture. In contrast, uncoated screws exhibited a fibrous membrane layer, as evidenced by the detection of a radiolucent zone between the implants and the bone tissues. Additionally, biomechanical testing revealed that the binding strength of the CaTiO3 coating with bone tissues was significantly higher than that of uncoated titanium screws, and was comparable to that of the HA coating. The study demonstrated that CaTiO3 coating applied in situ to titanium screws possesses good biocompatibility and osseointegration comparable to HA coating. The therapeutic potential of curcumin (Cur) is hampered by its poor aqueous solubility and low bioavailability. The aim of this study was to determine whether Cur nanoemulsions enhance the efficacy of Cur against prostate cancer cells and increase the oral absorption of Cur. Cur nanoemulsions were developed using the self-microemulsifying method and characterized by their morphology, droplet size and zeta potential. The results showed that cytotoxicity and cell uptake were considerably increased with Cur nanoemulsions compared to free Cur. Cur nanoemulsions exhibited significantly prolonged biological activity and demonstrated better therapeutic efficacy than free Cur, as assessed by apoptosis and cell cycle studies. In situ single-pass perfusion studies demonstrated a higher effective permeability coefficient and absorption rate constant for Cur nanoemulsions than for free Cur. Our study suggested that Cur nanoemulsions can be used as an effective drug delivery system to enhance the anticancer effect and oral bioavailability of Cur. Radical retropubic prostatectomy (RRP) has been one of the most effective treatments for prostate cancer.
This study was designed to identify predictive risk factors for complications in patients following RRP. Between 2000 and 2012, 421 cases of RRP for localized prostate cancer performed by one surgeon in the Department of Urology, Fudan University Shanghai Cancer Center were included in this retrospective analysis. We reviewed various risk factors that were correlated with perioperative complications, including patient characteristics [age, body mass index (BMI), co-morbidities], clinical findings (preoperative PSA level, Gleason score, clinical stage, pathological grade), and the surgeon's own clinical practice. The Charlson comorbidity index (CCI) was used to account for comorbidities. The total rate of perioperative complications was 23.2% (98/421). Grade I, II, III and IV complications occurred in 45/421 (10.7%), 28/421 (6.6%), 24/421 (5.7%) and 1/421 (0.2%) cases respectively, and 323/421 (76.8%) cases had none of these complications. Statistical analysis of multiple potential risk factors revealed that BMI > 30 (P=0.014), Charlson score >= 1 (P < 0.001) and surgical experience (P=0.0252) were predictors of perioperative complications. Age, PSA level, Gleason score, TNM stage, operation time, blood loss, and blood transfusion were not correlated with perioperative complications (P > 0.05). It was concluded that patients' own factors and surgeons' technical factors are associated with an increased risk of perioperative complications following radical prostatectomy. Knowing these predictors can both support risk stratification of patients undergoing RRP and help surgeons make treatment decisions. In order to study the microstructure characteristics of normal lunate bones, eight fresh cadaver normal lunates were scanned with micro-computed tomography. High-resolution images of the microstructure of normal lunates were obtained, and the nutrient foramina were analyzed.
Nine regions of interest (ROIs) were then chosen in the central sagittal plane to obtain the trabecular bone parameters of each ROI. The distal lamellar-like compact structure differed significantly from the ROIs at the volar and dorsal ends of the distal cortex. The difference in diameter between the volar and dorsal foramina was significant (P < 0.05); however, there was no significant difference in their number. The trabecular bones of the volar and dorsal distal ends had lower intensity than those of the distal central subchondral bone plate. The diameters of the nutrient foramina on the volar cortex were larger than those on the dorsal cortex. This research provides more detailed information about the microstructure of the normal lunate and the nutrient foramina on its cortex, and a reference for further study of the diseased lunate. This prospective study was conducted to assess the rate of resolution of second-trimester placenta previa in women with an anterior versus a posterior placenta, and in women with and without a previous cesarean section. In this study, placenta previa was defined as a placenta lying within 20 mm of the internal cervical os or overlapping it. We recruited 183 women diagnosed with previa between 20(+0) weeks and 25(+6) weeks. They were grouped according to placenta location (anterior or posterior) and history of cesarean section. Comparative analysis of demographic data, resolution rate of previa and pregnancy outcomes was performed between the anterior and posterior groups, and between the cesarean section and non-cesarean section groups. Women with an anterior placenta tended to have higher parity (P=0.040) and a greater number of dilatation and curettage procedures (P=0.044). The women in the cesarean section group were significantly older (P=0.000) and had higher parity (P=0.000) and gravidity (P=0.000) and more dilatation and curettage procedures (P=0.048) than those in the non-cesarean section group.
Resolution of previa at delivery occurred in 87.43% of the women in this study. Women with a posterior placenta had a higher rate of resolution (P=0.030), while history of cesarean section made no difference. Gestational age at resolution was earlier in the posterior group (P=0.002) and the non-cesarean section group (P=0.008) than in the anterior group and the cesarean section group, respectively. Placenta location and prior cesarean section did not influence obstetric or neonatal outcomes. This study indicates that placenta previa diagnosed in the second trimester is more likely to resolve subsequently when the placenta is posteriorly located. Human chorionic gonadotropin (hCG) is one of the earliest markers for predicting pregnancy outcomes, but evidence on the reliability of this prediction after frozen versus fresh embryo transfer (ET) has been inconclusive. In this retrospective study, patients with positive hCG (day 12 after transfer) were included to examine the hCG levels and their predictive value for pregnancy outcomes following 214 fresh and 1513 vitrified-warmed single-blastocyst transfer cycles. For patients who achieved clinical pregnancy, the mean initial hCG value was significantly higher after frozen cycles than after fresh cycles, and a similar result was demonstrated for patients with live births (LB). The difference in hCG value persisted even after adjusting for potential covariates. The areas under the curve (AUC) and threshold values calculated from receiver operating characteristic curves were 0.944 and 213.05 mIU/mL for clinical pregnancy after fresh ET, 0.894 and 399.50 mIU/mL for clinical pregnancy after frozen ET, 0.812 and 222.86 mIU/mL for LB after fresh ET, and 0.808 and 410.80 mIU/mL for LB after frozen ET, with acceptable sensitivity and specificity, respectively.
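Threshold values of the kind reported above are commonly derived from the ROC curve by maximizing Youden's J (sensitivity + specificity - 1) over candidate cut-offs. A minimal self-contained sketch of that selection rule; the hCG-like numbers below are made up for illustration and are not the study's data:

```python
def youden_threshold(values, labels):
    """Pick the cut-off maximizing sensitivity + specificity - 1 (Youden's J).

    values: marker levels (e.g. initial hCG in mIU/mL); labels: 1 = outcome present.
    """
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    best_j, best_t = -1.0, None
    for t in sorted(set(values)):
        sens = sum(v >= t for v in pos) / len(pos)   # true-positive rate at cut-off t
        spec = sum(v < t for v in neg) / len(neg)    # true-negative rate at cut-off t
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Illustrative, well-separated toy data (not the study's measurements):
hcg = [120, 150, 180, 200, 260, 300, 350, 420]
live_birth = [0, 0, 0, 0, 1, 1, 1, 1]
t, j = youden_threshold(hcg, live_birth)   # cut-off 260 separates the toy groups
```

Real analyses sweep the empirical ROC curve the same way, just over many more observations and with confidence intervals around the AUC.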
In conclusion, single frozen blastocyst transfer leads to higher initial hCG values than single fresh blastocyst transfer, and the initial hCG level is a reliable factor for predicting IVF outcomes. The effect and underlying mechanism of Bu-Shen-An-Tai recipe on ovarian apoptosis in mice with controlled ovarian hyperstimulation (COH) implantation dysfunction were studied. The COH implantation dysfunction model in mice was established by intraperitoneal injection of 7.5 IU pregnant mare's serum gonadotrophin (PMSG), followed by 7.5 IU human chorionic gonadotrophin (HCG) 48 h later. The female mice were then mated with males at a ratio of 2:1 in the same cage at 6:00 p.m. The female mice from the normal group were injected intraperitoneally with normal saline and mated at the corresponding time. Day 1 of pregnancy was recorded by examining vaginal smears at 8:00 a.m. the next day. Fifty successfully pregnant mice were randomly and equally divided into 5 groups: normal control pregnant group (NC), COH implantation dysfunction model group (COH), low dosage of Bu-Shen-An-Tai recipe group (LOW), middle dosage of Bu-Shen-An-Tai recipe group (MID) and high dosage of Bu-Shen-An-Tai recipe group (HIGH). From day 1, the mice in each group were intragastrically given the corresponding treatment at 9:00 a.m. for 5 consecutive days. The concentrations of 17 beta-estradiol (E2) and progesterone (P4) were determined by radioimmunoassay (RIA). The ultrastructural changes of ovarian tissues were observed by transmission electron microscopy (TEM). The histopathological changes of ovarian tissues were observed by HE staining. The numbers of atretic follicles and pregnant corpora lutea were also recorded. TUNEL was applied to measure apoptotic cells in ovarian tissues. Western blotting was used to detect the protein expression of apoptosis-related factors such as Bax, Bcl-2 and cleaved caspase-3 in the ovarian tissue of mice.
The results showed that ovarian weight, the concentrations of E2 and P4, the numbers of atretic follicles and pregnant corpora lutea, and the apoptosis of granulosa cells were significantly increased in the COH group. The ultrastructure of ovarian tissues in the COH group showed that chromatin in granulosa cells was increased, agglutinated, aggregated or crescent-shaped. Focal cavitation and typical apoptotic bodies could be seen in granulosa cells in the late stage of apoptosis. After treatment with different doses of Bu-Shen-An-Tai recipe, the apoptotic ultrastructural changes of ovarian granulosa cells were dramatically improved or even disappeared under TEM. Visible mitochondria and mitochondrial cristae were increased and vacuoles were significantly reduced. The lipid droplets appeared circular or oval in shape. The protein expression levels of Bax and cleaved caspase-3 were decreased, and the expression of Bcl-2 protein was increased after treatment. It was concluded that Bu-Shen-An-Tai recipe can inhibit the apoptosis of ovarian granulosa cells, probably by up-regulating the protein expression of Bcl-2 and down-regulating Bax and cleaved caspase-3, which contributes to the formation and maintenance of the ovarian corpus luteum. This may help promote embryonic implantation, reduce embryo loss and ultimately improve the success rate of pregnancy. It has long been controversial whether a single allergen performs better than multiple allergens in polysensitized patients during allergen-specific immunotherapy. This study aimed to examine the clinical efficacy of single-allergen sublingual immunotherapy (SLIT) versus multi-allergen subcutaneous immunotherapy (SCIT) and to assess the change in the biomarker IL-4 after 1-year immunotherapy in polysensitized children aged 6-13 years with allergic rhinitis (AR) induced by house dust mites (HDMs).
The AR polysensitized children (n=78) were randomly divided into two groups: a SLIT group and a SCIT group. Patients in the SLIT group sublingually received a single HDM extract, and those in the SCIT group were subcutaneously given multiple allergen extracts (HDM in combination with other clinically relevant allergen extracts). Before and 1 year after the allergen-specific immunotherapy (ASIT), the total nasal symptom scores (TNSS), total medication scores (TMS) and IL-4 levels in peripheral blood mononuclear cells (PBMCs) were compared between the two groups. The results showed that the TNSS were greatly improved, and the TMS and IL-4 levels were significantly decreased, after 1-year ASIT in both groups (SLIT group: P < 0.001; SCIT group: P < 0.001). There were no significant differences in any outcome measures between the two groups (for TNSS: P > 0.05; for TMS: P > 0.05; for IL-4 levels: P > 0.05). It was concluded that the clinical efficacy of single-allergen SLIT is comparable with that of multi-allergen SCIT in 6-13-year-old children with HDM-induced AR. Nasal polyp (NP) is a common chronic inflammatory disease of the nasal cavity and sinuses. Although some authors have suggested that NP is related to inflammatory factors such as interleukin (IL)-1 beta, IL-5, IL-8, granulocyte-macrophage colony-stimulating factor (GM-CSF), tumor necrosis factor (TNF)-alpha, and IL-17, the mechanisms underlying the pathogenesis and progression of NP remain obscure. This study investigated the expression and distribution of IL-17 and syndecan-1 in NP, and explored the roles of these two molecules in the pathogenesis of eosinophilic chronic rhinosinusitis with nasal polyps (Eos CRSwNP) and non-Eos CRSwNP. Real-time PCR and immunohistochemistry were used to detect the expression of IL-17 and syndecan-1 in samples [NP, uncinate process (UP) from patients with CRS, and middle turbinate (MT) from healthy controls undergoing pituitary tumor surgery].
The results showed that the expression levels of IL-17 and syndecan-1 were upregulated in both NP and UP tissues, but both factors were higher in NP tissues than in UP tissues. There was no significant difference in IL-17 levels between the Eos CRSwNP and non-Eos CRSwNP samples, and syndecan-1 levels were increased in the non-Eos CRSwNP tissues as compared with those in Eos CRSwNP tissues. In all of the groups, there was a close correlation between the expression of IL-17 and syndecan-1 in nasal mucosa epithelial cells, glandular epithelial cells, and inflammatory cells, suggesting that IL-17 and syndecan-1 may play a role, and interact with each other, in the pathogenesis of non-Eos CRSwNP. Overdentures are a cost-effective, less expensive treatment modality for both partially and fully edentulous patients. The purpose of the present study was to examine newly fabricated attachments by comparing them with a conventional O-ring attachment in vitro in terms of retention force and cyclic aging resistance. A total of 150 samples were prepared and divided into five groups according to the materials used (O-ring attachment, Deflex M10 XR, Deflex Classic SR, Deflex Acrilato FD, and flexible acrylic resin). The retention force of the different attachments on a mini dental implant was measured after successive aging stages (0, 63, and 126 cycles) under conditions similar to the oral environment. The gap space between the head of the implant and the inner surface of the attachments was also measured. Two-way analysis of variance (ANOVA) with a multiple comparisons test was applied for statistical analysis. The results showed that Deflex M10 XR had the highest retention force and the lowest gap space after cyclic aging; in addition, comparing the relative force reduction, the lowest values were obtained in the O-ring attachment and the highest values in the flexible acrylic resin attachment.
The retention force measured after cyclic aging for the Deflex M10 XR attachment was greatly improved compared with the O-ring attachment and the other attachment materials; in addition, the Deflex M10 XR attachment exhibited the minimum gap space between its inner surface and the mini dental implant head. In conclusion, Deflex M10 XR is able to withstand weathering conditions and retains its durable and retentive properties after aging when compared with the other attachments. This study was undertaken to investigate the correlation of the enhancement degree on contrast-enhanced ultrasound (CEUS) with the histopathology of carotid plaques and the serum high-sensitivity C-reactive protein (hs-CRP) levels in patients undergoing carotid endarterectomy (CEA). Carotid CEUS was performed preoperatively in 115 patients scheduled for CEA, and the enhancement degree of the carotid plaques was evaluated by both visual semiquantitative analysis and quantitative time-intensity curve analysis. Serum hs-CRP levels were also measured preoperatively using a particle-enhanced immunoturbidimetric assay. Additionally, the carotid plaque samples were subjected to histopathological examination postoperatively. The density of neovessels and the number of macrophages in the plaques were assessed by immunohistochemistry. The results showed that among the 115 patients, grade 0 plaque contrast enhancement was noted in 35 patients, grade 1 in 48 patients and grade 2 in 32 patients. The degree of plaque enhancement, the density of neovessels, the number of macrophages, and the hs-CRP levels were highest in the grade 2 patients. Correlation analysis showed that the enhancement degree of the carotid plaques was closely related to the immunohistochemical parameters of the plaques and the serum hs-CRP levels. It was suggested that carotid plaque enhancement on CEUS can be used to evaluate the vulnerability of carotid plaques.
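Correlating an ordinal grade (enhancement 0/1/2) with continuous measures such as neovessel density is typically done with a rank correlation. A small self-contained sketch of Spearman's rho with tie handling; the grade/density values are purely illustrative, not the study's measurements:

```python
def rank(xs):
    """Average ranks (1-based), handling ties as in Spearman's rho."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the tie group
        avg = (i + j) / 2 + 1           # mean of the tied positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Enhancement grade (0/1/2) vs. neovessel density -- illustrative values only:
grade = [0, 0, 1, 1, 2, 2]
density = [3, 5, 8, 9, 14, 15]
rho = spearman(grade, density)   # monotone relation, so rho is close to 1
```

With an ordinal predictor like the grade, the rank-based rho is preferred over Pearson's r because it does not assume the spacing between grades is meaningful.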
This study aimed to explore the optimal threshold of thyroid-stimulating hormone (TSH)-stimulated serum thyroglobulin (s-Tg) for patients who were to receive an F-18-fluorodeoxyglucose (F-18-FDG) PET/CT scan owing to clinical suspicion of differentiated thyroid cancer (DTC) recurrence but a negative post-therapeutic I-131 whole-body scan (I-131-WBS). A total of 60 qualified patients underwent PET/CT scanning from October 2010 to July 2014. Receiver operating characteristic (ROC) curve analyses showed that s-Tg levels over 49 μg/L led to the highest diagnostic accuracy of PET/CT for detecting recurrence, with a sensitivity of 89.5% and a specificity of 90.9%. In addition, bivariate correlation analysis showed a positive correlation between s-Tg levels and the maximum standardized uptake values (SUVmax) of F-18-FDG in patients with positive PET/CT scans, suggesting a significant influence of TSH on both Tg release and F-18-FDG uptake. Thus, positive PET/CT imaging is expected when patients have a negative I-131-WBS but s-Tg levels over 49 μg/L. Mild encephalopathy/encephalitis with a reversible splenial lesion (MERS) is a clinico-radiological entity. The clinical features of MERS in neonates have still not been systematically reported. This paper presents five cases of MERS and collects and analyzes up-to-date reviews of previously reported cases in the literature. All five cases clinically diagnosed with MERS were neonates, and the average age was about 4 days. They were admitted for common neurological symptoms such as hyperspasmia, poor reactivity and delirium. Auxiliary examinations during hospitalization also exhibited features in common. In this report, we reached the following conclusions. Firstly, magnetic resonance imaging revealed solitary or extensive lesions in the splenium of the corpus callosum, some of them extending to almost the whole corpus callosum.
The lesions showed low-intensity signal on T1-weighted images, homogeneously hyperintense signal on T2-weighted, fluid-attenuated inversion recovery and diffusion-weighted images, and markedly reduced diffusion on the apparent diffusion coefficient map. Moreover, the lesions on magnetic resonance imaging disappeared very quickly, even prior to clinical recovery. Secondly, all the cases described here suffered electrolyte disturbances, especially hyponatremia, which could be easily corrected. Lastly, all of the cases recovered quickly within one week to one month, and the majority of them exhibited signs of infection and normal electroencephalography. The present study aimed to clarify the smoking cessation motivations, challenges and coping strategies among pregnant couples. A qualitative design using a grounded theory approach was applied. Data were collected by individual semi-structured interviews with 39 married individuals (21 non-smoking pregnant women and 18 smoking or ever-smoking men with a pregnant wife) and 3 imams in an ethnically diverse region of far western China. The most common theme for smoking cessation motivation was "embryo quality" (i.e., a healthier baby), followed by the family's health. Most interviewees reported that the husband's withdrawal symptoms were the greatest challenge to smoking cessation, followed by the Chinese tobacco culture. Coping strategies given by the pregnant women typically involved combining emotional, behavioral and social interventions. Social interventions showed advantages in helping to quit smoking. Pregnancy appears to be a positive stimulus for pregnant couples' smoking cessation. Our results suggest that pregnancy, a highly important life event, may help to reduce barriers to smoking cessation at the social level (e.g., limiting access to cigarettes, avoiding temptation to smoke), but does little to help with withdrawal symptoms. Professional guidance for smoking cessation is still necessary.
Continued smoking following stroke is associated with adverse outcomes including increased risk of mortality and secondary stroke. The aim of this study was to examine the long-term trends in smoking behaviors and the factors associated with smoking relapse among men who survived their first-ever stroke. Data collection for this longitudinal study was conducted at baseline through face-to-face interviews, and follow-up was completed every 3 months via telephone, beginning in 2010 and continuing through 2014. Cox proportional hazards regression models were used to identify predictors of smoking relapse. At baseline, 372 male patients were recruited into the study. In total, 155 (41.7%) of these patients stopped smoking after their stroke, and 61 (39.3%) of these began smoking again within 57 months after discharge, with an increasing trend in the number of cigarettes smoked per day. Exposure to environmental tobacco smoke at places outside of home and work (such as bars and restaurants) (HR, 2.34; 95% CI, 1.04-5.29; P=0.04), not having a spouse (HR, 0.12; 95% CI, 0.04-0.36; P=0.0002) and smoking at least 20 cigarettes per day before stroke (HR, 2.42; 95% CI, 1.14-5.14; P=0.02) were predictors of smoking relapse. It was concluded that environmental tobacco smoke is an important determinant of smoking relapse among men who survive their first stroke. Environmental tobacco smoke should be addressed by smoke-free policies in public places. The present study aimed to investigate the efficacy of adenotonsillectomy (AT) for children with obstructive sleep apnea syndrome (OSAS) and the improvement of their cognitive function. Studies on the cognitive performance of OSAS children treated with or without AT were identified by searching PubMed, EMBASE and the Cochrane Library. A meta-analysis was conducted to analyze the literature. The random-effects model was used to evaluate 11 eligible studies using an inverse-variance method.
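An inverse-variance random-effects pooling of per-study effect sizes (with the DerSimonian-Laird estimate of the between-study variance tau^2) can be sketched as follows; the effect sizes and standard errors below are hypothetical, purely for illustration:

```python
def random_effects_pool(effects, ses):
    """Inverse-variance pooling with DerSimonian-Laird tau^2 (random-effects model)."""
    w = [1 / se ** 2 for se in ses]                              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q measures between-study heterogeneity around the fixed estimate.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                # between-study variance
    w_re = [1 / (se ** 2 + tau2) for se in ses]                  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se_pooled = (1 / sum(w_re)) ** 0.5
    return pooled, se_pooled, tau2

# Hypothetical standardized mean differences from 4 studies (not the paper's data):
effects = [-0.6, -0.1, -0.5, 0.1]
ses = [0.12, 0.12, 0.15, 0.2]
pooled, se, tau2 = random_effects_pool(effects, ses)
```

When the studies are heterogeneous (large Q), tau^2 becomes positive, the weights even out and the pooled confidence interval widens relative to a fixed-effect analysis.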
The neuropsychological test results of 4 cognitive domains (general intelligence, memory, attention-executive function and verbal ability) were obtained and analyzed. Comparing cognitive function between OSAS children and healthy controls, the effect sizes of each domain were as follows: general intelligence, -0.5 (P < 0.0001); memory, -0.18 (P=0.02); attention-executive function, -0.21 (P=0.002); and verbal ability, -0.48 (P=0.0006). The effect sizes of general intelligence, memory, attention-executive function, and verbal ability after AT compared to the baseline level were -0.37 (P=0.008), -0.36 (P=0.0005), -0.02 (P=0.88), and -0.45 (P=0.009), respectively. Comparing the cognitive ability of OSAS children after AT with that of healthy controls, the effect sizes were -0.54 (P=0.0009), -0.24 (P=0.12), -0.17 (P=0.35), and -0.45 (P=0.009) in general intelligence, memory, attention-executive function, and verbal ability, respectively. Our results confirmed that OSAS children performed worse than healthy children in the 4 cognitive domains investigated. After 6-12 months of observation, significant improvements in attention-executive function and verbal ability were found in OSAS children treated with AT compared to their baseline level; restoration of attention-executive function and memory was observed in OSAS children after AT in comparison to healthy controls. Further rigorous randomized controlled trials should be conducted to obtain definitive conclusions. The prognostic value of phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha (PIK3CA) in patients with esophageal squamous cell carcinoma (ESCC) is controversial. We aimed to investigate the prognostic significance of PIK3CA mutation in patients with ESCC. The EMBASE, PubMed, and Web of Science databases were systematically searched from inception through Oct. 3, 2016.
The hazard ratios (HRs) and 95% confidence intervals (CIs) were calculated using a random-effects model for overall survival (OS) and disease-free survival (DFS). Seven studies enrolling 1505 patients were eligible for inclusion in the current meta-analysis. Results revealed that PIK3CA mutation was not significantly associated with OS (HR: 0.90, 95% CI: 0.63-1.30, P=0.591), with significant heterogeneity (I^2=65.7%, P=0.012). Additionally, subgroup analyses were conducted according to various variables, such as type of specimen, sample size, technique and statistical methodology. All results suggested that no significant relationship was found between PIK3CA mutation and OS in patients with ESCC. For DFS, there was no significant association between PIK3CA mutation and DFS in patients with ESCC (HR: 1.00, 95% CI: 0.47-2.11, P=0.993, I^2=73.7%). Publication bias was not present, and the results of the sensitivity analysis were very stable in the current meta-analysis. Our findings suggest that PIK3CA mutation has no significant effect on OS or DFS in ESCC patients. More well-designed prospective studies with better methodology for PIK3CA assessment are required to clarify the prognostic significance of PIK3CA mutation in ESCC patients. We consider a class of linear integral operators with impulse responses varying regularly in time or space. These operators appear in a large number of applications ranging from signal/image processing to biology. Evaluating their action on functions is a computationally intensive problem that arises in many practical settings. We analyze a technique called product-convolution expansion: the operator is locally approximated by a convolution, which allows the design of fast numerical algorithms based on the fast Fourier transform. We design various types of expansions and provide their explicit approximation rates and complexity as a function of the smoothness of the time-varying impulse response.
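In 1D, the product-convolution idea approximates a spatially varying operator Hu(x) = sum_y k(x, y) u(y) by sum_m w_m(x) (h_m * u)(x): the windows w_m form a partition of unity and each h_m is the impulse response frozen at a window center, so every term is a stationary convolution computable with the FFT. A toy numpy sketch for a Gaussian blur whose width varies across the signal; the triangular windows and frozen-kernel choice are illustrative, not the paper's construction:

```python
import numpy as np

def product_convolution(u, sigmas, centers):
    """Approximate a spatially varying Gaussian blur by sum_m w_m * (h_m conv u),
    where each h_m is the kernel frozen at a window center (FFT convolutions)."""
    n = len(u)
    x = np.arange(n)
    # Partition of unity: triangular windows around the centers, normalized so
    # that sum_m w_m(x) = 1 for every x.
    width = n / (len(centers) - 1)
    w = np.maximum(0.0, 1.0 - np.abs(x[None, :] - np.asarray(centers)[:, None]) / width)
    w /= w.sum(axis=0, keepdims=True)
    out = np.zeros(n)
    fu = np.fft.fft(u)
    for wm, c in zip(w, centers):
        # Stationary kernel frozen at the window center, shifted so its peak sits
        # at index 0 for a correctly aligned circular convolution.
        k = np.exp(-((x - n // 2) ** 2) / (2 * sigmas[c] ** 2))
        k /= k.sum()
        conv = np.real(np.fft.ifft(fu * np.fft.fft(np.roll(k, -(n // 2)))))
        out += wm * conv
    return out

n = 256
u = np.random.default_rng(0).standard_normal(n)
sigmas = np.linspace(2.0, 6.0, n)      # blur width grows across the domain
approx = product_convolution(u, sigmas, centers=[0, n // 2, n - 1])
```

Each of the m terms costs one FFT pair, so the total cost is O(m n log n) instead of O(n^2) for the direct sum; refining the windows trades work for accuracy, which is exactly the rate/complexity trade-off the expansion formalizes.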
This analysis suggests novel wavelet-based implementations of the method with numerous assets such as optimal approximation rates, low complexity and storage requirements, as well as adaptivity to the kernel's regularity. The proposed methods are an alternative to more standard procedures such as panel clustering, cross-approximations, wavelet expansions or hierarchical matrices. With the advent of novel 3D image acquisition techniques, their efficient and reliable analysis becomes more and more important. In 3D in particular, the amount of data is enormous and calls for automated processing. The tasks are manifold, ranging from simple image enhancement, image reconstruction, image description and object/feature detection to high-level contextual feature extraction. One important property that most of these tasks have in common is their covariance under rotations. Spherical Tensor Algebra (STA) offers a general framework to fulfill these demands. STA transfers theories from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. The main objects of interest are orientation fields, whose interpretations are manifold: depending on the application, they can represent local image descriptors, features, orientation scores or filter responses. STA deals with the processing of such fields in the domain of the irreducible representations of the rotation group. Two operations are fundamental: the extraction/projection of features by convolution-like procedures, and their nonlinear covariant combination by spherical products. In this paper, we propose an open-source toolbox that implements, in addition to the fundamental STA operators, advanced functions for feature detection and image enhancement, and makes them accessible to the 3D image processing community. The core features are implemented in C (CPU and GPU) with APIs in C++ and MATLAB. As examples, we show applications to medical and biological images.
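The irreducible representations of the rotation group mentioned above are spanned, on the sphere, by the spherical harmonics Y_{l,m}; projecting directional data onto them is the basic "extraction" step. A toy, self-contained projection by midpoint quadrature, using real harmonics up to l=1 only (an independent illustration, not the toolbox's API):

```python
import math

def Y(l, m, theta, phi):
    """Real spherical harmonics up to l = 1 (theta: polar angle, phi: azimuth)."""
    if (l, m) == (0, 0):
        return 0.5 * math.sqrt(1.0 / math.pi)
    c = math.sqrt(3.0 / (4.0 * math.pi))
    if (l, m) == (1, 0):
        return c * math.cos(theta)
    if (l, m) == (1, 1):
        return c * math.sin(theta) * math.cos(phi)
    if (l, m) == (1, -1):
        return c * math.sin(theta) * math.sin(phi)
    raise ValueError("only l <= 1 is implemented in this sketch")

def project(f, l, m, nt=200, nphi=200):
    """c_{l,m} = integral of f(theta, phi) * Y_{l,m} over the sphere (midpoint rule)."""
    dt, dp = math.pi / nt, 2.0 * math.pi / nphi
    total = 0.0
    for i in range(nt):
        th = (i + 0.5) * dt
        for j in range(nphi):
            ph = (j + 0.5) * dp
            total += f(th, ph) * Y(l, m, th, ph) * math.sin(th) * dt * dp
    return total

# A purely axial signal: all of its energy sits in the (l=1, m=0) channel.
f = lambda th, ph: Y(1, 0, th, ph)
c10 = project(f, 1, 0)   # ~1 by orthonormality of the Y_{l,m}
c00 = project(f, 0, 0)   # ~0
```

Because a rotation mixes the coefficients only within a fixed l (via the Wigner D-matrices), per-l coefficient magnitudes are rotation-invariant, which is what makes such expansions attractive for 3D feature detection.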
This paper is the third in a series devoted to orders on partial partitions in the framework of image analysis; the first two (with a view to image filtering and segmentation) identified four basic operations on blocks involved in such orders: merging, apportioning, creating, and inflating blocks. Here, we consider orders in which growing a partial partition decreases its support and diminishes the number of blocks. This can be done by combining the merging (or apportioning) of blocks with the removal or deflation of blocks (the opposites of creating and inflating blocks). We also introduce related operations on blocks: partial apportioning and partial merging. The new orders that we obtain can be useful in relation to skeletonization and image simplification, for processing segmentation markers, or for describing the evolution of object boundaries in hierarchies. There are also possible applications in geographic information processing. A class of linear degenerate elliptic equations inspired by the nonlinear diffusions of image processing is considered. It is characterized by an interior degeneration of the diffusion coefficient. It is shown that no particularly natural, unique interpretation of the equation is possible. This phenomenon is reflected in the behavior of numerical schemes for its resolution and points to similar issues that potentially affect its nonlinear counterpart. In this paper we study the application of nonlinear cross-diffusion systems as mathematical models of image filtering. These are systems of two nonlinear, coupled partial differential equations of parabolic type. The nonlinearity and cross-diffusion character are provided by a nondiagonal matrix of diffusion coefficients that depends on the variables of the system. We prove the well-posedness of an initial-boundary-value problem with Neumann boundary conditions and a uniformly positive definite cross-diffusion matrix. 
Under additional hypotheses on the coefficients, the models are shown to satisfy the scale-space properties of shift, contrast, average grey, and translational invariance. The existence of Lyapunov functions and the asymptotic behaviour of the solutions are also studied. According to the choice of the cross-diffusion matrix (on the basis of the results on filtering with linear cross-diffusion, discussed by the authors in a companion paper, and the use of edge-stopping functions), the performance of the models is compared by computational means on a filtering problem. The numerical results reveal differences, depending on the cross-diffusion matrix, in the evolution of the filtering as well as in the quality of edge detection given by one of the components of the system. The use of cross-diffusion problems as mathematical models of different image processes is investigated. Here the image is represented by two real-valued functions which evolve in a coupled way, generalizing the approaches based on real and complex diffusion. The present paper is concerned with linear filtering. First, based on principles of scale invariance, a scale-space axiomatics is built. Then, some properties of linear complex diffusion are generalized, with particular emphasis on the use of one of the components of the cross-diffusion problem for edge detection. The performance of the cross-diffusion approach is analysed by numerical means in some one- and two-dimensional examples. Graph-based variational methods have recently been shown to be highly competitive for various classification problems involving high-dimensional data, but are inherently difficult to handle from an optimization perspective. This paper proposes a convex relaxation for a certain set of graph-based multiclass data segmentation models involving a graph total variation term, region homogeneity terms, supervised information, and certain constraints or penalty terms acting on the class sizes. 
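As a point of reference for the diffusion filters discussed above, here is the classical scalar nonlinear diffusion of Perona-Malik type with an edge-stopping function. This is a simpler single-component analogue, not the authors' two-component cross-diffusion system, and the parameters and test image are illustrative only.

```python
import numpy as np

def perona_malik(img, iters=20, kappa=1.0, dt=0.2):
    """Classical scalar Perona-Malik diffusion: a simple single-component
    analogue of the (cross-)diffusion filters discussed in the text."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(iters):
        # Neighbour differences; periodic boundaries for brevity
        # (the models in the text use Neumann conditions).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.default_rng(0).standard_normal((32, 32))
smoothed = perona_malik(noisy)
```

Because g decays with the local gradient, smoothing is suppressed across strong edges while flat regions are averaged, which is the same mechanism the edge-stopping functions play in the cross-diffusion models.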
Particular applications include semi-supervised classification of high-dimensional data and unsupervised segmentation of unstructured 3D point clouds. Theoretical analysis shows that the convex relaxation closely approximates the original NP-hard problems, and these observations are also confirmed experimentally. An efficient duality-based algorithm is developed that handles all constraints on the labeling function implicitly. Experiments on semi-supervised classification indicate consistently higher accuracies than related non-convex approaches, and considerably so when the training data are not uniformly distributed among the data set. The accuracies are also highly competitive against a wide range of other established methods on three benchmark data sets. Experiments on 3D point clouds acquired by a LaDAR in outdoor scenes demonstrate that the scenes can be accurately segmented into object classes such as vegetation, the ground plane, and human-made structures. Physical exercise results in the increased expression of neurotrophic factors and the subsequent induction of signal transduction cascades with a positive impact on neuronal functions. In this study, we used a voluntary physical exercise rat model to determine correlations in hippocampal mRNA expression of the neurotrophic factors Bdnf, VegfA, and Igf1; their receptors TrkB, Igf1R, VegfR1, and VegfR2; and the downstream signal transducers Creb, Syn1, and Vgf. In hippocampi of physically exercised rats, the mRNA expression levels of Bdnf transcript 4 (Bdnf-t4), VegfA, and Igf1, as well as VegfR1, TrkB, Creb, Vgf, and Syn1, were increased. Bdnf-t4 mRNA expression positively correlated with mRNA expression of Creb, Vgf, and Syn1 in hippocampi of exercised rats. A correlation between Bdnf-t4 and Syn1 mRNA expression was also observed in hippocampi of sedentary rats. Igf1 and VegfA mRNA expression was positively correlated in hippocampi of both exercised and sedentary rats. 
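The graph-based semi-supervised classification setting described above can be illustrated with a much simpler baseline: harmonic label propagation on a graph Laplacian. This is not the paper's convex-relaxation method; the toy two-cluster data and the single labeled point per class are hypothetical.

```python
import numpy as np

# Toy data: two Gaussian clusters in the plane (illustrative only).
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

# Gaussian-weighted adjacency and the (unnormalised) graph Laplacian.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
w = np.exp(-d2)
np.fill_diagonal(w, 0.0)
lap = np.diag(w.sum(1)) - w

labeled = [0, 20]                 # one supervised point per class
y = np.zeros((40, 2))
y[0, 0] = y[20, 1] = 1.0
free = [i for i in range(40) if i not in labeled]

# Harmonic solution on unlabeled nodes: solve L_uu f_u = -L_ul f_l.
f = y.copy()
f[free] = np.linalg.solve(lap[np.ix_(free, free)],
                          -lap[np.ix_(free, labeled)] @ y[labeled])
pred = f.argmax(1)                # hard labels from the soft assignment
```

By the maximum principle, the harmonic values on unlabeled nodes stay inside the convex hull of the supervised labels; the convex models in the text refine this baseline with total variation, region terms, and class-size constraints.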
However, neither Igf1 nor VegfA mRNA expression was correlated with the expression of Bdnf-t4 or with the expression of the signal transducers Creb, Syn1, and Vgf. In hippocampi of exercised rats, Creb mRNA expression was positively correlated with TrkB, Syn1, and Vgf mRNA expression, with the correlation between Creb and Vgf mRNA expression also observed in hippocampi of sedentary rats. To examine whether the correlated mRNA expression levels observed in vivo between Bdnf-t4 and the other examined transcripts reflect causality, we used nuclease-deactivated CRISPR-Cas9 fused with VP64 to induce mRNA expression of endogenous Bdnf-t4 in rat PC12 cells. Following Bdnf-t4 mRNA induction, we observed increased Creb mRNA expression. This in vitro result is in accordance with the in vivo results and supports the view that, under specified conditions, an increase in Creb mRNA expression can be a downstream signal transduction event resulting from induction of endogenous Bdnf mRNA expression. The transcription factor cAMP response element-binding protein (CREB) plays a critical role in memory formation. Ubiquitin-proteasome system-dependent protein degradation affects the upstream signaling pathways that regulate CREB activity. However, the molecular mechanisms by which proteasome inhibition reduces CREB activity are still unclear. The current study demonstrated that proteasome inhibition by MG132 resulted in dose-dependent dephosphorylation of CREB at Ser133 as well as decreased phosphorylation of the N-methyl-d-aspartate (NMDA) receptor subunit NR2B (Tyr1472) and its tyrosine protein kinase Fyn (Tyr416). These dephosphorylations are probably caused by disturbance of the expression and post-translational modifications of tau protein, since tau siRNA decreased the activity of Fyn, NR2B, and CREB. To further confirm this view, HEK293 cells stably expressing human tau441 protein were treated with MG132, and dephosphorylation of CREB and NR2B was observed. 
The current research provides an alternative pathway, tau/Fyn/NR2B signaling, regulating CREB activity. Early maternal infections with Neisseria gonorrhoeae (NG) correlate with an increased lifetime schizophrenia risk for the offspring, which might be due to an immune-mediated mechanism. Here, we investigated the interactions of polyclonal antisera to NG (alpha-NG) with a first-trimester prenatal brain multiprotein array, revealing among others the SNARE-complex protein Snap23 as a target antigen for alpha-NG. This interaction was confirmed by Western blot analysis with a recombinant Snap23 protein, whereas the closely related Snap25 failed to interact with alpha-NG. Furthermore, a polyclonal antiserum to the closely related bacterium Neisseria meningitidis (alpha-NM) failed to interact with either protein. Functionally, in SH-SY5Y cells, alpha-NG pretreatment interfered both with insulin-induced vesicle recycling, as revealed by uptake of the fluorescent endocytosis marker FM1-43, and with insulin-dependent membrane translocation of the glucose transporter GluT4. Similar effects could be observed for an antiserum raised directly against Snap23, whereas a serum against Snap25 failed to do so. In conclusion, Snap23 seems to be a possible immune target for anti-gonococcal antibodies, the interactions of which seem, at least in vitro, to interfere with vesicle-associated exocytosis. Whether these changes contribute to the correlation between maternal gonococcal infections and psychosis in vivo remains to be clarified. Genome-wide association studies (GWAS) have identified hundreds of new potential genetic risk loci associated with numerous complex diseases such as multiple sclerosis (MS). Genes discovered by GWAS are now the focus of numerous ongoing studies. The goal of this study was to confirm and understand the potential role of one such gene, the transmembrane protein 39A gene (TMEM39A), in multiple sclerosis. 
We showed a difference in TMEM39A messenger RNA (mRNA) expression between MS patients and controls (T²(2;74) = 5.429; p = 0.0063). In our study, the lower mRNA expression of the TMEM39A gene in patients did not correlate with a higher methylation level of the TMEM39A promoter. Moreover, the decreased level of TMEM39A mRNA was associated neither with rs1132200 nor with rs17281647. Additionally, we did not find an association between these two TMEM39A polymorphisms and the risk and progression of multiple sclerosis. Our investigation is the first to indicate that TMEM39A mRNA expression may be associated with the development and/or course of multiple sclerosis. Previous studies have shown that diabetes causes learning and memory deficits. Data also suggest that celecoxib exerts anti-hyperalgesic, anti-allodynic, and a range of other beneficial effects in diabetic rats. However, whether celecoxib can alleviate memory deficits in diabetic rats is unknown. In the present study, we aimed to examine the potential of celecoxib to counter memory deficits in diabetes. Experimental diabetes was induced by streptozotocin (STZ, 60 mg/kg) in male SD rats. Rats were divided into three groups (n = 16/group): a normal control group injected with normal saline, a diabetes group injected with STZ, and a diabetes + celecoxib group in which diabetic rats were administered celecoxib by gavage in drinking water (10 mg/kg) for 10 days, after which memory performance was measured, hippocampal tissue was harvested, and long-term potentiation was assessed. Western blotting and immunohistochemical staining were performed to determine cyclooxygenase 2 (COX-2) expression in the hippocampus. The results showed that a rat model of STZ-induced diabetes was successfully established and that celecoxib treatment significantly improved the associated nephropathy and inflammation. 
Moreover, spatial memory and hippocampal long-term potentiation (LTP) were impaired in the diabetic model (P < 0.05). Interestingly, our data revealed that oral application of celecoxib reversed the memory deficit and the hippocampal LTP impairment in the diabetic rats. To understand the underlying mechanisms, the expression of some important pathways involved in memory impairment was determined. We found that brain-derived neurotrophic factor (BDNF) and phosphorylated tropomyosin-related kinase B (p-TrkB) were decreased in diabetic rats, and this decrease was effectively reversed by celecoxib treatment. As evidenced by western blotting and immunohistochemical staining, the expression of COX-2 in the hippocampus was significantly upregulated in diabetic rats (P < 0.05) but inhibited by celecoxib treatment. The present findings provide novel data indicating that celecoxib reverses memory deficits, probably via downregulation of hippocampal COX-2 expression and upregulation of the BDNF-TrkB signaling pathway, in diabetic rats. HIV-1 gp120 plays a critical role in the pathogenesis of HIV-associated pain, but the underlying molecular mechanisms are incompletely understood. This study aims to determine the effect and possible mechanism of HIV-1 gp120 on BDNF expression in BV2 cells (a murine-derived microglial cell line). We observed that gp120 (10 ng/ml) activated BV2 cells in culture and upregulated proBDNF/mBDNF. Furthermore, gp120-treated BV2 cells also accumulated Wnt3a and beta-catenin, suggesting activation of the Wnt/beta-catenin pathway. We demonstrated that activation of the pathway by Wnt3a upregulated BDNF expression. In contrast, inhibition of the Wnt/beta-catenin pathway by either DKK1 or IWR-1 attenuated the BDNF upregulation induced by gp120 or Wnt3a. These findings collectively suggest that gp120 stimulates BDNF expression in BV2 cells via the Wnt/beta-catenin signaling pathway. 
A recent genome-wide association analysis identified a novel single nucleotide polymorphism locus on chromosome 10q25.3 (rs11196288, near HABP2) associated with the risk of early-onset ischemic stroke (IS) in a European population, but not with late-onset IS. However, the role of this genome-wide association study (GWAS)-reported variant in ischemic stroke in the Chinese Han population remained unknown. In our study, 389 adult ischemic stroke patients with an age of onset < 60 years and 389 matched healthy controls were enrolled to investigate the association of rs11196288 genotypes with early-onset ischemic stroke and its subtypes; the association was further examined in another independent population consisting of 349 ischemic stroke patients with an age of onset ≥ 60 years and 349 matched healthy individuals. Logistic regression analysis showed no significant association between rs11196288 and early-onset ischemic stroke (IS), large artery atherosclerotic (LAA) stroke, or small vessel disease (SVD) stroke (all P > 0.050). Nevertheless, in subgroup analysis of the older population, rs11196288 had a significant effect on late-onset SVD stroke susceptibility in the dominant model (GG/GA vs AA, OR 1.70; 95% CI 1.02 to 2.85; P = 0.042). The results indicate that the role of the rs11196288 polymorphism in ischemic stroke susceptibility in the Chinese Han population may differ from that in Europeans. Larger studies with diverse populations are warranted to confirm and extend our findings. Late-onset Alzheimer's disease (LOAD) is a multifactorial neurodegenerative disorder that accounts for most Alzheimer's disease (AD) cases. Inflammation is frequently related to AD, and microglial cells are the major phagocytes in the brain, mediating the removal of A beta peptides. Microglial cell dysregulation might contribute to the formation of amyloid plaques, a hallmark of AD. 
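The dominant-model odds ratio and 95% CI reported in the stroke study above are conventionally computed from a 2x2 table of carrier status by case-control status. A sketch with hypothetical genotype counts (not the study's data):

```python
import math

# Hypothetical counts for a dominant genetic model (GG/GA carriers vs AA);
# illustrative only, not the study's genotype data.
exposed_cases, unexposed_cases = 60, 289     # carriers vs non-carriers, cases
exposed_ctrls, unexposed_ctrls = 40, 309     # carriers vs non-carriers, controls

# Cross-product odds ratio and Woolf's logit-based 95% confidence interval.
or_ = (exposed_cases * unexposed_ctrls) / (unexposed_cases * exposed_ctrls)
se_log = math.sqrt(sum(1 / n for n in (exposed_cases, unexposed_cases,
                                       exposed_ctrls, unexposed_ctrls)))
lo = math.exp(math.log(or_) - 1.96 * se_log)
hi = math.exp(math.log(or_) + 1.96 * se_log)
```

A confidence interval that excludes 1 corresponds to a nominally significant association, which is how the dominant-model result in the text should be read.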
Genome-wide association studies have reported genetic loci associated with the inflammatory pathway involved in AD. Among them, the rs3865444 CD33, rs3764650 ABCA7, rs6656401 CR1, and rs610932 MS4A6A variants in microglial genes are associated with LOAD. These variants are proposed to participate in the clearance of A beta peptides. However, their association with LOAD has not been validated in all case-control studies. Thus, the present work aimed to assess the association of CD33 (rs3865444), ABCA7 (rs3764650), CR1 (rs6656401), and MS4A6A (rs610932) with LOAD in a sample from southeastern Brazil. Genotype frequencies were assessed in 79 AD patients and 145 healthy elderly subjects matched for sex and age. We found that rs3865444 CD33 acts as a protective factor against LOAD. These results support a role for the inflammatory pathway in LOAD. Focal cortical dysplasia type II (FCD II) and tuberous sclerosis complex (TSC) are well-known causes of chronic refractory epilepsy in children. Canonical transient receptor potential channels (TRPCs) are non-selective cation channels that are commonly activated by phospholipase C (PLC) stimulation. Previous studies found that TRPC4 may participate in the process of epileptogenesis. This study aimed to examine the expression and distribution of TRPC4 in FCD II (n = 24) and TSC (n = 11) surgical specimens compared with age-matched autopsy control samples (n = 12). We found that the protein levels of TRPC4 and its upstream factor, PLC delta 1 (PLCD1), were elevated in FCD II and TSC samples compared to control samples. Immunohistochemistry assays revealed that TRPC4 staining was stronger in malformed cells, such as dysmorphic neurons, balloon cells, and giant cells. Moderate-to-strong staining of the upstream factor PLCD1 was also identified in abnormal neurons. Moreover, double immunofluorescence staining revealed that TRPC4 colocalised with glutamatergic and GABAergic neuron markers. 
Taken together, our results indicate that overexpression of TRPC4 protein may be involved in the epileptogenesis of FCD II and TSC. Multiple sclerosis (MS) is a chronic degenerative disease of the central nervous system that is characterized by myelin abnormalities, oligodendrocyte pathology, and concomitant glial activation. The factors triggering gliosis and demyelination remain unclear. New findings suggest an important role of the innate immune response in the initiation and progression of active demyelinating lesions. The innate immune response is induced by pathogen-associated or danger-associated molecular patterns, which are identified by pattern recognition receptors (PRRs), including the G-protein-coupled formyl peptide receptors (FPRs). Glial cells, the immune cells of the central nervous system, also express PRRs. In this study, we used the cuprizone mouse model to investigate the expression of FPR1 in the course of cuprizone-induced demyelination. In addition, we used FPR1-deficient mice to analyze glial cell activation through immunohistochemistry and real-time RT-PCR in the cuprizone model. Our results revealed a significantly increased expression of FPR1 in the cortex of cuprizone-treated mice. FPR1-deficient mice showed a slight but significant decrease in demyelination in the corpus callosum compared to wild-type mice. Furthermore, FPR1 deficiency resulted in reduced glial cell activation and reduced mRNA expression of microglia/macrophage markers, as well as of pro- and anti-inflammatory cytokines, in the cortex compared to wild-type mice after cuprizone-induced demyelination. Taken together, these results suggest that FPR1 is an important part of the innate immune response in the course of cuprizone-induced demyelination. Parkinson's disease (PD) diagnosis is based on the assessment of motor symptoms, which manifest when more than 50% of dopaminergic neurons have degenerated. 
To date, no validated biomarkers are available for the diagnosis of PD. The aims of the present study were to evaluate whether plasma and white blood cells (WBCs) are interchangeable biomarker sources and to identify circulating plasma-based microRNA (miRNA) biomarkers for the early detection of PD. We profiled plasma miRNA levels in 99 l-dopa-treated PD patients from two independent data collections, in ten drug-naïve PD patients, and in unaffected controls matched by sex and age. We evaluated expression levels by reverse transcription and quantitative real-time PCR (RT-qPCR) and combined the results from treated PD patients using a fixed-effect inverse-variance-weighted meta-analysis. We revealed different expression profiles when comparing plasma and WBCs and drug-naïve and l-dopa-treated PD patients. We observed an upregulation trend for miR-30a-5p in l-dopa-treated PD patients and investigated candidate target genes by integrated in silico analyses. We could not analyse miR-29b-3p, normally expressed in WBCs, due to its very low expression in plasma. We observed different expression profiles in WBCs and plasma, suggesting that they are both suitable but not interchangeable peripheral sources of biomarkers. We revealed miR-30a-5p as a potential biomarker for PD in plasma. In silico analyses suggest that miR-30a-5p might have a regulatory role in mitochondrial dynamics and autophagy. Further investigations are needed to confirm miR-30a-5p deregulation and its targets and to investigate the influence of l-dopa treatment on miRNA expression levels. Multiple mitochondrial dysfunctions syndrome (MMDS) is an autosomal recessive disorder of systemic energy metabolism. This study presents the diagnosis of two Chinese patients with MMDS. Physical and auxiliary examinations were performed. Next-generation sequencing (NGS) was conducted to identify candidate causal genes, and Sanger sequencing was adopted to validate the variants detected. 
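Relative miRNA levels from RT-qPCR, as in the Parkinson's biomarker study above, are conventionally quantified by the 2^-ΔΔCt method: normalise the target Ct to a reference gene, subtract the control-group value, and exponentiate. The Ct values below are hypothetical, for illustration only.

```python
# 2^-ΔΔCt relative quantification for RT-qPCR data (standard method;
# all Ct values below are hypothetical).
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # normalise to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # same for the control group
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)                        # fold change vs control

fold = rel_expression(24.0, 18.0, 25.5, 18.0)     # 2^1.5, about 2.83-fold up
```

A lower Ct means earlier amplification and hence more template, which is why a negative ΔΔCt corresponds to upregulation relative to controls.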
Fluorescence quantitative polymerase chain reaction (FQ-PCR) amplification was carried out to verify the existence of allelic loss. Structural investigation was performed to assess whether the candidate variants could account for disease onset. Physical examination showed that the children had neurological impairment. Auxiliary examination demonstrated energy metabolism disturbance and abnormal brain signals. NGS found that the probands had a homozygous mutation of c.545 + 5G > A and compound heterozygous variants of an exon 4 deletion and c.721G > T in NFU1, respectively. NFU1 was considered the candidate molecular etiology, indicating that the children had MMDS. Sanger sequencing confirmed the variants. FQ-PCR amplification showed that patient 1 carried a de novo allele mutation, whereas patient 2 inherited the variants from his parents. Structural investigation demonstrated that the variants could plausibly account for MMDS occurrence. This is the first report of patients diagnosed with MMDS with novel mutation types from the Asia-Pacific region. Genetic variants have been implicated in the development of autism spectrum disorder (ASD). Recent studies suggest that solute carriers (SLCs) may play a role in the etiology of ASD. The purpose of this study was to determine the association of single nucleotide polymorphisms (SNPs) in the SLC19A1 and SLC25A12 genes with childhood ASD in a Chinese Han population. A total of 201 autistic children and 200 age- and gender-matched healthy controls were recruited. A TaqMan probe-based real-time PCR approach was used to determine the genotypes of SNPs rs1023159 and rs1051266 in SLC19A1, and rs2056202 and rs2292813 in SLC25A12. Our results showed that both the T/T genotype of rs1051266 (odds ratio (OR) = 1.85, 95% confidence interval (CI) = 1.06-3.23, P = 0.0301) and the T allele of rs2292813 (OR = 1.77, 95% CI = 1.07-2.90, P = 0.0249) were significantly associated with an increased risk of childhood ASD. 
In addition, the G-C haplotype of rs1023159-rs1051266 in SLC19A1 (OR = 0.71, 95% CI = 0.51-0.98, P = 0.0389) and the C-C haplotype of rs2056202-rs2292813 in SLC25A12 (OR = 0.58, 95% CI = 0.35-0.96, P = 0.0325) were associated with decreased risks of childhood ASD. There was no significant association between genotypes or allele frequencies and the severity of the disease. Our study suggests that these genetic variants of SLC19A1 and SLC25A12 may be associated with the risk of childhood ASD. Polycystic ovary syndrome (PCOS) is a complex and heterogeneous endocrine disorder. MicroRNAs negatively regulate the expression of target genes at the posttranscriptional level by binding to the 3′ untranslated region of target genes. Our previous study showed that miR-141-3p was dramatically decreased in the ovaries of rat PCOS models. In this study, we aimed to characterize the target of miR-141-3p in rat ovarian granulosa cells. A 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay showed that cell viability was dramatically increased when miR-141-3p was overexpressed but was decreased when miR-141-3p was inhibited. Flow cytometry showed that the cell apoptotic rate was dramatically decreased when miR-141-3p was overexpressed but was increased when miR-141-3p was inhibited. Bioinformatics analysis predicted that death-associated protein kinase 1 (DAPK1) might be a target gene of miR-141-3p because the 3′ untranslated region of DAPK1 contains sequences complementary to miR-141-3p. Transfection of miR-141-3p mimics and inhibitor into granulosa cells showed that both DAPK1 mRNA and protein levels were negatively correlated with the miR-141-3p level. A dual-luciferase reporter assay established that DAPK1 is a target of miR-141-3p. Taken together, our data indicate that miR-141-3p may inhibit ovarian granulosa cell apoptosis by targeting DAPK1 and is involved in the etiology of PCOS. 
Retinopathy of prematurity, a leading cause of visual impairment in low birth-weight infants, remains a crucial therapeutic challenge. Ciliary neurotrophic factor (CNTF) is a promyelinating trophic factor that promotes rod and cone photoreceptor survival and cone outer segment regeneration in the degenerating retina. CNTF expression is regulated by many factors, such as all-trans retinoic acid (ATRA). In this study, we found that ATRA increased CNTF expression in mouse retinal pigment epithelial (RPE) cells in a dose- and time-dependent manner and that the PKA signaling pathway is necessary for ATRA-induced CNTF upregulation. Furthermore, we showed that ATRA promoted CNTF expression through CREB binding to its promoter region. In addition, CNTF levels were decreased in the serum of children with retinopathy of prematurity and in the retinal tissue of oxygen-induced retinopathy mice. In mouse RPE cells cultured under high oxygen, CNTF expression and secretion were decreased but could be recovered after treatment with ATRA. In conclusion, our data suggest that ATRA administration upregulates CNTF expression in RPE cells. Bisphenol A (BPA) can accumulate in the human body via food intake and inhalation. Numerous studies have indicated that BPA can trigger the tumorigenesis and progression of cancer cells. Laryngeal cancer cells can be exposed to BPA directly during food ingestion, yet there are very limited data concerning the effect of BPA on the development of laryngeal squamous cell carcinoma (LSCC). Our present study revealed that nanomolar BPA can trigger the proliferation of LSCC cells. BPA also increased the in vitro migration and invasion of LSCC cells and upregulated the expression of matrix metallopeptidase 2. Among the various chemokines tested, the expression of IL-6 was significantly increased in LSCC cells treated with BPA for 24 hours. A neutralizing antibody against IL-6, or si-IL-6, attenuated BPA-induced proliferation and migration of LSCC cells. 
Targeted inhibition of the G protein-coupled estrogen receptor (GPER), but not the estrogen receptor (ER), abolished BPA-induced IL-6 expression, proliferation, and migration of LSCC cells. The increased IL-6 can further activate its downstream signal molecule STAT3, as evidenced by the increased phosphorylation and nuclear translocation of STAT3, while si-IL-6 and si-GPER both reversed BPA-induced activation of STAT3. Collectively, our present study revealed that BPA can trigger the progression of LSCC via GPER-mediated upregulation of IL-6. Therefore, more attention should be paid to the effects of BPA exposure on the development of laryngeal cancer. Epidermal growth factor plays a major role in breast cancer cell proliferation, survival, and metastasis. Quercetin, a bioactive flavonoid, has been shown to exhibit anticarcinogenic effects against various cancers, including breast cancer. Hence, the present study was designed to evaluate the effects of gold nanoparticle-conjugated quercetin (AuNPs-Qu-5) in MCF-7 and MDA-MB-231 breast cancer cell lines. Borohydride-reduced AuNPs were synthesized and conjugated with quercetin to yield AuNPs-Qu-5. Both were thoroughly characterized by several physicochemical techniques, and their cytotoxic effects were assessed by MTT assay. Apoptotic studies such as DAPI staining, AO/EtBr dual staining, and annexin V-FITC staining were performed. AuNPs and AuNPs-Qu-5 were spherical and crystalline in nature, with particle sizes ranging from 3.0 to 4.5 nm. AuNPs-Qu-5 exhibited a lower IC50 value compared to free quercetin. There was a considerable increase in the apoptotic population, with increased nuclear condensation, upon treatment with AuNPs-Qu-5. To delineate the molecular mechanism behind its apoptotic role, we analysed the proteins involved in apoptosis and in epidermal growth factor receptor (EGFR)-mediated PI3K/Akt/GSK-3 signalling by immunoblotting and immunocytochemistry. 
The pro-apoptotic proteins (Bax, caspase-3) were found to be upregulated and the anti-apoptotic protein Bcl-2 was downregulated on treatment with AuNPs-Qu-5. Additionally, AuNPs-Qu-5 treatment inhibited EGFR and its downstream signalling molecules PI3K/Akt/mTOR/GSK-3. In conclusion, administration of AuNPs-Qu-5 to breast cancer cell lines curtails cell proliferation through induction of apoptosis and also suppresses EGFR signalling. AuNPs-Qu-5 is more potent than free quercetin in causing cancer cell death, and hence, this could be a potential drug delivery system in breast cancer therapy. Multipotent mesenchymal stromal cells are considered a promising tool in cell therapy and regenerative medicine. Unfortunately, autologous cell therapy does not always provide positive outcomes in elderly donors, perhaps as a result of alterations in stem cell compartments. The mechanisms of stem and progenitor cell senescence and the factors engaged are being investigated intensively. In the present paper, we elucidated the effects of tissue-related O2 levels on the morphology, functions, and transcriptomic profile of adipose tissue-derived stromal cells (ASCs) in an in vitro model of replicative senescence. Replicatively senescent ASCs at ambient (20%) O2 (passages 12-21) demonstrated an increased average cell size, granularity, reactive oxygen species level (including superoxide anion), lysosomal compartment activity, and IL-6 production. Decreased ASC viability and proliferation, as well as changes in the expression of more than 10 senescence-associated genes (IGF1, CDKN1C, ID1, CCND1, etc.), were detected. Long-term ASC expansion at low (5%) O2 partially reversed the replicative senescence-associated alterations. Human immunodeficiency virus type 1 (HIV-1) nucleocapsid protein 7 (NCp7), a zinc finger protein, plays critical roles in viral replication and maturation and is an attractive target for drug development. 
However, the development of drug-like molecules that inhibit NCp7 has been a significant challenge. In this study, a series of novel 2-mercaptobenzamide prodrugs were investigated for anti-HIV activity in the context of NCp7 inactivation. The molecules were synthesized from the corresponding thiosalicylic acids, and they are all crystalline solids that are stable at room temperature. Derivatives with a range of amide side chains and aromatic substituents were synthesized and screened for anti-HIV activity. Wide ranges of antiviral activity were observed, with IC50 values ranging from 1 to 100 μM depending on subtle changes to the substituents on the aromatic ring and side chain. Results from these structure-activity relationships were fitted to a probable mode of intracellular activation and interaction with NCp7 to explain the variations in antiviral activity. Our strategy of making a series of mercaptobenzamide prodrugs represents a new general direction for building libraries that can be screened for anti-HIV activity. Fully synthetic MUC1 glycopeptide antitumor vaccines have a precisely specified structure and induce a targeted immune response without the suppression of the immune response that can occur when using an immunogenic carrier protein. However, tumor-associated aberrantly glycosylated MUC1 glycopeptides are endogenous structures, "self-antigens", that exhibit only low immunogenicity. To overcome this obstacle, a fully synthetic MUC1 glycopeptide antitumor vaccine was combined with poly(inosinic acid:cytidylic acid), poly(I:C), as a structurally defined Toll-like receptor 3 (TLR3)-activating adjuvant. This vaccine preparation elicited extraordinary titers of IgG antibodies which strongly bound human breast cancer cells expressing tumor-associated MUC1. 
Besides the humoral response, the poly(I:C) glycopeptide vaccine induced a pro-inflammatory environment, which is important for overcoming immune-suppressive mechanisms, and elicited a strong cellular immune response crucial for tumor elimination. Adenosine is known to be released under a variety of physiological and pathophysiological conditions to facilitate the protection and regeneration of injured ischemic tissues. The activation of myocardial adenosine A(1) receptors (A(1)Rs) has been shown to inhibit myocardial pathologies associated with ischemia and reperfusion injury, suggesting several options for new cardiovascular therapies. When full A(1)R agonists are used, the desired protective and regenerative cardiovascular effects are usually overshadowed by unintended pharmacological effects such as induction of bradycardia, atrioventricular (AV) block, and sedation. These unwanted effects can be overcome by using partial A(1)R agonists. Starting from the previously reported capadenoson, we evaluated options to tailor A(1)R agonists to a specific partiality range, thereby optimizing the therapeutic window. This led to the identification of the potent and selective agonist neladenoson, which shows the desired partial response at the A(1)R, resulting in cardioprotection without sedative effects or cardiac AV block. To circumvent solubility and formulation issues with neladenoson, a prodrug approach was pursued. The dipeptide ester neladenoson bialanate hydrochloride showed significantly improved solubility and exposure after oral administration. Neladenoson bialanate hydrochloride is currently being evaluated in clinical trials for the treatment of heart failure. Herein we report the design and development of alpha(5)beta(1) integrin-specific noncovalent RGDK-lipopeptide-functionalized single-walled carbon nanotubes (SWNTs) that selectively deliver the anticancer drug curcumin to tumor cells. 
RGDK tetrapeptide-tagged amphiphiles were synthesized that efficiently disperse SWNTs with a suspension stability index of >80% in cell culture media. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT)- and lactate dehydrogenase (LDH)-based cell viability assays in tumor (B16F10 melanoma) and noncancerous (NIH3T3 mouse fibroblast) cells revealed the non-cytotoxic nature of these RGDK-lipopeptide-SWNT conjugates. Cellular uptake experiments with monoclonal antibodies against alpha(v)beta(3), alpha(v)beta(5), and alpha(5)beta(1) integrins showed that these SWNT nanovectors deliver their cargo (Cy3-labeled oligonucleotides, Cy3-oligo) to B16F10 cells selectively via alpha(5)beta(1) integrin. Notably, the nanovectors failed to deliver the Cy3-oligo to NIH3T3 cells. The RGDK-SWNT is capable of delivering the anticancer drug curcumin to B16F10 cells more efficiently than to NIH3T3 cells, leading to selective killing of B16F10 cells. Results of Annexin V-binding-based flow cytometry experiments are consistent with selective killing of tumor cells through the late apoptotic pathway. Biodistribution studies in melanoma (B16F10)-bearing C57BL/6J mice showed tumor-selective accumulation of curcumin intravenously administered via RGDK-lipopeptide-SWNT nanovectors. The design of molecules that mimic biologically relevant glycans is a significant goal for understanding important biological processes and may lead to new therapeutic and diagnostic agents. In this study we focused our attention on the trisaccharide human natural killer cell-1 (HNK-1), considered the antigenic determinant of myelin-associated glycoprotein and the target of clinically relevant auto-antibodies in autoimmune neurological disorders such as IgM monoclonal gammopathy and demyelinating polyneuropathy. We describe a structure-activity relationship study based on surface plasmon resonance binding affinities aimed at the optimization of a peptide that mimics the HNK-1 minimal epitope. 
We developed a cyclic heptapeptide that shows an affinity of 1.09 x 10(-7) M for a commercial anti-HNK-1 mouse monoclonal antibody. Detailed conformational analysis gave possible explanations for the good affinity displayed by this novel analogue, which was subsequently used as an immunological probe. However, preliminary screening indicates that patients' sera do not specifically recognize this peptide, showing that murine monoclonal antibodies cannot be used as a guide to select immunological probes for the detection of clinically relevant human auto-antibodies. Ultraviolet (UV) light is the most abundant and significant modifiable risk factor for skin cancer and many other skin diseases such as early photo-aging; across the solar radiation spectrum, it is the main cause of skin problems. In the search for novel photoprotective compounds, a new series of 8-substituted purines was synthesized from commercially available 6-hydroxy-4,5-diaminopyrimidine hemisulfate or 4,5-diaminopyrimidine. All title compounds were investigated for their UV-filtering, antioxidant, antifungal, and antiproliferative activities. For the photoprotection assays we used a diffuse transmittance technique to determine the sun protection factor (SPF) in vitro, and 2,2-diphenyl-1-picrylhydrazyl (DPPH) and ferric ion reducing antioxidant power (FRAP) tests to evaluate the antioxidant activity of the more potent compounds. Among them, 8-(2,5-dihydroxyphenyl)-7H-purin-6-ol (compound 26) proved to be a good radical scavenger and is also endowed with broad-spectrum UVA-filtering capabilities, making it suitable for further development as a protective molecule. 
The A(1) adenosine receptor (A(1)AR) antagonist [F-18]8-cyclopentyl-3-(3-fluoropropyl)-1-propylxanthine ([F-18]CPFPX), used in imaging human brain A(1)ARs by positron emission tomography (PET), is stable in the brain but rapidly undergoes transformation in blood into one major metabolite (3-(3-fluoropropyl)-8-(3-oxocyclopenten-1-yl)-1-propylxanthine, M1) and several minor metabolites. This report describes the synthesis of putative metabolites of CPFPX as standards for the identification of those metabolites. Analysis by (radio-)HPLC revealed that extracts of human liver microsomes incubated with no-carrier-added (n.c.a.) [F-18]CPFPX contain the major metabolite, M1, as well as radioactive metabolites corresponding to derivatives functionalized at the cyclopentyl moiety, but no N1-despropyl species or metabolites resulting from functionalization of the N3-fluoropropyl chain. The putative metabolites were found to displace the binding of [H-3]CPFPX to the A(1)AR in pig brain cortex at K-i values between 1.9 and 380 nM, and the binding of [H-3]ZM241385 to the A(2A)AR in pig striatum at K-i values >180 nM. One metabolite, a derivative functionalized at the omega-position of the N1-propyl chain, showed high affinity (K-i 2 nM) for and very good selectivity (>9000) toward the A(1)AR. Extracellular signals perceived by G protein-coupled receptors are transmitted via G proteins, and subsequent intracellular signaling cascades result in a plethora of physiological responses. The natural product cyclic depsipeptides YM-254890 and FR900359 are the only known compounds that specifically inhibit signaling mediated by the G(q) subfamily. In this study we exploit a newly developed synthetic strategy for this compound class in the design, synthesis, and pharmacological evaluation of eight new analogues of YM-254890. These structure-activity relationship studies led to the discovery of three new analogues, YM-13, YM-14, and YM-18, which displayed potent and selective G(q) inhibitory activity. 
This provides pertinent information for understanding the G(q) inhibitory mechanism of this class of compounds and, importantly, provides a pathway for the development of labeled YM-254890 analogues. A series of imidazolium oligomers with novel planar and stereo core structures were designed and synthesized. These compounds have symmetric structures with different cores, tails, and linkers. These new imidazolium oligomers demonstrated a desirable set of bioactivities against four types of clinically relevant microbes: E. coli, S. aureus, P. aeruginosa, and C. albicans. The planar oligomers with three di-imidazolium arms and n-octyl tails showed good antimicrobial activity and biocompatibility. Oligomers with ortho-xylylene linkers exhibited higher antimicrobial activity and higher hemolytic ability than oligomers with para-xylylene linkers. These results shed light on the structure-property relationships of synthetic polymeric antimicrobial agents. A series of pioneering gold(I) complexes bearing N-heterocyclic carbenes and steroid derivatives (ethynylestradiol and ethisterone) with the generic formula [Au(R-2-imidazol-2-ylidene)(steroid)] (where R = CH3 or CH2CH2OCH3) were synthesized, and the X-ray structure of a rare gold(I)-estradiol derivative is discussed. Toxicity studies reveal notable antibacterial activity of the gold-based compounds, which is significantly increased in vivo by the presence of the estradiol unit. Toxicity profiling was estimated in vitro against Gram-positive (Staphylococcus aureus) and Gram-negative (Escherichia coli) bacteria, and in vivo on Galleria mellonella larvae against E. coli. VIM-2 is one of the most common carbapenem-hydrolyzing metallo-beta-lactamases (MBLs) found in many drug-resistant Gram-negative bacterial strains. Currently, there is a lack of effective lead compounds with optimal therapeutic potential within our drug development pipeline. 
Here we report the discovery of 1-hydroxypyridine-2(1H)-thione-6-carboxylic acid (3) as a first-in-class metallo-beta-lactamase inhibitor (MBLi) with a potent inhibition K-i of 13 nM against VIM-2, corresponding to a remarkable ligand efficiency of 0.99. We further established that 3 can restore the antibiotic activity of amoxicillin against VIM-2-producing E. coli in a whole-cell assay with an EC50 of 110 nM. The potential mode of binding of 3 from molecular modeling provided structural insights that corroborate the observed changes in the biochemical activities. Finally, 3 possesses a low cytotoxicity (CC50 of 97 µM) with a corresponding therapeutic index of 880, making it a promising lead candidate for further optimization in combination antibacterial therapy. There is a constant need for new therapies against multidrug-resistant (MDR) cancer. Natural compounds are a promising source of novel anticancer agents. We recently showed that protoflavones display activity in MDR cancer cell lines that overexpress the P-glycoprotein (P-gp) drug efflux pump. In this study, 52 protoflavones, including 22 new derivatives, were synthesized and tested against a panel of drug-sensitive parental cells and their MDR derivatives obtained by transfection with the human ABCB1 or ABCG2 genes, or by adaptation to chemotherapeutics. With the exception of protoapigenone, identified as a weak ABCG2 substrate, all protoflavones bypass resistance conferred by these two transporters. The majority of the compounds were found to exhibit mild to strong (up to 13-fold) selectivity against the MCF-7(Dox) and KB-V1 cell lines, but not against transfected MDR cells engineered to overexpress the MDR transporters. Our results suggest that protoflavones can overcome MDR cancer by evading P-gp-mediated efflux. We propose a closed-form pricing formula for the Chicago Board Options Exchange Volatility Index (CBOE VIX) futures based on the classic discrete-time Heston-Nandi GARCH model. 
The parameters are estimated using several sets of data, including the S&P 500 returns, the CBOE VIX, VIX futures prices, and combinations of these data sources. Based on the resulting empirical pricing performance, we recommend the use of both VIX and VIX futures prices for a joint estimation of model parameters. Such an estimation method can effectively capture the variations of the market VIX and the VIX futures prices simultaneously for both in-sample and out-of-sample analysis. (c) 2016 Wiley Periodicals, Inc. Jrl Fut Mark 37:641-659, 2017 This paper examines whether investor sentiment can predict credit default swap (CDS) spread changes. Among several proxies for investor sentiment, the change in the equity put-call ratio performs best in predicting variation in CDS spread changes in both firm- and portfolio-level regressions; in particular, the explanatory power of this proxy is greater for non-investment-grade firms than for investment-grade firms. More importantly, sentiment may be a critical factor in determining CDS spread changes during the global financial crisis and may best explain the differences in CDS spreads in the group of firms whose leverage ratio and stock volatility are highest. (c) 2016 Wiley Periodicals, Inc. Jrl Fut Mark 37:660-688, 2017 We examine the return-volatility relation using three of the CBOE's recent indices. We employ more robust estimation techniques that account for the asymmetric relation between return and volatility. Our findings indicate that the contributions of these indices to R-squared are surprisingly large (7.45-35.54%) when the observations are divided into decile groups. The results further indicate that behavioral theories explain the return-volatility relation better than fundamental theories. We use daily and high-frequency data. The results are consistent across all data, though the high-frequency data seem to provide more support for the behavioral theories. (c) 2016 Wiley Periodicals, Inc. 
Jrl Fut Mark 37:689-716, 2017 This study examines the role of extended CSI 300 Index futures trading in price discovery. As a prerequisite for the facilitation of price discovery, we first confirm that extended trading is weak-form efficient and driven by information. We find that the ability of futures returns during extended trading to predict the index's overnight returns is strong and improving. More importantly, compared to the index, its futures price exhibits stronger price leadership, particularly in the early synchronous trading hours. Evidence suggests that extended trading facilitates price discovery at the opening and in the early trading hours of the stock market. (c) 2016 Wiley Periodicals, Inc. Jrl Fut Mark 37:717-740, 2017 During daylight, plants produce excess photosynthates, including sucrose, which is temporarily stored in the vacuole. At night, plants remobilize sucrose to sustain metabolism and growth. Based on homology to other sucrose transporter (SUT) proteins, we hypothesized that the maize (Zea mays) SUCROSE TRANSPORTER2 (ZmSUT2) protein functions as a sucrose/H+ symporter on the vacuolar membrane to export transiently stored sucrose. To understand the biological role of ZmSut2, we examined its spatial and temporal gene expression, determined the protein's subcellular localization, and characterized loss-of-function mutations. ZmSut2 mRNA was ubiquitously expressed and exhibited diurnal cycling in transcript abundance. Expressing a translational fusion of ZmSUT2 to a red fluorescent protein in maize mesophyll cell protoplasts revealed that the protein localized to the tonoplast. Under field conditions, zmsut2 mutant plants grew more slowly, possessed smaller tassels and ears, and produced fewer kernels compared with wild-type siblings. zmsut2 mutants also accumulated two-fold more sucrose, glucose, and fructose, as well as starch, in source leaves compared with wild type. 
These findings suggest (i) ZmSUT2 functions to remobilize sucrose out of the vacuole for subsequent use in growing tissues; and (ii) its function provides an important contribution to maize development and agronomic yield. Metabolite transport processes and primary metabolism are highly interconnected. This study examined the importance of source-to-sink nitrogen partitioning, and the associated nitrogen metabolism, for carbon capture, transport and usage. Specifically, Arabidopsis aap8 (AMINO ACID PERMEASE 8) mutant lines were analyzed to resolve the consequences of reduced amino acid phloem loading for source leaf carbon metabolism, sucrose phloem transport and sink development during the vegetative and reproductive growth phases. Results showed that decreased amino acid transport had a negative effect on sink development of aap8 lines throughout the life cycle, leading to an overall decrease in plant biomass. During the vegetative stage, photosynthesis and carbohydrate levels were decreased in aap8 leaves, while expression of carbon metabolism and transport genes, as well as sucrose phloem transport, were not affected despite reduced sink strength. However, when aap8 plants transitioned to the reproductive phase, carbon fixation and assimilation as well as sucrose partitioning to siliques were strongly decreased. Overall, this work demonstrates that phloem loading of nitrogen has varying implications for carbon fixation, assimilation and source-to-sink allocation depending on plant growth stage. It further suggests alterations in source-sink relationships, and regulation of carbon metabolism and transport by sink strength, in a development-dependent manner. The biotrophic fungus Ustilago maydis causes corn smut disease, inducing tumor formation in its host Zea mays. Upon infection, the fungal hyphae invaginate the plasma membrane of infected maize cells, establishing an interface where pathogen and host are separated only by their plasma membranes. 
At this interface, the fungal and maize sucrose transporters, UmSrt1 and ZmSUT1, compete for extracellular sucrose in the corn smut/maize pathosystem. Here we biophysically characterized ZmSUT1 and UmSrt1 in Xenopus oocytes with respect to their voltage, pH and substrate dependence, and determined their affinities toward protons and sucrose. In contrast to ZmSUT1, UmSrt1 has a high affinity for sucrose and is relatively pH- and voltage-independent. Using these quantitative parameters, we developed a mathematical model to simulate the competition for extracellular sucrose at the contact zone between the fungus and the host plant. This approach revealed that UmSrt1 exploits the apoplastic sucrose resource, which forces the plant transporter into a sucrose export mode, providing the fungus with sugar from the phloem. Importantly, the high sucrose concentration in the phloem appeared disadvantageous for ZmSUT1, preventing sucrose recovery from the apoplastic space at the fungus/plant interface. While monocots lack the ability to produce a vascular cambium or woody growth, some monocot lineages evolved a novel lateral meristem, the monocot cambium, which supports secondary radial growth of stems. In contrast to the vascular cambium found in woody angiosperm and gymnosperm species, the monocot cambium produces secondary vascular bundles, which have an amphivasal organization of tracheids encircling a central strand of phloem. Currently there is no information concerning the molecular genetic basis of the development or evolution of the monocot cambium. Here we report high-quality transcriptomes for monocot cambium and early derivative tissues in two monocot genera, Yucca and Cordyline. Monocot cambium transcript profiles were compared to those of vascular cambia and secondary xylem tissues of two forest tree species, Populus trichocarpa and Eucalyptus grandis. 
Monocot cambium transcript profiles revealed extensive overlap between the regulation of monocot cambia and vascular cambia. Candidate regulatory genes that vary between the monocot and vascular cambia were also identified, and included members of the KANADI and CLE families involved in polarity and cell-cell signaling, respectively. We suggest that the monocot cambium may have evolved in part through reactivation of genetic mechanisms involved in vascular cambium regulation. The global proliferation of reporting non-International Financial Reporting Standards (IFRS) (pro forma) earnings has been subject to academic debate and regulatory reform. This study examines whether non-IFRS earnings contain statistically significant information on future cash flow predictability that could be useful for investors. The study uses data from large Australian listed companies over a six-year period (2006-11) covering three distinctive periods around the global financial crisis (GFC): pre-GFC, GFC and post-GFC. Results based on fixed effects panel estimation methods suggest non-IFRS earnings do exert a significantly positive impact on future cash flow predictability, but only during the pre-crisis and crisis periods. In this paper we examine whether non-IFRS (pro forma) earnings forecast future cash flows better than IFRS earnings. Using a unique dataset for Australian companies from 2006-11, results show that non-IFRS earnings predict cash flows better than IFRS earnings. Yet, this relationship changes under different economic conditions. This paper examines the relative costs and benefits of International Financial Reporting Standards (IFRS) adoption in the European Union by testing the ability of earnings computed under IFRS to predict future cash flows. 
The study considers the contribution of net income, comprehensive income and other comprehensive income to the usefulness of earnings in predicting cash flows, and it compares IFRS with domestic Generally Accepted Accounting Principles (GAAP). Evidence from a sample of Continental European banks shows that IFRS improve the ability of net income to predict future cash flows. Comprehensive income, too, provides relevant information for predicting future cash flows, although with a measurement error that is higher than that of net income at longer lags. In our interpretation, these findings are consistent with unrealised gains and losses recognised in other comprehensive income being more transitory and volatile in nature. Overall, our results are relevant to academics and standard setters debating the merits of IFRS adoption, and to those who use financial statements and rely on reported earnings to form expectations about future cash flows. This paper examines the relative costs and benefits of IFRS adoption in the European Union by testing the ability of earnings to predict future cash flows. Evidence from a sample of Continental European banks shows that IFRS improve the ability of net income to predict future cash flows. Previous studies have established that firms' effectiveness can differ based on the differences among directors within a board, and between boards. However, studies have yet to establish the effect of diverse board attributes on firms' earnings quality in an emerging market setting such as Vietnam. This study investigates the effect of board diversity on earnings quality in a sample of Vietnamese listed firms. 
The two dimensions of board diversity measures in this study cover a wide range of structural and demographic attributes of boards of directors, using a diversity-of-boards index (dissimilarities among firm boards, i.e., board structure) and a diversity-in-boards index (dissimilarities among directors within a board, i.e., demographic attributes of board members). Earnings quality is an aggregate measure compiled from four accounting-based measures of earnings quality: accruals quality, earnings persistence, earnings predictability and earnings smoothness. We find a significant, positive linear relationship between diversity of boards and earnings quality, while the relationship between diversity in boards and earnings quality is non-linear, with a U-shaped curve. This study investigates the impact of board diversity on earnings quality in a sample of Vietnamese listed firms. We find a significant, positive linear relationship between diversity of boards (dissimilarities among firm boards) and earnings quality, while the relationship between diversity in boards (dissimilarities among directors within a board) and earnings quality is non-linear, with a U-shaped curve. Housing bubbles may result in deep crises that affect all economic systems. This study investigates how the recent housing bubble in Spain affected earnings quality over the course of the bubble. To this end, we use data on mostly private construction-activity firms in Spain, that is, construction and real estate companies. Earnings quality is studied by means of the predictive ability of earnings, conservatism, discretionary accruals and real earnings management. The results indicate a progressive decrease in the quality of financial reporting as the bubble develops, as managers try to conceal an underlying downward trend. We further show that earnings quality continues to decline even after the bubble bursts. 
Overall, this evidence, together with that from other settings, suggests that, in a bubble context, attention must be paid to firms' earnings quality even some years before the crisis comes to the fore. This study investigates how the recent housing bubble affected earnings quality in Spain. Using several proxies of earnings quality, our results indicate a progressive decrease in the quality of financial reporting as the bubble develops. The present paper explores the association between earnings management and specific board characteristics and the firm's profitability in the Indian context. In India, the corporate ownership model is the promoter-dominated shareholder model. This is the first study based on a panel data framework that employs a fixed effects model to control for time-invariant endogeneity. It also contributes to the literature by exploring the role of the firm's profitability in transmitting the impact of audit committee independence on earnings management. The study finds that profitability is an important variable, as it moderates the association between audit committee independence and earnings management. Managers of a profit-making company would have little need to modify their earnings. This signifies that independent audit committees are more effective monitors of earnings management in profitable firms than in non-profitable firms. Independent directors with multiple directorships are also found to be ineffective monitors. The findings are of material significance to policymakers in analysing board effectiveness and earnings management and in improving policymaking for corporate governance by using profitability and related variables. The present paper explores the association between earnings management and specific board characteristics and the role of the firm's profitability in transmitting the impact of audit committee independence on earnings management. 
The study finds that profitability is an important variable that moderates the association between audit committee independence and earnings management. The past few years have witnessed the prosperity of the mobile Internet. The increased traffic with diverse QoS requirements calls for a more capable and flexible infrastructure. Satellite systems with global coverage capability can be used as a backbone network to interconnect autonomous systems worldwide. In this context, routing data in satellite networks is of great importance considering the integration of satellite networks with terrestrial Internet protocol networks. This paper provides a systematic routing design for geostationary orbit/low earth orbit (LEO) hybrid satellite networks. We first describe the architecture of the geostationary orbit/LEO satellite networks, on the basis of which a user mobility management mechanism is proposed. The core idea of the user mobility management mechanism lies in the division of the Earth's surface, which results in the separation of the user and the satellite segment, along with the separation of the user identifier and locator. We then give an analysis of the division of the Earth's surface. Based on the separation of the user and the satellite segment, a hop-constrained adaptive routing mechanism with load balancing capability is proposed. Simulations on an Iridium-like LEO satellite network further document and confirm the positive characteristics of the proposed routing mechanism. Copyright (C) 2016 John Wiley & Sons, Ltd. Recent technological advancements in sensors, processors and communications technology make it viable to perform digital acquisition of environmental data from remote locations. Declining costs and miniaturisation of electronics and sensors have enabled the design of systems for intelligent remote monitoring. 
These advances pave the way for new tools to support field work by virtually extending researchers' reach to the field study area from the comfort of their offices. The Wireless Internet Sensing Environment project developed an architecture providing control and retrieval of data from networked sensors and cameras at a remote location using Internet backhaul. Satellite connectivity enabled this equipment to be deployed to remote locations to support an ecological application. This paper describes the architecture and innovative design features for this challenging problem space, including motion event detection, power management and a method to upload collected data. (C) 2016 The Authors. International Journal of Satellite Communications and Networking published by John Wiley & Sons Ltd. This paper deals with the analysis of the acquisition process performed by a global navigation satellite system (GNSS) receiver with a pilot and data channel, or in the case of a hybrid GNSS receiver. Signal acquisition decides the presence or absence of a GNSS signal by comparing the signal under test with a fixed threshold, and provides a code delay and Doppler frequency estimate; however, in low-signal conditions or in a noisy environment, acquisition systems are vulnerable and can give a high false alarm probability and a low detection probability. To deal with these situations, we first introduce a cell-averaging constant false alarm rate (CA-CFAR) detector and then a fusion-based data-pilot CA-CFAR detector. In this context, we use a new mathematical derivation to develop closed-form analytic expressions for the probabilities of detection and false alarm. The performance of the proposed detector is evaluated and compared with a non-CFAR case through analytical and numerical results validated by Monte Carlo simulations. Copyright (C) 2016 John Wiley & Sons, Ltd. The security of space information networks (SIN) is becoming more and more important. 
Because of the special features of SIN (e.g., the dynamic and unstable topology, the highly exposed links, the restricted computation power, the flexible networking methods, and so on), a security protocol for SIN should strike a balance between security properties and computation/storage overhead. Although many security protocols have been proposed recently, few can provide comprehensive attack resistance with low computation and storage cost. To solve this problem, in this paper we propose a lightweight authentication scheme for space information networks. It is mainly based on a self-updating strategy for the user's temporary identity. The scheme consists of two phases, namely, the registration phase and the authentication phase. All the computing operations involved are just the hash function (h), the bit-wise exclusive-or operation (⊕), and the string concatenation operation (‖), which are of low computation cost. The security properties discussion and the attack-resistance analysis show that the proposed authentication scheme can defend against various typical attacks, especially denial-of-service attacks. It is sufficiently secure with the lowest computation and storage costs. Furthermore, the formal security proof in SVO logic also demonstrates that the scheme satisfies the security goals very well. Copyright (C) 2016 John Wiley & Sons, Ltd. The double phase estimator (DPE) is an unambiguous binary offset carrier (BOC) tracking algorithm for band-limited receivers in Global Navigation Satellite Systems. In this paper, based on the strobe pulse method, the DPE is modified by introducing a strobe waveform into the prompt-signal correlation process of the subcarrier phase lock loop. This strobe DPE (SDPE) employs no additional correlators. Two different reference strobe waveforms and their receiving structures are provided. 
The performance of the SDPE is characterized in terms of the subcarrier multipath error envelope (SMEE) and the tracking jitter. Simulation results show that, relative to the conventional DPE, the first waveform employed in this study provides a reduction in the SMEE area of 81.1% and 75.1% for the BOC(1,1) and BOC(14,2) signals, respectively. The second waveform provides a reduction in the SMEE area of 82.5% and 76.8% for the BOC(1,1) and BOC(14,2) signals, respectively. Both also outperform the double estimator in SMEE area by about 64.4% and 53.2% for the BOC(1,1) and BOC(14,2) signals. However, the SDPE experiences a loss of -6 and -7.25 dB for the two reference waveforms in terms of the post-coherent signal-to-noise ratio, which impacts its tracking precision. Copyright (C) 2016 John Wiley & Sons, Ltd. With the rapid development of the online-to-offline economy, new service compositions will take up a large part of satellite communication. More and more new service compositions request more bandwidth and network resources, leading to serious traffic congestion and low channel utilization. Suffering from isolated link connections and changeable delay in the satellite environment, current bandwidth allocation schemes cannot satisfy the demand for low delay and high access rates for new satellite services. This paper focuses on a bandwidth allocation method for satellite communication service compositions. Novel models of service compositions with single-hop Poisson distributions are designed to simulate the original traffic arrivals. Isolated independence coefficients adapt the original distribution to isolated disconnections. Service queue waiting time is judged against an acceptable delay threshold. The models provide new service compositions with more precise arrival distributions. To alleviate traffic congestion, a method combining the service models and network performance is proposed.
Optimal reserved bandwidth is set according to the priority and arrival distribution of different service compositions, which classifies services by feedback transmission performance. We design minimum fuzzy delay-tolerant intervals to calculate the delay-tolerant threshold, which adapts to random delay changes in service networks with delay-tolerant features. The simulation in OPNET demonstrates that the proposed method improves queuing delay by 16.3%, end-to-end delay by 18.7%, and bandwidth utilization by 13.2%. Copyright (C) 2016 John Wiley & Sons, Ltd. Healing of rotator cuff (RC) injuries with current suture or augmented scaffold techniques fails to regenerate the enthesis and instead forms a weaker fibrovascular scar that is prone to subsequent failure. Regeneration of the enthesis is the key to improving clinical outcomes for RC injuries. We hypothesized that the utilization of our tissue-engineered tendon to repair either an acute or a chronic full-thickness supraspinatus tear would regenerate a functional enthesis and return the biomechanics of the tendon back to that found in native tissue. Engineered tendons were fabricated from bone marrow-derived mesenchymal stem cells utilizing our well-described fabrication technology. Forty-three rats underwent unilateral detachment of the supraspinatus tendon followed by acute (immediate) or chronic (4 weeks retracted) repair using either our engineered tendon or a transosseous suture technique. Animals were sacrificed at 8 weeks. Biomechanical and histological analyses of the regenerated enthesis and tendon were performed. Statistical analysis was performed using a one-way analysis of variance with significance set at p < 0.05. Acute repairs using engineered tendon had improved enthesis structure and lower biomechanical failures compared with suture repairs.
Chronic repairs with engineered tendon had a more native-like enthesis with increased fibrocartilage formation, reduced scar formation, and lower biomechanical failure compared with suture repair. Thus, the utilization of our tissue-engineered tendon showed improved enthesis regeneration and improved function in chronic RC repairs compared with suture repair. Clinical Significance: Our engineered tendon construct shows promise as a clinically relevant method for repair of RC injuries. In this paper, we define a sound and complete inference system for triadic implications generated from a formal triadic context K := (G, M, B, I), where G, M, and B are object, attribute, and condition sets, respectively, and I is a ternary relation I ⊆ G × M × B. The inference system is expressed as a set of axioms à la Armstrong. The type of triadic implication we consider in this paper is called a conditional attribute implication (CAI) and has the form X →_C Y, where X and Y are subsets of M and C is a subset of B. Such an implication states that X implies Y under all conditions in C and any subset of it. Moreover, we propose a method to compute CAIs from Biedermann's implications. We also introduce an algorithm to compute the closure of an attribute set X w.r.t. a set σ of CAIs given a set C of conditions. Fuzzy relation equations (FRE) are an important tool in decision support systems (DSS), for example, in fuzzy logic. FRE have recently been extended to a more general framework, called multi-adjoint relation equations (MARE). This paper presents MARE as a fundamental DSS tool in multi-adjoint logic programming. For that purpose, multi-adjoint logic programs are interpreted as MARE, and their solvability is given in terms of concept lattice theory. Furthermore, two approximations (optimistic and pessimistic) of unsolvable equations are obtained from a multi-adjoint object-oriented concept lattice. Finally, a real-life example is studied.
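The closure computation mentioned in the CAI abstract above can be sketched as a naive fixpoint iteration. This is a minimal illustrative sketch, not the authors' algorithm: it assumes each CAI X →_C Y is encoded as a triple (premise, conditions, conclusion) of Python sets, and that a CAI fires for a queried condition set D whenever D ⊆ C, since the implication holds under all conditions in C and any subset of it.

```python
def closure(attrs, cais, conds):
    """Close an attribute set under a set of conditional attribute
    implications (CAIs), restricted to the queried conditions.

    attrs -- initial attribute set (subset of M)
    cais  -- iterable of triples (premise, impl_conds, conclusion),
             an illustrative encoding of X ->_C Y
    conds -- the condition set of interest (subset of B)
    """
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, impl_conds, conclusion in cais:
            # The CAI applies if it covers the queried conditions and
            # its premise is already contained in the current closure.
            if conds <= impl_conds and premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed
```

For example, with σ = [({"a"}, {"c1", "c2"}, {"b"}), ({"b"}, {"c1"}, {"d"})], querying conditions {"c1"} closes {"a"} to {"a", "b", "d"}, while querying {"c2"} only yields {"a", "b"}, because the second CAI does not hold under c2.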
This article describes the application of a multiobjective evolutionary algorithm for locating roadside infrastructure for vehicular communication networks over realistic urban areas. A multiobjective formulation of the problem is introduced, considering quality-of-service and cost objectives. The experimental analysis is performed over a real map of Malaga, using real traffic information and antennas, and scenarios that model different combinations of traffic patterns and applications (text/audio/video) in the communications. The proposed multiobjective evolutionary algorithm computes accurate trade-off solutions, significantly improving over state-of-the-art algorithms previously applied to the problem. This paper presents a system for detecting road departures by comparing linguistic representations of the trajectory of the vehicle with those of the lane marks of the road. All this information is obtained from a single camera, processing exclusively the H.264 motion vectors extracted from the recorded video. The comparison between the linguistic elements allows detecting the subset of consecutive frames where there is no logical correspondence between the displacement of the vehicle and the road shape. Since the videos are captured from a moving vehicle, we propose a statistically based process to use domain-changing fuzzy sets adapted to continuously changing traffic scenarios. This improves the reliability of the linguistic descriptions that, once compared, are used to detect departures. Lastly, a set of experiments using traffic videos with different characteristics is presented to validate this approach. Multiple sequence alignment (MSA) plays a core role in most bioinformatics studies and provides a framework for the analysis of evolution in biological systems. The MSA problem consists in finding an optimal alignment of three or more sequences of nucleotides or amino acids.
Different scores have been defined to assess the quality of MSA solutions, so the problem can be formulated as a multiobjective optimization problem. Proposals in the literature focused on this approach are scarce, and most of them take the NSGA-II metaheuristic as their base algorithm, so a study involving a set of representative multiobjective metaheuristics for this complex problem is lacking. Our main goal in this paper is to carry out such a study. We propose a biobjective formulation for the MSA and perform an exhaustive comparative study of six multiobjective algorithms. We have considered a number of problems taken from the benchmark BAliBASE (v3.0). Our experiments reveal that the classic NSGA-II algorithm and MOCell, a cellular metaheuristic, provide the best overall performance. In recent years, a large number of evolutionary and other population-based heuristics have been proposed in the literature. In 2009, we suggested combining the very efficient bacterial evolutionary algorithm with local search as a new Discrete Bacterial Memetic Evolutionary Algorithm (DBMEA) (Farkas et al., In: Towards intelligent engineering & information technology, Studies in Computational Intelligence, Vol. 243. Berlin, Germany: Springer-Verlag; 2009. pp 607-625). The method was tested on one of the Traveling Salesman Problem (TSP) benchmark problems, and a difference was found between the real optimum calculated by the new method and the published result, because Concorde and the Lin-Kernighan algorithm use an approximation that substitutes the distances between points with the closest integer values. We modified the Concorde algorithm to use real cost values to compare with our results.
In this paper, we systematically investigate TSPLIB benchmark problems and other VLSI benchmark problems and compare the following values: the optima found by the DBMEA heuristic and by the modified Concorde algorithm with real cost values, and the run times of DBMEA, the modified Concorde, and the Lin-Kernighan heuristic. For the evaluation of metaheuristic techniques, we suggest using the predictability of a successful run as a third property, in addition to the accuracy of the result and the computational cost. We show that in the case of DBMEA the run time is more predictable than in the case of the Concorde algorithm, so we suggest the DBMEA heuristic as a very efficient method for the solution of the TSP and other nondeterministic polynomial-time hard optimization problems. The literature on management accountants mainly discusses them in terms of how they relate to the organisation's internal actors. Whether considered as routine and technical 'bookkeepers' or as serving a more rewarding 'business partner' role, it is their relationship with corporate management and/or operational managers that is often put under the spotlight. This paper explores the extent to which management accountants can play an active role in how their organisation interacts with the external environment. The research is based on ethnographic immersion in the management accounting department of a French professional football club. It shows that management accountants can extend the business partner role they play within the organisation externally, towards the industry's financial regulatory body. This external role brings to the fore the 'critical competences' (Boltanski, 2009) mobilised by management accountants to challenge the institutional domination of the regulatory body. By influencing the rules and practices with which their organisation must comply, management accountants engender its possible 'emancipation' (Boltanski, 2009). (C) 2016 Elsevier Ltd. All rights reserved.
Major events comprise an important aspect of popular culture. The pulsating nature of event organisations implies that they quickly expand at the time of the event and then contract. By examining six sport event organisations, detailed action planning was found to be crucial to ensure both structure and flexibility when the event took place. Detailed action planning served as the backbone in the chain of control in each case, connecting the evaluation based on non-financial measures with the budgeting, and with the policies and procedures applied during the process. It created a shared understanding of the breakdown of responsibilities and duties and made it possible to clarify the role each individual played within the system and to determine when and how improvisation was needed. Our findings thereby provide important boundary conditions to the literature on 'minimal structures' by making it clear that 'minimal' management controls are not sufficient to handle the balance between structure and flexibility in pulsating organisations, which often rely on thousands of inexperienced employees working together for a very short period of time. Detailed action planning helped create 'operational representation' (Bigley and Roberts, 2001), i.e. the basic cognitive infrastructure permitting individuals and groups to effectively integrate their behaviours with those of others on a moment-to-moment basis as the event unfolds. We also contribute by explaining important management control differences across the six organisations through the distinction between participation- and spectator-driven events. (C) 2016 Elsevier Ltd. All rights reserved. Popular culture and the institutions and rituals that make it possible have become overwhelmingly significant in modern life. In this paper, we draw upon governmentality studies to explore the making-up (du Gay et al., 1996) of brand managers in a leading international cosmetics firm.
Through in-depth interviews and participant observation, we analyse the control mechanisms through which brand managers embody their product and are made consumer subjects inside their own organisation. Illustrating how these key intermediaries of popular culture become "simultaneously promoters of commodities and commodities they promote" (Bauman, 2007), we not only account for the control practices in use in a key organisation related to popular culture, but also investigate how certain control practices shape the very site of popular culture. (C) 2016 Elsevier Ltd. All rights reserved. The study of accounting and popular culture presents an exciting new research agenda for management accountants. This study examines this development from a strategy perspective. Specifically, this paper adds to our knowledge of the potential for Strategic Management Accounting (SMA) in action by studying the novel setting of the world of West End musicals. Using a case study approach, this study challenges conventional SMA thinking from a 'strategy-as-practice' perspective, using the process of developing a popular theatre portfolio of activities. Findings indicate that strategy is a complex practice and an inherently social process: theatre producers negotiate a route to the market that is mediated by validating intermediary organizations, which contribute to and communicate the reputation of new cultural products and thereby support the strategic process. (C) 2017 Elsevier Ltd. All rights reserved. The focus of this study is to examine how management accounting information is used in the evaluation of singularities. As highlighted by Karpik (2010), singularities represent everyday goods and services that are unique, multidimensional, incommensurable, and of uncertain quality. The paper draws on these underlying properties in investigating how they are evaluated.
It does so in the realm of popular culture, a space in which singularities are a common feature, using the example of a particular social phenomenon, namely the Internet Movie Database (IMDb). Through netnographic and interview-based research, the study explores how management accounting tools embedded within IMDb play a role in shaping diverse social outcomes in relation to popular culture (in this case, the unpredictable and varying film choices of individuals). It further explores how these tools also become constitutive of the core functioning of innovative social phenomena such as IMDb, so as to direct and somehow provide a semblance of order to these social outcomes and their derivation. Findings indicate that while the evaluation of singularities such as films is driven by a reliance on quantitative measures, such as the ratings and rankings on IMDb, it is also derived through aligning individual personal interests with those of the 'information provider', for example the interests and tastes of reviewers on IMDb. In this respect, our case shows how the problematic nature of imperfect and conflicting performance information can be effectively overcome. (C) 2016 Elsevier Ltd. All rights reserved. This paper examines the role of calculative practices in the creation of the Charlie Chaplin museum, a multiparty cultural project with the mission to 'bring back' the great entertainer in an 'authentic' and commercially viable way. As with many other cultural organizations, there are multiple parties with competing and even conflicting objectives, leading to disagreement not only about the final objectives but also about the evaluative principles that would guide the parties towards consensus and productive action. Previous research in such settings has commonly portrayed accounting as a mediating practice which helps reconcile the dilemma between commerce and culture.
We put forward accounting as a catalyzing - rather than compromising - factor in producing cultural goods. We develop this claim by examining the transformative power of calculative practices during the creation of the Chaplin museum. (C) 2016 Published by Elsevier Ltd. The processes used in the production of popular culture have received little attention in the management accounting literature. While the broader organizational literature has highlighted the paradoxical nature of managing in the cultural industries, given the combined imperatives of art and commerce, there are few accounts of how creative labour is managed. Many cultural products are produced in temporary organizations where project members are challenged to deliver something new by a deadline. Keeping these large-scale projects 'on budget' is no trivial accomplishment, but the role of calculative practice in this process has yet to be understood. This paper draws insights from an 82-day ethnographic study of a dramatic television series production as it unfolded in real time to show how the micro-processes of calculative practice are interwoven into the diverse disciplines that constitute the project team. The main contribution of this study is a grounded process model that highlights how the budget is implicated in different forms of calculation that work in combination to bring together the creative aspirations of the scripts and the financial parameters of the project. Further, the evaluative aspects of calculative practice play a vital role in the creation of these singular goods. (C) 2016 Elsevier Ltd. All rights reserved. This paper seeks to explain the popular growth in DIY activity through the theoretical lens of Callon's (1986) four moments of translation. This framing facilitates an understanding of the process by which DIY changed from an activity driven by economic necessity to a popular recreational pastime.
The paper draws on empirical sources from the 1950s, a key moment in which DIY was embraced by the mass populace. A particular source of reference is the specialist DIY magazines which began to appear during this decade. Through an ANT (actor-network theory) lens, the empirical material illustrates how several diverse actors came together through a process of translation, mobilising a network of forces to promote DIY activity. Following Skærbæk and Melander (2004), the paper suggests the role of accounting, and calculative practices more generally, as interessement devices in this process. The labour cost saving associated with DIY acts as an important interface between actors in the network. Calculative technologies can therefore be seen as a central part of the process through which DIY becomes established as a popular pursuit. (C) 2016 Elsevier Ltd. All rights reserved. Background: The aim of this study was to determine the effects of oil quality and antioxidant (AOX) supplementation on sow performance, milk composition, and oxidative status. Methods: A total of 80 PIC sows (PIC breeding, 3-5 parities) with similar body condition were allocated to four groups (n = 20), receiving diets including fresh corn oil, oxidized corn oil, fresh corn oil plus AOX, and oxidized corn oil plus AOX, respectively, from d 85 of gestation to d 21 of lactation. AOX was provided at 200 mg/kg diet and mixed with corn oil prior to dietary formulation. Results: The results showed that sows fed oxidized corn oil had significantly lower feed intake (P < 0.05) during the lactation period. Feeding oxidized corn oil markedly decreased (P < 0.05) the protein and fat contents of colostrum and milk, but the addition of AOX to oxidized corn oil prevented the decrease in the protein content of colostrum. Moreover, sows fed oxidized corn oil had significantly lower serum activities of total SOD and Mn-SOD across lactation (P < 0.05).
In contrast, the addition of AOX to oxidized corn oil tended to inhibit the production of MDA (P = 0.08) in sows across lactation relative to fresh oil. Intriguingly, placental oxidative status was affected by oil quality and AOX supplementation, as indicated by the markedly increased placental gene expression of GPX and SOD (P < 0.05) in sows fed oxidized corn oil, which was normalized by supplementation of AOX. Conclusion: In conclusion, feeding oxidized corn oil did not markedly affect reproductive performance, apart from decreasing feed intake during lactation. Milk composition and systemic oxidative status deteriorated in sows fed oxidized corn oil and were partially improved by AOX supplementation. Moreover, the placental antioxidant system of sows may have an adaptive response to oxidative stress, which was normalized by AOX. Background: Present-day consumers prefer natural antioxidants to synthetic antioxidants because they are more active. However, the activity generally depends on the specific condition and composition of the food. The aim of this study was to investigate the effect of wheat germ oil and alpha-lipoic acid on the quality characteristics, antioxidant status, fatty acid profile, and sensory attributes of chicken nuggets. Methods: Six types of diets were prepared for feeding the chickens to evaluate the quality of nuggets made from the leg meat of these experimental animals. These included a control diet, a diet enriched with wheat germ oil (WGO), which is a rich natural source of alpha-tocopherol (AT), diets with added AT or alpha-lipoic acid (ALA), and diets with a combination of either ALA and WGO (ALA + WGO) or ALA and synthetic AT (ALA + AT). ALA has great synergism with synthetic as well as natural AT (WGO). Results: The diet with WGO and ALA showed the best potential with respect to both antioxidant activity and total phenolic content. HPLC results revealed that chicken nuggets made from the WGO + ALA group showed maximum deposition of AT and ALA.
The stability of the nuggets from the control group was found to be significantly lower than that of nuggets from the WGO + ALA group. Total fatty acid content was also higher in the nuggets from this group. Polyunsaturated fatty acids (PUFA) were found to be higher in the nuggets from the groups fed a combination of natural and synthetic antioxidants. Conclusion: It is concluded that the combination of natural and synthetic antioxidants in the animal feed exerts a synergistic effect in enhancing the stability and quality of chicken nuggets. Background: The role of 1,25-dihydroxyvitamin D3 (vitamin D) in the apoptosis of diabetic cardiomyopathy (DCM) is unclear. This study investigates the effects of vitamin D on the pathological changes in rats with DCM. Methods: Rats were randomly divided into control, model, and treatment groups. The DCM model was established with a high-fat and high-sugar diet. Plasma glucose, body weight, heart weight, heart weight index, and serum levels of lactate dehydrogenase (LDH) and creatine kinase (CK) were determined. Heart tissue morphology was examined with histochemical staining. Expression levels of Fas and FasL were detected with RT-PCR and immunohistochemistry. Results: Compared with the control group, body weights and heart weights were significantly declined, while plasma glucose levels and heart weight indexes were significantly elevated, in the model group (P < 0.05). However, vitamin D significantly reversed the pathological changes in DCM rats (P < 0.05). Moreover, the serum levels of LDH and CK were significantly increased in the models, and were significantly decreased by vitamin D (P < 0.05). HE staining showed that vitamin D significantly alleviated the histological changes of myocardial cells in DCM rats. In addition, the mRNA and protein expression levels of Fas and FasL were significantly elevated in the models (P < 0.05) and significantly declined with vitamin D (P < 0.05).
Conclusion: Vitamin D could alleviate pathological changes, reduce Fas/FasL expression, and attenuate myocardial cell apoptosis in DCM rats, and might thus serve as a potential effective therapy for the disease. Background: Antiretroviral treatment (ART) is associated with dyslipidemia, yet little is known about the burden of dyslipidemia in the absence of ART in sub-Saharan Africa. We compared the prevalence of and risk factors for dyslipidemia among HIV-infected ART-naive adults and their uninfected partners in Nairobi, Kenya. Methods: Non-fasting total cholesterol (TC) and high-density lipoprotein cholesterol (HDL) levels were measured by standard lipid spectrophotometry on thawed plasma samples obtained from HIV-infected participants and their uninfected partners. Dyslipidemia, defined by high TC (> 200 mg/dl) or low HDL (< 40 mg/dl), was compared between HIV-infected and uninfected men and women. Results: Among 196 participants, the median age was 32 years [IQR: 23-41]. The median CD4 count among the HIV-infected was 393 cells/μl (IQR: 57-729), and 90% had a viral load > 1000 copies/ml. Mean TC and HDL were comparable for HIV-infected and uninfected participants. Prevalence of dyslipidemia was 83.8% vs 78.4% (p = 0.27). Among the HIV-infected, those with a viral load > 1000 copies/ml were 1.5-fold more likely to have dyslipidemia compared to those with <= 1000 copies/ml (adjusted prevalence ratio [aPR] 1.5, 95% CI: 1.22-30.99, p = 0.02). BMI, age, gender, blood pressure, and smoking were not significantly associated with dyslipidemia. Conclusions: Among ART-naive HIV-infected adults, high viral load and low CD4 cell count were independent predictors of dyslipidemia, underscoring the importance of early initiation of ART for viral suppression. Background: The PCSK9 rs505151 and rs11591147 polymorphisms are identified as gain- and loss-of-function mutations, respectively. The effects of these polymorphisms on serum lipid levels and cardiovascular risk remain to be elucidated.
Methods: In this meta-analysis, we explored the association of the PCSK9 rs505151 and rs11591147 polymorphisms with serum lipid levels and cardiovascular risk by calculating standardized mean differences (SMD) and odds ratios (OR) with 95% confidence intervals (CI). Results: Pooled results analyzed under a dominant genetic model indicated that the PCSK9 rs505151 G allele was related to higher levels of triglycerides (SMD: 0.14, 95% CI: 0.02 to 0.26, P = 0.021, I² = 0) and low-density lipoprotein cholesterol (LDL-C) (SMD: 0.17, 95% CI: 0.00 to 0.35, P = 0.046, I² = 75.9%) and increased cardiovascular risk (OR: 1.50, 95% CI: 1.19 to 1.89, P = 0.0006, I² = 48%). The rs11591147 T allele was significantly associated with lower levels of total cholesterol (TC) and LDL-C (TC, SMD: -0.45, 95% CI: -0.57 to -0.32, P = 0.000, I² = 0; LDL-C, SMD: -0.44, 95% CI: -0.55 to -0.33, P = 0.000, I² = 0) and decreased cardiovascular risk (OR: 0.77, 95% CI: 0.60 to 0.98, P = 0.031, I² = 59.9) in Caucasians. Conclusions: This study indicates that the variant G allele of PCSK9 rs505151 confers increased triglyceride (TG) and LDL-C levels, as well as increased cardiovascular risk. Conversely, the variant T allele of rs11591147 protects carriers from cardiovascular disease susceptibility and confers lower TC and LDL-C levels in Caucasians. These findings provide useful information for researchers interested in the fields of PCSK9 genetics and cardiovascular risk prediction, not only for designing future studies but also for clinical and public health applications. Background: Chronic widespread pain conditions (CWP) such as the pain associated with fibromyalgia syndrome (FMS) are significant health problems with unclear aetiology. Although CWP and FMS can alter both central and peripheral pain mechanisms, there are no validated markers for such alterations.
Pro- and anti-inflammatory components of the immune system, such as cytokines and endogenous lipid mediators, could serve as systemic markers of alterations in chronic pain. Lipid mediators associated with anti-inflammatory qualities - e.g., oleoylethanolamide (OEA), palmitoylethanolamide (PEA), and stearoylethanolamide (SEA) - belong to the N-acylethanolamines (NAEs). Previous studies have concluded that these lipid mediators may modulate pain and inflammation via the activation of peroxisome proliferator-activated receptors (PPARs), and the activation of PPARs may regulate gene transcriptional factors that control the expression of distinct cytokines. Methods: This study investigates NAEs and cytokines in 17 women with CWP and 21 healthy controls. Plasma levels of the anti-inflammatory lipids OEA, PEA, and SEA, the pro-inflammatory cytokines TNF-alpha, IL-1 beta, IL-6, and IL-8, and the anti-inflammatory cytokine IL-10 were investigated. The t-test for independent samples was used for group comparisons. Bivariate correlation analyses and multivariate regression analysis were performed between lipids, cytokines, and pain intensity of the participants. Results: Significantly higher plasma levels of OEA and PEA were found in CWP. No alterations in the levels of cytokines existed, and no correlations between levels of lipids and cytokines were found. Conclusions: We conclude that altered levels of OEA and PEA might indicate the presence of systemic inflammation in CWP. In addition, we believe our findings contribute to the understanding of the biochemical mechanisms involved in chronic musculoskeletal pain. Background: Nutritional modulation remains central to the management of metabolic syndrome. Intervention with cinnamon in individuals with metabolic syndrome remains sparsely researched. Methods: We investigated the effect of oral cinnamon consumption on body composition and metabolic parameters of Asian Indians with metabolic syndrome.
In this 16-week double-blind randomized controlled trial, 116 individuals with metabolic syndrome were randomized to two dietary intervention groups: cinnamon [6 capsules (3 g) daily] or wheat flour [6 capsules (2.5 g) daily]. Body composition, blood pressure, and metabolic parameters were assessed. Results: A significantly greater decrease [difference between means (95% CI)] in fasting blood glucose (mmol/L) [0.3 (0.2, 0.5), p = 0.001], glycosylated haemoglobin (mmol/mol) [2.6 (0.4, 4.9), p = 0.023], waist circumference (cm) [4.8 (1.9, 7.7), p = 0.002], and body mass index (kg/m²) [1.3 (0.9, 1.5), p = 0.001] was observed in the cinnamon group compared to the placebo group. Other parameters that showed significantly greater improvement were: waist-hip ratio, blood pressure, serum total cholesterol, low-density lipoprotein cholesterol, serum triglycerides, and high-density lipoprotein cholesterol. The prevalence of defined metabolic syndrome was significantly reduced in the intervention group (34.5%) vs. the placebo group (5.2%). Conclusion: A single supplement intervention with 3 g cinnamon for 16 weeks resulted in significant improvements in all components of metabolic syndrome in a sample of Asian Indians in north India. Background: Currently, two pathogenic pathways describe the role of obesity in osteoarthritis (OA): one through biomechanical stress, and the other through the contribution of systemic inflammation. The aim of this study was to evaluate the effect of free fatty acids (FFA) on the expression of proinflammatory factors and reactive oxygen species (ROS) in human chondrocytes (HC). Methods: HC were exposed to two different concentrations of FFA; the secretion of adipokines was evaluated with a cytokine immunoassay panel, the protein secretion of FFA-treated chondrocytes was quantified, and fluorescent cytometry assays were performed to evaluate ROS production. Results: HC injury was observed at 48 h of treatment with FFA.
In FFA-treated HC, the production of reactive oxygen species, such as the superoxide radical and hydrogen peroxide, and of reactive nitrogen species increased significantly at the two doses tested (250 and 500 mu M). In addition, we found an increase in the secretion of the cytokine IL-6 and the chemokine IL-8 in FFA-treated HC in comparison to untreated HC. Conclusion: In our in vitro model of HC, a hyperlipidemic microenvironment induces an oxidative stress state that enhances the inflammatory process mediated by adipokine secretion in HC. Background: There is a lack of comprehensive patient datasets regarding the prevalence of severe hypertriglyceridemia (sHTG; triglycerides >= 10 mmol/L), the frequency of co-morbidities, gene mutations, and gene characterization in sHTG. Using large surveys combined with detailed analysis of sub-cohorts of sHTG patients, we sought to address these issues. Methods: We used data from several large Norwegian surveys that included 681,990 subjects to estimate the prevalence. Sixty-five sHTG patients were investigated to obtain clinical profiles and candidate disease genes. We obtained peripheral blood mononuclear cells (PBMC) from six male patients and nine healthy controls and examined the expression of mRNAs involved in lipid metabolism. Results: The prevalence of sHTG was 0.13% (95% CI 0.12-0.14%), and was highest in men aged 40-49 years and in women aged 60-69 years. Among the 65 sHTG patients, a possible genetic cause was found in four, and 11 had experienced acute pancreatitis. The mRNA expression levels of carnitine palmitoyltransferase (CPT)-1A, CPT2, and hormone-sensitive lipase were significantly higher in patients compared to controls, whereas those of ATP-binding cassette, sub-family G, member 1 were significantly lower. 
Conclusions: In Norway, sHTG is present in 0.1% of the population, carries considerable co-morbidity, and is associated with an imbalance of genes involved in lipid metabolism, all potentially contributing to increased cardiovascular morbidity in sHTG. Background: Increasing evidence demonstrates that miRNAs contribute to the development and progression of hepatocellular carcinoma (HCC). Underexpression of miR-1296 has recently been reported to promote the growth and metastasis of human cancers. However, the expression and role of miR-1296 in HCC remain unknown. Methods: The levels of miR-1296 in HCC tissues and cells were detected by qRT-PCR. Immunoblotting and immunofluorescence were used for detection of epithelial-to-mesenchymal transition (EMT) progression in HCC cells. Transwell assays were performed to determine the migration and invasion of HCC cells. A lung metastasis mouse model was used to evaluate metastasis of HCC in vivo. The putative targets of miR-1296 were disclosed by public databases and a dual-luciferase reporter assay. Results: We found that the expression of miR-1296 was reduced in HCC tissues and cell lines, and was associated with metastasis and recurrence of HCC. Notably, miR-1296 overexpression inhibited migration, invasion and EMT progression of HCCLM3 cells, while miR-1296 loss facilitated these biological behaviors in Hep3B cells in vitro and in vivo. In addition, miR-1296 inversely regulated SRPK1 abundance by directly binding to its 3'-UTR, which subsequently resulted in suppression of p-AKT. Either SRPK1 re-expression or PI3K/AKT pathway activation at least partially abolished the effects of miR-1296 on migration, invasion and EMT progression of HCC cells. Furthermore, miR-1296 and SRPK1 expression were markedly correlated with adverse clinical features and poor prognosis of HCC patients. We showed that hypoxia was responsible for the underexpression of miR-1296 in HCC, and the promoting effects of hypoxia on metastasis and EMT of HCC cells were reversed by miR-1296. 
Conclusions: Underexpression of miR-1296 potentially serves as a prognostic biomarker in HCC. Hypoxia-induced miR-1296 loss promotes metastasis and EMT of HCC cells, probably by targeting the SRPK1/AKT pathway. Evodiamine, a major component of Evodia rutaecarpa, can protect the myocardium against injury induced by atherosclerosis and ischemia-reperfusion. However, the effect of evodiamine against cardiac fibrosis remains unclear. This study aims to investigate the possible effect and mechanism of evodiamine on isoproterenol-induced cardiac fibrosis and endothelial-to-mesenchymal transition. Isoproterenol was used to induce cardiac fibrosis in mice, and evodiamine was gavaged simultaneously. After 14 days, cardiac function was assessed by echocardiography. The extent of cardiac fibrosis and hypertrophy was evaluated by pathological and molecular analyses. The extent of endothelial-to-mesenchymal transition was evaluated by the expression levels of CD31, CD34, alpha-smooth muscle actin, and vimentin by immunofluorescence staining and Western blot analysis. After 14 days, the heart weight/body weight ratio and heart weight/tibia length ratio revealed no significant difference between the isoproterenol group and the isoproterenol/evodiamine-treated groups, whereas the isoproterenol-induced increase in heart weight was reduced in the isoproterenol/evodiamine-treated groups. Echocardiography revealed that interventricular septal thickness and left ventricular posterior wall thickness at end diastole decreased in the evodiamine-treated groups. Evodiamine reduced isoproterenol-induced cardiac fibrosis, as assessed by normalization of collagen deposition and of the gene expression of hypertrophic and fibrotic markers. 
Evodiamine also prevented endothelial-to-mesenchymal transition, as evidenced by the increased expression levels of CD31 and CD34, decreased expression levels of alpha-smooth muscle actin and vimentin, and increased microvascular density in the hearts of isoproterenol/evodiamine-treated mice. Furthermore, isoproterenol-induced activation of transforming growth factor-beta 1/Smad signaling was also blunted by evodiamine. Therefore, evodiamine may prevent isoproterenol-induced cardiac fibrosis by regulating endothelial-to-mesenchymal transition, probably mediated by blockage of the transforming growth factor-beta 1/Smad pathway. Struthanthus vulgaris is probably the most common medicinal mistletoe plant in Brazil, and has been used in folk medicine as an anti-inflammatory agent and for cleaning skin wounds. Our aim was to evaluate the anti-inflammatory activity of S. vulgaris ethanol leaf extract and to provide further insights into how this biological action could be explained, using in vitro and in vivo assays. In vitro anti-inflammatory activity was preliminarily investigated in lipopolysaccharide/interferon gamma-stimulated macrophages based on their ability to inhibit nitric oxide production and tumor necrosis factor-alpha. In vivo anti-inflammatory activity of S. vulgaris ethanol leaf extract was investigated in the mouse carrageenan-induced air pouch model of inflammation. The air pouches were inoculated with carrageenan and then treated with 50 and 100 mg/kg of S. vulgaris ethanol leaf extract or 1 mg/kg of dexamethasone. Effects on the immune cell infiltrates and on pro- and anti-inflammatory mediators, such as tumor necrosis factor-alpha, interleukin 1, interleukin 10, and nitric oxide, were evaluated. The chemical composition of S. vulgaris ethanol leaf extract was characterized by LC-MS/MS. In vitro, S. 
vulgaris ethanol leaf extract significantly decreased the production of nitric oxide and tumor necrosis factor-alpha in macrophages and did not reveal any cytotoxicity. In vivo, S. vulgaris ethanol leaf extract significantly suppressed the influx of leukocytes, mainly neutrophils, as well as protein exudation and nitric oxide, tumor necrosis factor-alpha, and interleukin 1 concentrations in the carrageenan-induced air pouch. In conclusion, S. vulgaris ethanol leaf extract exhibited prominent anti-inflammatory effects, endorsing its usefulness as a medicinal therapy against inflammatory diseases and suggesting that it may be a source for the discovery of novel anti-inflammatory agents. Andrographis paniculata has been widely used in Scandinavian and Asian countries for the treatment of the common cold, fever, and noninfectious diarrhea. The present study was carried out to investigate the physiological effects of short-term multiple-dose administration of a standardized A. paniculata capsule used for treatment of the common cold and uncomplicated upper respiratory tract infections, including blood pressure, electrocardiogram, blood chemistry, hematological profiles, urinalysis, and blood coagulation, in healthy Thai subjects. Twenty healthy subjects (10 males and 10 females) received 12 capsules per day orally of 4.2 g of a standardized A. paniculata crude powder (4 capsules of 1.4 g of A. paniculata, 3 times per day, at 8 h intervals) for 3 consecutive days. The results showed that all of the measured clinical parameters were within normal ranges for a healthy person. However, modulation of some parameters was observed after the third day of treatment, for example, increases in white blood cell and absolute neutrophil counts in the blood, a reduction of plasma alkaline phosphatase, and an increase in urine pH. 
A rapid and transient reduction in blood pressure was observed at 30 min after capsule administration, resulting in a significant reduction of mean systolic blood pressure. There were no serious adverse events observed in the subjects during the treatment period. In conclusion, this study suggests that multiple oral dosing of A. paniculata at the normal therapeutic dose for the common cold and uncomplicated upper respiratory tract infections modulates various clinical parameters within normal ranges for a healthy person. Hymenocardine is a cyclopeptide alkaloid present in the root bark of Hymenocardia acida. In traditional African medicine, the leaves and roots of this plant are used to treat malaria, and moderate in vitro antiplasmodial activity has been reported for hymenocardine. However, in view of its peptide-like nature, potential metabolisation after oral ingestion has to be taken into account when considering in vivo experiments. In this study, the stability and small intestinal absorption of hymenocardine were assessed using an in vitro gastrointestinal dialysis model. In addition, potential liver metabolisation was investigated in vitro by incubation with a human S9 fraction. Moreover, hymenocardine was administered to rats per os, and blood and urine samples were collected until 48 and 24 h after oral administration, respectively. All samples resulting from these three experiments were analyzed by LC-MS. Analysis of the dialysate and retentate obtained from the gastrointestinal dialysis model indicated that hymenocardine is absorbed unchanged from the gastrointestinal tract, at least in part. After S9 metabolisation, several metabolites of hymenocardine could be identified, the major ones being formed by reduction and/or the loss of an N-methyl group. 
The in vivo study confirmed that hymenocardine is absorbed unchanged from the gastrointestinal tract, since it could be identified in both rat plasma and urine, together with hymenocardinol, its reduction product. Two new triterpenes and five new triterpene saponins, named ilexpusons A-G (1-7), as well as eight known compounds, were isolated from Ilex pubescens. The structures of the new compounds were established by a combination of chemical and spectroscopic methods, including HRESIMS, 1H-NMR, 13C-NMR, 1H-1H COSY, HSQC, HMBC, and NOESY. Additionally, the biological activity of compounds 1-15 against adenosine diphosphate-induced platelet aggregation in rabbit plasma was determined. Among the tested compounds, 1, 2, 5, 6, 8, 13, 14, and 15 exhibited significant inhibition of platelet aggregation in vitro. Chlamydiae are widely distributed pathogens of human populations, which can lead to serious reproductive and other health problems. In our search for novel antichlamydial metabolites from marine-derived microorganisms, one new (1) and two known (2, 3) dimeric indole derivatives were isolated from the sponge-derived actinomycete Rubrobacter radiotolerans. The chemical structures of these metabolites were elucidated by NMR spectroscopic data as well as CD calculations. All three metabolites suppressed chlamydial growth in a concentration-dependent manner. Among them, compound 1 exhibited the most effective antichlamydial activity, with IC50 values of 46.6 to 96.4 mu M in the production of infectious progeny. The compounds appeared to target the mid-stage of the chlamydial developmental cycle by interfering with reticulate body replication, but not by directly inactivating the infectious elementary body. 
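The IC50 values reported in the abstracts above are typically estimated from concentration-response data. As a minimal sketch (not the authors' actual analysis), the snippet below interpolates an IC50 on a log-dose scale between the two tested concentrations that bracket 50% inhibition; the function name and the example data are hypothetical.

```python
import math

def ic50_from_dose_response(concs, inhibition):
    """Estimate IC50 by log-linear interpolation between the two
    consecutive doses that bracket 50% inhibition. Assumes inhibition
    (%) rises monotonically with concentration."""
    for (c_lo, y_lo), (c_hi, y_hi) in zip(zip(concs, inhibition),
                                          zip(concs[1:], inhibition[1:])):
        if y_lo <= 50.0 <= y_hi:
            frac = (50.0 - y_lo) / (y_hi - y_lo)
            log_ic50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10.0 ** log_ic50
    raise ValueError("50% inhibition not bracketed by the tested doses")

# Hypothetical dose-response data: concentrations in mu M, % inhibition.
# The interpolated IC50 falls between the 30 and 100 mu M doses.
ic50 = ic50_from_dose_response([10, 30, 100, 300], [12, 35, 68, 91])
```

In practice a four-parameter logistic fit over all doses is preferred; the bracketing interpolation here is only the simplest defensible estimate.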
The study of the chemical constituents of branches and twigs of Cratoxylum cochinchinense collected in Singapore led to the isolation and structural elucidation of four new xanthones, named cratoxanthones A (1), B (2), C (3), and D (4), together with six known xanthones (5-10) and one known dihydroanthracenone (11). Eight xanthones (including 1 and 2) and 11 were tested for their antiproliferative activity in three human carcinoma cell lines (lung adenocarcinoma A549, colorectal carcinoma Colo205, and epidermoid carcinoma KB) and a human acute lymphoblastic leukemia B cell line (NALM-6), and the mitochondrial membrane potential was determined in KB cells. The new xanthones 1 and 2 attenuated NALM-6 cell proliferation with IC50 values of 17.78 and 8.27 mu M, respectively. Furthermore, KB cells treated with these compounds had significantly decreased mitochondrial membrane potentials. Notably, the proliferation of A549 cells was specifically inhibited by 11, but not by the xanthones. Breast cancer is one of the most lethal malignancies in women. Retinoic acid (RA) and double-stranded RNA (dsRNA) are considered signaling molecules with potential anticancer activity. RA, co-administered with the dsRNA mimic polyinosinic-polycytidylic acid (poly(I:C)), synergizes to induce a TRAIL (tumor necrosis factor-related apoptosis-inducing ligand)-dependent apoptotic program in breast cancer cells. Here, we report that RA/poly(I:C) co-treatment synergistically induces the activation of interferon regulatory factor-3 (IRF3) in breast cancer cells. IRF3 activation is mediated by a member of the pathogen recognition receptors, Toll-like receptor-3 (TLR3), since its depletion abrogates IRF3 activation by RA/poly(I:C) co-treatment. Besides induction of TRAIL, apoptosis induced by RA/poly(I:C) correlates with the increased expression of the pro-apoptotic TRAIL receptors TRAIL-R1/2 and the inhibition of the antagonistic receptors TRAIL-R3/4. 
IRF3 plays an important role in RA/poly(I:C)-induced apoptosis, since IRF3 depletion suppresses caspase-8 and caspase-3 activation, TRAIL expression upregulation, and apoptosis. Interestingly, the RA/poly(I:C) combination synergizes to induce a bioactive autocrine/paracrine loop of type-I interferons (IFNs), which is ultimately responsible for the upregulation of TRAIL and TRAIL-R1/2 expression, while the inhibition of TRAIL-R3/4 expression is type-I IFN-independent. Our results highlight the importance of IRF3 and type-I IFN signaling for the pro-apoptotic effects induced by RA and synthetic dsRNA in breast cancer cells. Multidrug resistance (MDR) remains a major clinical obstacle in the treatment of gastric cancer (GC), since it causes tumor recurrence and metastasis. The transcription factor activator protein-2 alpha (AP-2 alpha) has been implicated in drug resistance in breast cancer; however, its effects on MDR in gastric cancer are far from understood. In this study, we aimed to explore the effects of AP-2 alpha on MDR in gastric cancer cells selected by vincristine (VCR). Decreased AP-2 alpha levels were detected by RT-PCR and Western blot in gastric cancer cell lines (BGC-823, SGC-7901, AGS, MKN-45) compared with the gastric epithelial cell line GES-1. Furthermore, we found that the expression of AP-2 alpha in SGC7901/VCR or SGC7901/adriamycin (ADR) cells was lower than in SGC7901 cells. Thus, a vector overexpressing AP-2 alpha was constructed and used to perform AP-2 alpha gain-of-function studies in SGC7901/VCR cells. MTT assays showed decreased IC50 values for the anti-cancer drugs in SGC7901/VCR cells after transfection with pcDNA3.1/AP-2 alpha. Moreover, flow cytometry analysis indicated that overexpressed AP-2 alpha induced cell cycle arrest in the G0/G1 phase and promoted apoptosis of VCR-selected SGC7901/VCR cells. 
RT-PCR and Western blot demonstrated that overexpressed AP-2 alpha significantly induced the down-regulation of Notch1, Hes-1, P-gp and MRP1 in SGC7901/VCR cells. Similar effects were observed when Numb (a Notch inhibitor) was introduced. In addition, intracellular ADR accumulation was markedly increased in AP-2 alpha- or Numb-overexpressing cells. In conclusion, our results indicate that AP-2 alpha can reverse the MDR of gastric cancer cells, possibly by inhibiting the Notch signaling pathway. Diallyl trisulfide (DATS) protects against apoptosis during myocardial ischemia-reperfusion (MI/R) injury in the diabetic state, although the underlying mechanisms remain poorly defined. Previously, we and others demonstrated that silent information regulator 1 (SIRT1) activation inhibited oxidative stress and endoplasmic reticulum (ER) stress during MI/R injury. We hypothesized that DATS reduces diabetic MI/R injury by activating SIRT1 signaling. Streptozotocin (STZ)-induced type 1 diabetic rats were subjected to MI/R surgery with or without perioperative administration of DATS (40 mg/kg). We found that DATS treatment markedly improved left ventricular systolic pressure and the first derivative of left ventricular pressure, and reduced myocardial infarct size as well as serum creatine kinase and lactate dehydrogenase activities. Furthermore, myocardial apoptosis was also suppressed by DATS, as evidenced by a reduced apoptotic index and cleaved caspase-3 expression. However, these effects were abolished by EX527 (an inhibitor of SIRT1 signaling, 5 mg/kg). We further found that DATS effectively upregulated SIRT1 expression and its nuclear distribution. Additionally, PERK/eIF2 alpha/ATF4/CHOP-mediated ER stress-induced apoptosis was suppressed by DATS treatment. Moreover, DATS significantly activated the Nrf-2/HO-1 antioxidant signaling pathway, thus reducing Nox-2/4 expression. 
However, the ameliorative effects of DATS on oxidative stress and ER stress-mediated myocardial apoptosis were inhibited by EX527 administration. Taken together, these data suggest that perioperative DATS treatment effectively ameliorates MI/R injury in the type 1 diabetic setting by enhancing cardiac SIRT1 signaling. SIRT1 activation not only upregulated the Nrf-2/HO-1-mediated antioxidant signaling pathway but also suppressed the PERK/eIF2 alpha/ATF4/CHOP-mediated ER stress level, thus reducing myocardial apoptosis and eventually preserving cardiac function. Autophagy may have protective effects in renal ischemia-reperfusion (I/R) injury, although the underlying mechanisms remain unclear. Augmenter of liver regeneration (ALR), a widely distributed multifunctional protein originally identified as a hepatic growth factor, may participate in the process of autophagy. To investigate the role of ALR in autophagy, ALR expression was knocked down in human kidney 2 (HK-2) cells with short hairpin RNA lentiviruses. The level of autophagy was then measured in the shRNA/ALR group and the shRNA/control group in an in vitro model of ischemia-reperfusion (I/R). The results indicate that the level of autophagy increased in both groups, accompanied by increased reactive oxygen species production, especially in the shRNA/ALR group. The AMPK/mTOR signaling pathway was hyperactive in the shRNA/ALR group. Inhibition of autophagy with the AMPK inhibitor compound C induced apoptosis, especially in the shRNA/ALR group. These findings collectively indicate that ALR negatively regulates the autophagy process through an association with the AMPK/mTOR signaling pathway, and that autophagy inhibits apoptosis and plays a protective role under conditions of oxidative stress. The U.S. Navy wants to do from the sea what the U.S. Air Force can do from land with its MQ-1 Predator and MQ-9 Reaper unmanned planes: spy on targets for many hours and, when the time is right, command the planes to strike them. 
The barrier to such a plane has always been the limited room on vessels for takeoffs and landings. Henry Canaday looks at plans for a demonstrator that could solve this DARPA-hard problem. The compressor seals in the outer gas path inside jet engines have a tough job: they have to keep hot gases from escaping without damaging or wearing down the spinning turbine blades. Principal materials engineer Elaine Motyka of Technetics Group of Deland, Florida, describes the company's endeavor to develop a better seal, starting with a new material. On passenger jets, software and a form of radar have for decades done a nearly flawless job of keeping pilots from flying into each other in increasingly crowded skies. Why, then, are the FAA and the industry testing an entirely different computing approach to collision avoidance? Keith Button tells the story of the industry's next-generation collision avoidance software. Those who favor turning U.S. air traffic control over to a private corporation view the arrival of the Trump administration as their best chance in decades to get it done. Is it a good idea? Debra Werner looks at the arguments for and against shifting air traffic control out of the FAA. In the fast-moving field of aerospace engineering, educators are preparing their students for the future by incorporating some surprising teaching tools and methods. Adam Hadhazy tells the story. Of all the reasons to go to Mars, the need to ensure that our species can survive is beginning to ring the loudest. Tom Risen received some surprising insights when he spoke to environmentalists, Mars-exploration advocates and science fiction writers about this idea. The Notch signaling pathway, which is activated by cell-cell contact, is a major regulator of cell fate decisions. Mammalian Notch1 is present at the cell surface as a heterodimer of the Notch extracellular domain associated with the transmembrane and intracellular domains. 
After ligand binding, Notch undergoes proteolysis, releasing the Notch intracellular domain (NICD) that regulates gene expression. We monitored the early steps of activation with biochemical analysis, immunofluorescence analysis, and live-cell imaging of Notch1-expressing cells. We found that, upon ligand binding, Notch1 at the cell surface was ubiquitylated by the E3 ubiquitin ligase DTX4. This ubiquitylation event led to the internalization of the Notch1 extracellular domain by the ligand-expressing cell and the internalization of the membrane-anchored fragment of Notch1 and DTX4 by the Notch1-expressing cell, which we referred to as bilateral endocytosis. ADAM10 generates a cleavage product of Notch that is necessary for the formation of the NICD, which has been thought to occur at the cell surface. However, we found that blocking dynamin-mediated endocytosis of Notch1 and DTX4 reduced the colocalization of Notch1 with ADAM10 and the formation of the ADAM10-generated cleavage product of Notch1, suggesting that ADAM10 functions in an intracellular compartment to process Notch. Thus, this study suggests that a specific pool of ADAM10 acts on Notch in an endocytic compartment, rather than at the cell surface. Metastasis is a multistep process by which tumor cells disseminate from their primary site and form secondary tumors at a distant site. The pathophysiological course of metastasis is mediated by the dynamic plasticity of cancer cells, which enables them to shift between epithelial and mesenchymal phenotypes through a transcriptionally regulated program termed epithelial-to-mesenchymal transition (EMT) and its reverse process, mesenchymal-to-epithelial transition (MET). Using a mouse model of spontaneous metastatic breast cancer, we investigated the molecular mediators of metastatic competence within a heterogeneous primary tumor and how these cells then manipulated their epithelial-mesenchymal plasticity during the metastatic process. 
We isolated cells from the primary mammary tumor, the circulation, and metastatic lesions in the lung in TA2 mice and found that the long noncoding RNA (lncRNA) H19 mediated EMT and MET by differentially acting as a sponge for the microRNAs miR-200b/c and let-7b. We found that this ability enabled H19 to modulate the expression of the microRNA targets Git2 and Cyth3, respectively, which encode regulators of the RAS superfamily member adenosine 5'-diphosphate (ADP) ribosylation factor (ARF), a guanosine triphosphatase (GTPase) that promotes cell migration associated with EMT and disseminating tumor cells. Decreasing the abundance of H19 or manipulating that of members in its axis prevented metastasis from grafts in syngeneic mice. Abundance of H19, GIT2, and CYTH3 in patient samples further suggests that H19 might be exploited as a biomarker for metastatic cells within breast tumors and perhaps as a therapeutic target to prevent metastasis. Cyclic adenosine monophosphate (cAMP) response element-binding protein (CREB)-binding protein (CBP) is a histone acetyltransferase that plays a pivotal role in the control of histone modification and the expression of cytokine-encoding genes in inflammatory diseases, including sepsis and lung injury. We found that the E3 ubiquitin ligase subunit FBXL19 targeted CBP for site-specific ubiquitylation and proteasomal degradation. The ubiquitylation-dependent degradation of CBP reduced the extent of lipopolysaccharide (LPS)-dependent histone acetylation and cytokine release in mouse lung epithelial cells and in a mouse model of sepsis. Furthermore, we demonstrated that the deubiquitylating enzyme USP14 (ubiquitin-specific peptidase 14) stabilized CBP by reducing its ubiquitylation. LPS increased the stability of CBP by reducing the association between CBP and FBXL19 and by activating USP14. Inhibition of USP14 reduced CBP protein abundance and attenuated LPS-stimulated histone acetylation and cytokine release. 
Together, our findings delineate the molecular mechanisms through which CBP stability is regulated by FBXL19 and USP14, which results in the modulation of chromatin remodeling and the expression of cytokine-encoding genes. Background: Oct4, a key stemness transcription factor, is overexpressed in lung cancer. Here, we reveal a novel transcriptional regulation of long non-coding RNAs (lncRNAs) by Oct4. LncRNAs have emerged as important players in cancer progression. Methods: Oct4 chromatin immunoprecipitation (ChIP)-sequencing and several lncRNA databases with literature annotation were integrated to identify Oct4-regulated lncRNAs. Luciferase activity, qRT-PCR and ChIP-PCR assays were conducted to examine the transcriptional regulation of lncRNAs by Oct4. Reconstitution experiments of Oct4 and downstream lncRNAs in cell proliferation, migration and invasion assays were performed to confirm the Oct4-lncRNA signaling axes in promoting lung cancer cell growth and motility. The expression correlations between Oct4 and lncRNAs were investigated in 124 lung cancer patients using qRT-PCR analysis. The clinical significance of the Oct4/lncRNA signaling axes was further evaluated using multivariate Cox regression and Kaplan-Meier analyses. Results: We confirmed that seven lncRNAs were upregulated by direct binding of Oct4. Among them, nuclear paraspeckle assembly transcript 1 (NEAT1), metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) and urothelial carcinoma-associated 1 (UCA1) were validated as Oct4 transcriptional targets through promoter or enhancer activation. We showed that overexpression of NEAT1 or MALAT1 in lung cancer cells, and reconstitution of NEAT1 or MALAT1 in Oct4-silenced cells, promoted cell proliferation, migration and invasion. In addition, knockdown of NEAT1 or MALAT1 abolished Oct4-mediated lung cancer cell growth and motility. These cell-based results suggested that the Oct4/NEAT1 and Oct4/MALAT1 axes promote oncogenesis. 
Clinically, Oct4/NEAT1/MALAT1 co-overexpression was an independent factor for the prediction of poor outcome in the 124 lung cancer patients. Conclusions: Our study reveals a novel mechanism by which Oct4 transcriptionally activates NEAT1 via promoter binding and MALAT1 via enhancer binding to promote cell proliferation and motility, leading to lung tumorigenesis and poor prognosis.
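Several of the clinical abstracts above rely on Kaplan-Meier survival analysis (alongside Cox regression) to relate marker expression to patient outcome. As a minimal sketch of the underlying product-limit estimate (a generic illustration on hypothetical follow-up data, not any study's actual analysis; the function name and inputs are assumptions):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) survival estimates.
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (event_time, survival_probability) pairs,
    one per distinct time at which at least one event occurred."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, curve, i = 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        # Events and total subjects (events + censored) leaving at time t.
        d = sum(e for tt, e in pairs if tt == t)
        removed = sum(1 for tt, _ in pairs if tt == t)
        if d > 0:
            surv *= 1.0 - d / n_at_risk   # S(t) = prod (1 - d_i / n_i)
            curve.append((t, surv))
        n_at_risk -= removed
        i += removed
    return curve

# Hypothetical cohort: events at t = 1, 2, 3; censoring at t = 2 and 4.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

Group comparisons (e.g., high vs. low marker expression) would then use a log-rank test on two such curves; multivariate adjustment is what the Cox regression adds.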