progress in log-concave density estimation by [26]. We further investigate the identifiability conditions of the proposed semiparametric mixture models and propose two innovative algorithms to estimate θ without assuming a parametric form for the contaminated density f(x). Extensive simulation studies demonstrate that our methods perform comparably to the traditional MLE when the data are clean and much better when the data contain outliers.


The function σ is convex, but not differentiable, so standard gradient-based convex optimization techniques such as Newton's method are not suitable. Nevertheless, the notion of a subgradient is still valid: a subgradient of σ at y is any direction which defines a supporting hyperplane to σ at y. Shor (1985) developed a theory of subgradient methods for handling convex, non-differentiable optimization problems. The r-algorithm, described in Shor (1985, Chapter 3) and implemented as SolvOpt in C by Kappel and Kuntsevich (2000), was found to work particularly well in practice. A main feature of the LogConcDEAD package is an implementation of an adaptation of this r-algorithm for the particular problem encountered in log-concave density estimation.
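To fix ideas, here is a minimal plain subgradient-descent sketch on a simple non-differentiable convex function. This is not the r-algorithm itself (which refines the basic step with space dilation) and nothing here is LogConcDEAD-specific; the objective and step rule are illustrative choices only.

```python
import numpy as np

# Basic subgradient descent on f(x) = ||x||_1, which is convex but not
# differentiable at any point with a zero coordinate; sign(x) is a valid
# subgradient everywhere. Shor's r-algorithm builds on this basic step.
x = np.array([3.0, -2.0, 1.5])
for k in range(1, 501):
    g = np.sign(x)             # a subgradient of ||x||_1 at x
    x = x - (1.0 / k) * g      # diminishing step sizes give convergence
print(x)                       # ends near the minimizer, the origin
```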


The main points to note in this algorithm are that in each iteration of the inner while loop, the active set decreases strictly (which ensures this loop terminates eventually), and that after each iteration of the outer while loop, the log-likelihood has strictly increased, and the current iterate ψ belongs to K ∩ V∗(A) for some A ⊆ {2, . . . , n − 1}. It follows that, up to machine precision, the algorithm terminates with the exact solution in finitely many steps. See Figure 1. Dümbgen, Rufibach and Schuhmacher (2014) study the more involved problem of estimating a log-concave (sub-)probability density in settings where observations may be subject to various different types of censoring, including right and interval censoring. In their R package logconcens, they propose an EM algorithm for computation (Dümbgen, Rufibach and Schuhmacher, 2013).
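The nested-loop pattern described here (inner loop strictly shrinks a working set, outer loop strictly improves the objective, hence finite termination) is the classic active-set template. The sketch below illustrates that template on a different, simpler problem, nonnegative least squares via Lawson-Hanson, not on the log-concave MLE itself; it is a textbook sketch, not numerically hardened.

```python
import numpy as np

# Lawson-Hanson NNLS: minimize ||Ax - b|| subject to x >= 0. Same pattern
# as above: the inner loop strictly shrinks the free set, the outer loop
# strictly improves the objective, so termination is finite.
def nnls_active_set(A, b, tol=1e-10):
    n = A.shape[1]
    x = np.zeros(n)
    free = np.zeros(n, dtype=bool)            # complement of the active set
    while True:
        w = A.T @ (b - A @ x)                 # negative gradient
        cand = np.where(free, -np.inf, w)
        if free.all() or cand.max() <= tol:
            return x                          # KKT holds: exact solution found
        free[int(cand.argmax())] = True       # outer step: release one bound
        while True:
            F = np.flatnonzero(free)
            z = np.zeros(n)
            z[F] = np.linalg.lstsq(A[:, F], b, rcond=None)[0]
            if z[F].min() > 0:
                x = z                         # feasible: objective improved
                break
            bad = F[z[F] <= 0]                # inner step: back to the boundary
            alpha = (x[bad] / (x[bad] - z[bad])).min()
            x = x + alpha * (z - x)
            free &= x > tol                   # free set strictly shrinks

A = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]])
b = np.array([1.0, -2.0, 0.3])
print(nnls_active_set(A, b))                  # [0.0, ~0.79]
```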


Notably, we show that if random variables from a continuous log-concave density are grouped/binned (e.g. rounded to some accuracy level), the resulting discrete mass function falls into our newly defined distribution class under certain conditions (Proposition 2.3.1). We also show under which condition the class of generalized log-concave PMFs is extendible-log-concave (Proposition 2.2.1). Moreover, we prove that there exists a unique extendible-log-concave PMF which minimizes the KL divergence to a given true PMF, and this minimizer is the true PMF itself if the true PMF is extendible-log-concave.
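As a quick numerical illustration of the binning idea (an illustrative check, not the proposition's proof), one can round a standard normal to the nearest integer and verify the classical discrete log-concavity condition p_k² ≥ p_{k−1} p_{k+1} on the resulting PMF:

```python
import numpy as np
from scipy.stats import norm

# PMF of a standard normal rounded to the nearest integer:
# p_k = Phi(k + 1/2) - Phi(k - 1/2) for k = -8, ..., 8.
edges = np.arange(-8.5, 9.5)            # bin k covers (k - 1/2, k + 1/2]
p = np.diff(norm.cdf(edges))
logp = np.log(p)

# Discrete log-concavity: p_k^2 >= p_{k-1} * p_{k+1}, i.e. log p has
# non-positive second differences. Holds here for every interior k.
assert np.all(2 * logp[1:-1] >= logp[:-2] + logp[2:])
```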


Maximum likelihood estimation of a log-concave density has certain advantages over other nonparametric approaches, such as kernel density estimation, which requires a bandwidth selection. Furthermore, finding the optimal bandwidth becomes more difficult as the dimension increases. The shape-constrained approach, on the other hand, is automatic and does not need any tuning parameters. However, for both the kernel and log-concave estimators, the rate of convergence slows down as the dimension d increases. To handle this “curse of dimensionality”, we study an intermediate semiparametric copula approach: we estimate the marginals using the log-concave shape-constrained MLE and use a parametric approach to fit the copula parameters. We prove a √n rate of convergence for the parametric estimator and show that the joint density converges at a rate of n^{−2/5} regardless of dimension. This is faster than the
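A minimal sketch of the two-stage fit just described, with one loudly flagged substitution: the empirical CDF stands in below for an actual log-concave marginal MLE, and the Gaussian-copula correlation is estimated from normal scores rather than by full likelihood maximization.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)

# Stage 1: marginal CDF estimates (stand-in for the shape-constrained MLE).
U = np.column_stack([rankdata(c) / (len(c) + 1) for c in X.T])

# Stage 2: parametric Gaussian-copula fit from the normal scores.
Z = norm.ppf(U)
rho_hat = np.corrcoef(Z, rowvar=False)[0, 1]
print(rho_hat)                # recovers a value close to the true 0.6
```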


ellipsometer (J. A. Woollam alpha-SE, J. A. Woollam Co. Inc., Lincoln, NE, USA) was used to determine the film thicknesses. Standard photolithography and wet-etching processes were used to define the capacitor areas. The final capacitor device was approximately 100 × 100 μm² in area. The capacitance density versus voltage (C-V) and leakage current density versus voltage (I-V) characteristics were measured with a semiconductor device analyzer (Keithley 4200, Keithley Instruments, Solon, OH, USA). The optical transmittance was measured over a wavelength range of 300 to 800 nm using a UV-VIS-NIR spectrophotometer (Varian Cary 5000, Triad Scientific, Manasquan, NJ, USA). The surface morphology of the ITO and ATA films was measured by atomic force microscopy (SPM-9500 J3, Shimadzu, Kyoto, Japan).

In this paper we present a new weak ranking learner that enforces consistency in preference and confidence for the ranking function by being monotonic and concave. We start with a discussion of these regularization properties, theoretically justify them, and show what they mean in terms of the final ranking function. Then we present a new learner, Minimum Weighted Group Ranks (MWGR), that satisfies these properties and can be readily learned. This learner is tested and compared with the binary learner of RankBoost on combining multiple face recognition systems from the FERET study (Phillips et al., 2000) and on an information retrieval combination task from TREC (Voorhees and Harman, 2001).


Again, the properties of rock units embedded in a formation are among the factors that influence the output of well log signals, and the resistivity log is no exception. Therefore, their physical characteristics, such as mineral constituents, pore volume and pore connectivity, should be taken into consideration in order to obtain a log that reflects the formation properties as accurately as possible. However, unlike other fields with complex mineralogy, this is not a problem in the Niger Delta, the study area, because it consists mainly of sands and shales. The basis of the synthetic seismogram is Zoeppritz's equation, applied to the acoustic impedance of the subsurface layers, which is calculated as the product of velocity and density. The generation of a synthetic seismogram therefore requires velocity and density. The inverse of the sonic log is used in place of velocity data, which might be absent, when calculating acoustic impedance because of the observed
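A small numeric sketch of the quantities just described, using made-up layer values: velocity is recovered as the reciprocal of sonic slowness, acoustic impedance as velocity times density, and normal-incidence reflection coefficients from impedance contrasts (a simplification of the full Zoeppritz equations).

```python
import numpy as np

sonic = np.array([90.0, 110.0, 70.0])    # slowness in us/ft (hypothetical)
density = np.array([2.30, 2.15, 2.55])   # bulk density in g/cc (hypothetical)

velocity = 1e6 / sonic                   # ft/s, the inverse of the sonic log
impedance = velocity * density           # acoustic impedance Z = rho * v
# normal-incidence reflection coefficient at each layer boundary
rc = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])
print(rc)                                # reflectivity input for the synthetic
```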


Abstract- One of the most important properties of the real numbers is comparability: given two distinct real numbers, we can say that one is smaller or larger than the other. The inequalities we derive depend entirely on this property. In this work we study convex and concave functions and use them to derive the arithmetic-geometric mean inequality, the A-G-H mean inequality, Chebyshev's inequality, Hölder's inequality, Minkowski's inequality and Jensen's inequality.
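As a representative instance of the method, the arithmetic-geometric mean inequality follows directly from Jensen's inequality applied to the concave function log:

```latex
% For x_1, \dots, x_n > 0, concavity of \log and Jensen's inequality give
\[
  \log\Bigl(\frac{1}{n}\sum_{i=1}^{n} x_i\Bigr)
    \;\ge\; \frac{1}{n}\sum_{i=1}^{n} \log x_i
    \;=\; \log\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\!1/n};
\]
% exponentiating both sides yields the AM--GM inequality
\[
  \frac{1}{n}\sum_{i=1}^{n} x_i \;\ge\; \Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\!1/n}.
\]
```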

Fig. 2. Left: Comparison between Gaussian and Laplace priors for the reduced model. Hyperparameters are chosen by cross-validation (see text). The negative log-likelihood on the test dataset is plotted as a function of the size of the training set. Error bars are obtained by sampling from the approximate posterior distribution and correspond to 2 standard deviations. Right: Receptive fields (posterior means are shown) under the model with Gaussian (upper) and Laplace (lower) priors, for different training set sizes. Curves below show marginal posteriors (absolute value of the mean, one standard-deviation error bars, cut off at zero), in decreasing order.


Increasing the bulk density increases the threshing efficiency (Fig. 7). This might be because the crop stream between the cylinder and the concave becomes denser, providing less cushioning for the grains, since volume flow rate equals feed rate divided by material density. The losses decreased as bulk density increased, giving a truer picture of the efficiency (Fig. 7b). The thresher capacity increased as bulk density increased (Fig. 8). The flow of unthreshed grain was also found to decrease as the bulk density increased (Fig. 8b).
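A one-line worked example of the stated relation, with hypothetical numbers:

```python
# volume flow rate = feed rate / material (bulk) density
feed_rate = 2.0                 # kg/s (hypothetical)
bulk_density = 120.0            # kg/m^3 (hypothetical)
volume_flow = feed_rate / bulk_density
print(volume_flow)              # ~0.0167 m^3/s: denser crop, thinner stream
```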


As pointed out by Scott [], the frequency polygon has convergence rates similar to those of kernel density estimators and greater than the rate for a histogram. As for computation, the computational effort of the frequency polygon is equivalent to that of the histogram. For large bivariate data sets, the computational simplicity of the frequency polygon and the ease of determining exact equiprobable contours may outweigh the increased accuracy of a kernel density estimator. Bivariate contour plots based on millions of observations are increasingly required in applications including high-energy physics simulation experiments, cell sorters and geographical data representation. Moreover, such data are usually collected in binned form. Therefore, the frequency polygon can be a useful tool for the examination and presentation of data. Since the frequency polygon has the advantages mentioned above, it has attracted the attention of a number of scholars, who have derived several results. For the explicit results obtained, one can refer to the references listed in Yang and Liang [] and Xing et al. [], which gave the strong consistency of frequency polygons. Among the obtained results, the study of asymptotic normality can be found in Carbon et al. []. The relevant Berry-Esséen bound for φ-mixing samples has not been seen. This motivates us to investigate the Berry-Esséen bound of the frequency polygon under φ-mixing samples. Under the given assumptions, we give the corresponding Berry-Esséen bound. Furthermore, from the obtained Berry-Esséen bound, the relevant convergence rate of uniformly asymptotic normality is also derived, which is nearly O(n –/ ) under the given
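A minimal sketch of the frequency polygon itself, for readers unfamiliar with the construction: bin the data as for a histogram, then join the bin-midpoint heights with straight lines (the data here are simulated, purely for illustration).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)                      # simulated sample
heights, edges = np.histogram(x, bins=40, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])            # bin midpoints
plt.plot(mids, heights)                          # the frequency polygon
plt.show()
```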


Allowing that the support of possible costs may be large, we thus establish a general sense in which Bagwell and Ramey's main findings extend to the private-information setting. When legal or other considerations lead to the absence of advertising, if the distribution of types is log-concave and demand is sufficiently inelastic, then the market is less concentrated than it would be were advertising competition to occur. Furthermore, the average transaction price is lower, and social welfare is thus higher, when entry is endogenized and firms compete in advertising. Note, however, that some findings such as Proposition 2 and Proposition 3 (i) are not straightforward, given downward-sloping demand. For a given number of firms, pooling at zero advertising acts to increase the profit at the top, but sorting through advertising acts to increase expected information rents when demand is substantially larger for lower prices. This conflict suggests that market concentration could be lower in the advertising equilibrium than in the random equilibrium when demand is sufficiently elastic. Thus, the established positive association between advertising and market concentration employs additional assumptions on the distribution of types and the elasticity of demand in the general private-information setting.


A major part of the work is on Fourier coefficients. Several different sufficient conditions and necessary conditions for the boundedness of the Fourier transform on T, viewed as a map between Lorentz Λ and Γ spaces, are established. For a large range of Lorentz indices, necessary and sufficient conditions for boundedness are given. A number of known inequalities for generalized quasi-concave functions are generalized and improved as part of the preparation for the proofs of the Fourier series results.
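For orientation, the Lorentz Λ and Γ (quasi-)norms referred to here are standardly defined via the decreasing rearrangement f* and its running average f**:

```latex
\[
  \|f\|_{\Lambda_p(v)}
    = \Bigl(\int_0^{\infty} \bigl(f^{*}(t)\bigr)^{p}\, v(t)\,dt\Bigr)^{1/p},
  \qquad
  \|f\|_{\Gamma_p(v)}
    = \Bigl(\int_0^{\infty} \bigl(f^{**}(t)\bigr)^{p}\, v(t)\,dt\Bigr)^{1/p},
\]
\[
  \text{where } f^{**}(t) = \frac{1}{t}\int_0^{t} f^{*}(s)\,ds
  \text{ and } v \text{ is a weight function.}
\]
```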


Abstract Platform software plays an important role in speeding up the development of large-scale applications. Such platforms provide functionality and abstractions on which applications can be rapidly developed and easily deployed. Hadoop and JBoss are examples of popular open-source platform software. Such platform software generates logs to assist operators in monitoring the applications that run on it. These logs capture the doubts, concerns, and needs of developers and operators of platform software. We believe that such logs can be used to better understand code quality. However, logging characteristics and their relation to quality have never been explored. In this paper, we sought to study this relation empirically through a case study on four releases of Hadoop and JBoss. Our findings show that files with logging statements have higher post-release defect densities than those without logging statements in 7 out of 8 studied releases. Inspired by prior studies on code quality, we defined log-related product metrics, such as the number of log lines in a file, and log-related process metrics, such as the number of changed log lines. We find that the correlations between our log-related metrics and post-release defects are as strong as their correlations with traditional process metrics, such as the number of pre-release defects, which is known to be one of the metrics with the strongest correlation with post-release defects. We also find that log-related metrics can complement traditional product and process metrics, resulting in up to 40% improvement in the explanatory power of defect proneness. Our results show that logging characteristics provide
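As an illustration of what a log-related product metric can look like in practice, here is a hypothetical sketch that counts logging statements per Java file; the regex is an assumption for illustration, not the extraction rule used in the study.

```python
import re
from pathlib import Path

# Hypothetical matcher for common Java logging calls (log.info(...), etc.);
# the study's actual extraction rules are not reproduced here.
LOG_CALL = re.compile(
    r"\b(?:log|logger)\.(?:trace|debug|info|warn|error)\s*\(", re.IGNORECASE)

def log_lines(path: Path) -> int:
    """Number of lines in `path` containing a logging statement."""
    text = path.read_text(errors="ignore")
    return sum(1 for line in text.splitlines() if LOG_CALL.search(line))

# One product metric per file, over a hypothetical source tree `src/`.
metric = {p: log_lines(p) for p in Path("src").rglob("*.java")}
```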


2. It is direct. The log-normal mock generator takes the observed galaxy two-point correlation function as an input, so we can avoid the post-processing steps (halo finding and HOD, for example) connecting the non-linear density field to mock galaxies.

3. It is instructive. Upon assuming a log-normal PDF for the galaxy density field, all higher-order correlation functions are given in terms of the two-point correlation function of the log-transformed field (43). This allows us to quantitatively study highly non-linear mode-coupling effects in both the signal and the covariance matrix that demand knowledge of the density field on non-linear scales. One such example is mode coupling due to the survey window function. Using a thousand log-normal mock catalogs, Ref. (39) quantified the effect of a duplicated, sparse (instead of contiguous) angular selection function, and deduced the optimal analysis strategy.

In this chapter, we extend the real-space log-normal mock generator presented in Ref. (39) by including the velocity field in a consistent manner. We then generate the log-normal mock in redshift space by applying the real-to-redshift-space mapping. Again, equipped with perfect knowledge of the statistical properties of the galaxy density and velocity fields, such a mock catalog serves as an excellent test bed for modelling RSD due to this non-linear mapping (148). To test the RSD effect on the two-point statistics of the log-normal mock catalog, we begin with the real-space galaxy two-point correlation function as an input. We use a log-normal PDF to generate a three-dimensional galaxy density field, as well as a matter density field. Finally, we generate a velocity field consistent with the matter density field by using the linearised continuity equation (see section 4.3 for more details). We then measure the galaxy two-point statistics (correlation function and power spectrum) both in real and redshift space, and the pairwise line-of-sight velocity PDFs from the log-normal mock catalog. We also calculate the mean pairwise velocity using log-normal statistics and show that it agrees with the measurement from the catalog.
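A toy one-dimensional sketch of the core log-normal construction (white-noise Gaussian field for brevity; a real generator would first color the Gaussian field with the log-transformed input correlation function, and the velocity step is omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
g = rng.normal(size=256)                # Gaussian field G(x)
sigma2 = g.var()
delta = np.exp(g - sigma2 / 2) - 1      # log-normal overdensity field
print(delta.mean(), delta.min())        # mean ~ 0, and delta >= -1 always
```

By construction 1 + delta = exp(G − σ²/2) is positive, so the density field can never go negative; this is the key advantage of the log-normal PDF over a Gaussian one on non-linear scales.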


field for a given set of fluids, impellers, and tank geometries [1,2]. The Rushton turbine is a traditional six-blade disc turbine which is widely used. The flat blade of the Rushton turbine leads to the formation of a pair of high-speed, low-pressure trailing vortices at the rear of each blade [3,4]. Recently, different modifications of the blade geometry have been considered, such as shaping the blade from a flat plate to one with various degrees of streamlining in cross-section. These new curved-blade turbines have a cavity structure. The original concave-blade concept was developed in the 1970s at Delft University by a group led by John M. Smith. Van't Riet et al. (1976) studied a variety of impeller styles and introduced the concept of concave blades [5]. Wong, C.W. et al. (1988) studied the curved-blade turbine [6]. Warmoeskerken and Smith (1989) extended that work and explained the improved performance of the concave blades compared to flat blades [7]. Newer blade designs with deeper concavity were proposed by Hjorth (1988) and Middleton (1993) [8,9]. Galindo, E. et al. (1993) studied a similar design, the parabolic-blade Scaba 6SRGT [10]. Bakker et al. (1994) studied the performance of impellers with a semicircular blade shape, the Chemineer CD-6 [11]. A comparative analysis of the fluid-dynamic performance of the concave turbines and hydrofoil impellers was provided by Nienow, A.W. (1996) [12]. Bakker et al. (1998) designed a new impeller, the BT-6, optimized to take into account the different flow conditions above and below the disc [13]. D. Pinelli et al. (2003) studied the behavior of the asymmetric concave blade (BT-6) and compared it with the behavior of other impellers [14]. S.D. Vlaev et al. (2004) reported the


Convex shapes appear to be the figure, whereas concave shapes are seen as ground. Usually, if a shape is concave, it is the result of convex shapes surrounding it.


If we compute f(x) = nx((n–)x+M)(M–x), we have that f is nondecreasing on (0, M]. Since f(M) = , it follows that f(x) ≤  for every 0 < x ≤ M. In the following, we extend the class of Schur-concave functions which verifies (.). Consider the elementary symmetric functions of n variables, given by
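For reference, the elementary symmetric functions of n variables are standardly defined by

```latex
\[
  e_k(x_1,\dots,x_n)
    \;=\; \sum_{1 \le i_1 < i_2 < \cdots < i_k \le n}
          x_{i_1} x_{i_2} \cdots x_{i_k},
  \qquad k = 1,\dots,n .
\]
```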