Most existing control charts for monitoring the covariance matrix of multiple variables are restricted to the multivariate normal distribution. When the process distribution is non-normal, the performance of these charts can be severely affected, especially under heavy-tailed distributions. To construct a robust multivariate control chart for monitoring the covariance matrix, we apply the spatial sign covariance matrix and the maximum norm to the exponentially weighted moving average (EWMA) scheme and propose a Phase II control chart. The new chart is distribution-free within the family of elliptical direction distributions. Comparison studies demonstrate that the proposed method is powerful in detecting various shifts, especially under heavy-tailed distributions. The implementation of the proposed control chart is illustrated with a white wine dataset.
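The core construction described above, EWMA smoothing of spatial-sign outer products charted through a maximum norm, can be sketched as follows. The smoothing constant, the dimensions, and the in-control target I/p (valid for data already standardized in Phase I) are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def spatial_sign(x):
    # Spatial sign: the direction of x on the unit sphere (0 stays at 0)
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.zeros_like(x)

def ssewma_stats(X, lam=0.1):
    """EWMA of spatial-sign outer products; the charting statistic is the
    maximum-norm deviation from the in-control target I/p (a sketch, not
    the authors' exact chart)."""
    p = X.shape[1]
    V = np.eye(p) / p          # in-control value of E[u u'] for spherical data
    stats = []
    for x in X:
        u = spatial_sign(x)
        V = (1 - lam) * V + lam * np.outer(u, u)
        stats.append(np.max(np.abs(V - np.eye(p) / p)))
    return np.array(stats)

rng = np.random.default_rng(0)
ic = ssewma_stats(rng.standard_normal((500, 3)))              # in control
oc = ssewma_stats(rng.standard_normal((500, 3)) * [3, 1, 1])  # variance shift
```

Under a variance shift the spatial signs concentrate along the inflated coordinate, so the statistic drifts away from its in-control level; in practice the control limit would be calibrated by simulation to a target in-control run length.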
The performance of certain critical complex systems, such as the power output of ground photovoltaic (PV) modules or spacecraft solar arrays, exhibits a multi-phase degradation pattern due to their redundant structure. This pattern shows a degradation trend with multiple jump points, which mixes two failure modes: a soft mode of continuous smooth degradation and a hard mode of abrupt failure. Both modes need to be modeled jointly to predict the system's residual life. In this paper, an autoregressive moving average model-filtered hidden Markov model is proposed to fit multi-phase degradation data with an unknown number of jump points, together with an iterative algorithm for parameter estimation. The comprehensive algorithm combines a non-linear least-squares method, a recursive extended least-squares method, and the expectation-maximization algorithm to handle different parts of the model. The proposed methodology is applied to a specific PV module system with simulated performance measurements for reliability evaluation and residual life prediction. Comprehensive studies show better performance than competing models, and, more importantly, all the jump points in the simulated data are identified. The algorithm also converges quickly with satisfactory parameter estimation accuracy, regardless of the number of jump points.
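A toy simulation conveys the degradation pattern described above: a smooth soft trend punctuated by abrupt hard jumps. All numerical settings (drift, jump probability, jump size, noise level) are invented for illustration, and the crude difference-based screen at the end stands in for the paper's ARMA-filtered HMM.

```python
import numpy as np

def simulate_multiphase(T=200, drift=-0.02, jump_prob=0.02,
                        jump_size=-1.0, noise=0.05, seed=1):
    """Toy multi-phase degradation path: a smooth linear 'soft' trend plus
    abrupt 'hard' jumps occurring with small probability per step."""
    rng = np.random.default_rng(seed)
    level, path, jumps = 10.0, [], []
    for t in range(T):
        level += drift                   # soft continuous degradation
        if rng.random() < jump_prob:     # hard failure of one redundant unit
            level += jump_size
            jumps.append(t)
        path.append(level + rng.normal(0, noise))
    return np.array(path), jumps

path, jumps = simulate_multiphase()
# crude screen: flag first differences far below the soft drift
est = list(np.where(np.diff(path) < -0.5)[0] + 1)
```

With jumps an order of magnitude larger than the step-to-step noise, the simple screen recovers them; the paper's model is needed precisely when jumps, trend, and correlated noise are not this cleanly separated.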
This paper compares customers' balking behavior in site-clearing and non-site-clearing Markovian queues with server failures and an unreliable repairer. That is, the system is subject to failures (catastrophes or normal failures) that occur when the server is in a functioning state; the site-clearing mechanism is applied when a catastrophe occurs, and the non-site-clearing mechanism when a normal failure occurs. At a failure epoch, the server is turned off; all present customers are forced to leave the system in a site-clearing queue, whereas they wait through a repair process in a non-site-clearing queue. After an exponential repair time, the server is reactivated only with probability p. If the server fails to be repaired, all present customers leave the system, and the next repair time begins. Comparing the equilibrium and optimal balking strategies of customers for the two types of queues, we find that although customers are more likely to face the undesirable outcome of eventually not being served, they still prefer a site-clearing queue to a non-site-clearing queue, which coincides with the social planner's preference. Moreover, customers' equilibrium behavior deviates more from their socially optimal behavior in the site-clearing queues.
The vector multiplicative error model (vMEM) is a popular model known for its capability of analyzing and forecasting multidimensional non-negative-valued time series. In this paper, we present a closed-form estimator for the vMEM. The estimator has the advantage that it can be easily implemented by solving moment equations and does not require any distributional assumption or optimal weight matrix. We prove consistency and derive the asymptotic properties of the estimator. A simulation study confirms our theoretical results and compares the performance of the closed-form estimator (CLFE) and the quasi-maximum likelihood estimator (QMLE). An empirical application to the absolute return and the high-low range of GE stock compares the predictive capacity of the CLFE and the QMLE.
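To see why a moment-based closed-form estimator is attractive, consider a stripped-down first-order MEM with only a lagged-observation term: its conditional-mean moment equation is linear, so estimation reduces to multivariate least squares. This is an illustrative simplification (the paper's vMEM and CLFE are more general), and all parameter values below are invented.

```python
import numpy as np

def clfe_mem1(X):
    """Moment-based estimator for a simplified first-order MEM:
    E[x_t | x_{t-1}] = omega + A x_{t-1}.  The moment equations reduce
    to multivariate least squares, hence the closed form."""
    Y = X[1:]
    Z = np.hstack([np.ones((len(X) - 1, 1)), X[:-1]])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)   # rows: [omega; A']
    return coef[0], coef[1:].T                     # omega, A

# simulate a 2-d MEM(1) with mean-one multiplicative gamma errors
rng = np.random.default_rng(3)
omega = np.array([0.2, 0.1])
A = np.array([[0.4, 0.1], [0.05, 0.5]])
X = np.empty((5000, 2)); X[0] = 1.0
for t in range(1, 5000):
    mu = omega + A @ X[t - 1]
    X[t] = mu * rng.gamma(shape=4.0, scale=0.25, size=2)  # E[eps] = 1
om_hat, A_hat = clfe_mem1(X)
```

Despite the multiplicative (heteroscedastic) errors, the least-squares solution of the moment equations is consistent, which is the essence of the closed-form approach.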
Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to capture changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with high temporal and spatial resolution; however, how to effectively monitor and record climate changes based on these large-scale simulation results, which are continuously produced in real time, remains unresolved. Because writing data to disk is slow, the current practice is to save a snapshot of the simulation results at a constant, slow rate even though the data generation process runs at very high speed. This paper proposes an effective online data monitoring and saving strategy over the temporal and spatial domains that accounts for practical storage and memory capacity constraints. Our proposed method intelligently selects and records the most informative extreme values in the raw data generated from real-time simulations, enabling better monitoring of climate changes.
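One elementary building block for such a strategy is an online filter that retains only the most extreme values under a fixed memory budget; a min-heap of size k does this in O(log k) per observation. This is a generic sketch, not the paper's selection rule.

```python
import heapq
import random

def stream_extremes(stream, k):
    """Keep only the k largest values seen so far.  A min-heap gives an
    O(log k) per-observation online filter, so simulation output can be
    recorded under a fixed memory budget instead of at a fixed snapshot
    rate."""
    heap = []
    for t, v in enumerate(stream):
        if len(heap) < k:
            heapq.heappush(heap, (v, t))
        elif v > heap[0][0]:
            heapq.heapreplace(heap, (v, t))  # evict the smallest retained extreme
    return sorted(heap, reverse=True)        # (value, time) pairs, largest first

random.seed(7)
data = [random.gauss(0, 1) for _ in range(10000)]
top = stream_extremes(data, 50)
```

The full spatio-temporal strategy in the paper additionally decides *where* and *when* to save, but the same bounded-memory principle applies.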
The compound Poisson process can model the frequency and the magnitude of earthquake occurrences jointly. Meanwhile, there is ongoing debate over whether climate change influences the frequency of natural disasters. In this study, we propose a compound Poisson process with change-point (CPPCP) model to fit data with a two-phase pattern. A hierarchical Bayesian method is employed by assigning a common distribution to the unit-specific parameters. For comparison purposes, we also develop a maximum-likelihood method. A simulation study illustrates the applicability of our proposed model and the validity of the hierarchical Bayesian method. In the analysis of the earthquake data, the CPPCP model outperforms the quadratic linear regression model, and the hierarchical Bayesian method is superior to the maximum-likelihood method in terms of both model fitting and prediction.
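For the frequency part alone, a single Poisson-rate change point can be estimated by profile likelihood: for each candidate change point, the segment rates have closed-form MLEs (the segment means), and the candidate maximizing the combined log-likelihood is selected. A minimal sketch with invented rates and sample sizes (the paper's CPPCP also models event magnitudes and uses a hierarchical Bayesian treatment):

```python
import numpy as np

def fit_changepoint(counts):
    """Profile log-likelihood for a single Poisson-rate change point.
    For each candidate tau, the MLE of each segment's rate is its sample
    mean; constants (log k!) cancel across candidates and are dropped.
    Very short edge segments are excluded to avoid degenerate rates."""
    n = len(counts)
    best = (-np.inf, None)
    for tau in range(5, n - 4):
        l1, l2 = counts[:tau].mean(), counts[tau:].mean()
        ll = (counts[:tau] * np.log(l1) - l1).sum() + \
             (counts[tau:] * np.log(l2) - l2).sum()
        if ll > best[0]:
            best = (ll, tau)
    return best[1]

rng = np.random.default_rng(5)
counts = np.concatenate([rng.poisson(3.0, 60), rng.poisson(8.0, 40)])
tau_hat = fit_changepoint(counts)
```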
A self-starting monitoring scheme is proposed in this paper for the simultaneous detection of shifts in the variance and the coefficients of linear profiles with unknown error distributions. Based on the global data, we construct a sequential Wald-type charting statistic, derive the corresponding asymptotic distributions, and provide a recursive algorithm to quickly calculate the statistics sequentially. Control limits for our charting statistics are constructed from their asymptotic distributions. Finally, we apply our method to both artificial and real data, and numerical results show that it performs well.
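The kind of recursion that makes sequential computation fast can be illustrated with standard recursive least squares, which updates regression coefficients in O(p^2) per observation via a rank-one (Sherman-Morrison) update. The paper's Wald-type statistic is built from recursions of this flavor, though its exact formulas differ; everything below is a generic sketch.

```python
import numpy as np

class RecursiveLS:
    """Rank-one (Sherman-Morrison) updates of least-squares coefficients:
    the standard device for computing regression-based charting statistics
    sequentially without refitting from scratch at every observation."""
    def __init__(self, p, delta=1e3):
        self.P = delta * np.eye(p)   # approximates (X'X)^{-1}; diffuse start
        self.beta = np.zeros(p)

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)                       # gain vector
        self.beta = self.beta + k * (y - x @ self.beta)
        self.P = self.P - np.outer(k, Px)
        return self.beta

rng = np.random.default_rng(2)
rls, true_beta = RecursiveLS(2), np.array([1.0, -2.0])
for _ in range(2000):
    x = rng.standard_normal(2)
    b = rls.update(x, x @ true_beta + 0.1 * rng.standard_normal())
```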
In this paper, we study a limited clearing queueing model with an orbit and non-persistent customers, in which the service station has finite space and the server serves all customers in the station simultaneously. If a customer (new arrival or retrial) finds the station full, he or she decides whether or not to join the orbit with the respective probabilities. We first analyze the necessary and sufficient condition for the stability of the system; then, using the matrix-geometric analytic method and the spectral expansion method, we obtain the stationary distribution of the model. We also derive many important performance measures that help managers make wise managerial decisions and designs, such as the mean number of customers in the service station, the mean queue length of the orbit, and the mean sojourn time of an arbitrary customer. Finally, numerical examples illustrate the impacts of various parameters on the system performance measures.
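The matrix-geometric step can be sketched generically. For a quasi-birth-death process with transition blocks A0 (level up), A1 (within level), and A2 (level down), the stationary distribution satisfies pi_{n+1} = pi_n R, where R is the minimal nonnegative solution of A0 + R A1 + R^2 A2 = 0. Below is a simple successive-substitution solver with a 1x1 M/M/1 sanity check (rho = lambda/mu); these are not the blocks of the orbit model in the paper.

```python
import numpy as np

def qbd_rate_matrix(A0, A1, A2, tol=1e-12):
    """Minimal nonnegative solution R of A0 + R A1 + R^2 A2 = 0 by
    successive substitution (Neuts' iteration), starting from R = 0."""
    R = np.zeros_like(A0)
    A1_inv = np.linalg.inv(A1)
    while True:
        R_new = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new

# sanity check: M/M/1 (lambda = 1, mu = 2) viewed as a 1x1 QBD gives R = 0.5
lam, mu = 1.0, 2.0
R = qbd_rate_matrix(np.array([[lam]]),
                    np.array([[-(lam + mu)]]),
                    np.array([[mu]]))
```

Starting the iteration from zero is what guarantees convergence to the *minimal* solution (here 0.5 rather than the other root, 1).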
This note focuses on determining the optimal price and warranty coverage period for warranted products from the manufacturer's perspective. The goal is to maximize the total profit over a planning period when the demand for the product depends on the warranty coverage period and the selling price. Since the warranty period and the selling price are positively correlated in a complex way under the optimality condition, we propose a simple rule of 'linearly pricing the warranty period' to help practitioners price the product so as to maximize total profit. The assumption that the selling price is a linear function of the warranty period (warranty-based linear pricing) has been made in the literature; however, its effect on the manufacturer's profit has never been thoroughly investigated. We find that, under some reasonable conditions, this warranty-based linear pricing rule not only provides an easy way to price the product with an appropriate coverage period in practice but also generates almost the maximum profit and captures more market share for the manufacturer. In addition, some managerial insights are provided for practitioners designing more competitive warranty and pricing policies.
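A small worked example shows why linear pricing of the warranty period is a natural rule. The linear-demand, quadratic-warranty-cost model and all constants below are invented for illustration (not the paper's specification): the per-coverage optimal price is linear in w up to a small curvature term, and a straight-line approximation stays within about 2% of the exact price curve.

```python
import numpy as np

# Assumed toy model: demand d(p, w) = a - b*p + c*w,
# unit cost c0 + k*w**2 (the quadratic term is the expected warranty cost).
a, b, c, c0, k = 100.0, 2.0, 5.0, 10.0, 1.0

def profit(p, w):
    return (p - c0 - k * w**2) * (a - b * p + c * w)

# For each coverage w the profit is quadratic in p, so the optimal price is
#   p*(w) = (a + c*w)/(2*b) + (c0 + k*w**2)/2,
# i.e. linear in w up to a small quadratic warranty-cost term.
def p_star(w):
    return (a + c * w) / (2 * b) + (c0 + k * w**2) / 2

ws = np.linspace(0.0, 2.0, 201)
w_opt = ws[np.argmax(profit(p_star(ws), ws))]        # optimal coverage on grid

# straight-line (chord) approximation of the exact price curve
p_lin = p_star(0.0) + (p_star(2.0) - p_star(0.0)) / 2.0 * ws
gap = np.max(np.abs(p_lin - p_star(ws)) / p_star(ws))
```

In this toy setting the exact optimum is w = c/(2*b*k) = 1.25, and the linear pricing rule deviates from the exact price schedule by under 2%, echoing the abstract's "almost the maximum profit" finding.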
Anomaly detection has been studied extensively over the past decades; however, various challenges remain due to the complex structures of real-world datasets. First, only a few methods in the literature handle datasets that have both categorical and continuous attributes, and even fewer are sensitive to the dependencies between the two types of attributes. Second, a real-world dataset tends to be more complex in its structure, and the categorical attributes are often hierarchically correlated, which has been largely ignored by existing outlier detection approaches. Following this line of reasoning, we propose a distributed outlier detection method for mixed-attribute datasets, especially those with hierarchical categorical attributes. The proposed method accounts for the dependencies between categorical and continuous attributes rather than treating them as two separate parts, and it captures the hierarchical structure among the categorical attributes. Experimental results on a real-world dataset and a simulation study show superior performance in terms of both detection accuracy and time efficiency.
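A toy score conveys the flavor of dependence-aware mixed-attribute detection: rarity of the categorical value plus a continuous z-score computed *within* that category, so a value that is globally unremarkable can still be flagged as anomalous for its own category. This simplified sketch ignores the distributed computation and the categorical hierarchy handled by the actual method.

```python
import numpy as np
from collections import defaultdict

def mixed_outlier_scores(cats, vals):
    """Toy mixed-attribute outlier score: -log frequency of the categorical
    value plus the squared z-score of the continuous value computed within
    that category, so categorical/continuous dependence is respected."""
    n = len(cats)
    groups = defaultdict(list)
    for c, v in zip(cats, vals):
        groups[c].append(v)
    freq = {c: len(g) / n for c, g in groups.items()}
    stats = {c: (np.mean(g), np.std(g) + 1e-9) for c, g in groups.items()}
    return np.array([-np.log(freq[c]) + ((v - stats[c][0]) / stats[c][1]) ** 2
                     for c, v in zip(cats, vals)])

rng = np.random.default_rng(4)
cats = ['a'] * 500 + ['b'] * 500
vals = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 500)])
vals[0] = 9.0   # typical for category 'b', but anomalous for its category 'a'
scores = mixed_outlier_scores(cats, vals)
```

A global z-score would miss the planted point (9.0 is well inside the overall data range); conditioning on the category exposes it.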
A three-stage failure process divides a product's life into a normal stage, a minor defective stage, and a severe defective stage. Under a condition-based renewable warranty policy, inspections are carried out to check which stage the product is in, and condition-based activities, such as shortening inspection intervals, preventive replacement, and failure replacement, are performed accordingly. From the manufacturer's perspective, the inspection interval is optimized by minimizing the expected cost rate per unit time over the whole warranty coverage. For comparison, the expected cost rates per unit time for warranty models without inspections and with fixed-interval inspections are also derived. Numerical examples show that the condition-based renewable warranty policy can reduce the manufacturer's warranty cost significantly.
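The cost-rate optimization can be illustrated with a basic two-stage delay-time model, a simplification of the three-stage process above: defects arrive, fail after a random delay, and periodic inspections either catch the defect (preventive replacement) or miss it (failure replacement). All costs and rates below are invented, and the long-run cost rate is estimated by Monte Carlo over renewal cycles rather than derived analytically.

```python
import random

def cost_rate(T, lam=0.2, mu=0.5, ci=1.0, cp=10.0, cf=100.0,
              n=20000, seed=9):
    """Monte Carlo long-run cost rate for a basic delay-time model:
    defect arrival ~ Exp(lam), failure delay ~ Exp(mu), inspections every T.
    A defect found at an inspection triggers preventive replacement (cp);
    a failure before the next inspection triggers failure replacement (cf);
    each inspection costs ci."""
    random.seed(seed)
    total_cost = total_time = 0.0
    for _ in range(n):
        u = random.expovariate(lam)     # defect arrival time
        h = random.expovariate(mu)      # delay from defect to failure
        k = int(u // T) + 1             # index of first inspection after defect
        if u + h < k * T:               # fails before it can be caught
            total_cost += cf + ci * (k - 1)
            total_time += u + h
        else:                           # caught at inspection k
            total_cost += cp + ci * k
            total_time += k * T
    return total_cost / total_time

rates = {T: cost_rate(T) for T in [0.5, 1, 2, 4, 8]}
```

Scanning the cost rate over a grid of intervals exposes the trade-off the abstract describes: frequent inspection wastes inspection cost, while sparse inspection lets expensive failures through.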
In practice, many repairable systems are subject to neglected failures, i.e. failures that, if repaired promptly, do not affect system availability. This paper investigates the availability and optimal maintenance policy for inspected Markov systems with a down time threshold. Based on practical applications, a down time threshold is introduced. If a down time of the system is shorter than the given threshold, the system may be considered as operating during that down time, i.e. the down time is neglected. Otherwise, if a down time is longer than the threshold, the system is considered as operating from the beginning of the failure until the down time exceeds the threshold, i.e. the counting of the down time is delayed. Incorporating the down time threshold, the instantaneous and steady-state availabilities of the system are derived. Furthermore, a maintenance model is formulated to find the optimal inspection interval, T*, which minimizes the long-run average cost rate. A numerical example for a ventilator system demonstrates the application of the developed approach.
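Under one reading of the threshold rule (only the portion of a down time beyond the threshold counts as down), the steady-state availability of a simple alternating up/down process has a closed form that a Monte Carlo estimate can verify. The two-state model and all parameters here are illustrative, not the paper's inspected Markov system.

```python
import random
import math

def availability_with_threshold(alpha, beta, tau, n_cycles=200000, seed=11):
    """Steady-state availability of an alternating process with
    up times ~ Exp(alpha) and repair times ~ Exp(beta), where only the
    part of each down time exceeding the threshold tau counts as down."""
    random.seed(seed)
    down_counted = total = 0.0
    for _ in range(n_cycles):
        u = random.expovariate(alpha)
        d = random.expovariate(beta)
        down_counted += max(d - tau, 0.0)   # only the excess over tau is 'down'
        total += u + d
    return 1.0 - down_counted / total

alpha, beta, tau = 0.1, 1.0, 0.5
a_mc = availability_with_threshold(alpha, beta, tau)
# closed form under the same reading: E[(D - tau)+] = exp(-beta*tau)/beta
a_cf = 1.0 - (math.exp(-beta * tau) / beta) / (1 / alpha + 1 / beta)
```

Setting tau = 0 recovers the classical availability E[U] / (E[U] + E[D]); a positive threshold strictly increases the reported availability, which is the effect the paper quantifies for general Markov systems.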
The Adaptive Exponentially Weighted Moving Average (AEWMA) chart, which combines the Shewhart and classical EWMA schemes, is usually designed using the Average Run Length (ARL) as the criterion to be optimized. The shape of the run length distribution is known to change with the magnitude of the shift in the process mean, ranging from highly skewed when the process is in control or nearly so, to approximately symmetric when the shift is large. Therefore, the Median Run Length (MRL) provides a more meaningful interpretation than the ARL. In this paper, the MRL is used as an alternative performance criterion, and the AEWMA chart is optimized for a wide range of mean shifts in both zero-state and steady-state modes. Comparative results show that the suggested AEWMA chart offers more balanced protection than the classical EWMA chart for detecting both small and large shifts in the process mean, in terms of MRL performance. The construction of the MRL-based AEWMA chart is also illustrated with an example and compared with competing charts.
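For reference, the AEWMA recursion with the Huber-type score of Capizzi and Masarotto behaves like a classical EWMA for small prediction errors and passes large errors through almost unshrunk, which is the source of its balanced small/large-shift protection. The values of lambda and gamma below are illustrative:

```python
import numpy as np

def aewma(x, lam=0.1, gamma=3.0):
    """AEWMA statistic z_t = z_{t-1} + phi(x_t - z_{t-1}) with the
    Huber-type score: small errors are smoothed like an EWMA, large
    errors pass through almost unshrunk (Shewhart-like)."""
    def phi(e):
        if abs(e) <= gamma:
            return lam * e                        # EWMA behaviour
        return e - np.sign(e) * (1 - lam) * gamma  # Shewhart-like behaviour
    z, out = 0.0, []
    for xi in x:
        z = z + phi(xi - z)
        out.append(z)
    return np.array(out)
```

For example, a single observation of 10 moves the statistic to 10 - 0.9*3 = 7.3 (nearly a full Shewhart reaction), while a single observation of 1 moves it only to 0.1 (the EWMA reaction). Run-length properties such as the MRL would then be obtained by simulating this recursion against a calibrated control limit.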
Accelerated life tests usually involve subsampling, which imposes a restriction on randomization. The two-stage approach can handle lifetime data from experiments with subsampling; however, it does not address how to reduce the biases of the estimators or how to compute confidence intervals for low percentiles. In this article, we build a model relating percentiles to the stress factors and obtain likelihood-based inference for percentiles. In addition, we reduce the biases of the estimators using an unbiasing-factor method. Finally, we illustrate our method with a real example and compare it with other methods via a simulation study. The simulation results show that our proposed method performs better in most cases.
Effective monitoring of multi-stage processes in modern manufacturing and service operations is crucial for maintaining and improving final output quality. The essence of multi-stage process monitoring is the ability to give a signal at each individual stage, so as to avoid delay in detecting assignable causes in the process. A multi-stage process is separated into a series of single-stage processes, and each single-stage process is treated as a profile structure whose quality is the process output, with inputs from the previous stage and the current stage. The EWMA chart is applied to the profile process using an orthogonal design input. The true process output is often measured with error, because the complexity of the multi-stage process makes the true quality difficult to measure. The harmfulness of measurement errors in the multi-stage process is also investigated.
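A minimal sketch of stage-wise profile monitoring, and of why measurement error is harmful: an EWMA of standardized residuals from the stage model is charted, and added measurement error inflates the chart's variability, so fixed control limits would yield more false alarms. The linear profile, the orthogonal plus/minus-one design, and the noise levels are invented for illustration.

```python
import numpy as np

def ewma_residual_chart(y, u, beta=(1.0, 2.0), sigma=1.0, lam=0.2):
    """EWMA of standardized residuals from a single-stage profile model
    y = b0 + b1*u + e; a simplified stand-in for stage-wise monitoring
    with an orthogonal design input."""
    r = (y - (beta[0] + beta[1] * u)) / sigma
    z, out = 0.0, []
    for ri in r:
        z = (1 - lam) * z + lam * ri
        out.append(z)
    return np.array(out)

rng = np.random.default_rng(8)
u = np.tile([-1.0, 1.0], 250)                      # orthogonal design input
y_ic = 1.0 + 2.0 * u + rng.standard_normal(500)    # ideal measurements
y_me = 1.0 + 2.0 * u + rng.standard_normal(500) \
       + rng.normal(0, 1.0, 500)                   # added measurement error
z_ic = ewma_residual_chart(y_ic, u)
z_me = ewma_residual_chart(y_me, u)
```

With the error variance doubled by measurement noise, the EWMA statistic's spread grows by roughly a factor of sqrt(2), so limits calibrated for the error-free process would signal too often.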