In this paper we address the problem of estimating θ_1 when Y_i ~ N(θ_i, σ_i²), i = 1, 2, are observed independently and θ_1 − θ_2 ≤ c for a known constant c. Clearly Y_2 contains information about θ_1. We show how the so-called weighted likelihood function may be used to generate a class of estimators that exploit that information. We discuss how the weights in the weighted likelihood may be selected to successfully trade bias for precision and thus use the information effectively. In particular, we consider adaptively weighted likelihood estimators, where the weights are selected using the data. One approach selects such weights in accord with Akaike's entropy maximization criterion, and we describe several estimators obtained in this way. The maximum likelihood estimator is investigated as a competitor to these estimators, along with a Bayes estimator, a class of robust Bayes estimators and, when c is sufficiently small, a minimax estimator. We assess their properties both numerically and theoretically. Finally, we show how all of these estimators may be viewed as adaptively weighted likelihood estimators. In fact, an overriding theme of the paper is that the adaptively weighted likelihood method provides a powerful extension of its classical counterpart. © 2003 Elsevier Inc. All rights reserved.
van Eeden, C., & Zidek, J. V. (2004). Combining the data from two normal populations to estimate the mean of one when their means difference is bounded. Journal of Multivariate Analysis, 88(1), 19–46. https://doi.org/10.1016/S0047-259X(03)00049-6
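As a rough illustration of the weighted likelihood idea (a sketch of the general mechanism only, not the paper's adaptive weight-selection rules), the weighted log-likelihood for normal observations is l(θ) = log φ(Y_1; θ, σ_1) + λ log φ(Y_2; θ, σ_2), whose maximizer is a precision-weighted average of the two observations. The function name below is mine, not the authors':

```python
def weighted_likelihood_estimate(y1, y2, sigma1, sigma2, lam):
    """Maximizer of the weighted normal log-likelihood
        l(theta) = log phi(y1; theta, sigma1) + lam * log phi(y2; theta, sigma2).

    Setting the derivative to zero gives a precision-weighted average:
    lam = 0 recovers the MLE based on y1 alone; lam = 1 pools both
    samples fully, as in an ordinary combined likelihood.
    """
    w1 = 1.0 / sigma1 ** 2          # precision of the direct observation
    w2 = lam / sigma2 ** 2          # down-weighted precision of the second sample
    return (w1 * y1 + w2 * y2) / (w1 + w2)
```

The paper's adaptive estimators choose λ from the data (e.g., via Akaike's entropy maximization criterion) so that the bias incurred by borrowing Y_2 is traded against the reduction in variance.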