
Faculty of Management and Economics Sciences, Université de Parakou, Benin

**Corresponding Author:** Ichaou Mounirou


**E-mail:** [email protected]

**Received date:** 10/05/2016 **Accepted date:** 24/05/2016 **Published date:** 28/05/2016

**Visit for more related articles at** Research & Reviews: Journal of Statistics and Mathematical Sciences

**Keywords:** Multivariate processes, kernel density, Hellinger distance, linear process, parametric estimation, long memory

Let {X_{t}} be a d-variate linear process of the form:

(1)

defined on a probability space (Ω, F, P), where {Z_{t}} is a sequence of stationary d-variate associated random vectors with E(Z_{t}) = 0 and positive definite d×d covariance matrix Σ. Throughout this paper we shall assume that

(2)

(3)

where, for any d×d (d ≥ 2) matrix A = (a_{ij}(θ)) whose components depend on the parameter θ, 0_{d×d} denotes the d×d zero matrix. Here θ ∈ Θ, with Θ a compact parameter set. Let

(4)

where the prime denotes transpose, and the matrix Σ = (σ_{kj}) with

(5)

is assumed to be Gaussian and long-range dependent. Fakhre-Zakeri and Lee proved a central limit theorem for multivariate linear processes generated by independent multivariate random vectors, and also derived a functional central limit theorem for multivariate linear processes generated by a martingale difference sequence of multivariate random vectors. Tae-Sung Kim, Mi-Hwa Ko and Sung-Mo Chung [1] proved a central limit theorem for d-variate associated random vectors. The problem is how to estimate θ in order to assess the fit of the model to the data. An estimator of θ should have two essential properties: it should be efficient and its distribution should not be greatly perturbed.

{X_{t}} is a dependent multivariate Gaussian process with density f_{θ}(.). We estimate the parameters of the general multivariate linear process in (1).

The aim of this paper is to provide a general estimation of the parameter vector θ by the minimum Hellinger distance (MHD) method. The only existing examples of MHD estimates concern i.i.d. sequences of random variables [2-4]; for long-memory univariate linear processes, see Bitty and Hili [5]. The long-memory concept dates back to the 1950s, with the work of Hurst in hydrology. The process in (1) is said to be a long-memory process if λ, a long-memory parameter, satisfies 1/2 < λ_{ij} < 1 for j = 1,…,d and i = 1,…,d.
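Within the stated range 1/2 < λ < 1, the coefficients of a long-memory linear process decay hyperbolically: they are square-summable but not absolutely summable. As an illustrative sketch, not from the paper (the coefficient form a_j = (j+1)^(-λ), the truncation lag, and the value λ = 0.7 are our own choices for the demonstration), a univariate long-memory moving average can be simulated as follows:

```python
import numpy as np

# Truncated univariate long-memory moving average X_t = sum_j a_j Z_{t-j}
# with hyperbolically decaying coefficients a_j = (j + 1)**(-lam).
# For 1/2 < lam < 1, sum a_j**2 < inf but sum |a_j| = inf: long memory.
lam, J, n = 0.7, 5000, 20000          # illustrative choices, not the paper's
a = (np.arange(J) + 1.0) ** (-lam)    # a_0 = 1, a_j decreasing like j**(-lam)

rng = np.random.default_rng(1)
Z = rng.standard_normal(n + J)        # i.i.d. Gaussian innovations
X = np.convolve(Z, a, mode="valid")[:n]   # the moving average X_t

def acf(x, k):
    """Sample autocorrelation of x at lag k."""
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)
```

For λ = 0.7 the theoretical autocorrelation decays like k^{1-2λ} = k^{-0.4}, hyperbolically rather than geometrically, which is the visible signature of long memory in the sample ACF.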

The paper develops, in Section 2, some assumptions and lemmas, essentially based on the work of Tae-Sung Kim, Mi-Hwa Ko and Sung-Mo Chung [1] and the work of Theophilos Cacoullos [6]. Our main results are in Section 3, based on the work of Bitty and Hili [5], and show consistency and the asymptotic properties of the MHD estimators of the parameter θ. We conclude with some examples.

Parzen [7] gave the asymptotic properties of a class of estimates f_{n}(x) of a univariate density function f(x) on the basis of a random sample X_{1},…,X_{n} from f(x). Motivated as in Parzen, we consider estimates f_{n}(x) of the density function f(x) of the following form:

(6)

(7)

where F_{n}(x) denotes the empirical distribution function based on the sample of n independent observations X_{1},…,X_{n} on the random d-dimensional vector X, with the kernel chosen to satisfy suitable conditions, and {h_{n}} is a sequence of positive constants which in the sequel will always satisfy h_{n} → 0 as n → ∞. We suppose K(y) is a Borel scalar function on E_{d} such that

(8)

(9)

(10)

where |y| denotes the length of the vector y,

and

(11)

(12)

Also, K(y) is absolutely integrable (hence f(x) is uniformly continuous).

(13)

and (14)

See Theophilos Cacoullos [6] and Bitty and Hili [5].
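The displayed form of the estimate did not survive extraction, but the construction described above is the classical Parzen-Cacoullos one: f_n(x) = (n h_n^d)^{-1} Σ_i K((x − X_i)/h_n). A minimal sketch (the Gaussian kernel, sample size, and bandwidth below are illustrative assumptions, not the paper's choices) is:

```python
import numpy as np

def kde(x, data, h):
    """Multivariate kernel density estimate
    f_n(x) = (1 / (n h^d)) * sum_i K((x - X_i) / h),
    here with a d-variate Gaussian product kernel, which satisfies the
    boundedness and integrability conditions required of K."""
    n, d = data.shape
    u = (x - data) / h                                    # (n, d) scaled differences
    k = np.exp(-0.5 * (u ** 2).sum(axis=1)) / (2 * np.pi) ** (d / 2)
    return k.sum() / (n * h ** d)

# usage: estimate a bivariate standard normal density at the origin;
# the true value there is 1 / (2*pi), about 0.159
rng = np.random.default_rng(0)
sample = rng.standard_normal((2000, 2))
f0 = kde(np.zeros(2), sample, h=0.3)
```

With h_n → 0 and n h_n^d → ∞ the estimate converges to the true density at continuity points, which is the property the lemmas of this section exploit.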

**Notations and Assumptions:** Let be a family of functions, where Θ is a compact parameter set, such that for all θ ∈ Θ, f(.,θ) is a positive integrable function. Assume that f(.,θ) satisfies the following assumptions.

**(A1):** For all θ, µ ∈ Θ, θ ≠ µ, f(.,θ) is a continuously differentiable function at θ ∈ Θ.

**(A2):** (i) has zero Lebesgue measure, and f(.,θ) is bounded on

(ii) For θ, µ ∈ Θ, θ ≠ µ implies that is a set of positive Lebesgue measure, for all x ∈

**(A3):** K is the kernel function such that

**(A4):** The bandwidths {b_{n}} satisfy natural conditions, for ι ≥ 1, when n → ∞.

**(A5):** There exists a constant β > 0 such that

Let F denote the set of densities with respect to the Lebesgue measure on . Define the functional as follows: let . Denote by B(g) the set , where H_{2} is the Hellinger distance.
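The displayed definitions were lost in extraction. In Beran's [2] notation, the Hellinger distance and the minimizing set are standardly written as follows (a hedged reconstruction under that convention, not the paper's own display):

```latex
H_2(f,g) \;=\; \left( \int \left( f^{1/2}(x) - g^{1/2}(x) \right)^{2} dx \right)^{1/2},
\qquad
B(g) \;=\; \Big\{ \theta \in \Theta :\;
H_2\big(f(\cdot,\theta),\, g\big) \;=\; \min_{t \in \Theta} H_2\big(f(\cdot,t),\, g\big) \Big\}.
```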

If B(g) is reduced to a unique element, then define T(g) as the value of this element. Otherwise, we choose an arbitrary but unique element among these minimizers and call it T(g).

**Lemma 1:** Let {Z_{t}} be a strictly stationary associated sequence of d-dimensional random vectors with E(Z_{t}) = 0, E‖Z_{t}‖² < +∞ and positive definite covariance matrix Σ as in (5). Let (X_{t}) be a d-variate linear process defined as in (1). Assume that

(15)

then the linear process (X_{t}) fulfills the central limit theorem, that is, (16)

where → denotes convergence in distribution and N(0, T) indicates a normal distribution with mean zero vector and covariance matrix T defined in (4).

For the proof of Lemma 1, see Theorem 1.1 of Tae-Sung Kim, Mi-Hwa Ko and Sung-Mo Chung [1].

**Lemma 2:** By Remark 3.2 and Theorem 3.5 of Tae-Sung Kim, Mi-Hwa Ko and Sung-Mo Chung [1], we have

(17)

For the proof of Lemma 2, see Tae-Sung Kim, Mi-Hwa Ko and Sung-Mo Chung [1].

**Lemma 3:** Assume that (A5) holds. If f_{1} is continuous on and if, for almost all x, h is continuous on Θ, then

(i) for all

(ii) If B(g) is reduced to a unique element, then T is continuous at g in the Hellinger topology.

(iii) T(f_{θ}) = θ uniquely on Θ.

**Proof:** See Lemma 3.1 in Bitty and Hili [5].

**Lemma 4:** Assume that f(.,θ) satisfies assumptions (A1)-(A3). Then, every sequence of densities converges to f_{θ} in the Hellinger topology,

where,

with a_{n} a (q×q) matrix whose components tend to zero as n → ∞.

**Proof:** See Theorem 2 in Beran [2].

**Lemma 5:** Under assumption (A3), if the bandwidth b_{n} is as in Theorems 1 and 2, if f(.,θ) is continuous with a compact support, and if the density f(.,θ) of the observations satisfies assumptions (A1)-(A2), then converges to f(.,θ) in the Hellinger topology.

**Proof of Lemma 5:**

Under assumptions (A2), (A3) and (A5) and Lemma 2, we have

Then, in the Hellinger topology,

This method was introduced by Beran [2] for independent samples and developed by Bitty and Hili [5] for univariate long-memory linear processes. The present paper considers a multivariate process generated by associated random vectors, under the same long-memory conditions as Bitty and Hili [5]. The minimum Hellinger distance estimate of the parameter vector is obtained via a nonparametric estimate of the density of the process (X_{t}). We define it as the value of θ ∈ Θ which minimizes the Hellinger distance

where is the nonparametric estimate of f(.,θ) and

There exist many methods of nonparametric estimation in the literature; see for instance Rosenblatt [8] and references therein. For computational reasons, we consider the kernel density estimate defined in Section 2. Before analyzing the optimal properties, we need some assumptions.
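The MHD principle described above can be sketched numerically. The following toy example is our own illustration, not the paper's multivariate setting: a univariate Gaussian location model f(.,θ) = N(θ, 1), a kernel density estimate of the sample, and a grid search for the θ minimizing the Hellinger distance between the two.

```python
import numpy as np

def norm_pdf(x, mu):
    """Density of N(mu, 1): the assumed parametric model f(., theta)."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

def hellinger_sq(f1, f2, dx):
    """Squared Hellinger distance between densities tabulated on a grid."""
    return np.sum((np.sqrt(f1) - np.sqrt(f2)) ** 2) * dx

def mhd_estimate(sample, thetas, grid, h):
    """MHD estimate: the theta whose model density is closest, in
    Hellinger distance, to the kernel density estimate of the sample."""
    dx = grid[1] - grid[0]
    # Gaussian kernel density estimate of the data, evaluated on the grid
    u = (grid[:, None] - sample[None, :]) / h
    f_n = np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))
    dists = [hellinger_sq(f_n, norm_pdf(grid, th), dx) for th in thetas]
    return thetas[int(np.argmin(dists))]      # argmin over the parameter grid

# usage: recover a location parameter theta = 1.5 from simulated data
rng = np.random.default_rng(2)
data = rng.standard_normal(1000) + 1.5
theta_hat = mhd_estimate(data, np.linspace(0, 3, 301), np.linspace(-4, 7, 1101), h=0.4)
```

The grid search stands in for the minimization over Θ; the theorems below concern the almost sure convergence and asymptotic distribution of this minimizer in the dependent, multivariate case.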


**Asymptotic properties**

**Theorem 1** (Almost Sure Consistency): Assume that (A1)-(A5) hold. Then converges almost surely to θ.

For the proof, see section 3.

Let us denote by J_{n} the following:

Let us denote by the following function.

where is a quantity which exists, and t denotes the transpose.

**Condition 1:**

The (q × q) matrix sequence v_{n} in Lemma 4 and the sequence J_{n} are such that J_{n}v_{n} tends to zero as n → ∞.

**Theorem 2 (Asymptotic distribution):** Assume that (A1)-(A6) and condition 1 hold. If

is a nonsingular (q×q) matrix,

admits a compact support, then we have

For the proof, see section 3.

**Appendices**

**Proof of theorem 1**

From lemma 3,

As T(f_{θ}) = θ uniquely, the remainder of the proof follows from the continuity of the functional T(.) in Lemma 3.

**Proof of theorem 2**

From lemma 2 and the proof of theorem 2 of Bitty and Hili [5], we have

where a_{n} is a (d×d) matrix whose components tend to zero in probability as n → ∞.

Under condition 1, we have

So the limiting distribution of depends on the limiting distribution of , with

For a ≥ 0, b ≥ 0, we have the algebraic identity
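The identity itself was lost in extraction. The algebraic identity standardly used at this step of Beran-type MHD proofs, for a ≥ 0 and b > 0, is the following (a hedged reconstruction, not the paper's own display):

```latex
\sqrt{a} - \sqrt{b}
  \;=\; \frac{a - b}{2\sqrt{b}}
  \;-\; \frac{\left(\sqrt{a} - \sqrt{b}\right)^{2}}{2\sqrt{b}},
```

which separates the linear term in (a − b), driving the asymptotic distribution, from a quadratic remainder that is shown to be negligible.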

For we have

With

And

From assumption (A6), we have

With

And

Under assumptions (A1)-(A2), we apply the Taylor-Lagrange formula to order 2 and, with assumption (A4), we have:

So

So

Furthermore, we have

where F_{n}(.) and F(.) are respectively the empirical distribution function and the distribution function of the process. By integration by parts, we have

From Ho and Hsing [9,10] (theorem 2.1 and remark 2.2) and assumptions (A2) and (A4), we have

where Φ is a standard Gaussian random variable and →_{D} denotes convergence in distribution.

So

where denotes equivalence in distribution. For all ξ > 0,

The convergence of depends on the convergence of

So under assumptions we have

We have

With

and

Under assumptions (A1)-(A2), we apply the Taylor-Lagrange formula to order 2 and, with assumption (A4), we have

Furthermore, from propositions 1, 2 and 3, we have

**Part (a)**

or

where and U(x) take values according to the different points of the proof of Lemma 3:

Here is the multiple Wiener-Itô integral defined in relation (9) of Section 1.1, and σ²(x,c) is defined in the first point of Proposition 3. Denote by

or

We deduce that

We recall that ; then . So we conclude that D → 0 as n → ∞.

**Part (b)**

with

and

From part (a), the proof of is the same as the proof of . We replace

Hence it suffices to prove that the limiting distribution of is the same as the limiting distribution of . Since

then

with

and

From the proof of lemma 3 (part (b)), we have:

then

From propositions 1, 2 and 3, we have:

or

or

We deduce that

or

So,

or

Then,

or

We conclude that we have either an asymptotic normal distribution or asymptotic convergence towards the multiple Wiener-Itô integral.

- Kim TS, Ko MH and Chung SM. A central limit theorem for the stationary multivariate linear process generated by associated random vectors. Commun Korean Math Soc. 2002;17:95-102.
- Beran R. Minimum Hellinger distance estimates for parametric models. Ann Stat. 1977;5:445-463.
- Beran R. An efficient and robust adaptive estimator of location. Ann Stat. 1978;6:292-313.
- Beran R. Efficient robust estimates in parametric models. Z Wahrsch Verw Gebiete. 1985;55:91-108.
- Bitty AL and Hili O. Hellinger distance estimates of long memory linear processes. CR Acad Sci Paris Ser. 2010;348:445-448.
- Cacoullos T. Estimation of a multivariate density. Ann Inst Statist Math. 1966;18:179-189.
- Parzen E. Estimation of a probability density function and mode. Ann Math Statist. 1962;33:1065-1076.
- Rosenblatt M. Remarks on some nonparametric estimates of a density function. Ann Math Statist. 1956;27:832-837.
- Ho HC and Hsing T. On the asymptotic expansion of the empirical process of long memory moving averages. Ann Statist. 1996;24:992-1024.
- Ho HC and Hsing T. Limit theorems for functionals of moving averages. Ann Probab. 1997;25:1636-1669.