Summary. In this paper the problem of nonparametric inference about the regression vector in a linear regression in a (k + 1)-variate population is considered. It is assumed that the conditional density function of Y given (X1, X2, ..., Xk) = (x1, x2, ..., xk) is f(y - β0 - β1x1 - ... - βkxk), where the form of f is unknown and (β1, β2, ..., βk) is the regression vector (in the linear regression of Y on X1, X2, ..., Xk) to be estimated. Without loss of generality, β0 is assumed to be zero. It is also assumed that X1, X2, ..., Xk are bounded random variables. In the present study, nonparametric estimates of the density function are obtained by the so-called kernel method, which gives rise to the concept of an empirical likelihood function. Motivated by the likelihood principle, an estimate of the regression vector is then obtained by proceeding formally to maximize the empirical likelihood function. For technical reasons, the tail observations are treated differently from the other observations: the observations in the two tails are pooled into two classes. The large-sample properties of this estimate are derived by using the convergence properties of kernel estimates of the density function and its derivatives, in conjunction with the properties of U-statistics. It is found that the large-sample properties of this estimate are very close to those of the corresponding maximum likelihood estimate.
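The estimation scheme described above can be illustrated with a minimal sketch for the single-regressor case (k = 1, β0 = 0): form the residuals under a candidate slope, estimate their density by a leave-one-out Gaussian kernel estimate, and maximize the resulting empirical log-likelihood over the candidate slope. The kernel choice, bandwidth, simulated error law, and grid search are illustrative assumptions, and the paper's pooling of tail observations into two classes is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: Y = beta * X + error, with the error density treated as unknown
# (here a Laplace density, purely for illustration).
n = 200
beta_true = 2.0
x = rng.uniform(-1.0, 1.0, n)            # bounded regressor, as assumed in the paper
y = beta_true * x + rng.laplace(scale=0.5, size=n)

def empirical_log_likelihood(beta, x, y, h=0.3):
    """Empirical log-likelihood of a candidate slope: residual density is
    estimated by a leave-one-out Gaussian kernel estimate (bandwidth h)."""
    r = y - beta * x                                  # residuals under candidate beta (beta0 = 0)
    m = len(r)
    d = (r[:, None] - r[None, :]) / h                 # pairwise residual differences
    k = np.exp(-0.5 * d**2) / np.sqrt(2.0 * np.pi)    # Gaussian kernel weights
    np.fill_diagonal(k, 0.0)                          # leave-one-out: drop self-contribution
    fhat = k.sum(axis=1) / ((m - 1) * h)              # kernel density estimate at each residual
    return np.sum(np.log(fhat))

# Maximize the empirical likelihood over a grid of candidate slopes
# (a crude stand-in for the paper's formal maximization).
betas = np.linspace(0.0, 4.0, 401)
scores = [empirical_log_likelihood(b, x, y) for b in betas]
beta_hat = betas[int(np.argmax(scores))]
print(f"estimated slope: {beta_hat:.2f}")
```

With moderate sample sizes the maximizer lands close to the true slope, which is consistent with the paper's finding that the estimate behaves, in large samples, much like the maximum likelihood estimate one would compute if f were known.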