Statistical inference plays a major role in today's world, particularly in Data Science. Most statistical inference is parametric: the form of the underlying distribution is assumed to be known, and samples from the population are used to estimate its parameters or to test hypotheses about them, either for a single population or to compare the parameters of two populations. However, parametric statistical inference rests on underlying assumptions, which are as follows:
- Independence
- Normality
- Homoscedasticity (equal variances)
In practice, these assumptions are not always valid: the data may be qualitative rather than quantitative, or the sample may be so small that the normality assumption cannot be justified. In such cases, non-parametric statistical inference techniques are handy and valuable.
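As a minimal sketch of why the normality assumption can fail, the Shapiro-Wilk test can be applied to a small, skewed sample; the simulated exponential data below is an illustrative assumption, not from the original text:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# A small, right-skewed sample where the normality assumption is doubtful
sample = rng.exponential(scale=2.0, size=15)

# Shapiro-Wilk test: the null hypothesis is that the data are normally distributed
stat, p_value = stats.shapiro(sample)
print(f"Shapiro-Wilk statistic={stat:.3f}, p-value={p_value:.4f}")

if p_value < 0.05:
    print("Normality is rejected -> prefer a non-parametric method")
else:
    print("No strong evidence against normality")
```

When the p-value is small, a parametric test such as the t-test is on shaky ground, which motivates the rank-based alternatives discussed below.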
The Usefulness of Non-Parametric Statistical Inference
- Estimating the central location (central tendency) of the population.
- Comparing the central location or dispersion of two populations.
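For estimating or testing the central location of a single population without a normality assumption, one simple non-parametric tool is the sign test, which reduces the problem to counting observations above a hypothesized median. The lognormal sample and the hypothesized median of 1.0 below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# A sample from an unknown, possibly skewed distribution
sample = rng.lognormal(mean=0.0, sigma=1.0, size=20)

# Sign test for H0: population median = 1.0.
# Under H0, the number of observations above the median is Binomial(n, 0.5).
hypothesized_median = 1.0
n_above = int(np.sum(sample > hypothesized_median))
n = int(np.sum(sample != hypothesized_median))  # drop exact ties, if any
result = stats.binomtest(n_above, n=n, p=0.5)
print(f"{n_above}/{n} observations above the hypothesized median, "
      f"p-value={result.pvalue:.4f}")
```

The only assumption used here is that the observations are independent draws from a continuous distribution, which is exactly the kind of weakening of assumptions that non-parametric methods offer.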
For all of the above cases, non-parametric techniques are available to test the relevant hypotheses. These methods are comparatively simple and mathematically less demanding than parametric techniques. At the same time, it is rewarding to explore the data properties they exploit and the mathematical arguments that establish their applicability.
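To make the two-population case concrete, here is a hedged sketch comparing both central location (Mann-Whitney U, a rank-based alternative to the two-sample t-test) and dispersion (Levene's test with a median center, a robust check of equal spread); the simulated exponential samples are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two small samples whose underlying distributions are unknown
group_a = rng.exponential(scale=1.0, size=12)
group_b = rng.exponential(scale=2.0, size=12)

# Mann-Whitney U test: rank-based comparison of the central location
# of two populations, no normality assumption required
u_stat, p_center = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.1f}, p-value={p_center:.4f}")

# Levene's test centered on the median: a robust comparison of dispersion
w_stat, p_disp = stats.levene(group_a, group_b, center="median")
print(f"Levene W={w_stat:.3f}, p-value={p_disp:.4f}")
```

Because both tests operate on ranks or median-centered deviations rather than sample means, they remain valid for skewed data and small samples where the parametric assumptions listed earlier break down.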