A statistical tool that computes a measure of central tendency by discarding a predetermined percentage of the lowest and highest values in a dataset, then calculating the arithmetic mean of the remaining values. For instance, a 10% trim removes 10% of the data points from each end of the sorted dataset, mitigating the impact of outliers on the result. This produces a more robust estimate of the typical value when extreme scores are present.
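The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the function name, the sample data, and the outlier values are invented for the example (library routines such as `scipy.stats.trim_mean` offer the same calculation):

```python
def trimmed_mean(data, proportion):
    """Mean after discarding `proportion` of values from each end of the sorted data."""
    if not 0 <= proportion < 0.5:
        raise ValueError("proportion must be in [0, 0.5)")
    ordered = sorted(data)
    k = int(len(ordered) * proportion)   # number of values dropped per tail
    kept = ordered[k:len(ordered) - k]   # middle slice with both tails removed
    return sum(kept) / len(kept)

# Hypothetical dataset with one low and one high outlier (2 and 95)
scores = [2, 14, 15, 15, 16, 16, 17, 18, 19, 95]
print(trimmed_mean(scores, 0.10))  # 16.25 — outliers excluded
print(sum(scores) / len(scores))   # 22.7  — plain mean, pulled upward by 95
```

Note that the trimmed mean (16.25) sits near the bulk of the data, while the plain arithmetic mean (22.7) is dragged toward the extreme high score.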
This method provides a more stable average than the arithmetic mean, which can be significantly distorted by atypical observations. By excluding extreme values, it yields a more reliable estimate of central tendency, particularly for distributions known to contain outliers or when data collection is prone to errors. Historically, it was developed to overcome the limitations of the ordinary average when dealing with non-normal data or data of uncertain quality.