Question: There will always be anomalies in data that can create gaps in our analytics. We call these outliers, as they tend to be well outside of the normal distribution of the data.
What steps can we take to smooth over this data when we see it? Should we simply delete the outlier, or are there other tactics we can use to normalize such a huge dispersion?
For example, if a Netflix data set showed that someone was 185 years old, we could safely conclude this is an error. Should we get rid of that entry entirely, or are there ways to preserve the data?
Step by Step Solution
There are 3 steps involved in it.
Handling outliers in data is an important step in data preprocessing and analysis. While outliers can be disruptive to statistical analyses and models, it is generally not advisable to simply delete the outlier.
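As one illustration of preserving the record rather than deleting it, here is a minimal pandas sketch. The DataFrame, column names, and the 0-120 plausible-age bound are assumptions for the example, not part of any actual Netflix data set:

```python
import numpy as np
import pandas as pd

# Hypothetical viewer data with one clearly erroneous age (185).
df = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5],
    "age": [23, 41, 185, 35, 29],
})

# Option 1: treat impossible values as missing, then impute.
# The row itself is kept, so the rest of the record (user_id,
# viewing history, etc.) remains available for analysis.
plausible = df["age"].between(0, 120)
df["age_cleaned"] = df["age"].where(plausible, np.nan)
df["age_imputed"] = df["age_cleaned"].fillna(df["age_cleaned"].median())

# Option 2: cap (winsorize) the value at a domain-informed bound
# instead of removing it, limiting its influence on summary statistics.
df["age_capped"] = df["age"].clip(upper=120)

print(df)
```

Marking the value as missing keeps the rest of the row usable for analyses that do not depend on age, while capping keeps a numeric value at the cost of some bias; which trade-off is appropriate depends on the downstream analysis.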
