Introduction to Feature Engineering
Feature engineering is the process of transforming raw data into meaningful inputs that boost machine-learning model performance. A well-crafted feature set can substantially improve accuracy without changing the underlying algorithm.
Handling Missing Data
Missing values come in three flavors: MCAR (Missing Completely At Random), MAR (Missing At Random), and MNAR (Missing Not At Random). Each demands different treatment to avoid bias.
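A minimal sketch of two common treatments in pandas, using a hypothetical df with an income column: mean imputation (defensible under MCAR) and a missingness-indicator column (useful when the gap itself may carry signal, as under MNAR):

```python
import numpy as np
import pandas as pd

# Hypothetical data with missing incomes
df = pd.DataFrame({"age": [25, 32, 47, 51],
                   "income": [40_000, np.nan, 55_000, np.nan]})

# Mean imputation: reasonable when values are missing completely at random
df["income_imputed"] = df["income"].fillna(df["income"].mean())

# Indicator column: keeps the fact of missingness as a feature (helpful for MNAR)
df["income_missing"] = df["income"].isnull().astype(int)
```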
Handling Outliers
Outliers are data points that deviate markedly from others. Detecting and treating them prevents skewed models.
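One widely used detector is the 1.5×IQR rule; here is a small sketch on made-up values:

```python
import pandas as pd

values = pd.Series([12, 14, 15, 16, 14, 13, 98])  # 98 looks suspicious

q1, q3 = values.quantile(0.25), values.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

print(values[(values < lower) | (values > upper)])  # flags 98
```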
Feature Scaling
Algorithms that rely on distance, like KNN, demand comparable feature magnitudes.
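A short sketch with scikit-learn's two most common scalers on toy data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # columns on very different scales

X_std = StandardScaler().fit_transform(X)     # zero mean, unit variance per column
X_minmax = MinMaxScaler().fit_transform(X)    # rescales each column to [0, 1]
```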
Data Encoding
Transform categorical variables into numbers so models can interpret them.
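A sketch of two standard encodings on a hypothetical frame: one-hot for nominal categories and an explicit ordinal map for ordered ones:

```python
import pandas as pd

df = pd.DataFrame({"city": ["Paris", "Tokyo", "Paris"], "size": ["S", "M", "L"]})

# One-hot encoding for nominal categories (no inherent order)
onehot = pd.get_dummies(df["city"], prefix="city")

# Ordinal mapping for ordered categories
df["size_encoded"] = df["size"].map({"S": 0, "M": 1, "L": 2})
```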
Feature Selection
Pick features that matter, drop those that don't.
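As one illustration, scikit-learn's SelectKBest ranks features by a univariate score; this sketch keeps the two strongest features of the iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

# Keep the 2 features with the highest ANOVA F-score against the target
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)
print(selector.get_support())  # boolean mask of the kept features
```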
Handling Imbalanced Data
Class imbalance biases models toward the majority class. Resampling and reweighting techniques mitigate this.
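Two common remedies, sketched on a toy frame: class reweighting in scikit-learn and naive minority oversampling (libraries such as imbalanced-learn offer smarter variants like SMOTE):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Option 1: reweight classes instead of resampling
clf = LogisticRegression(class_weight="balanced")

# Option 2: naive oversampling of the minority class (toy 9:1 data)
df = pd.DataFrame({"x": range(10), "y": [0] * 9 + [1]})
minority = df[df["y"] == 1]
balanced = pd.concat([df, resample(minority, replace=True, n_samples=8, random_state=0)])
```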
Exploratory Data Analysis (EDA)
Exploratory Data Analysis (EDA) is a critical step in the machine learning pipeline that comes BEFORE feature engineering. EDA helps you understand your data, discover patterns, identify anomalies, detect outliers, test hypotheses, and check assumptions through summary statistics and graphical representations.
Key questions EDA helps answer:
- How many columns are numerical vs. categorical?
- What does the data distribution look like?
- Are there missing values?
- Are there outliers?
- Is the data imbalanced (for classification problems)?
- What are the correlations between features?
- Are there any trends or patterns?
Example: in a loan-default dataset, EDA might reveal:
- Age distribution of customers (histogram)
- Income levels (box plot for outliers)
- Correlation between income and loan amount (scatter plot)
- Missing values in employment history
- Class imbalance (5% defaults vs 95% non-defaults)
Two Main Types of EDA
1. Descriptive Statistics
Purpose: Summarize and visualize what the data looks like
A. Central Tendency:
• Mean (Average): μ = Σxᵢ / n
Example: Average income = $50,000 (Sensitive to outliers)
• Median: Middle value when sorted
Example: Median income = $45,000 (Robust to outliers)
• Mode: Most frequent value
Example: Most common age = 35 years
B. Variability (Spread):
• Variance: σ² = Σ(xᵢ - μ)² / n (Measures how spread out data is)
• Standard Deviation: σ = √variance
68% of data within 1σ, 95% within 2σ, 99.7% within 3σ (for normal distribution)
• Interquartile Range (IQR): Q3 - Q1
Middle 50% of data, robust to outliers
C. Correlation & Associations:
• Pearson Correlation: r = Cov(X,Y) / (σₓ × σᵧ)
Range: -1 to +1
r = +1: Perfect positive correlation
r = 0: No linear correlation
r = -1: Perfect negative correlation
• Thresholds: |r| > 0.7: Strong, 0.3 ≤ |r| ≤ 0.7: Moderate, |r| < 0.3: Weak
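All of these are one-liners in pandas; the incomes below are made up to mirror the example above:

```python
import pandas as pd

incomes = pd.Series([30_000, 42_000, 45_000, 50_000, 48_000, 250_000])  # one extreme value

print(incomes.mean())    # pulled upward by the outlier
print(incomes.median())  # robust middle value
print(incomes.std())     # sample standard deviation
iqr = incomes.quantile(0.75) - incomes.quantile(0.25)  # middle 50% spread

df = pd.DataFrame({"income": incomes, "loan": incomes * 0.4})
print(df.corr(method="pearson"))  # r = +1.0: loan is a perfect linear function of income
```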
2. Inferential Statistics
Purpose: Make inferences or generalizations about the population from the sample
Key Question: Can we claim this effect exists in the larger population, or is it just by chance?
A. Hypothesis Testing:
• Null Hypothesis (H₀): No effect exists (e.g., "Mean of Group A = Mean of Group B")
• Alternative Hypothesis (H₁): Effect exists (e.g., "Mean of Group A ≠ Mean of Group B")
• P-value: Probability of observing data if H₀ is true
p < 0.05: Reject H₀ (effect is statistically significant)
p ≥ 0.05: Fail to reject H₀ (not enough evidence)
Example:
• H₀: "There is no difference between positive and negative movie review lengths"
• H₁: "Negative reviews are longer than positive reviews"
• After t-test: p = 0.003 (< 0.05)
• Conclusion: Reject H₀ → Negative reviews ARE significantly longer
B. Confidence Intervals:
• Range where true population parameter likely lies
• 95% CI: We're 95% confident the true value is within this range
• Example: "Average customer age is 35 ± 2 years (95% CI: [33, 37])"
C. Effect Size:
• Cohen's d = (mean₁ - mean₂) / pooled_std
• Small effect: d = 0.2, Medium: d = 0.5, Large: d = 0.8
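A sketch of the movie-review example with SciPy on simulated word counts (note that scipy.stats.ttest_ind is two-sided by default):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
neg_lengths = rng.normal(120, 20, 200)  # simulated negative-review word counts
pos_lengths = rng.normal(110, 20, 200)  # simulated positive-review word counts

t_stat, p_value = stats.ttest_ind(neg_lengths, pos_lengths)
print(p_value)  # p < 0.05 here, so we reject H0

# Cohen's d with a pooled standard deviation
pooled_std = np.sqrt((neg_lengths.var(ddof=1) + pos_lengths.var(ddof=1)) / 2)
d = (neg_lengths.mean() - pos_lengths.mean()) / pooled_std
```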
Algorithm Steps for EDA
1. Understand the Data: df.shape, df.info(), df.describe(), df.head()
2. Handle Missing Values: Identify (df.isnull().sum()), Visualize, Decide
3. Analyze Distributions: Histograms, count plots, box plots
4. Check for Imbalance: Count target classes, plot distribution
5. Correlation Analysis: Correlation matrix, heatmap, identify multicollinearity
6. Statistical Testing: Compare groups (t-test, ANOVA), test assumptions, calculate effect sizes
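Sketched in pandas, assuming a hypothetical loans.csv with a default target column:

```python
import pandas as pd

df = pd.read_csv("loans.csv")  # hypothetical file and column names

print(df.info())                                    # step 1: shape, types, non-null counts
print(df.isnull().sum())                            # step 2: missing values per column
df.hist()                                           # step 3: distributions (needs matplotlib)
print(df["default"].value_counts(normalize=True))   # step 4: class balance
print(df.corr(numeric_only=True))                   # step 5: correlation matrix
```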
Use Cases and Applications
- Healthcare: Analyzing patient data before building disease prediction models
- Finance: Understanding customer demographics before credit scoring
- E-commerce: Analyzing purchase patterns before recommendation systems
- Marketing: Understanding customer segments before targeted campaigns
- Time Series: Checking for seasonality and trends in sales data
Summary & Key Takeaways
Exploratory Data Analysis is the foundation of any successful machine learning project. It combines descriptive statistics (mean, median, variance, correlation) with inferential statistics (hypothesis testing, confidence intervals) to understand data deeply.
Descriptive EDA answers: "What is happening in the dataset?"
Inferential EDA answers: "Can we claim this effect exists in the larger population?"
Remember: Data → EDA → Feature Engineering → ML → Deployment
Feature Transformation
Feature transformation creates new representations of data to capture non-linear patterns. Techniques like polynomial features, binning, and mathematical transformations unlock hidden relationships.
Mathematical Foundations
Polynomial Features: Generate powers and interaction terms up to a chosen degree
• Degree 2 example: For features (x, y) → (1, x, y, x², xy, y²)
• 2 features with degree=2 yield 6 features total (including the bias term)
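scikit-learn's PolynomialFeatures implements exactly this expansion:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2.0, 3.0]])  # one sample with two features: x = 2, y = 3

poly = PolynomialFeatures(degree=2, include_bias=True)
print(poly.fit_transform(X))  # [[1. 2. 3. 4. 6. 9.]] -> (1, x, y, x², xy, y²)
```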
Binning: Convert continuous → categorical
• Equal-width: Divide range into equal intervals
• Quantile: Each bin has equal number of samples
• Example: Age (0-100) → [0-18], [19-35], [36-60], [61+]
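pandas covers both strategies; the custom bins below mirror the age example:

```python
import pandas as pd

ages = pd.Series([5, 17, 25, 40, 70, 90])

# Custom bins matching the age example (pd.cut also supports bins=k for equal-width)
groups = pd.cut(ages, bins=[0, 18, 35, 60, 100], labels=["0-18", "19-35", "36-60", "61+"])

# Quantile binning: each bin holds (roughly) the same number of samples
quartiles = pd.qcut(ages, q=4, labels=["Q1", "Q2", "Q3", "Q4"])
```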
Mathematical Transformations:
• Square Root: √x (reduces right skew)
• Log Transform: log(1 + x) (strongly reduces right skew; the +1 handles zeros)
• Box-Cox: λ = 0: log(x), λ ≠ 0: (x^λ - 1)/λ (requires x > 0)
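All three are available through NumPy and SciPy (scipy.stats.boxcox fits λ automatically and requires strictly positive input):

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 5.0, 10.0, 100.0, 1000.0])  # right-skewed values

x_sqrt = np.sqrt(x)               # mild skew reduction
x_log = np.log1p(x)               # log(1 + x), safe at x = 0
x_boxcox, lam = stats.boxcox(x)   # fits the best lambda; x must be > 0
```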
Use Cases
- Polynomial features for non-linear house price prediction
- Binning age into groups for marketing segmentation
- Log transformation for right-skewed income data
Feature Creation
Feature creation derives new features from existing ones using domain knowledge. Interaction terms, ratios, and domain-specific calculations enhance model performance.
Mathematical Foundations
Interaction Features: feature₁ × feature₂
• Example: advertising_budget × seasonality → total_impact
• Why: Captures how one feature's effect depends on another
Ratio Features: feature₁ / feature₂
• Example: price/sqft, income/age
Domain-Specific Features:
• BMI = weight(kg) / height²(m²)
• Speed = distance / time
• Profit margin = (revenue - cost) / cost
Time-Based Features:
• Extract: year, month, day, weekday, hour
• Create: is_weekend, is_holiday, season
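A small pandas sketch combining a ratio feature with datetime extraction (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "price": [300_000, 450_000],
    "sqft": [1500, 1800],
    "timestamp": pd.to_datetime(["2024-03-16", "2024-03-18"]),
})

df["price_per_sqft"] = df["price"] / df["sqft"]        # ratio feature

df["weekday"] = df["timestamp"].dt.dayofweek           # Monday = 0
df["is_weekend"] = (df["weekday"] >= 5).astype(int)    # Saturday/Sunday
df["month"] = df["timestamp"].dt.month                 # basis for season features
```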
Use Cases
- BMI from height and weight in healthcare prediction
- Click-through rate = clicks / impressions in digital marketing
- Revenue = price × quantity in retail analytics
Dimensionality Reduction
Dimensionality reduction lowers the number of features while preserving as much information as possible. PCA (Principal Component Analysis) projects high-dimensional data onto lower dimensions by finding the directions of maximum variance.
PCA Mathematical Foundations
1. Standardize data: X_scaled = (X - μ) / σ
2. Compute covariance matrix: Cov = (1/n) X^T X (valid because the standardized data is zero-mean)
3. Calculate eigenvalues and eigenvectors
4. Sort eigenvectors by eigenvalues (descending)
5. Select top k eigenvectors (principal components)
6. Transform: X_new = X × PC_matrix
Explained Variance: λᵢ / Σλⱼ
Cumulative Variance: Shows total information preserved
Why PCA Works:
• Removes correlated features
• Captures maximum variance in fewer dimensions
• Components are orthogonal (no correlation)
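In practice scikit-learn's PCA wraps steps 2-6; you standardize the data yourself and read off the explained variance:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)   # step 1: standardize

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)             # steps 2-6 handled internally

print(pca.explained_variance_ratio_)           # λᵢ / Σλⱼ for each component
print(pca.explained_variance_ratio_.cumsum())  # cumulative variance preserved
```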
Use Cases
- Image compression (reduce pixel dimensions)
- Genomics (thousands of genes → few principal components)
- Visualization (project high-D data to 2D for plotting)
- Speed up training (fewer features = faster models)
Common Mistakes
- ⚠️ Fitting PCA on the full dataset before the train-test split (data leakage); fit on training data only
- ⚠️ Using PCA with categorical features (PCA is for numerical data)
- ⚠️ Losing interpretability (PCs are linear combinations)