Introduction to Feature Engineering

Feature engineering is the process of transforming raw data into meaningful inputs that improve machine-learning model performance. A well-crafted feature set can often improve accuracy by 10-30% without changing the underlying algorithm.

Key Idea: 💡 Thoughtful features provide the model with clearer patterns, like lenses sharpening a blurry picture.

Handling Missing Data

Missing values come in three flavors: MCAR (Missing Completely At Random), MAR (Missing At Random), and MNAR (Missing Not At Random). Each demands different treatment to avoid bias.

Real Example: A hospital's patient records often have missing cholesterol values because the test was not ordered for healthy young adults (MAR: the missingness depends on observed variables such as age, not on the cholesterol value itself).
💡 Mean/median imputation is safest when data is MCAR; under MAR, imputing conditionally on other features (e.g. model-based imputation) usually introduces less bias.
⚠️ Using mean imputation on skewed data can distort distributions.
✅ Split into train and test first, then fit imputation statistics on the training set only and apply them to the test set, as in the sketch below.
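
A minimal sketch of leakage-free median imputation with scikit-learn; the toy patient table and the column names (age, cholesterol, target) are hypothetical:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer

# Toy patient records with missing cholesterol values (hypothetical data).
df = pd.DataFrame({
    "age": [23, 45, 31, 60, 28, 52, 39, 47],
    "cholesterol": [np.nan, 230.0, 180.0, 250.0, np.nan, 210.0, 190.0, np.nan],
    "target": [0, 1, 0, 1, 0, 1, 0, 1],
})

X = df.drop(columns=["target"])
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Fit the imputer on the training split only, then reuse it on the test split.
imputer = SimpleImputer(strategy="median")     # median: robust to skewed values
X_train_imp = imputer.fit_transform(X_train)   # learns medians from train only
X_test_imp = imputer.transform(X_test)         # applies the training medians: no leakage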

Handling Outliers

Outliers are data points that deviate markedly from others. Detecting and treating them prevents skewed models.

💡 The IQR method is robust to non-normal data.
⚠️ Removing legitimate extreme values can erase important signals.
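
A small sketch of the IQR rule on a hypothetical income column; the 1.5 × IQR fences and the capping (winsorizing) step are common conventions, not a prescription:

import pandas as pd

# Hypothetical incomes with one extreme value.
income = pd.Series([32_000, 41_000, 38_000, 45_000, 39_000, 400_000])

q1, q3 = income.quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # the usual 1.5*IQR fences

outliers = income[(income < lower) | (income > upper)]
print(outliers)                                  # flags the 400,000 entry

# Capping keeps the row but limits its influence; dropping is the alternative.
income_capped = income.clip(lower, upper)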

Feature Scaling

Algorithms that rely on distances or gradients, like KNN, SVM, and k-means, demand comparable feature magnitudes; otherwise features with large values dominate. Standardization and min-max scaling are the two most common remedies.
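
A brief scikit-learn sketch contrasting standardization and min-max scaling on made-up age/income values; note that the scalers are fit on the training data only:

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Two features on very different scales (hypothetical values).
X_train = np.array([[25, 50_000], [32, 64_000], [47, 120_000], [51, 90_000]], dtype=float)
X_test = np.array([[38, 75_000]], dtype=float)

# Standardization: mean 0, standard deviation 1 per feature.
std = StandardScaler().fit(X_train)      # fit on train only
X_train_std = std.transform(X_train)
X_test_std = std.transform(X_test)

# Min-max scaling: squeeze each feature into [0, 1].
mm = MinMaxScaler().fit(X_train)
X_train_mm = mm.transform(X_train)
X_test_mm = mm.transform(X_test)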

Data Encoding

Categorical variables must be converted into numbers before most models can use them: one-hot encoding for nominal categories, ordinal encoding when the categories have a natural order.
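
A minimal pandas sketch on hypothetical city/size columns: one-hot encoding for the nominal column and an explicit mapping for the ordinal one (scikit-learn's OneHotEncoder and OrdinalEncoder are the pipeline-friendly equivalents):

import pandas as pd

df = pd.DataFrame({
    "city": ["Paris", "Lyon", "Paris", "Nice"],       # nominal: no natural order
    "size": ["small", "large", "medium", "small"],    # ordinal: has a natural order
})

# One-hot encoding for the nominal column.
city_dummies = pd.get_dummies(df["city"], prefix="city")

# Ordinal encoding with the order stated explicitly.
size_order = {"small": 0, "medium": 1, "large": 2}
df["size_encoded"] = df["size"].map(size_order)

encoded = pd.concat([df.drop(columns=["city"]), city_dummies], axis=1)
print(encoded)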

Feature Selection

Select the features that carry predictive signal and drop redundant or irrelevant ones; common approaches include univariate filters, correlation pruning, and model-based (embedded) selection.
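
One possible filter-style sketch with scikit-learn's SelectKBest on synthetic data; k=3 is an arbitrary choice for illustration:

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic data: 10 features, only 3 of them informative.
X, y = make_classification(n_samples=200, n_features=10, n_informative=3, random_state=0)

# Filter method: keep the k features with the strongest univariate relation to y.
selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)
X_selected = selector.transform(X)
print(selector.get_support(indices=True))   # indices of the kept features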

Handling Imbalanced Data

Class imbalance biases models toward the majority class. Resampling (oversampling the minority, undersampling the majority) and class weighting can mitigate this, as sketched below.
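
A rough sketch of both options on synthetic 95/5 data: class weighting and naive oversampling of the minority class (in a real project, resample the training split only):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Synthetic 95% / 5% imbalance, similar to the loan-default example below.
X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)

# Option 1: let the model reweight classes instead of resampling.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

# Option 2: naive oversampling of the minority class.
X_min, y_min = X[y == 1], y[y == 1]
X_min_up, y_min_up = resample(X_min, y_min, replace=True,
                              n_samples=int((y == 0).sum()), random_state=0)
X_bal = np.vstack([X[y == 0], X_min_up])
y_bal = np.concatenate([y[y == 0], y_min_up])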

Exploratory Data Analysis (EDA)

Exploratory Data Analysis (EDA) is a critical step in the machine learning pipeline that comes BEFORE feature engineering. EDA helps you understand your data, discover patterns, identify anomalies, detect outliers, test hypotheses, and check assumptions through summary statistics and graphical representations.

Key Questions EDA Answers:
  • How many columns are numerical vs. categorical?
  • What does the data distribution look like?
  • Are there missing values?
  • Are there outliers?
  • Is the data imbalanced (for classification problems)?
  • What are the correlations between features?
  • Are there any trends or patterns?

Real-World Example: Imagine you're analyzing customer data for a bank to predict loan defaults. EDA helps you understand:
  • Age distribution of customers (histogram)
  • Income levels (box plot for outliers)
  • Correlation between income and loan amount (scatter plot)
  • Missing values in employment history
  • Class imbalance (5% defaults vs 95% non-defaults)

Two Main Types of EDA

1. Descriptive Statistics

Purpose: Summarize and visualize what the data looks like

A. Central Tendency:
Mean (Average): μ = Σxᵢ / n
  Example: Average income = $50,000 (Sensitive to outliers)
Median: Middle value when sorted
  Example: Median income = $45,000 (Robust to outliers)
Mode: Most frequent value
  Example: Most common age = 35 years

B. Variability (Spread):
Variance: σ² = Σ(xᵢ - μ)² / n (Measures how spread out data is)
Standard Deviation: σ = √variance
  68% of data within 1σ, 95% within 2σ, 99.7% within 3σ (for normal distribution)
Interquartile Range (IQR): Q3 - Q1
  Middle 50% of data, robust to outliers

C. Correlation & Associations:
Pearson Correlation: r = Cov(X,Y) / (σₓ × σᵧ)
  Range: -1 to +1
  r = +1: Perfect positive correlation
  r = 0: No linear correlation
  r = -1: Perfect negative correlation
Thresholds (rules of thumb): |r| > 0.7: Strong, |r| = 0.3-0.7: Moderate, |r| < 0.3: Weak
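
A short pandas sketch computing these descriptive statistics on hypothetical income and loan-amount samples:

import pandas as pd

# Hypothetical samples; the 250,000 income is a deliberate outlier.
income = pd.Series([42_000, 45_000, 39_000, 45_000, 44_000, 250_000])
loan = pd.Series([12_000, 13_500, 11_000, 14_000, 12_800, 90_000])

print(income.mean(), income.median(), income.mode()[0])   # mean is pulled up by the outlier
print(income.var(ddof=0), income.std(ddof=0))             # population variance and std (divide by n)
q1, q3 = income.quantile([0.25, 0.75])
print(q3 - q1)                                            # IQR: spread of the middle 50%

print(income.corr(loan))                                  # Pearson r between income and loan amount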

2. Inferential Statistics

Purpose: Make inferences or generalizations about the population from the sample

Key Question: Can we claim this effect exists in the larger population, or is it just by chance?

A. Hypothesis Testing:
Null Hypothesis (H₀): No effect exists (e.g., "Mean of Group A = Mean of Group B")
Alternative Hypothesis (H₁): Effect exists (e.g., "Mean of Group A ≠ Mean of Group B")
P-value: Probability of observing data if H₀ is true
  p < 0.05: Reject H₀ (effect is statistically significant)
  p ≥ 0.05: Fail to reject H₀ (not enough evidence)

Example:
• H₀: "There is no difference between positive and negative movie review lengths"
• H₁: "Negative reviews are longer than positive reviews"
• After t-test: p = 0.003 (< 0.05)
• Conclusion: Reject H₀ → Negative reviews ARE significantly longer
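
A sketch of such a test with SciPy on simulated review lengths; the data is made up, and the alternative= argument assumes SciPy ≥ 1.6:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated word counts; negative reviews are drawn slightly longer on purpose.
negative_lengths = rng.normal(loc=120, scale=30, size=200)
positive_lengths = rng.normal(loc=105, scale=30, size=200)

# One-sided Welch t-test: H1 says negative reviews are longer.
t_stat, p_value = stats.ttest_ind(negative_lengths, positive_lengths,
                                  equal_var=False, alternative="greater")
print(t_stat, p_value)   # reject H0 if p_value < 0.05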

B. Confidence Intervals:
• Range where true population parameter likely lies
• 95% CI: We're 95% confident the true value is within this range
• Example: "Average customer age is 35 ± 2 years (95% CI: [33, 37])"

C. Effect Size:
• Cohen's d = (mean₁ - mean₂) / pooled_std
• Small effect: d = 0.2, Medium: d = 0.5, Large: d = 0.8
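
A NumPy/SciPy sketch of a 95% confidence interval and Cohen's d on simulated groups; the numbers are illustrative only:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=35, scale=8, size=120)   # e.g. customer ages in segment A
group_b = rng.normal(loc=32, scale=8, size=120)

# 95% confidence interval for the mean of group_a (t distribution).
ci_low, ci_high = stats.t.interval(0.95, len(group_a) - 1,
                                   loc=group_a.mean(), scale=stats.sem(group_a))

# Cohen's d with a pooled standard deviation.
n_a, n_b = len(group_a), len(group_b)
pooled_std = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                      (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
cohens_d = (group_a.mean() - group_b.mean()) / pooled_std
print((ci_low, ci_high), cohens_d)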

Algorithm Steps for EDA

1. Load and Inspect Data: df.head(), df.info(), df.describe()
2. Handle Missing Values: Identify (df.isnull().sum()), Visualize, Decide
3. Analyze Distributions: Histograms, count plots, box plots
4. Check for Imbalance: Count target classes, plot distribution
5. Correlation Analysis: Correlation matrix, heatmap, identify multicollinearity
6. Statistical Testing: Compare groups (t-test, ANOVA), test assumptions, calculate effect sizes
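
A condensed sketch of steps 1-5 with pandas, matplotlib, and seaborn; the file name loans.csv and the default column are hypothetical (step 6 is the hypothesis-testing workflow sketched in the inferential-statistics section above):

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv("loans.csv")        # hypothetical file with a 'default' target column

# 1. Load and inspect
print(df.head())
df.info()
print(df.describe())

# 2. Missing values
print(df.isnull().sum())

# 3. Distributions of every numeric feature
df.hist(figsize=(10, 8))
plt.show()

# 4. Class balance of the target
print(df["default"].value_counts(normalize=True))

# 5. Correlations and multicollinearity (numeric columns only)
corr = df.select_dtypes("number").corr()
sns.heatmap(corr, annot=True, cmap="coolwarm")
plt.show()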

💡 EDA typically takes 30-40% of total project time. Good EDA reveals which features to engineer.
⚠️ Common Mistakes: Skipping EDA, not checking outliers before scaling, ignoring missing value patterns, overlooking class imbalance, ignoring multicollinearity.
✅ Best Practices: ALWAYS start with EDA, visualize EVERY feature, check correlations with target, document insights, use both descriptive and inferential statistics.

Use Cases and Applications

  • Healthcare: Analyzing patient data before building disease prediction models
  • Finance: Understanding customer demographics before credit scoring
  • E-commerce: Analyzing purchase patterns before recommendation systems
  • Marketing: Understanding customer segments before targeted campaigns
  • Time Series: Checking for seasonality and trends in sales data

Summary & Key Takeaways

Exploratory Data Analysis is the foundation of any successful machine learning project. It combines descriptive statistics (mean, median, variance, correlation) with inferential statistics (hypothesis testing, confidence intervals) to understand data deeply.

Descriptive EDA answers: "What is happening in the dataset?"
Inferential EDA answers: "Can we claim this effect exists in the larger population?"

Remember: Data → EDA → Feature Engineering → ML → Deployment

Feature Transformation

Feature transformation creates new representations of data to capture non-linear patterns. Techniques like polynomial features, binning, and mathematical transformations unlock hidden relationships.

Real Example: Predicting house prices with polynomial features (adding x² terms) improves model fit for non-linear relationships between square footage and price.

Mathematical Foundations

Polynomial Features: Transform (x₁, x₂) → (1, x₁, x₂, x₁², x₁x₂, x₂²)
• Degree 2 on 2 features yields 6 features in total (including the bias term 1)

Binning: Convert continuous → categorical
• Equal-width: Divide range into equal intervals
• Quantile: Each bin has equal number of samples
• Example: Age (0-100) → [0-18], [19-35], [36-60], [61+]

Mathematical Transformations:
• Square Root: √x (reduces right skew)
• Log Transform: log(1 + x) (strongly reduces right skew; the +1 keeps zero values valid)
• Box-Cox: λ = 0: log(x), λ ≠ 0: (x^λ - 1)/λ
💡 Polynomial features let linear models fit curves, but they grow combinatorially: degree=3 on 10 features already creates 286 features!
⚠️ Always scale features after polynomial transformation to prevent magnitude issues.
✅ Start with degree=2 and visualize distributions before/after transformation.
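
A short sketch of these transformations; the house and age values are made up, and get_feature_names_out assumes scikit-learn ≥ 1.0:

import numpy as np
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical houses: square footage and number of rooms.
X = np.array([[850, 2], [1200, 3], [2000, 4], [3100, 5]], dtype=float)

# Degree-2 expansion: (1, x1, x2, x1^2, x1*x2, x2^2) -> 6 columns.
poly = PolynomialFeatures(degree=2, include_bias=True)
X_poly = poly.fit_transform(X)
print(poly.get_feature_names_out())

# Log transform for right-skewed values; log1p keeps zeros valid.
sqft_log = np.log1p(X[:, 0])

# Hand-chosen bins vs. quantile bins for an age column.
ages = pd.Series([5, 17, 24, 33, 41, 58, 67, 80])
age_groups = pd.cut(ages, bins=[0, 18, 35, 60, 120], labels=["0-18", "19-35", "36-60", "61+"])
age_quartiles = pd.qcut(ages, q=4)    # each bin gets an equal number of samples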

Use Cases

  • Polynomial features for non-linear house price prediction
  • Binning age into groups for marketing segmentation
  • Log transformation for right-skewed income data

Feature Creation

Creating new features from existing ones based on domain knowledge. Interaction terms, ratios, and domain-specific calculations enhance model performance.

Real Example: E-commerce revenue = price × quantity. Profit margin = (selling_price - cost_price) / cost_price. These derived features often have stronger predictive power than raw features.

Mathematical Foundations

Interaction Terms: feature₁ × feature₂
• Example: advertising_budget × seasonality → total_impact
• Why: Captures how one feature's effect depends on another

Ratio Features: feature₁ / feature₂
• Example: price/sqft, income/age

Domain-Specific Features:
• BMI = weight(kg) / height²(m²)
• Speed = distance / time
• Profit margin = (revenue - cost) / cost

Time-Based Features:
• Extract: year, month, day, weekday, hour
• Create: is_weekend, is_holiday, season
💡 Interaction terms are especially powerful in linear models - neural networks learn them automatically.
⚠️ Creating features without domain knowledge leads to meaningless combinations.
✅ Always check correlation between new and existing features to avoid redundancy.
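
A pandas sketch deriving revenue, profit margin, and time-based features from a hypothetical sales table:

import pandas as pd

sales = pd.DataFrame({
    "price": [19.9, 5.5, 42.0],
    "quantity": [3, 10, 1],
    "cost_price": [12.0, 3.0, 30.0],
    "timestamp": pd.to_datetime(["2024-03-02", "2024-03-09", "2024-07-15"]),
})

# Interaction and ratio features from domain knowledge.
sales["revenue"] = sales["price"] * sales["quantity"]
sales["profit_margin"] = (sales["price"] - sales["cost_price"]) / sales["cost_price"]

# Time-based features extracted from the timestamp.
sales["month"] = sales["timestamp"].dt.month
sales["weekday"] = sales["timestamp"].dt.weekday
sales["is_weekend"] = sales["weekday"].isin([5, 6])
print(sales)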

Use Cases

  • BMI from height and weight in healthcare prediction
  • Click-through rate = clicks / impressions in digital marketing
  • Revenue = price × quantity in retail analytics

Dimensionality Reduction

Reducing the number of features while preserving information. PCA (Principal Component Analysis) projects high-dimensional data onto lower dimensions by finding directions of maximum variance.

Real Example: Image compression and genome analysis with thousands of genes benefit from PCA. First 2-3 principal components often capture 80%+ of variance.

PCA Mathematical Foundations

Algorithm Steps:
1. Standardize data: X_scaled = (X - μ) / σ
2. Compute covariance matrix: Cov = (1/n) X^T X
3. Calculate eigenvalues and eigenvectors
4. Sort eigenvectors by eigenvalues (descending)
5. Select top k eigenvectors (principal components)
6. Transform: X_new = X × PC_matrix

Explained Variance: λᵢ / Σλⱼ
Cumulative Variance: Shows total information preserved

Why PCA Works:
• Removes correlated features
• Captures maximum variance in fewer dimensions
• Components are orthogonal (no correlation)
💡 PCA is unsupervised - it doesn't use the target variable. First PC always captures most variance.
⚠️ Not standardizing before PCA is a critical error - features with large scales will dominate.
✅ Aim for 95% cumulative explained variance when choosing number of components.
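
A minimal scikit-learn sketch on the built-in iris data: standardize first, then let PCA keep enough components for 95% cumulative explained variance:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = load_iris().data                      # 4 numeric features

# Scaling inside the pipeline guarantees PCA sees standardized inputs.
pca_pipe = make_pipeline(StandardScaler(), PCA(n_components=0.95))
X_reduced = pca_pipe.fit_transform(X)

pca = pca_pipe.named_steps["pca"]
print(pca.explained_variance_ratio_)            # lambda_i / sum(lambda_j) per component
print(pca.explained_variance_ratio_.cumsum())   # cumulative variance preserved
print(X_reduced.shape)                          # fewer columns than the original 4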

Use Cases

  • Image compression (reduce pixel dimensions)
  • Genomics (thousands of genes → few principal components)
  • Visualization (project high-D data to 2D for plotting)
  • Speed up training (fewer features = faster models)

Common Mistakes

  • ⚠️ Applying PCA before train-test split (data leakage)
  • ⚠️ Using PCA with categorical features (PCA is for numerical data)
  • ⚠️ Losing interpretability (PCs are linear combinations)