📊 What is Statistics & Why It Matters
The science of collecting, organizing, analyzing, and interpreting data
Introduction
What is it? Statistics is the branch of mathematics that deals with data. It provides methods for making sense of numbers, helping us make informed decisions based on evidence rather than guesswork.
Why it matters: From business forecasting to medical research, sports analysis to government policy, statistics powers nearly every decision in our modern world.
When to use it: Whenever you need to understand patterns, test theories, make predictions, or draw conclusions from data.
Imagine Netflix deciding what shows to produce. They analyze viewing statistics: what genres people watch, when they pause, what they finish. Statistics transforms millions of data points into actionable insights like "Create more thriller series" or "Release episodes on Fridays."
Two Branches of Statistics
Descriptive Statistics
- Summarizes and describes data
- Uses charts, graphs, averages
- Example: "Average class score is 85"
Inferential Statistics
- Makes predictions and inferences
- Tests hypotheses
- Example: "New teaching method improves scores"
Use Cases & Applications
- Healthcare: Clinical trials testing new drugs, disease outbreak tracking
- Business: Customer behavior analysis, sales forecasting, A/B testing
- Government: Census data, economic indicators, policy impact assessment
- Sports: Player performance metrics, game strategy optimization
🎯 Key Takeaways
- Statistics transforms raw data into meaningful insights
- Two main branches: Descriptive (summarizing what happened) and Inferential (drawing conclusions and making predictions beyond the data)
- Essential for decision-making across all fields
- Combines mathematics with real-world problem solving
👥 Population vs Sample
Understanding the difference between the entire group and a subset
Introduction
What is it? A population includes ALL members of a defined group. A sample is a subset selected from that population.
Why it matters: It's usually impossible or impractical to study entire populations. Sampling allows us to make inferences about large groups by studying smaller representative groups.
When to use it: Use populations when you can access all data; use samples when populations are too large, expensive, or time-consuming to study.
Think of tasting soup. You don't need to eat the entire pot (population) to know if it needs salt. A single spoonful (sample) gives you a good idea—as long as you stirred it well first!
Key Differences
| Aspect | Population | Sample |
|---|---|---|
| Size | Entire group (N) | Subset (n) |
| Symbol | N (uppercase) | n (lowercase) |
| Cost | High | Lower |
| Time | Long | Shorter |
| Accuracy | 100% (if measured correctly) | Has sampling error |
Biased Sampling: If your sample doesn't represent the population, your conclusions will be wrong. Example: Surveying only morning shoppers at a store will miss evening customer patterns.
For a sample to be representative, use random sampling. Every member of the population should have an equal chance of being selected.
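A minimal sketch of simple random sampling with Python's standard library (the population of 10,000 IDs is hypothetical):

```python
import random

random.seed(42)                              # reproducible for illustration
population = list(range(1, 10_001))          # hypothetical population, N = 10,000
sample = random.sample(population, k=100)    # simple random sample, n = 100
# random.sample draws without replacement; every member has an equal
# chance of selection, which is what makes the sample representative.
```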
🎯 Key Takeaways
- Population (N): All members of a defined group
- Sample (n): A subset selected from the population
- Good samples are random and representative
- Larger samples generally provide better estimates
📈 Parameters vs Statistics
Population measures vs sample measures
Introduction
What is it? A parameter is a numerical characteristic of a population. A statistic is a numerical characteristic of a sample.
Why it matters: We usually can't measure parameters directly (populations are too large), so we estimate them using statistics from samples.
When to use it: Parameters are what we want to know; statistics are what we can calculate.
You want to know the average height of all students in your country (parameter). You can't measure everyone, so you measure 1,000 students (sample) and calculate their average height (statistic) to estimate the population parameter.
Common Parameters and Statistics
| Measure | Parameter (Population) | Statistic (Sample) |
|---|---|---|
| Mean (Average) | μ (mu) | x̄ (x-bar) |
| Standard Deviation | σ (sigma) | s |
| Variance | σ² | s² |
| Proportion | p | p̂ (p-hat) |
| Size | N | n |
The Relationship
Statistic → Estimates → Parameter
We use statistics (calculated from samples) to estimate parameters (unknown population values).
Scenario: A factory wants to know the average weight of cereal boxes.
- Population: All cereal boxes produced (millions)
- Parameter: μ = true average weight of ALL boxes (unknown)
- Sample: 100 randomly selected boxes
- Statistic: x̄ = 510 grams (calculated from the 100 boxes)
- Inference: We estimate μ ≈ 510 grams
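A small simulation of this scenario (the population and its true mean are invented here purely to show the idea; in practice μ is unknown):

```python
import random
import statistics

random.seed(0)
# Hypothetical population: one weight per box, true mean μ = 510 g
population = [random.gauss(510, 5) for _ in range(1_000_000)]

sample = random.sample(population, k=100)  # the 100 inspected boxes
x_bar = statistics.mean(sample)            # the statistic
print(round(x_bar, 1))                     # ≈ 510: x̄ estimates μ
```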
Confusing symbols! Greek letters (μ, σ, ρ) refer to parameters (population). Roman letters (x̄, s, r) refer to statistics (sample).
🎯 Key Takeaways
- Parameter: Describes a population (usually unknown)
- Statistic: Describes a sample (calculated from data)
- Greek letters = population, Roman letters = sample
- Statistics are used to estimate parameters
🔢 Types of Data
Categorical, Numerical, Discrete, Continuous, Ordinal, Nominal
Introduction
What is it? Data comes in different types, and understanding these types determines which statistical methods you can use.
Why it matters: Using the wrong analysis method for your data type leads to incorrect conclusions. You can't calculate an average of colors!
When to use it: Before any analysis, identify your data type to choose appropriate statistical techniques.
Data Type Hierarchy
Categorical Data
Represents categories or groups (qualitative)
Nominal
Categories with NO order
- Colors: Red, Blue, Green
- Gender: Male, Female, Non-binary
- Country: USA, India, Japan
- Blood Type: A, B, AB, O
Ordinal
Categories WITH meaningful order
- Education: High School < Bachelor's < Master's
- Satisfaction: Poor < Fair < Good < Excellent
- Medal: Bronze < Silver < Gold
- Size: Small < Medium < Large
Numerical Data
Represents quantities (quantitative)
Discrete
Countable, specific values only
- Number of students: 25, 30, 42
- Number of cars: 0, 1, 2, 3...
- Dice roll: 1, 2, 3, 4, 5, 6
- Number of children: 0, 1, 2, 3...
Can't have 2.5 students!
Continuous
Can take any value in a range
- Height: 165.3 cm, 180.7 cm
- Weight: 68.5 kg, 72.3 kg
- Temperature: 23.4°C, 24.7°C
- Time: 3.25 seconds
Infinite precision possible
Ask yourself:
- Is it a label/category? → Categorical
- Is it a number? → Numerical
- Can you count it? → Discrete
- Can you measure it? → Continuous
- Does order matter? → Ordinal (else Nominal)
| Data | Type | Reason |
|---|---|---|
| Zip codes | Categorical (Nominal) | Numbers used as labels, not quantities |
| Test scores (A, B, C, D, F) | Categorical (Ordinal) | Categories with clear order |
| Number of pages in books | Numerical (Discrete) | Countable whole numbers |
| Reaction time in milliseconds | Numerical (Continuous) | Can be measured to any precision |
Just because something is written as a number doesn't make it numerical! Phone numbers, jersey numbers, and zip codes are categorical because they identify categories, not quantities.
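A quick illustration of why type matters, using Python's statistics module (the small datasets are made up):

```python
import statistics

colors = ["red", "blue", "red", "green"]   # nominal (categorical)
pages = [120, 340, 215, 180]               # discrete (numerical)

print(statistics.mode(colors))   # 'red' -- mode works for categories
print(statistics.mean(pages))    # 213.75 -- mean needs quantities
# statistics.mean(colors) would raise TypeError: you can't average labels
```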
🎯 Key Takeaways
- Categorical: Labels/categories (Nominal: no order, Ordinal: has order)
- Numerical: Quantities (Discrete: countable, Continuous: measurable)
- Data type determines which statistical methods to use
- Always identify data type before analysis
📍 Measures of Central Tendency
Mean, Median, Mode - Finding the center of data
Introduction
What is it? Measures of central tendency are single values that represent the "center" or "typical" value in a dataset.
Why it matters: Instead of looking at hundreds of numbers, one central value summarizes the data. "Average salary" tells you more than listing every employee's salary.
When to use it: When you need to summarize data with a single representative value.
Imagine finding the "center" of a group of people standing on a field. Mean is like finding the balance point where they'd balance on a seesaw. Median is literally the middle person. Mode is where the most people are clustered together.
Mathematical Foundations
Mean
x̄ = Σx / n (sample) or μ = Σx / N (population)
Where:
- μ (mu) = population mean or x̄ (x-bar) = sample mean
- Σx = sum of all values
- n = number of values
Steps:
- Add all values together
- Divide by the count of values
Median
If odd number of values: the middle value
If even number of values: the average of the two middle values
Steps:
- Sort values in ascending order
- Find the middle position: (n + 1) / 2
- If between two values, average them
Mode
The value(s) that appear most frequently
Types:
- Unimodal: One mode
- Bimodal: Two modes
- Multimodal: More than two modes
- No mode: All values appear equally
Dataset: Test scores: 65, 70, 75, 80, 85, 90, 95
Mean:
Sum = 65 + 70 + 75 + 80 + 85 + 90 + 95 = 560
Mean = 560 / 7 = 80
Median:
Already sorted. Middle position = (7 + 1) / 2 = 4th value
Median = 80
Mode:
All values appear once. No mode
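The same calculation in Python, as a quick check (standard library only):

```python
import statistics

scores = [65, 70, 75, 80, 85, 90, 95]
print(statistics.mean(scores))       # 80
print(statistics.median(scores))     # 80
print(statistics.multimode(scores))  # every value ties at one occurrence: no clear mode
```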
When to Use Which?
Use Mean
- Data is symmetrical
- No extreme outliers
- Numerical data
- Need to use all data points
Use Median
- Data has outliers
- Data is skewed
- Ordinal data
- Need robust measure
Use Mode
- Categorical data
- Finding most common value
- Discrete data
- Multiple peaks in data
Mean is affected by outliers! In salary data like $30K, $35K, $40K, $45K, $500K, the mean is $130K (misleading!). The median of $40K better represents typical salary.
For skewed data (like income, house prices), always report the median along with the mean. If they're very different, your data has outliers or is skewed!
🎯 Key Takeaways
- Mean: Sum of all values divided by count (affected by outliers)
- Median: Middle value when sorted (resistant to outliers)
- Mode: Most frequent value (useful for categorical data)
- Choose the measure that best represents your data type and distribution
⚡ Outliers
Extreme values that don't fit the pattern
Introduction
What is it? Outliers are data points that are significantly different from other observations in a dataset.
Why it matters: Outliers can indicate data errors, special cases, or important patterns. They can also severely distort statistical analyses.
When to use it: Always check for outliers before analyzing data, especially when calculating means and standard deviations.
In a salary dataset for entry-level employees: $35K, $38K, $40K, $37K, $250K. The $250K is an outlier—maybe it's a data entry error (someone added an extra zero) or a special case (CEO's child). Either way, it needs investigation!
Detection Methods
IQR Method
Most common approach:
- Calculate Q1, Q3, and IQR = Q3 - Q1
- Lower fence = Q1 - 1.5 × IQR
- Upper fence = Q3 + 1.5 × IQR
- Outliers fall outside fences
Z-Score Method
For normal distributions:
- Calculate z-score for each value
- z = (x - μ) / σ
- If |z| > 3: definitely outlier
- If |z| > 2: possible outlier
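Both detection methods in a short sketch (the salary data is the made-up example above; `method="inclusive"` matters for such a tiny sample):

```python
import statistics

data = [35, 38, 40, 37, 250]  # entry-level salaries ($K) from the example above

# IQR method
q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1                                   # 40 - 37 = 3
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # fences: 32.5 and 44.5
print([x for x in data if x < lower or x > upper])  # [250]

# Z-score method (assumes roughly normal data -- illustration only)
mean, sd = statistics.mean(data), statistics.stdev(data)
print([x for x in data if abs((x - mean) / sd) > 2])  # [] -- the outlier inflates
# sd so much that no |z| exceeds 2: one reason the IQR method is often preferred
```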
Never automatically delete outliers! They might be: (1) Valid extreme values, (2) Data entry errors, (3) Important discoveries. Always investigate before removing.
🎯 Key Takeaways
- Outliers are extreme values that differ significantly from other data
- Use IQR method (1.5 × IQR rule) or Z-score method to detect
- Mean is heavily affected by outliers; median is resistant
- Always investigate outliers before deciding to keep or remove
📏 Variance & Standard Deviation
Measuring spread and variability in data
Introduction
What is it? Variance measures the average squared deviation from the mean. Standard deviation is the square root of variance.
Why it matters: Shows how spread out data is. Low values mean data is clustered; high values mean data is scattered.
When to use it: Whenever you need to understand data variability—in finance (risk), manufacturing (quality control), or research (reliability).
Mathematical Formulas
Population variance: σ² = Σ(x - μ)² / N
Where N = population size, μ = population mean
Sample variance: s² = Σ(x - x̄)² / (n - 1)
Where n = sample size, x̄ = sample mean. We use (n-1) for unbiased estimation.
Standard deviation: σ = √σ² (or s = √s²)
Same units as original data, easier to interpret
Dataset: [4, 8, 6, 5, 3, 7]
Step 1: Mean = (4+8+6+5+3+7)/6 = 5.5
Step 2: Deviations: [-1.5, 2.5, 0.5, -0.5, -2.5, 1.5]
Step 3: Squared: [2.25, 6.25, 0.25, 0.25, 6.25, 2.25]
Step 4: Sum = 17.5
Step 5: Variance = 17.5/(6-1) = 3.5
Step 6: Std Dev = √3.5 = 1.87
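The same numbers via the standard library (statistics.variance and statistics.stdev already use the (n-1) sample formulas):

```python
import statistics

data = [4, 8, 6, 5, 3, 7]
print(statistics.variance(data))         # 3.5  (divides by n - 1)
print(round(statistics.stdev(data), 2))  # 1.87 (= √3.5)
```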
🎯 Key Takeaways
- Variance measures average squared deviation from mean
- Standard deviation is square root of variance (same units as data)
- Use (n-1) for sample variance to avoid bias
- Higher values = more spread; lower values = more clustered
🎯 Quartiles & Percentiles
Dividing data into equal parts
Introduction
What is it? Quartiles divide sorted data into 4 equal parts. Percentiles divide data into 100 equal parts.
Why it matters: Shows relative position in a dataset. "90th percentile" means you scored better than 90% of people.
The Five-Number Summary
- Minimum: Smallest value
- Q1 (25th percentile): 25% of data below this
- Q2 (50th percentile/Median): Middle value
- Q3 (75th percentile): 75% of data below this
- Maximum: Largest value
SAT scores: If you score 1350 and that's the 90th percentile, it means you scored higher than 90% of test-takers. Percentiles are perfect for standardized tests!
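A minimal sketch of the five-number summary (the scores are invented; statistics.quantiles interpolates, so other software may give slightly different quartiles):

```python
import statistics

scores = [12, 15, 17, 20, 22, 25, 28, 30, 35, 40]  # hypothetical dataset
q1, q2, q3 = statistics.quantiles(scores, n=4, method="inclusive")
print(min(scores), q1, q2, q3, max(scores))  # Min, Q1, Q2 (median), Q3, Max
```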
🎯 Key Takeaways
- Q1 = 25th percentile, Q2 = median, Q3 = 75th percentile
- Percentiles show relative standing in a dataset
- Five-number summary: Min, Q1, Q2, Q3, Max
- Useful for understanding data distribution
📦 Interquartile Range (IQR)
Middle 50% of data and outlier detection
Introduction
What is it? IQR = Q3 - Q1. It represents the range of the middle 50% of your data.
Why it matters: IQR is resistant to outliers and is the foundation of the 1.5×IQR rule for outlier detection.
The 1.5 × IQR Rule
Lower Fence = Q1 - 1.5 × IQR
Upper Fence = Q3 + 1.5 × IQR
Any value outside these fences is considered an outlier
🎯 Key Takeaways
- IQR = Q3 - Q1 (range of middle 50% of data)
- Resistant to outliers (unlike standard deviation)
- 1.5×IQR rule: standard method for outlier detection
- Box plots visualize IQR and outliers
📉 Skewness
Understanding data distribution shape
Introduction
What is it? Skewness measures the asymmetry of a distribution.
Why it matters: Indicates whether data leans left or right, affecting which statistical methods to use.
Types of Skewness
Negative (Left) Skew
Tail extends to the left
Mean < Median < Mode
Example: Test scores when most students do well
Symmetric (No Skew)
Perfectly balanced
Mean = Median = Mode
Example: Normal distribution
Positive (Right) Skew
Tail extends to the right
Mode < Median < Mean
Example: Income data, house prices
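To check skewness numerically (assuming SciPy is available; the income figures are made up):

```python
from scipy.stats import skew

incomes = [25, 30, 32, 35, 38, 40, 45, 50, 120, 300]  # hypothetical, in $K
print(skew(incomes))  # positive value -> right (positive) skew, as with real income data
```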
🎯 Key Takeaways
- Skewness measures asymmetry in distribution
- Negative skew: tail to left, Mean < Median
- Positive skew: tail to right, Mean > Median
- Symmetric: Mean = Median = Mode
🔗 Covariance
How two variables vary together
Introduction
What is it? Covariance measures how two variables change together.
Why it matters: Shows if variables have a positive, negative, or no relationship.
Formula
Cov(X, Y) = Σ(xᵢ - x̄)(yᵢ - ȳ) / (n - 1)
Interpretation
- Positive: Variables increase together
- Negative: One increases as other decreases
- Zero: No linear relationship
- Problem: Scale-dependent, hard to interpret magnitude
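A small sketch (the paired data is invented; statistics.covariance needs Python 3.10+):

```python
import statistics

hours = [1, 2, 3, 4, 5]        # hypothetical study hours
scores = [55, 60, 70, 75, 90]  # hypothetical exam scores
print(statistics.covariance(hours, scores))  # 21.25: positive -> they rise together
```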
🎯 Key Takeaways
- Covariance measures joint variability of two variables
- Positive: variables move together; Negative: inverse relationship
- Scale-dependent (unlike correlation)
- Foundation for correlation calculation
💞 Correlation
Standardized measure of relationship strength
Introduction
What is it? Correlation coefficient (r) is a standardized measure of linear relationship between two variables.
Why it matters: Always between -1 and +1, making it easy to interpret strength and direction of relationships.
Pearson Correlation Formula
r = Cov(X, Y) / (sₓ × sᵧ)
Covariance divided by the product of the standard deviations
Interpretation Guide
- r = +1: Perfect positive correlation
- r = 0.7 to 0.9: Strong positive
- r = 0.4 to 0.6: Moderate positive
- r = 0.1 to 0.3: Weak positive
- r = 0: No correlation
- r = -0.1 to -0.3: Weak negative
- r = -0.4 to -0.6: Moderate negative
- r = -0.7 to -0.9: Strong negative
- r = -1: Perfect negative correlation
Study hours and exam scores might show, say, r ≈ 0.7 (strong positive): more study hours tend to go with higher scores.
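The same idea in code (same hypothetical data as the covariance sketch; statistics.correlation needs Python 3.10+):

```python
import statistics

hours = [1, 2, 3, 4, 5]
scores = [55, 60, 70, 75, 90]
print(statistics.correlation(hours, scores))  # ≈ 0.98: strong positive, and unit-free
```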
🎯 Key Takeaways
- r ranges from -1 to +1
- Measures strength AND direction of linear relationship
- Scale-independent (unlike covariance)
- Only measures LINEAR relationships
💪 Interpreting Correlation
Correlation vs causation and common pitfalls
The Golden Rule
Just because two variables are correlated does NOT mean one causes the other!
Common Scenarios
- Direct Causation: X causes Y (smoking causes cancer)
- Reverse Causation: Y causes X (not the direction you thought)
- Third Variable: Z causes both X and Y (confounding variable)
- Coincidence: Pure chance with no real relationship
Ice cream sales correlate with drowning deaths.
Does ice cream cause drowning? NO! The third variable is summer weather—more people swim in summer (more drownings) and eat ice cream in summer.
🎯 Key Takeaways
- Correlation shows relationship, NOT causation
- Always consider third variables (confounders)
- Need controlled experiments to prove causation
- Be skeptical of correlation claims in media
🎲 Probability Basics
Foundation of statistical inference
Introduction
What is it? Probability measures the likelihood of an event occurring, ranging from 0 (impossible) to 1 (certain).
Why it matters: Foundation for all statistical inference, hypothesis testing, and prediction.
Basic Formula
P(E) = (number of favorable outcomes) / (total number of possible outcomes)
Key Rules
- Range: 0 ≤ P(E) ≤ 1
- Complement: P(not E) = 1 - P(E)
- Addition (OR): P(A or B) = P(A) + P(B) - P(A and B)
- Multiplication (AND): P(A and B) = P(A) × P(B) [if independent]
Rolling a die:
P(rolling a 4) = 1/6 ≈ 0.167
P(rolling even) = 3/6 = 0.5
P(not rolling a 6) = 5/6 ≈ 0.833
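The same die probabilities, computed exactly with fractions (standard library):

```python
from fractions import Fraction

outcomes = set(range(1, 7))                # fair-die sample space
even = {2, 4, 6}
print(Fraction(len(even), len(outcomes)))  # P(even) = 1/2
print(1 - Fraction(1, 6))                  # complement: P(not a 6) = 5/6
```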
🎯 Key Takeaways
- Probability ranges from 0 to 1
- P(E) = favorable outcomes / total outcomes
- Complement rule: P(not E) = 1 - P(E)
- Foundation for all statistical inference
🔷 Set Theory
Union, intersection, and complement
Introduction
What is it? Set theory provides a mathematical framework for organizing events and calculating probabilities.
Key Concepts
- Union (A ∪ B): A OR B (either event occurs)
- Intersection (A ∩ B): A AND B (both events occur)
- Complement (A'): NOT A (event doesn't occur)
- Mutually Exclusive: A ∩ B = ∅ (can't both occur)
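Python's built-in sets mirror these operations directly (the events here are arbitrary examples):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
print(A | B)                  # union: A OR B
print(A & B)                  # intersection: A AND B
print(set(range(1, 11)) - A)  # complement of A within a sample space of 1..10
print(A.isdisjoint({8, 9}))   # True -> mutually exclusive with A
```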
🎯 Key Takeaways
- Union (∪): OR operation
- Intersection (∩): AND operation
- Complement ('): NOT operation
- Venn diagrams visualize set relationships
🔀 Conditional Probability
Probability given that something else happened
Introduction
What is it? Conditional probability is the probability of event A occurring given that event B has already occurred.
Formula
P(A|B) = P(A and B) / P(B)
Read as: "Probability of A given B"
Drawing cards: P(King | Red card) = ?
P(Red card) = 26/52
P(King and Red) = 2/52
P(King | Red) = (2/52) / (26/52) = 2/26 = 1/13
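The card example, verified with exact fractions:

```python
from fractions import Fraction

p_red = Fraction(26, 52)
p_king_and_red = Fraction(2, 52)  # two red kings: hearts and diamonds
print(p_king_and_red / p_red)     # P(King | Red) = 1/13
```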
🎯 Key Takeaways
- P(A|B) = probability of A given B occurred
- Formula: P(A|B) = P(A and B) / P(B)
- Critical for Bayes' Theorem
- Used in machine learning and diagnostics
🎯 Independence
When events don't affect each other
Introduction
What is it? Two events are independent if the occurrence of one doesn't affect the probability of the other.
Test for Independence
P(A and B) = P(A) × P(B)
OR equivalently:
P(A|B) = P(A)
Examples
- Independent: Coin flips, die rolls with replacement
- Dependent: Drawing cards without replacement, weather on consecutive days
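A quick numeric check of the independence test on fair-die events (events chosen purely for illustration):

```python
from fractions import Fraction

# A = roll is even {2,4,6}; B = roll is 1 or 2 {1,2}; A∩B = {2}
P_A, P_B, P_A_and_B = Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)
print(P_A_and_B == P_A * P_B)  # True -> A and B are independent

# C = roll > 3 {4,5,6}; A∩C = {4,6}, so P(A and C) = 1/3
P_C, P_A_and_C = Fraction(1, 2), Fraction(1, 3)
print(P_A_and_C == P_A * P_C)  # False -> A and C are dependent
```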
🎯 Key Takeaways
- Independent events don't affect each other
- Test: P(A and B) = P(A) × P(B)
- With replacement → independent
- Without replacement → dependent
🧮 Bayes' Theorem
Updating probabilities with new evidence
Introduction
What is it? Bayes' Theorem shows how to update probability based on new information.
Why it matters: Used in medical diagnosis, spam filters, machine learning, and countless applications.
The Formula
P(A|B) = [P(B|A) × P(A)] / P(B)
Where:
- P(A|B) = posterior probability
- P(B|A) = likelihood
- P(A) = prior probability
- P(B) = marginal probability
A disease affects 1% of the population. The test has 95% sensitivity and a 5% false-positive rate.
You test positive. What's probability you have disease?
P(Disease) = 0.01
P(Positive|Disease) = 0.95
P(Positive|No Disease) = 0.05
P(Positive) = 0.01×0.95 + 0.99×0.05 = 0.059
P(Disease|Positive) = (0.95×0.01)/0.059 = 0.161
Only 16.1% chance you have the disease!
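The same calculation in a few lines (numbers straight from the example):

```python
p_disease = 0.01            # prior
p_pos_given_disease = 0.95  # sensitivity
p_pos_given_healthy = 0.05  # false-positive rate
p_pos = p_disease * p_pos_given_disease + (1 - p_disease) * p_pos_given_healthy
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))  # 0.161 -- only ~16% despite a positive test
```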
🎯 Key Takeaways
- Updates probability based on new evidence
- P(A|B) = [P(B|A) × P(A)] / P(B)
- Critical for medical testing and machine learning
- Counter-intuitive results common (base rate matters!)
📊 Probability Mass Function (PMF)
Probabilities for discrete random variables
Introduction
What is it? PMF gives the probability that a discrete random variable equals a specific value.
Why it matters: Used for countable outcomes like dice rolls, coin flips, or number of defects.
Properties
- 0 ≤ P(X = x) ≤ 1 for all x
- Sum of all probabilities = 1
- Only defined for discrete variables
- Visualized with bar charts
Example (fair six-sided die):
P(X = 1) = 1/6
P(X = 2) = 1/6
... and so on
Sum = 6 × (1/6) = 1 ✓
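The fair-die PMF as a dictionary, with the sum-to-1 property checked exactly:

```python
from fractions import Fraction

pmf = {face: Fraction(1, 6) for face in range(1, 7)}
print(pmf[4])                  # P(X = 4) = 1/6
print(sum(pmf.values()) == 1)  # True: probabilities sum to 1
```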
🎯 Key Takeaways
- PMF is for discrete random variables
- Gives P(X = specific value)
- All probabilities sum to 1
- Visualized with bar charts
📈 Probability Density Function (PDF)
Probabilities for continuous random variables
Introduction
What is it? PDF describes probability for continuous random variables. Probability at exact point is 0; we calculate probability over intervals.
Key Differences from PMF
- For continuous (not discrete) variables
- P(X = exact value) = 0
- Calculate P(a < X < b) = area under curve
- Total area under curve = 1
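A sketch of these properties for the standard normal (assuming SciPy is available):

```python
from scipy.stats import norm

# Density at a point is not a probability; P(X = 1.0) itself is 0
print(norm.pdf(1.0))                   # ≈ 0.242 (height of the curve)
print(norm.cdf(1.0) - norm.cdf(-1.0))  # P(-1 < X < 1) ≈ 0.683 (area under curve)
```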
🎯 Key Takeaways
- PDF is for continuous random variables
- Probability = area under curve
- P(X = exact point) = 0
- Total area under PDF = 1
📉 Cumulative Distribution Function (CDF)
Probability up to a value
Introduction
What is it? CDF gives the probability that X is less than or equal to a specific value.
Formula: F(x) = P(X ≤ x)
Properties
- Always non-decreasing
- F(-∞) = 0
- F(+∞) = 1
- P(a < X ≤ b) = F(b) - F(a)
🎯 Key Takeaways
- CDF: F(x) = P(X ≤ x)
- Works for both discrete and continuous
- Always increases from 0 to 1
- Useful for finding percentiles
🪙 Bernoulli Distribution
Single trial with two outcomes
Introduction
What is it? Models a single trial with two outcomes: success (1) or failure (0).
Examples: Coin flip, pass/fail test, yes/no question
Formula
P(X = 1) = p, P(X = 0) = 1 - p
Mean = p, Variance = p(1-p)
🎯 Key Takeaways
- Single trial, two outcomes (0 or 1)
- Parameter: p (probability of success)
- Mean = p, Variance = p(1-p)
- Building block for binomial distribution
🎰 Binomial Distribution
Multiple independent Bernoulli trials
Introduction
What is it? Models the number of successes in n independent Bernoulli trials.
Requirements: Fixed n, same p, independent trials, binary outcomes
Formula
P(X = k) = C(n,k) × p^k × (1-p)^(n-k)
where C(n,k) = n! / (k!(n-k)!)
Mean = np, Variance = np(1-p)
Flip coin 10 times. P(exactly 6 heads)?
n=10, k=6, p=0.5
P(X=6) = C(10,6) × 0.5^6 × 0.5^4 = 210 × 0.000977 ≈ 0.205
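The same computation with math.comb (Python 3.8+):

```python
from math import comb

n, k, p = 10, 6, 0.5
print(comb(n, k) * p**k * (1 - p)**(n - k))  # ≈ 0.205
```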
🎯 Key Takeaways
- n independent trials, probability p each
- Counts number of successes
- Mean = np, Variance = np(1-p)
- Common in quality control and surveys
🔔 Normal Distribution
The bell curve and 68-95-99.7 rule
Introduction
What is it? The most important continuous probability distribution—symmetric, bell-shaped curve.
Why it matters: Many natural phenomena follow normal distribution. Foundation of inferential statistics.
Properties
- Symmetric around mean μ
- Bell-shaped curve
- Mean = Median = Mode
- Defined by μ (mean) and σ (standard deviation)
- Total area under curve = 1
The 68-95-99.7 Rule (Empirical Rule)
- 68% of data within μ ± 1σ
- 95% of data within μ ± 2σ
- 99.7% of data within μ ± 3σ
IQ scores: μ = 100, σ = 15
68% of people have IQ between 85-115
95% have IQ between 70-130
99.7% have IQ between 55-145
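The rule can be confirmed numerically (assuming SciPy; μ and σ taken from the IQ example):

```python
from scipy.stats import norm

mu, sigma = 100, 15  # IQ example
for k in (1, 2, 3):
    p = norm.cdf(mu + k * sigma, mu, sigma) - norm.cdf(mu - k * sigma, mu, sigma)
    print(f"within ±{k}σ: {p:.1%}")  # 68.3%, 95.4%, 99.7%
```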
🎯 Key Takeaways
- Symmetric bell curve, parameters μ and σ
- 68-95-99.7 rule for standard deviations
- Foundation for hypothesis testing
- Central Limit Theorem connects to sampling
⚖️ Hypothesis Testing Introduction
Making decisions from data
Introduction
What is it? Statistical method for testing claims about populations using sample data.
Why it matters: Allows us to make evidence-based decisions and determine if effects are real or due to chance.
The Two Hypotheses
- Null Hypothesis (H₀): Status quo, no effect, no difference
- Alternative Hypothesis (H₁ or Hₐ): What we're trying to prove
Decision Process
- State hypotheses (H₀ and H₁)
- Choose significance level (α)
- Collect data and calculate test statistic
- Find p-value or critical value
- Make decision: Reject H₀ or Fail to reject H₀
Claim: New teaching method improves test scores
H₀: μ = 75 (no improvement)
H₁: μ > 75 (scores improved)
🎯 Key Takeaways
- H₀ = null hypothesis (status quo)
- H₁ = alternative hypothesis (what we test)
- We either reject or fail to reject H₀
- Never "accept" or "prove" anything
🎯 Significance Level (α)
Setting your error tolerance
Introduction
What is it? α (alpha) is the probability of rejecting H₀ when it's actually true (Type I error rate).
Common values: 0.05 (5%), 0.01 (1%), 0.10 (10%)
Interpretation
- α = 0.05: Willing to be wrong 5% of the time
- Lower α: More stringent, harder to reject H₀
- Higher α: More lenient, easier to reject H₀
- Confidence level: 1 - α (e.g., 0.05 → 95% confidence)
🎯 Key Takeaways
- α = probability of Type I error
- Common: α = 0.05 (5% error rate)
- Set before collecting data
- Trade-off between Type I and Type II errors
📊 Standard Error
Measuring sampling variability
Introduction
What is it? Standard error (SE) measures how much sample means vary from the true population mean.
Formula
SE = σ / √n
or estimate: SE = s / √n (when σ is unknown)
Key Points
- Decreases as sample size increases
- Measures precision of sample mean
- Lower SE = better estimate
- Used in confidence intervals and hypothesis tests
🎯 Key Takeaways
- SE = σ / √n
- Measures sampling variability
- Larger samples → smaller SE
- Critical for inference
📏 Z-Test
Hypothesis test for large samples with known σ
When to Use Z-Test
- Sample size n ≥ 30 (large sample)
- Population standard deviation (σ) known
- Testing population mean
- Normal distribution or large n
Formula
z = (x̄ - μ₀) / (σ / √n)
Where:
x̄ = sample mean
μ₀ = hypothesized population mean
σ = population standard deviation
n = sample size
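A minimal z-test sketch (the sample numbers are hypothetical, echoing the teaching-method example; SciPy only supplies the normal CDF):

```python
from math import sqrt
from scipy.stats import norm

# H0: μ = 75 vs H1: μ > 75 (right-tailed); hypothetical sample summary
x_bar, mu0, sigma, n = 78, 75, 10, 36
z = (x_bar - mu0) / (sigma / sqrt(n))  # = 1.8
p_value = norm.sf(z)                   # right-tail area ≈ 0.036
print(z, p_value)                      # p < 0.05 -> reject H0 at α = 0.05
```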
🎯 Key Takeaways
- Use when n ≥ 30 and σ known
- z = (x̄ - μ₀) / SE
- Compare z to critical value or find p-value
- Large |z| = evidence against H₀
🎚️ Z-Score & Critical Values
Standardization and rejection regions
Z-Score (Standardization)
z = (x - μ) / σ
Converts any normal distribution to the standard normal (μ=0, σ=1)
Critical Values
- α = 0.05 (two-tailed): z = ±1.96
- α = 0.05 (one-tailed): z = 1.645
- α = 0.01 (two-tailed): z = ±2.576
🎯 Key Takeaways
- Z-score standardizes values
- Critical values define rejection region
- |z| > critical value → reject H₀
- Common: ±1.96 for 95% confidence
💯 P-Value Method
Probability of observing data if H₀ is true
Introduction
What is it? P-value is the probability of getting results as extreme as observed, assuming H₀ is true.
Decision Rule
- If p-value ≤ α: Reject H₀ (statistically significant)
- If p-value > α: Fail to reject H₀ (not significant)
Interpretation
- p < 0.01: Very strong evidence against H₀
- 0.01 ≤ p < 0.05: Strong evidence against H₀
- 0.05 ≤ p < 0.10: Weak evidence against H₀
- p ≥ 0.10: Little or no evidence against H₀
P-value is NOT the probability that H₀ is true! It's the probability of observing your data IF H₀ were true.
🎯 Key Takeaways
- P-value = P(data | H₀ true)
- Reject H₀ if p ≤ α
- Smaller p-value = stronger evidence against H₀
- Most common approach in modern statistics
↔️ One-Tailed vs Two-Tailed Tests
Directional vs non-directional hypotheses
Two-Tailed Test
- H₁: μ ≠ μ₀ (different, could be higher or lower)
- Testing for any difference
- Rejection regions in both tails
- More conservative
One-Tailed Test
- Right-tailed: H₁: μ > μ₀
- Left-tailed: H₁: μ < μ₀
- Testing for specific direction
- Rejection region in one tail
- More powerful for directional effects
🎯 Key Takeaways
- Two-tailed: testing for any difference
- One-tailed: testing for specific direction
- Choose before collecting data
- Two-tailed is more conservative
📐 T-Test
Hypothesis test for small samples or unknown σ
When to Use T-Test
- Small sample (n < 30)
- Population σ unknown (use sample s)
- Population approximately normal
Formula
t = (x̄ - μ₀) / (s / √n)
Same as the z-test but uses s instead of σ
Follows t-distribution with df = n - 1
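A one-sample t-test sketch using SciPy (the eight scores are invented; ttest_1samp reports a two-tailed p-value):

```python
from scipy import stats

scores = [72, 78, 81, 74, 79, 77, 80, 76]      # hypothetical small sample, n = 8
result = stats.ttest_1samp(scores, popmean=75)  # H0: μ = 75
print(result.statistic, result.pvalue)          # t with df = n - 1 = 7
```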
🎯 Key Takeaways
- Use when σ unknown or n < 30
- t = (x̄ - μ₀) / (s / √n)
- Follows t-distribution
- More variable than z-distribution
🔓 Degrees of Freedom
Independent pieces of information
Introduction
What is it? Degrees of freedom (df) is the number of independent values that can vary in analysis.
Common Formulas
- One-sample t-test: df = n - 1
- Two-sample t-test: df ≈ n₁ + n₂ - 2
- Chi-squared: df = (rows-1)(cols-1)
Why It Matters
- Determines shape of t-distribution
- Higher df → closer to normal distribution
- Affects critical values
🎯 Key Takeaways
- df = number of independent values
- For t-test: df = n - 1
- Higher df → distribution closer to normal
- Critical for finding correct critical values
⚠️ Type I & Type II Errors
False positives and false negatives
The Two Types of Errors
| Decision | H₀ True | H₀ False |
|---|---|---|
| Reject H₀ | Type I Error (α) | Correct! |
| Fail to Reject H₀ | Correct! | Type II Error (β) |
Definitions
- Type I Error (α): Rejecting true H₀ (false positive)
- Type II Error (β): Failing to reject false H₀ (false negative)
- Power = 1 - β: Probability of correctly rejecting false H₀
Type I Error: Telling healthy person they're sick (false alarm)
Type II Error: Telling sick person they're healthy (missed diagnosis)
🎯 Key Takeaways
- Type I: False positive (α)
- Type II: False negative (β)
- Trade-off: decreasing one increases the other
- Power = 1 - β (ability to detect true effect)
χ² Chi-Squared Distribution
Distribution for categorical data analysis
Introduction
What is it? Chi-squared (χ²) distribution is used for testing hypotheses about categorical data.
Properties
- Always positive (ranges from 0 to ∞)
- Right-skewed
- Shape depends on degrees of freedom
- Higher df → more symmetric
Uses
- Goodness of fit test
- Test of independence
- Testing variance
🎯 Key Takeaways
- Used for categorical data
- Always positive, right-skewed
- Shape depends on df
- Foundation for chi-squared tests
✓ Goodness of Fit Test
Testing if data follows expected distribution
Introduction
What is it? Tests whether observed frequencies match expected frequencies from a theoretical distribution.
Formula
χ² = Σ (O - E)² / E
O = observed frequency
E = expected frequency
df = k - 1 (k = number of categories)
Testing if die is fair:
Roll 60 times. Expected: 10 per face
Observed: 8, 12, 11, 9, 10, 10
Calculate χ² and compare to critical value
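SciPy runs this test directly (observed counts from the example; expected frequencies default to uniform, 10 per face):

```python
from scipy.stats import chisquare

observed = [8, 12, 11, 9, 10, 10]       # 60 rolls
result = chisquare(observed)            # expected: 10 per face
print(result.statistic, result.pvalue)  # χ² = 1.0, p ≈ 0.96 -> no evidence the die is unfair
```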
🎯 Key Takeaways
- Tests if observed matches expected distribution
- χ² = Σ(O-E)²/E
- Large χ² = poor fit
- df = number of categories - 1
🔗 Test of Independence
Testing relationship between categorical variables
Introduction
What is it? Tests whether two categorical variables are independent or associated.
Formula
χ² = Σ (O - E)² / E
E = (row total × column total) / grand total
df = (rows - 1)(columns - 1)
Are gender and color preference independent?
Create contingency table, calculate expected frequencies, compute χ², and test against critical value.
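A sketch with a made-up contingency table (SciPy computes the expected counts and df internally):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = gender, columns = preferred color
table = [[20, 30, 25],
         [25, 20, 30]]
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p, dof)  # dof = (2-1)(3-1) = 2; p > 0.05 -> fail to reject independence
```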
🎯 Key Takeaways
- Tests independence of two categorical variables
- Uses contingency tables
- df = (r-1)(c-1)
- Large χ² suggests association
📏 Chi-Squared Variance Test
Testing claims about population variance
Introduction
What is it? Tests hypotheses about population variance or standard deviation.
Formula
χ² = (n - 1)s² / σ₀²
n = sample size
s² = sample variance
σ₀² = hypothesized population variance
df = n - 1
🎯 Key Takeaways
- Tests claims about variance/standard deviation
- χ² = (n-1)s²/σ₀²
- Requires normal population
- Common in quality control
📊 Confidence Intervals
Range of plausible values for parameter
Introduction
What is it? A confidence interval provides a range of values that likely contains the true population parameter.
Why it matters: More informative than point estimates—shows precision and uncertainty.
Formula
For z: CI = x̄ ± z* × (σ/√n)
For t: CI = x̄ ± t* × (s/√n)
Common Confidence Levels
- 90% CI: z* = 1.645
- 95% CI: z* = 1.96
- 99% CI: z* = 2.576
Sample: n=100, x̄=50, s=10
95% CI = 50 ± 1.96(10/√100)
95% CI = 50 ± 1.96 = (48.04, 51.96)
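The same interval in code (numbers from the example; z* is used since n is large):

```python
from math import sqrt

n, x_bar, s = 100, 50, 10
z_star = 1.96                    # 95% confidence
moe = z_star * s / sqrt(n)       # margin of error = 1.96
print(x_bar - moe, x_bar + moe)  # (48.04, 51.96)
```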
🎯 Key Takeaways
- CI = point estimate ± margin of error
- 95% CI most common
- Wider CI = more uncertainty
- Larger sample = narrower CI
± Margin of Error
Measuring estimate precision
Introduction
What is it? Margin of error (MOE) is the ± part of a confidence interval, showing the precision of an estimate.
Formula
MOE = z* × (σ/√n) or t* × (s/√n)
Factors Affecting MOE
- Sample size: Larger n → smaller MOE
- Confidence level: Higher confidence → larger MOE
- Variability: Higher σ → larger MOE
🎯 Key Takeaways
- MOE = critical value × SE
- Indicates precision of estimate
- Inversely related to sample size
- Trade-off between confidence and precision
🔍 Interpreting Confidence Intervals
Common misconceptions and proper interpretation
Correct Interpretation
"We are 95% confident that the true population parameter lies within this interval."
This means: If we repeated this process many times, 95% of the intervals would contain the true parameter.
- WRONG: "There's a 95% probability the parameter is in this interval."
- WRONG: "95% of the data falls in this interval."
- WRONG: "We are 95% sure our sample mean is in this interval."
Using CIs for Hypothesis Testing
- If hypothesized value is INSIDE CI → fail to reject H₀
- If hypothesized value is OUTSIDE CI → reject H₀
- 95% CI corresponds to α = 0.05 test
Report confidence intervals instead of just p-values! CIs provide more information: effect size AND statistical significance.
🎯 Key Takeaways
- Correct interpretation: confidence in the method, not the specific interval
- 95% refers to long-run success rate
- Can use CIs for hypothesis testing
- More informative than p-values alone