🤖 Algorithm Literacy
How AI Really Makes Decisions | Unit 7: Digital Tech & AI Ethics | Years 9-10
AI Isn't Magic—It's Math
When TikTok shows you the perfect video, when Netflix recommends a show, when your bank approves a loan—it feels like magic. But it's not. It's algorithms: step-by-step instructions that computers follow. Understanding how they work is the key to questioning them.
Part 1: What is an Algorithm?
Algorithm: A set of rules for solving a problem or completing a task. Think of it like a recipe: input ingredients → follow steps → get output.
Simple Algorithm Example: Making Toast
- INPUT: Bread slice
- STEP 1: IF bread is frozen → defrost for 30 seconds
- STEP 2: Place bread in toaster
- STEP 3: Set timer based on bread type (white = 2 min, whole grain = 3 min)
- STEP 4: IF toast is burnt → throw away and start over
- OUTPUT: Toasted bread
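The toast recipe above can be written as a short Python function. The function name and return text are just illustrative choices; the point is that the same input always runs through the same steps to produce an output.

```python
def make_toast(bread_type: str, is_frozen: bool) -> str:
    """INPUT: a bread slice -> follow the steps -> OUTPUT: toasted bread."""
    steps = []
    # STEP 1: IF bread is frozen -> defrost for 30 seconds
    if is_frozen:
        steps.append("defrost 30s")
    # STEP 2: place bread in toaster
    steps.append("toast")
    # STEP 3: set timer based on bread type (white = 2 min, whole grain = 3 min)
    minutes = 2 if bread_type == "white" else 3
    # (STEP 4 would check for burnt toast and start over; assumed fine here)
    return f"Toasted {bread_type} bread after {minutes} min ({', '.join(steps)})"

print(make_toast("whole grain", True))
```

Notice that every decision is an explicit rule a human wrote down. That is what makes this a simple algorithm rather than AI: nothing here was "learned" from data.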
AI Algorithm Example: Recommending Videos
- INPUT: Your watch history, likes, time spent on each video
- STEP 1: Analyze which videos you watched longest
- STEP 2: Find similar videos (same creator, topic, hashtags)
- STEP 3: IF other users with similar history liked it → rank higher
- STEP 4: Predict which video will keep you scrolling longest
- OUTPUT: Personalized video feed
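The recommendation steps above can be sketched as a toy Python program. All the video names, watch times, and scoring weights below are made up for illustration; a real recommender uses millions of data points, but the shape of the logic is the same.

```python
# INPUT: your watch history (seconds spent on each video) and each video's topic
watch_history = {"cat_video": 120, "maths_help": 30, "skate_trick": 90}
topics = {"cat_video": "cats", "maths_help": "maths", "skate_trick": "skating"}

# Candidate videos the algorithm could recommend next
candidates = {
    "cat_video_2": "cats",
    "exam_tips": "maths",
    "skate_fail": "skating",
}

# Videos that users with a similar history liked (faked for this sketch)
liked_by_similar_users = {"cat_video_2"}

def predicted_watch_time(video: str, topic: str) -> int:
    # STEPS 1-2: score videos on topics you already watched longest
    score = sum(secs for watched, secs in watch_history.items()
                if topics[watched] == topic)
    # STEP 3: IF similar users liked it -> rank it higher
    if video in liked_by_similar_users:
        score += 50
    return score

# STEP 4 / OUTPUT: rank the feed by predicted watch time, highest first
feed = sorted(candidates,
              key=lambda v: predicted_watch_time(v, candidates[v]),
              reverse=True)
print(feed)  # the "personalized" feed order
```

The key thing to question: the score is built entirely from *watch time*. Nothing in this algorithm checks whether a video is true, kind, or good for you, which is exactly the problem Part 2 explores.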
_____________________________________________________________________________
_____________________________________________________________________________
Part 2: Where Algorithms Go Wrong
🚨 Real Example: Racist Hiring Algorithm
Amazon built an AI hiring tool to screen resumes. They trained it on data from past successful hires. Problem: Most past hires were men, so the algorithm learned to penalize resumes with the word "women" (e.g., "women's chess club captain"). It discriminated automatically.
Result: Amazon scrapped the tool. But how many companies use similar algorithms without telling us?
The 3 Main Algorithm Problems:
1️⃣ Biased Training Data
If you train AI on biased data (e.g., mostly white faces, mostly men's writing), it will reproduce that bias. Garbage in = Garbage out.
2️⃣ Opaque Decision-Making (Black Box)
You're denied a loan, but the bank won't explain why—"the algorithm decided." Even the programmers sometimes can't explain AI decisions. How do you appeal?
3️⃣ Optimizing for the Wrong Thing
YouTube's algorithm is optimized for "watch time," not "truth" or "wellbeing." Result: It recommends conspiracy theories and extreme content because that keeps people watching longest.
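A tiny, made-up sketch shows how "garbage in = garbage out" plays out in code. The weights below imitate what a tool like Amazon's might "learn" from biased hiring data: every number here is invented for illustration, but the mechanism is real—a single learned penalty on one word quietly downgrades a whole group.

```python
# Word weights "learned" from biased past hires (all values invented).
# Because most past hires were men, the word "women's" picked up a penalty.
learned_weights = {"python": +2, "captain": +1, "women's": -3}

def resume_score(resume_text: str) -> int:
    """Score a resume by summing the learned weight of each matching word."""
    text = resume_text.lower()
    return sum(weight for word, weight in learned_weights.items() if word in text)

# Two identical resumes, except one mentions a women's club:
print(resume_score("Python programmer, chess club captain"))          # 3
print(resume_score("Python programmer, women's chess club captain"))  # 0
```

The second candidate loses points for an achievement, not a weakness. No programmer wrote "penalize women"; the bias arrived through the training data, which is why question 1 in Part 3 matters so much.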
✏️ Your turn: Pick an algorithm you use every day and analyse it below.
Algorithm: ___________________________________________
Optimizing for: ___________________________________________
Who benefits: ___________________________________________
Potential harms: ___________________________________________
Part 3: How to Question Algorithms
When you encounter an AI decision, ask these 5 questions:
- What data was it trained on? (Who was included/excluded?)
- What is it optimizing for? (Profit? Engagement? Accuracy?)
- Who designed it? (Were diverse voices included?)
- Can I see how it made this decision? (Is there transparency?)
- What harm could it cause? (Who is most at risk?)
Practice: Questioning an Algorithm
Scenario: School Uses AI to Predict Student "Risk"
Your school implements an AI system that predicts which students are "at risk" of dropping out or getting in trouble. It uses data like attendance, grades, behavioral incidents, and socioeconomic status to flag students for "intervention."
1. Training data concerns: ___________________________________________
2. What's it optimizing for? ___________________________________________
3. Who designed it? Missing voices? ___________________________________________
4. Can students/whānau see their "risk score"? ___________________________________________
5. Potential harms: ___________________________________________
_____________________________________________________________________________
_____________________________________________________________________________
🌟 Extension: Design an Ethical Algorithm
Design an algorithm for a real-world problem (e.g., matching students to internships, allocating school resources, moderating online comments). Write out the steps. Then identify: What could go wrong? How would you prevent bias?
Algorithm steps:
1. ___________________________________________
2. ___________________________________________
3. ___________________________________________
Bias prevention: ___________________________________________
📚 Teacher Notes:
- Resources: "Coded Bias" documentary, AlgorithmWatch.org, AI Now Institute reports
- Extension: Students research NZ's Algorithm Charter (government transparency initiative)
- Connect to: Facial recognition bias, predictive policing, credit scoring algorithms
- NZC Links: Technology (Computational Thinking), Social Sciences (Power & Decision-Making), Key Competency (Thinking)