Student Edition
Level 2

A self-paced course on thinking critically, creating responsibly, and using AI with real understanding. Enter your name to begin – your progress saves automatically on this device.

Unit 4 of 7

The Ethics of AI

AI raises genuinely hard questions about fairness, authorship, bias, and transparency. This unit gives you the vocabulary and the frameworks to think through those questions – not just follow rules.

📖 4 Lessons
⏱️ ~55–70 min
📝 Graded Quiz
🎯 80% to pass
CCR Focus
Critical – Identify ethical issues in AI use – bias, authorship, fairness, transparency – and reason through them carefully
Creative – Explore how creative and intellectual work changes when AI is involved – and how to maintain integrity
Responsible – Develop your own ethical framework for AI use that goes beyond "don't cheat"

Fairness & Bias in AI

AI doesn't just make factual mistakes. It can make unfair ones.

Start Here

You've learned how training data shapes AI outputs. Now it's time to look directly at what that means for fairness. When AI systems encode historical biases, those biases don't stay in a lab – they affect real people's lives, opportunities, and experiences.

How Bias Enters AI Systems

Bias in AI is not usually the result of a deliberate decision to be unfair. It typically emerges from three sources: biased training data that reflects historical inequalities; design choices that prioritize certain outcomes over others; and evaluation processes that miss certain types of errors.

When a facial recognition system is trained mostly on lighter-skinned faces, it performs worse on darker-skinned faces – not because anyone chose to discriminate, but because the data was unrepresentative. When a hiring algorithm is trained on historical hiring data from a company that rarely hired women, it learns to associate certain patterns with successful candidates – and those patterns may include demographic proxies that have nothing to do with job performance.
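The hiring example can be made concrete with a small, purely illustrative simulation: two groups have identical skill distributions, but the historical "hired" labels are biased against one group, and a demographic proxy (think zip code) leaks group membership into the features. A simple model trained on that history then scores equally skilled candidates differently. Everything here – the data, the feature names, the model – is a synthetic sketch, not any real hiring system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Two groups with IDENTICAL skill distributions (synthetic data).
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)            # true, job-relevant ability
proxy = group + rng.normal(0.0, 0.3, n)    # demographic proxy, e.g. zip code

# Historical labels reflect past bias: group B was hired less often
# than group A at the same skill level.
past_hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n) > 0).astype(float)

# Train a plain logistic regression (gradient descent, numpy only)
# on skill AND the proxy feature, exactly as a naive pipeline might.
X = np.column_stack([skill, proxy, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - past_hired) / n

# Score two fresh candidates with EQUAL (average) skill,
# one from each group. Only the proxy differs.
Xt = np.array([[0.0, 0.0, 1.0],    # group A candidate
               [0.0, 1.0, 1.0]])   # group B candidate
scores = 1.0 / (1.0 + np.exp(-Xt @ w))
print(scores)  # group B's score is lower despite identical skill
```

No one told this model to discriminate; it simply learned that the proxy predicted the biased historical labels. That is exactly how "just following the data" reproduces past unfairness.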

The consequences are not theoretical. Documented cases include hiring algorithms that systematically disadvantaged women, healthcare algorithms that gave lower risk scores to Black patients, and recidivism algorithms used in criminal sentencing that were more likely to incorrectly flag Black defendants as high risk.

Representation and What Gets Left Out

Bias is not only about what AI gets wrong – it is also about what gets left out entirely. AI systems trained on data from certain regions, languages, or cultural contexts may simply have no meaningful knowledge of others. A language model trained primarily on English-language internet content will perform significantly better in English than in Swahili or Navajo – not because one language is more valuable, but because one language is far more represented in internet text.

This representation gap matters whenever AI is deployed in contexts that affect diverse populations. A health information tool that works well for middle-class Americans but gives poor advice to users with different cultural contexts or healthcare systems is not neutral – it is unevenly helpful in a way that tracks existing inequalities.

Key Idea

Asking "who is this AI good for?" and "who does it underserve?" is a fair and important question for any AI tool deployed in a public-facing context.

What To Do About Bias

Recognizing bias is necessary but not sufficient. The more important question is what to do about it. As a user, the most important responses are: scrutinize AI outputs that involve people or groups, especially on topics where historical bias has been documented; actively seek out additional sources representing underrepresented perspectives; push back on AI-generated content that reproduces stereotypes; and when using AI for creative work, deliberately prompt for diverse representation rather than accepting defaults.

As a citizen, understanding AI bias means recognizing that these systems are being deployed in consequential domains – hiring, healthcare, criminal justice, education – and that demanding accountability from organizations that deploy them is entirely reasonable.

Read & Analyze

โš–๏ธ The Algorithm and the Scholarship

A school district implements an AI screening tool to identify students for an advanced academic program. The tool is trained on historical data from students who have previously succeeded in the program. Program administrators are told the tool is "objective" because it uses data rather than human judgment.

A parent notices that the tool is recommending significantly fewer students from the district's newer, more diverse schools – even students with strong grades and test scores. A district data analyst investigates and finds that the historical training data was drawn entirely from older program cohorts that were less demographically diverse. The tool had learned to associate certain zip codes, extracurricular patterns, and school names with program success – patterns that tracked demographics more than academic potential.

The administrator who implemented the tool defends it: "It's just following the data. There's no discrimination – it's math." The data analyst disagrees.

1
Why is the administrator's defense – "it's just math" – wrong? What is it missing?
2
What specific type of AI bias does this scenario illustrate? Use the vocabulary from the reading.
3
What would a fair process for identifying students for this program look like? How should AI fit into it, if at all?
4
If you were a parent or student affected by this tool, what would you want the school district to do?
✍️ Minimum response required to continue: please write at least 150 characters.

🔎 CCR Connection

Critical

Ask "who is this good for and who does it underserve?" about any AI system that affects real people.

Creative

Use what you know about AI bias to prompt more deliberately for diverse, representative outputs in your own work.

Responsible

Fairness is a responsibility – as a user, as a student, and eventually as a professional who may deploy or use AI systems.

Authorship, Integrity & Transparency

Who owns AI-assisted work – and what do you owe your audience?

Think First

If you write an essay with AI help, who wrote it? What do you owe your teacher, your readers, your institution in terms of disclosure? These are not simple questions, and the answers matter more than many people realize.

What Academic Integrity Means in the AI Era

Academic integrity has always been about more than following rules – it is about the honest representation of your own knowledge, effort, and growth. When you submit work, the implicit claim you are making is: "This represents what I know and can do." AI use that undermines that representation is a form of dishonesty – not primarily because it violates a policy, but because it misrepresents you.

This matters for your own development as much as for fairness. An essay written by AI does not develop your writing. A problem solved by AI does not develop your analytical thinking. A research summary written by AI does not develop your ability to synthesize sources. The tasks assigned in school are not primarily about producing outputs – they are about developing the skills that make those outputs possible. Bypassing the task with AI is not a time-saving shortcut; it is opting out of the development.

The Spectrum of AI Use: From Legitimate to Problematic

AI use in academic work exists on a spectrum, not a binary. At one end: completely appropriate uses that enhance rather than replace your thinking – using AI to brainstorm, to get feedback on your draft, to check grammar, to understand a concept you are struggling with. In the middle: uses that require judgment and context – using AI to help structure an argument you then develop independently, using AI-generated research as a starting point you then verify and extend. At the far end: clearly problematic uses – submitting AI-generated work as your own, using AI to fabricate data or citations, using AI to complete assessments designed to measure your understanding.

The line between legitimate and problematic use is not fixed – it depends on the context, the assignment, and the explicit or implicit expectations of the situation. The right questions to ask: Am I being honest about how I produced this? Am I developing the skills this assignment is designed to develop? If asked to explain my work without AI assistance, could I do so?

Key Idea

The test is not "did I use AI?" The test is: "Does this work honestly represent my thinking, my effort, and my understanding – and am I being transparent about how I created it?"

Transparency as an Ethical Practice

Transparency about AI use is increasingly recognized as a core ethical practice – not just in school but in professional and public contexts. Authors who use AI to assist in writing, researchers who use AI in their analysis, journalists who use AI tools in their reporting – all are developing norms around disclosure.

In your own work, transparency means being clear about what you did, what AI did, and how you engaged with AI outputs. It does not necessarily mean adding a formal citation every time you used autocomplete. It does mean being honest when asked, proactively disclosing when the AI's role was significant, and never representing AI work as entirely your own when it was not.

Read & Analyze

โœ๏ธ The College Essay Question

Dani has spent weeks working on her college application personal statement. She has a clear story to tell and strong ideas – but her writing feels stiff and she is worried about how it sounds. She uses an AI tool to get feedback on her draft and to suggest ways to make her writing flow better.

She is careful: she reviews every suggestion the AI makes, accepts only the ones that genuinely improve her expression of her own ideas, and rewrites every suggested sentence in her own voice rather than copying the AI's language. When she is done, the essay is more polished – but every idea, every specific memory, every emotional beat is hers.

Her friend takes a different approach: he gives the AI a list of bullet points about his experiences and asks it to write a polished first draft. He then makes a few edits for length and submits it.

Both used AI. Are both approaches equally acceptable? Does it matter that Dani's work is in her own voice and her friend's isn't?

1
Apply the three test questions from the reading to Dani's approach and her friend's approach. What does the analysis reveal?
2
Is there a meaningful ethical difference between these two uses of AI? Make the strongest argument you can for each position, then state your own view.
3
What would you tell Dani's friend if he asked for your honest opinion about his approach?
4
What is your personal policy for AI use in your own schoolwork? Write it out specifically – not just "don't cheat" but what you will and won't do.

🔎 CCR Connection

Critical

Critical thinking about authorship means asking honestly: does this work represent me, or does it represent an AI?

Creative

Transparency is a creative and intellectual practice. Owning your process – including AI's role in it – makes your work more authentic.

Responsible

Responsible AI use starts with honesty. Be honest with others about how you created your work, and honest with yourself about what you actually learned.

When AI Use Is – and Isn't – Okay

Building your own ethical framework – beyond just following rules.

Think First

Rules about AI use change faster than they can be written down. A school policy written this year may be outdated by next year. That means you cannot rely on rules alone to navigate AI ethics – you need principles that will still apply when the rules don't exist or haven't caught up yet.

The Problem With Rules-Only Thinking

Rules about AI use in schools tend to be either too vague ("use AI responsibly") or too rigid ("no AI at all"). Too-vague rules don't help students navigate genuinely ambiguous situations. Too-rigid rules often don't reflect the reality of how AI is used in the world students are being prepared for. Neither approach builds the judgment that students will actually need.

The goal of AI literacy is not rule-following – it is developing the capacity to reason about novel situations using sound principles. A student who has internalized good principles can navigate a situation their school's policy doesn't address. A student who only follows rules has no compass when the rules run out.

A Framework for Ethical AI Use

Several questions, asked consistently, provide a reliable framework for thinking through AI use in any context.

Does using AI in this way help me develop the skills this task is supposed to develop – or does it let me skip that development? There is a difference between using AI to get feedback on writing and using AI to do the writing.

Am I being honest about AI's role with the people who will receive or evaluate my work? If you would feel uncomfortable telling your teacher exactly how you used AI, that discomfort is informative.

Does this use of AI serve me and my goals – or am I serving the AI, accepting whatever it gives me without critical engagement? Tools should serve their users.

If the most important people in my life could see exactly how I produced this work, would I be comfortable with what they saw? This is not about avoiding consequences – it is about integrity.

Key Idea

Ethical AI use is ultimately about who is in charge: your judgment, your values, and your goals – or the path of least resistance that happens to involve a chatbot.

Gray Areas and Honest Judgment

Many real situations are genuinely gray – and that is okay. A student who uses AI to brainstorm ideas for an essay that the assignment said should be "original" is in a gray area. A student who uses AI to help understand a concept she is stuck on and then answers the homework question herself is in a different gray area.

Honest judgment means not looking for the interpretation of a rule that lets you do what you already want to do – it means genuinely asking whether your choice is consistent with the spirit and purpose of the assignment or situation. It also means being willing to ask when uncertain: talking to a teacher, a parent, or a trusted person about how to navigate a situation is itself an ethical act, not a sign of weakness.

Read & Analyze

🤔 The Group Project and the AI Charter

A four-person group is working on a major history project. The teacher's instructions say students can "use AI as a research tool" but the policy is not more specific than that.

One student, Layla, uses AI to generate a complete first draft of the entire presentation without telling her group. When the group sees it, two members are relieved – "great, it's done!" – but one member, James, is uncomfortable. He points out that they haven't actually learned the material, and if the teacher asks them to explain their slides during the presentation, they won't be able to.

Layla argues that she just used the AI to "get started" and they can learn the material afterward. James argues that they should have developed the content together, using AI only for specific tasks they each agreed to.

The group needs to decide: use Layla's AI draft as a foundation, or start over with a collaborative process.

1
Apply the four-question ethical framework from the reading to Layla's decision to use AI without telling her group. What does each question reveal?
2
James's concern is that they won't be able to explain their slides. Why is this a meaningful ethical concern – not just a practical one?
3
What would an ethically sound approach to AI use for this project look like? Be specific about what AI would and wouldn't do.
4
Write your own "AI Code" – a short set of principles (not rules) for how you will use AI in your academic and creative work.

🔎 CCR Connection

Critical

Ethical reasoning requires asking hard questions, not looking for loopholes. Apply your framework consistently even when it's inconvenient.

Creative

An AI Code is a creative document – it reflects your values, your learning goals, and your vision for the kind of person you want to be.

Responsible

Responsibility means making choices you can explain and defend – to your teachers, your peers, and yourself.

Unit Quiz & Final Reflection

Show what you know – then show what you think.

Answer all 8 questions, then submit. You need 80% (7/8) to pass. If you don't pass, you'll be directed back to review before retaking.
Question 1 of 8
A hiring algorithm trained on historical data from a company that rarely promoted women learns to associate certain patterns with "promotion-worthy" candidates. What type of problem is this?
Question 2 of 8
A language model performs much better in English than in Swahili. What does this most likely indicate?
Question 3 of 8
Which of the following is the best test for whether AI use in schoolwork maintains academic integrity?
Question 4 of 8
A student uses AI to get feedback on her draft essay and revises based on suggestions that genuinely improve her expression of her own ideas, rewriting each suggestion in her own voice. Which assessment best describes this use?
Question 5 of 8
What does transparency about AI use mean in practice?
Question 6 of 8
Which question from the ethical AI framework asks whether a tool is serving you or whether you are serving the tool?
Question 7 of 8
A student feels uncomfortable disclosing to her teacher exactly how she used AI in an assignment. What does the ethical framework suggest this discomfort indicates?
Question 8 of 8
What is the limitation of relying only on school rules to navigate AI ethics?
Answer all 8 questions to submit.
🎓

Unit 4 Complete!

You've finished The Ethics of AI. Below are your certificate and digital badge.

CCR
Student Edition · Level 2 · Kerszi.com · #AILiteracyAndEthics
This certifies that
Student Name
has successfully completed Unit 4 of the AI Literacy & Ethics course, demonstrating understanding of AI ethics – including fairness, authorship, bias, academic integrity, and transparency – and the critical thinking skills required to navigate an AI-shaped world.
Critical · Creative · Responsible
Unit 4: The Ethics of AI
AI Literacy & Ethics · Level 2 · Secondary · #AILiteracyAndEthics
– Date Completed
– Quiz Score
Kathi Kersznowski · Course Author

Your downloadable digital badge – shareable on LinkedIn, email signatures, or portfolios.

Ready to continue?