Student Edition
Level 2

A self-paced course on thinking critically, creating responsibly, and using AI with real understanding. Enter your name to begin — your progress will be saved automatically on this device.

💾 Your progress is automatically saved on this device. You can close this page and return anytime.
Unit 1 of 7

What Is Artificial Intelligence?

Before you can think critically about AI, you have to understand what it actually is — not the movie version, not the hype, but the reality of a technology already shaping your world.

📖 4 Lessons
⏱️ ~60–80 min
📝 Graded Quiz
🎯 80% to pass
CCR Focus
Critical — Recognize when and how AI influences your world
Creative — Consider how AI expands what you can build and express
Responsible — Use AI with awareness and genuine intention

AI Is Already in Your Life

And most people have no idea how much.

Start Here

Before you read further, take a moment to estimate: How many times did you interact with AI in the last 24 hours? Think about every app you opened, every recommendation you clicked, every search you ran. Hold that number. By the end of this lesson, you will almost certainly revise it — by a lot.

The Invisible Technology You Use Every Day

When most people picture artificial intelligence, they imagine something dramatic — robots, science fiction, a computer that speaks in an unsettling human voice. The reality is far less cinematic and far more present. AI is not a futuristic thing waiting to arrive. It is already woven into the fabric of daily life, quietly shaping what you see, hear, buy, and believe.

Think about the last time you opened a streaming app and were greeted by a list of shows you might enjoy. That list was not assembled by a human who knows your taste. It was generated by an AI system that analyzed your watch history, your rating patterns, the viewing behavior of thousands of people with similar preferences, and a set of predictions about what will keep you watching. The same logic applies to the music at the top of your playlist, the videos that autoplay, and the posts that appear first in your social media feed.

None of this happens by accident, and none of it happens through simple rule-following. These are not systems someone programmed with a list of "if you like X, show Y" instructions. They are systems that learn, adjust, and improve over time based on patterns in massive amounts of data. That is what makes them artificial intelligence — and that is why understanding AI is not just a tech topic. It is a life topic.

Key Idea

AI is not just something you "use" by typing into a chatbot. It is embedded in the tools and platforms you interact with every day, often making decisions on your behalf without asking for your awareness or consent.

Where AI Shows Up Right Now

The range of places where AI operates in everyday life is broader than most people realize. Social media platforms use AI algorithms to decide which posts, ads, and accounts to surface for you specifically. Navigation apps use AI to predict traffic patterns and recommend the fastest route in real time. Email tools use AI to filter spam, suggest autocomplete phrases, and flag urgent messages. Online stores use AI to suggest products and adjust prices based on demand patterns.

Search engines use AI to interpret the meaning of your query — not just match keywords — so that a search for "good places to eat nearby" returns results tailored to your location, your past searches, and what other users with similar profiles have clicked on. Even the autocorrect on your keyboard relies on a machine learning model trained on enormous amounts of text to predict the word you most likely intended to type.

This matters because all of these systems are making choices. They are choosing what to show you and what to hide. They are making predictions about what you want, and those predictions shape your experience of the internet, of shopping, of entertainment, and of information. When you understand this, you stop being a passive user and start being someone who can ask: Who designed this system? What is it optimizing for? Is that the same thing I actually want?

Why Awareness Is the Starting Point

There is a significant difference between using AI and understanding AI. Billions of people use AI-powered tools every single day without thinking about what those tools are doing, how they work, or how they affect them. That kind of passive use is not inherently bad — you do not need to understand every algorithm to open a streaming app. But when you have no awareness of AI's role in your experience, you lose something important: the ability to think critically about whether that experience is actually serving you.

Consider what it means for an algorithm to decide what news you see. If the system is designed to prioritize content that generates strong emotional reactions — because emotional content gets more clicks — then you may be seeing a version of the news that is angrier, more alarming, and more polarizing than a balanced picture of what is actually happening. The algorithm is not lying to you. It is simply optimizing for engagement. But that optimization has real consequences for what you believe about the world.
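If you have seen a little Python, the idea of "optimizing for engagement" can be shown in a few lines. This is a deliberately tiny sketch with invented posts and invented click numbers, not how any real platform works, but it captures the core point: the algorithm is not judging truth or balance. It is just sorting by whatever it was told to optimize.

```python
# Toy feed ranker (invented posts and predicted-click numbers).
# The algorithm has no opinion about the headlines; it only sorts
# by its optimization target: predicted engagement, highest first.
posts = [
    {"headline": "City council passes budget after calm debate", "predicted_clicks": 120},
    {"headline": "OUTRAGE: you won't believe what they did next", "predicted_clicks": 940},
    {"headline": "Long-term study shows slow, steady progress", "predicted_clicks": 85},
]

def rank_feed(posts):
    # Sort by predicted clicks, most "engaging" first.
    return sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)

for post in rank_feed(posts):
    print(post["headline"])
```

Notice that the provocative headline lands on top not because anyone decided it was important, but because the system was pointed at clicks. Change the optimization target and the whole feed changes with it.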

Awareness is the starting point for taking back some of that control. Once you recognize that AI is shaping your information environment, you can begin to ask better questions, seek out additional sources, and make more deliberate choices about which tools you rely on — and how much. That is exactly what this course is designed to help you do.

Read & Analyze

📱 "That's Just Apps"

Taylor is a junior who considers herself pretty tech-savvy. She uses Google Maps, Spotify, Instagram, and a few productivity apps for school. When a classmate mentions that AI is "everywhere," Taylor shrugs. "I don't really use AI," she says. "I just use my apps."

In a single typical day: Taylor uses Google Maps to avoid a traffic slowdown, watches four consecutive autoplay videos, gets a personalized workout playlist, sees a sponsored post for shoes she searched for two days ago, accepts three autocomplete text suggestions without reading them, and uses a writing assistant to polish a paragraph for English class.

That evening, she searches for a news story she heard at school. The first four results are articles from sources that closely align with opinions she has already expressed online. She reads two of them and feels confident she now understands the full picture.

1. In what specific ways is Taylor interacting with AI throughout her day, even though she believes she "doesn't use AI"?

2. Taylor feels confident she understands the full picture of the news story after reading two results. What concerns does this raise?

3. Why do you think so many people fail to recognize when AI is part of their experience? What makes it "invisible"?

4. Does it matter whether Taylor recognizes AI's role in her daily life? What difference would awareness actually make?

🔎 CCR Connection

Critical

Notice when AI is influencing your choices, information, and habits — even when invisible. Ask: What is this system optimizing for? Is that what I actually want?

Creative

Once you understand how recommendation systems work, you can use them more intentionally — seeking out perspectives your algorithm might not show you.

Responsible

Awareness is an act of responsibility. When you recognize AI's role, you are better positioned to make thoughtful choices about how much you rely on it.

What AI Really Is

Not magic. Not human. Something more interesting than either.

Think First

When someone says a machine is "intelligent," what do they mean? Is it the same kind of intelligence humans have? Can it actually understand, feel, or care about things — or does it only appear to? Hold those questions as you read.

Defining Artificial Intelligence (For Real This Time)

Artificial intelligence refers to computer systems designed to perform tasks that normally require some form of human-like reasoning. This definition is intentionally broad because AI is not one thing — it is a wide category of technologies that includes everything from the spam filter in your email to voice assistants to systems that can generate entire essays or photorealistic images in seconds.

What unites these very different technologies is a shared approach: rather than being programmed with a fixed set of rules for every possible situation, AI systems are designed to process information, detect patterns, and produce outputs based on what they have learned from data. A spam filter is not given a list of every possible spam message. It is trained on thousands of examples until it develops the ability to classify new messages it has never seen before.

This distinction — rules versus learning — is the core of what makes something AI rather than traditional software. A calculator follows rules: it will always produce the same output for the same input. An AI system learns: it can handle situations it has not explicitly encountered before, and its responses may vary based on context and pattern-matching rather than predetermined logic.
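The rules-versus-learning distinction can be made concrete in a few lines of Python. This is a toy sketch with an invented four-message dataset, nowhere near a real spam filter, but it shows the difference: the first function always gives the same answer for the same input, while the second function's answers depend entirely on the examples it was trained on.

```python
# A calculator-style rule: the same input always produces the same output.
def rule_based_double(x):
    return x * 2

# A toy "learned" spam scorer: instead of fixed rules, it counts how often
# each word appeared in labeled spam vs. non-spam ("ham") examples, then
# scores new messages it has never seen before.
def train(examples):
    counts = {"spam": {}, "ham": {}}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] = counts[label].get(word, 0) + 1
    return counts

def spam_score(counts, message):
    # Positive score: the message resembles the spam examples.
    # Negative score: it resembles the ham examples.
    score = 0
    for word in message.lower().split():
        score += counts["spam"].get(word, 0) - counts["ham"].get(word, 0)
    return score

examples = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("lunch meeting moved to noon", "ham"),
    ("notes from class today", "ham"),
]
model = train(examples)
print(spam_score(model, "free prize money"))   # positive: looks like spam
print(spam_score(model, "class notes today"))  # negative: looks like ham
```

The scorer was never given a rule about the word "free." It picked that pattern up from the examples, which is exactly why the quality of the examples matters so much.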

What Generative AI Is — and Why It Feels Different

Among the many types of AI, the one that has captured the most attention in recent years is generative AI. Generative AI is a category of systems that can create new content — text, images, music, code, video — in response to a prompt. Chatbots that write paragraphs, image generators that produce artwork from a text description, and coding assistants are all forms of generative AI.

What makes generative AI feel so different is the quality and apparent fluency of its outputs. Earlier AI was easier to identify as "machine-made" — outputs were often choppy or obviously patterned. Generative AI can produce work that sounds natural, thoughtful, and even emotionally resonant. It can write in different styles. It can match a specific tone. On first read, its output may look indistinguishable from something a human created.

This is both impressive and important to think clearly about. The fluency of the output is not evidence of deep understanding. Generative AI does not "know" what it is writing in any meaningful sense. It is producing outputs that statistically resemble good writing based on patterns in an enormous training dataset. This is why generative AI can write a convincing paragraph about a historical event and get a key fact completely wrong — the system is not checking for truth. It is generating text that fits the pattern of what plausible-sounding writing on this topic looks like.

Critical Thinking Moment

Generative AI can sound confident even when it is wrong. "Confident" and "correct" are not the same thing. An AI that presents false information in clear, polished sentences is more dangerous in some ways than one that is obviously confused — because its fluency creates unwarranted trust. Verifying AI outputs is not optional. It is a core skill.

AI Is Powerful, But It Is Not Magic — and Not Human

One of the most common mistakes people make is attributing human-like qualities to AI. When a chatbot responds with warmth, clarity, and apparent empathy, it is easy to feel as though the system actually cares about you, understands your situation, or has a genuine stake in giving you good advice. It does not. The warmth is a stylistic pattern. The clarity is statistical fluency. The apparent empathy is the system generating text that resembles how a caring person might respond.

This does not mean AI is malicious or trying to deceive you. It means the system is doing what it was designed to do — generate appropriate-sounding text — without any of the actual understanding, values, or intentions that come with genuine human communication. AI has no lived experience. It has no beliefs, desires, or moral commitments. It cannot be offended, tired, or inspired. It is producing outputs based on patterns, not from a self that cares about outcomes.

When people project human qualities onto AI, they tend to trust it in ways that are not warranted. Understanding that AI simulates human-like outputs without actually being human is one of the most important critical thinking skills you can develop.

A Spectrum: Not All AI Is the Same

It helps to think of AI not as a single technology but as a spectrum. At one end are narrow AI systems — designed to do one specific task extremely well. Spam filters, chess programs, facial recognition systems, fraud detection algorithms — these are all narrow AI. They can be extraordinarily powerful within their domain, but they cannot transfer that capability to other tasks. The chess program cannot write a poem.

Generative AI tools sit somewhere along this spectrum — more flexible, capable of handling a wide range of language-based tasks. But they still have significant limitations: they hallucinate, they lack real-time information unless specifically designed for it, they reflect biases in training data, and they have no genuine understanding of the world they describe.

The concept of artificial general intelligence — a system that could match or exceed human intelligence across all domains — remains theoretical. Despite the impressive capabilities of current AI tools, there is no AI system today that understands the world the way a person does, or that carries human-like awareness, judgment, and values. The gap between "impressive generative AI" and "truly general intelligence" is enormous.

Read & Analyze

💬 The Convincing Answer

Marcus is writing a history paper and uses a chatbot to help with research. The chatbot provides a clear, detailed, well-organized response — names, dates, quotes, context. It reads like a textbook. Marcus is impressed. He quotes two "historical figures" cited by the chatbot directly in his paper.

When his teacher marks the paper, she flags one of the quotes as fabricated. The person cited never said that. The chatbot had invented the quote entirely — a behavior known as "hallucination." The information was wrong, but the delivery was so polished and confident that Marcus never thought to question it.

"But it sounded so sure," Marcus tells his teacher. She nods. "That's exactly the problem," she says.

1. Why did Marcus trust the chatbot's answer so readily? What assumptions was he making about how it works?

2. The teacher says "that's exactly the problem." What is she pointing to, and why is confident-sounding output a particular risk?

3. How should Marcus have used the chatbot differently? What process would have caught the error?

4. In your own words, explain why AI "hallucination" happens based on what you learned in the reading.

🔎 CCR Connection

Critical

Never confuse fluency with accuracy. Polished, confident-sounding output requires verification, not admiration. Ask: Is this actually true? How do I know?

Creative

Understanding how generative AI creates output helps you use it more effectively — leveraging its strengths while compensating for its weaknesses.

Responsible

When you use AI-generated content in schoolwork or public communication, you carry responsibility for verifying it. The AI cannot be accountable. You can.

How AI Learns

Understanding what's under the hood — and why it matters to you.

Think First

When you learn something new, you bring your entire self to that process: your experiences, your values, your judgment about what matters. AI also "learns," but through a completely different process. What's different about it — and why does that difference have real-world consequences for you?

Training: What It Actually Means When AI "Learns"

When we say that an AI system "learned" something, we do not mean what we mean when we say a person learned something. A person who learns to drive does so through physical experience, feedback, judgment calls in real situations, and a growing intuition about navigating the unexpected. A person who learns history develops an understanding of cause and effect, human motivation, and the weight of real events.

AI learning — technically called machine learning — works differently. An AI model is trained on a large dataset. The system processes that data, identifies statistical patterns and correlations, adjusts its internal parameters, and gradually becomes better at predicting the right output for a given input. A language model is trained on enormous amounts of text — books, articles, websites, conversations — and through that process becomes capable of predicting, with increasing accuracy, what words should come next in a sequence.

This is impressive, but it is fundamentally different from human understanding. The language model does not know what words mean in the way a person who has lived in the world knows what they mean. It has learned patterns of word usage, not meaning. The image recognition system does not understand what a cat is in any experiential sense. It has learned the pixel patterns associated with what humans have labeled "cat."

The Data Problem: AI Is Only as Good as What It Learned From

Because AI systems learn from data, they are fundamentally shaped by the data they are trained on. If the training data contains errors, those errors become part of what the system learned. If the training data reflects historical biases — racial disparities, gender stereotypes, geographic imbalances — the system may reproduce those biases in its outputs. If the training data is out of date, the system may give answers that were accurate at one time but are no longer current.

These are not theoretical concerns. They are documented realities in the history of AI deployment. Facial recognition systems have been shown to perform significantly worse for darker-skinned faces because training datasets contained disproportionately few examples of them. Hiring algorithms trained on historical data have reproduced patterns of preference for male candidates because men were historically hired more often for certain roles. Language models trained on text from certain regions may reflect the assumptions and blind spots common in that body of text.
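The hiring-algorithm failure above can be sketched in a few lines. The numbers and groups here are invented for illustration, and real hiring systems are far more complex, but the mechanism is the same: a model that simply learns base rates from historical decisions will faithfully reproduce the imbalance in those decisions.

```python
# Invented historical hiring records: (group, outcome) pairs.
# The imbalance is in the data, so it will be in anything learned from it.
history = (
    [("male", "hired")] * 80 + [("male", "rejected")] * 10 +
    [("female", "hired")] * 5 + [("female", "rejected")] * 25
)

def hire_rate(data, group):
    # The "model" here just learns the historical hire rate per group.
    outcomes = [outcome for g, outcome in data if g == group]
    return outcomes.count("hired") / len(outcomes)

print(round(hire_rate(history, "male"), 2))    # 0.89
print(round(hire_rate(history, "female"), 2))  # 0.17
```

Nothing in the code is malicious. The skew comes entirely from the training data, which is why "the model learned it from the data" is never a guarantee of fairness or accuracy.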

This matters because it means AI is not a neutral tool. Every AI system encodes the choices, assumptions, and limitations of the data it was trained on — and of the people who made decisions about what data to use, how to label it, and what outcomes to optimize for. Recognizing this is part of thinking critically about AI.

Key Vocabulary

Hallucination — when an AI system generates false information with apparent confidence.
Bias — systematic patterns in AI outputs that reflect imbalances in the training data.
Training data — the body of examples a model learns from.
Parameters — the internal settings a model adjusts during training to improve its predictions.

Why AI Makes Mistakes — and What Kind

Understanding how AI learns also helps explain the specific kinds of errors AI systems tend to make. When a person is uncertain, they usually know they are uncertain. They might say "I'm not sure, but I think..." AI systems, particularly generative AI, often do not express uncertainty in this way. They are designed to produce outputs, and they do so even when the patterns they have learned are insufficient to support a reliable answer. The result is confident-sounding output that may be partially or entirely wrong.

This behavior — hallucination — is not a bug that will eventually be fixed. It is an inherent consequence of how these systems work. Because they generate text based on statistical patterns rather than verified facts, they will sometimes generate text that fits the pattern of a good answer without actually being one. A question about an obscure historical event might yield a beautifully structured response full of invented names, dates, and context — all generated because that is what responses to questions about historical events generally look like.

AI also makes systematic errors that are not random but reflect consistent weaknesses or biases. A model trained primarily on text from one cultural context may consistently underperform on topics from other cultures. A system optimized for engagement may consistently generate content that is more provocative than accurate. These systematic errors are often harder to detect precisely because they form a coherent, internally consistent pattern.

The Human Choices Behind AI

AI systems are sometimes discussed as if they are autonomous and objective. In reality, every significant AI system reflects an enormous number of human choices: what data to train on, how to clean and label that data, what objectives to optimize for, how to evaluate performance, who gets to use it, and what limits to put in place. Each of these choices involves values, priorities, tradeoffs, and the possibility of getting things wrong.

The organizations that build AI systems have commercial incentives that do not always align perfectly with users' best interests. The data they work with reflects the world as it has been — including all of its inequities, errors, and imbalances. Thinking about AI critically means holding onto this reality: AI systems are not neutral arbiters of truth. They are human-made tools that reflect the choices, priorities, and limitations of the humans who made them. That does not make them bad or useless. It makes them tools that deserve the same thoughtful evaluation you would give to any source of information.

Read & Analyze

🔬 The Research Shortcut

Destiny is writing a report about representation in technology. She uses a chatbot to research historical data on who has held leadership roles in major tech companies. The chatbot produces a confident, detailed summary — specific statistics, company names, percentages for different demographic groups.

Destiny uses several of these statistics in her report. Her teacher asks her to cite the source for one of the percentages. Destiny asks the chatbot, which provides a URL for a research report. When she clicks it, the link does not exist. Digging deeper, she finds the actual statistics tell a more complicated story — and that some of the chatbot's framing reflected assumptions researchers in the field have specifically pushed back against.

1. What specific failures does this scenario illustrate? Use concepts from the reading to explain what likely happened.

2. The chatbot's framing reflected "assumptions researchers have pushed back against." What does this suggest about how training data shapes outputs?

3. What are the stakes when AI-generated but inaccurate data about representation enters public discourse?

4. Design a verification approach Destiny could use for AI-assisted research going forward.

🔎 CCR Connection

Critical

AI outputs reflect the data they were trained on — including its biases, gaps, and limitations. Scrutinize where information comes from, especially on topics where accuracy deeply matters.

Creative

Understanding AI's limitations helps you use it more strategically — knowing when to lean on it for brainstorming and when to rely on verified sources for accuracy-critical claims.

Responsible

When AI errors enter your work — especially on sensitive topics — they can cause real harm. Verification is not extra work. It is a core responsibility of anyone using AI.

Unit Quiz & Final Reflection

Show what you know — then show what you think.

Answer all 8 questions, then submit for grading. You need 80% (7 out of 8) to pass. If you don't pass, you'll be directed to revisit specific lessons before retaking. Each question shows feedback after you submit.
Question 1 of 8
Which best explains what makes a system "artificial intelligence" rather than traditional software?
Question 2 of 8
Which of the following is an example of generative AI at work?
Question 3 of 8
Why is it a mistake to assume that a confident, well-written AI response is accurate?
Question 4 of 8
An AI hiring algorithm was trained on 10 years of historical data from a company that mostly hired men for engineering roles. What is the most likely risk?
Question 5 of 8
What does the term "hallucination" mean in the context of AI?
Question 6 of 8
Which statement most accurately describes the difference between narrow AI and generative AI?
Question 7 of 8
A social media platform's algorithm shows you content that generates strong emotional reactions because that content gets more clicks. What concern does this raise?
Question 8 of 8
Why is it important to understand that AI systems reflect human choices about data, design, and optimization?
🎓 Unit 1 Complete!

You have finished What Is AI? — the foundation for everything that follows. Below are your earned certificate and digital badge.

CCR
Student Edition · Level 2 · Kerszi.com · #AILiteracyAndEthics
This certifies that
Student Name
has successfully completed Unit 1 of the AI Literacy and Ethics course, demonstrating foundational understanding of artificial intelligence, generative AI, machine learning, and the critical thinking skills required to navigate an AI-shaped world.
Critical Creative Responsible
Unit 1: What Is Artificial Intelligence?
AI Literacy and Ethics · Level 2 · Secondary · #AILiteracyAndEthics
Date Completed
Quiz Score
Kathi Kersznowski · Course Author

Your downloadable digital badge — share on LinkedIn, email signatures, or portfolios.

Ready for the next unit?