
ENGL 2A: Critical Thinking and Writing (Stone)

This guide supports students in ENGL 2A with Professor Stone.

Terms

What is Generative AI?

Generative AI refers to a type of artificial intelligence designed to create new content. Instead of just analyzing or organizing existing information, generative AI can produce things like text, images, music, videos, or even computer code. For example, tools like ChatGPT (for text) or DALL-E (for images) are generative AI tools because they generate content based on the prompts or instructions you give them.


How Does Generative AI Work?

Generative AI works by learning patterns from large datasets (for example, millions of books, articles, or images) and then using that knowledge to create new content. Here’s a basic breakdown of how it works:

  1. Training: Generative AI is trained on massive amounts of data (e.g., text from the internet or images) so it learns to recognize patterns, structures, and styles in the data.
  2. Input: You give it a prompt—like a question, sentence, or image description.
  3. Output: The AI uses its training to generate new content based on the input you provided. This could be an answer to a question, a piece of writing, or an image. (A short sketch of this loop follows this list.)
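
To make the training → input → output loop concrete, here is a minimal sketch in Python. It is purely illustrative: the toy "model" just counts which word tends to follow which in a tiny sample, then generates new text from a prompt. Real generative AI systems use enormous neural networks, but the underlying idea of learning patterns and then producing likely continuations is the same. Every name and sentence in the snippet is invented for this example.

    # Toy "generative model": learn word-to-word patterns, then generate text.
    import random
    from collections import defaultdict

    # 1. Training: count which word follows which in some sample text.
    training_text = ("the cat sat on the mat the cat chased the mouse "
                     "the mouse ran under the mat")
    follows = defaultdict(list)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

    # 2. Input: a prompt to start from.
    prompt = "the"

    # 3. Output: repeatedly pick a plausible next word from the learned patterns.
    output = [prompt]
    for _ in range(8):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))

    print(" ".join(output))  # e.g. "the cat sat on the mat the mouse ran"

Notice that the program never stores the training sentences themselves; it only keeps statistics about them, which is (very roughly) why these tools can produce new combinations rather than copying their sources word for word.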

Examples of Generative AI

  • Text Generation: Tools like ChatGPT generate essays, emails, or even poetry based on a prompt you give them. For example, you could ask ChatGPT to write a paragraph about climate change, and it will produce text based on patterns it has learned from many sources.

  • Image Generation: Tools like DALL-E can create new images from simple descriptions. If you type in "a cat wearing sunglasses," it will generate an entirely new image of a cat with sunglasses.

  • Music and Art: Some generative AI models can even create original music or digital artwork by learning patterns from existing compositions or paintings.


Why Should You Care About Generative AI?

As a first-year student, you'll find that generative AI can be both a helpful tool and something you need to use ethically.

  • Brainstorming: Generative AI can help you generate ideas, organize your thoughts, or get started on an essay.

  • Research Assistance: It can help you quickly find information, summarize articles, or explain complex topics in simpler language.

  • Creativity: If you’re working on creative projects, generative AI can help you create drafts or get creative inspiration.

However, it’s important to remember that generative AI is not perfect. It can sometimes create incorrect or biased information, and you must always critically evaluate what it produces.


How to Use Generative AI Ethically

While generative AI is a powerful tool, it’s important to use it responsibly:

  • Don’t Use It for Plagiarism: You shouldn’t use AI to write your entire paper or project and present it as your own work. That’s academic dishonesty. AI should be used as a tool to help you brainstorm or organize ideas, not replace your creativity and critical thinking.

  • Fact-Check: AI can produce information that sounds true but may actually be incorrect or outdated. Always verify the information you get from generative AI, especially for academic assignments.

  • Cite Your Sources: If you use AI-generated content in your work, be sure to cite the tool properly (the Citation lines at the end of each section of this guide show one format). This shows that you used the AI ethically and helps avoid plagiarism.


In Summary:

  • Generative AI is a type of artificial intelligence that creates new content, like text, images, or music.
  • It works by learning patterns from huge amounts of data and then using those patterns to generate responses or creations based on what you ask.
  • While generative AI can be a helpful tool for brainstorming, research, and creativity, it’s important to use it ethically—don’t rely on it to do your work and always fact-check the information it provides.

By understanding generative AI, you can use it effectively to support your studies, while making sure you maintain academic integrity and critical thinking in your work.


Citation: OpenAI. "What Is Generative AI?" ChatGPT, 6 Jan. 2025, https://chat.openai.com/.

What Are Large Language Models (LLMs)?

Large Language Models (LLMs) are AI systems designed to understand and generate human language. They're trained on massive amounts of text data—think books, articles, websites, and more—so they can learn patterns in language, like how words fit together and what makes sense in a sentence.

How Do They Work?

  1. Training on Text: LLMs read huge amounts of text to learn how words, phrases, and sentences usually work together.

  2. Predicting Text: When you ask a question, the AI doesn’t “know” the answer but predicts the most likely response based on the patterns it has learned. It’s kind of like how your phone’s predictive text guesses the word you’ll type next.

  3. Generating Responses: After analyzing your input, the model generates an answer or a piece of text, often sounding very human-like. (A tiny demonstration follows this list.)
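
Here is an equally tiny sketch, in the same illustrative spirit as the example earlier on this page, of what "predicting the most likely next word" means. The sample text and resulting numbers are invented for the demonstration.

    # Toy next-word predictor: after seeing "i", which word is most probable?
    from collections import Counter, defaultdict

    training_text = "i like tea i like coffee i drink tea"
    counts = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1

    # Turn the raw counts into probabilities for the word "i".
    options = counts["i"]
    total = sum(options.values())
    for word, n in options.most_common():
        print(f"P({word} | i) = {n / total:.2f}")
    # Prints: P(like | i) = 0.67 and P(drink | i) = 0.33

An LLM does something analogous over tens of thousands of possible word pieces at every step, which is why its output reads so fluently.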

Why "Large"?

The "large" refers to the vast amount of data these models are trained on and the number of "parameters" (rules) they use to generate language. The bigger the model, the more complex and accurate its answers can be.

Why Are LLMs Cool?

  • Versatile: LLMs can answer questions, summarize articles, and even create poetry.
  • Contextual: They track context well, so they can give relevant and coherent responses.
  • Creative: They can even come up with new ideas based on what they’ve learned.

Limitations to Keep in Mind:

  • No True Understanding: LLMs don’t actually understand what they say—they just mimic patterns in language.
  • Bias: If the data they’re trained on has biases, the model might reflect those too.
  • Inaccuracy: Sometimes they make up facts (called "hallucinations"), so it’s important to double-check their answers.

In Short:

LLMs are AI tools that generate text by learning from huge amounts of data. They're powerful and versatile but not perfect. They're great for writing help, brainstorming, and answering questions, but always verify the information they give you and cite the tool whenever you use its output!

Citation: OpenAI. "What Are Large Language Models (LLMs)?" ChatGPT, 6 Jan. 2025, https://chat.openai.com/.

What are AI Hallucinations? 

In the world of artificial intelligence (AI), "hallucinations" refer to situations where an AI generates information that isn't true, accurate, or doesn't exist. It's like when an AI makes things up or gives you answers that sound plausible but are completely wrong. The term "hallucination" is used because, just like a person might see or believe things that aren’t real, the AI "imagines" things that aren’t based in fact.

Imagine you're using an AI to help you write a paper. You ask it about a historical event, but instead of giving you real, verified information, the AI provides details that sound convincing but aren’t correct—like mentioning a battle that never happened or a person who was never involved. These are "hallucinations."

Why does this happen? AI systems, especially large language models, work by predicting the most likely word or phrase to come next based on patterns in the data they've been trained on. They don't actually understand or verify the truth; they just generate text that fits the patterns they've seen before, even if those patterns are inaccurate or misleading. (The short sketch below shows how this can happen.)
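
A toy sketch can show how this failure mode arises. Here the "training data" contains two true sentences, but because the model only tracks which words follow which, it can stitch them into a fluent falsehood. Everything in the snippet is invented for illustration.

    # How pattern-stitching can yield a confident falsehood (toy example).
    import random
    from collections import defaultdict

    # Two TRUE sentences in the training data.
    training_text = "the eiffel tower is in paris . the colosseum is in rome ."
    follows = defaultdict(list)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

    # After "in", the model has seen "paris" and "rome" equally often,
    # and it has no notion of which one is true for the Eiffel Tower.
    sentence = ["the", "eiffel", "tower", "is", "in"]
    sentence.append(random.choice(follows["in"]))
    print(" ".join(sentence))
    # Half the time this prints "the eiffel tower is in rome":
    # grammatical, confident-sounding, and wrong.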

Key Points:

  1. Hallucinations are errors – The AI is producing false or made-up information.
  2. They sound real but aren't – The AI’s answers may seem credible but are often fabricated.
  3. Why it happens – The AI doesn’t "know" things like a human does; it works by pattern recognition, not by reasoning or fact-checking.

Why Is This Important for You?

When you're using AI as a tool in your writing or research, it’s important to double-check the information it gives you. Just because an AI says something confidently doesn't mean it’s true. Always verify facts, especially when using AI for academic purposes, to avoid spreading misinformation or relying on false details in your work.

In short, AI hallucinations are like the little "errors" or "mistakes" that happen when AI tries to answer a question, but doesn’t get it right. Keep an eye out for these, and always cross-check with reliable sources!

Citation: OpenAI. "What Are AI Hallucinations?" ChatGPT, 6 Jan. 2025, https://chat.openai.com/.