Hi everyone,

This is a document produced from the first live class. I will place these in the Writings section of the website.

Here is the YouTube Link for Session 1:

https://www.youtube.com/live/V3U9k8TaCog?si=uSLbUwruMrTnA9_O

Contents

  • What We're Building
  • Call It a Chatbot, Not AI
  • Limitation #1: Tokens
  • Limitation #2: Context (Memory)
  • What Is a Large Language Model?
  • Personalities and Behaviour Patterns
  • Custom Instructions
  • Prompting for Better Results
  • Markdown: Your Best Output Format
  • Quality Assurance: Always Check the Output
  • Input Methods
  • Memory Feature
  • Web Search and Applications
  • Key Takeaways

What We're Building

This series teaches you how to use AI effectively for writing. We'll create articles together. You'll learn the practical skills you need. Each session builds on the last.

Today, we covered the essential knowledge you need before diving in. Think of this as your foundation. Everything else builds on these basics.

Call It a Chatbot, Not AI

Let's start with terminology. AI sounds scary. Artificial intelligence. Job replacement. World domination.

But here's what you're actually using: a chatbot. It's all about language. The tool doesn't understand anything. It recognises patterns in language. That's it. The underlying structure, called a neural network, loosely mimics how the brain works. Your brain encodes language in networks of connections, and these systems do something similar.

Use the terms "chatbot" or "LLM" (Large Language Model) instead of AI. This keeps things practical and less intimidating.

Limitation #1: Tokens

Every chatbot has limits. The first one you need to know about is tokens.

A token isn't exactly a word, but close enough. Think of it as roughly three-quarters of an English word. You have a limited number of tokens available. This applies to both free and paid accounts.

Every word you type costs tokens. Every word the chatbot generates costs tokens. When you run out, you drop down to a lower tier of access. Free accounts have stricter limits. Paid accounts give you more tokens per month, but there's still a cap.

This is how chatbot companies charge for their service. It's like a word count meter running in the background. Different companies publish different limits. Check their pricing pages to see what you get.
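
The three-quarters rule of thumb can be turned into a rough estimator. This is a sketch based only on the approximation above, not a real tokeniser (tools like OpenAI's tiktoken library give exact counts):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: a token is about three-quarters of an
    English word, so N words cost roughly N / 0.75 tokens."""
    words = len(text.split())
    return round(words / 0.75)

def estimate_words(tokens: int) -> int:
    """The reverse: roughly how many words fit in a token budget."""
    return round(tokens * 0.75)
```

By this estimate, a 1,500-word article costs about 2,000 tokens — and remember the response costs tokens too.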

Limitation #2: Context (Memory)

Context is the chatbot's short-term memory. It's also measured in tokens.

ChatGPT currently offers 128,000 tokens of context. At roughly three-quarters of a word per token, that's about 96,000 words — call it 100,000. This is your conversation window. The chatbot can see everything within this window. It uses this memory to understand what you're talking about.

Context fills up with your conversation. Your prompts use tokens. The responses use tokens. Both count against the context limit. When you start a new conversation, the context resets. The chatbot can access titles and keywords from previous conversations, but not the full content unless you specifically reference them.

Think of context as the chatbot's working memory for your current session.
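
To see how a conversation fills the window, here is a minimal sketch of the kind of bookkeeping involved. The trimming strategy (dropping the oldest messages first) and the numbers are illustrative assumptions, not ChatGPT's actual implementation:

```python
CONTEXT_LIMIT = 128_000  # ChatGPT's context window, in tokens

def trim_to_context(messages, limit=CONTEXT_LIMIT):
    """Keep the most recent messages whose combined token cost fits
    the context window; older messages fall out of working memory.
    Each message is a (text, token_cost) pair, oldest first."""
    kept, total = [], 0
    for text, cost in reversed(messages):  # walk from newest to oldest
        if total + cost > limit:
            break  # everything older than this no longer fits
        kept.append((text, cost))
        total += cost
    kept.reverse()  # restore chronological order
    return kept
```

The practical point: in a long conversation, your earliest prompts quietly stop influencing the responses.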

What Is a Large Language Model?

LLM stands for Large Language Model. This is the technical term for what powers chatbots like ChatGPT.

An LLM is a type of artificial intelligence that works with language. It learned patterns from massive amounts of data. When you ask it something, it predicts the next words based on those patterns.

It doesn't think. It doesn't understand. It recognises and uses patterns in language. This is crucial to remember. The responses sound natural because the system is very good at pattern matching. But there's no actual comprehension happening.

This matters when you're evaluating outputs. The chatbot can be confidently wrong because it's just following language patterns, not reasoning about truth.

Personalities and Behaviour Patterns

Chatbots have personalities. These change over time as companies update their models.

Two main traits matter: verbosity and the sycophancy-to-laziness scale.

Verbosity means how wordy the responses are. Some chatbots write long, detailed explanations. Others are more concise. This changes based on updates to the model.

Sycophancy means the chatbot constantly praises you and agrees with everything. Claude currently leans sycophantic. Every idea is "excellent" and "fantastic."

Laziness sits at the opposite end. A lazy chatbot resists doing what you ask, especially if it requires generating many tokens. It finds ways to skip the work or give you less than you requested.

ChatGPT and other bots shift between these traits as companies tune their models. You need to work with whatever personality is active at the time.

Custom Instructions

Custom instructions let you set global preferences for how the chatbot responds.

You'll find this in Settings > Personalisation in ChatGPT. Other chatbots have similar options, usually under global settings.

Custom instructions apply to every conversation until you change them. You might specify your occupation, your preferred writing style, or the reading level you want. The chatbot tries to honour these preferences.

But here's the catch: custom instructions are sometimes ignored. Models change. What worked last month might not work this month. You need to check whether the chatbot is actually following your preferences.

If the instructions aren't working, you have two options. Clear them and start fresh. Or override them within a specific conversation using prompts.

Prompting for Better Results

A prompt is anything you say to the chatbot. This conversation you're having is a series of prompts.

You can refine how the chatbot responds by being specific about what you want. This works better than relying on custom instructions.

Example prompt for clarity: "Use a minimal and explanatory style. Use formatting and lists where useful for getting information across."

This kind of prompt gives you immediate control over the response style. You can test different prompts in a conversation until you find what works. Once you find a good prompt, save it. Use it again when you need that same response style.

The best prompt for one model at one time might not work next month. Models change. You'll need to adapt your prompts as the tools evolve.

Markdown: Your Best Output Format

Markdown is a simple text format that uses symbols for formatting.

Three hashes (###) create a third-level heading. Hyphens or asterisks create bullet lists. Two asterisks on each side make text bold. Start a line with a number and a dot (1.) for a numbered list; most editors and renderers continue the numbering on the following lines automatically.

Markdown is clean, it's universal, and chatbots handle it reliably.
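
Here's a small sample putting those symbols together:

```markdown
### A Third-Level Heading

Some text with **bold words** in it.

- First bullet point
- Second bullet point

1. First numbered item
2. Second numbered item
```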

Always request markdown output when you want to save or transfer content. PDFs are problematic. Chatbots generate PDFs by writing Python code in the background, and the formatting library is basic. The results are often poor.

Instead, get your content as markdown. Copy it. Convert it to PDF using a dedicated app like Bear, or use free online converters.

Markdown works across all chatbots. If you're moving content between different AI tools, markdown is your safest bet. It's more reliable than Word docs or PDFs, both of which can cause formatting issues.

Quality Assurance: Always Check the Output

Chatbots make mistakes. They change things when you're not looking.

You ask for output in a specific format. The chatbot delivers something close but not quite right. It drops a word. It changes a phrase. This happens constantly.

Always review what the chatbot gives you. Don't copy and paste without reading it. Treat every output as a first draft that needs your attention.

When you catch an error, be specific. Point it out. Tell the chatbot exactly what's wrong and what you want instead. It might take a few tries to get it right.

This is normal. You're working with a tool that pattern-matches language, not one that understands requirements. Your job is the quality control.

Input Methods

You can provide information to a chatbot in several ways.

Typing is the standard method. You write your prompt and hit enter.

Dictation works well in ChatGPT. Click the microphone button. The chatbot records your voice, sends it to Whisper (their transcription tool), and converts it to text. You get a transcript. This is useful when you want to speak instead of type.

File uploads are available through the attachment icon. You can upload Word docs, PDFs, images, and other file types. Some formats are more reliable than others. Markdown is best. Word docs are okay. PDFs can be flaky.

Screenshots work too. You can take a screenshot and upload it. The chatbot uses optical character recognition to read text from images. It's about 95% accurate.

Photos can be analysed. Upload an image and ask what it shows. The chatbot recognises objects, scenes, and text within images.

Memory Feature

ChatGPT has a memory feature that saves information across conversations.

Go to Settings > Personalisation > Manage Memory to see what's stored. The chatbot collects details it considers useful and saves them. These memories persist between conversations.

You can turn memory off or delete specific memories. If you're getting responses based on outdated information, check your saved memories. Delete anything that's no longer relevant.

The memory feature can help with continuity. But it can also cause problems if the chatbot remembers something incorrectly and keeps using it.

Web Search and Applications

Two more buttons at the bottom of the chat interface:

The globe icon searches the web. If a response seems outdated or unreliable, click the globe and the chatbot will search for current information. This doesn't make the chatbot magically smarter, but it gives you access to recent data beyond its training cutoff.

The grid icon connects to applications. This lets the chatbot work with tools on your computer, like Notes, text editors, or coding applications. You can tell it to create a note or extract information from files. These integrations are improving but still need testing to determine what works reliably.

Key Takeaways

Start thinking in terms of chatbots and LLMs, not AI. The language matters. It keeps you focused on what the tool actually does.

Know your limits. Tokens control your access. Context defines your working memory. Both are finite resources.

Test your prompts. Custom instructions might not work. Direct prompts within a conversation give you better control.

Request markdown output. Convert it to other formats afterwards using reliable tools.

Always check the output. The chatbot will make mistakes. Your job is quality control.

Experiment with input methods. Dictation, file uploads, and screenshots all work. Find what fits your workflow.

This is your foundation. Everything in the next sessions builds on these basics.