Hi everyone,

I shall shortly send out the link for class 2 (1 pm). Here is an article which covers everything I included in last week's class.


AI writing tools can transform how you create content. But before you dive in, you need to understand how they actually work. This isn't about artificial intelligence taking over your writing. It's about learning to use a powerful chatbot that can help you break through writer's block and speed up your process. Here's what matters most when you start using tools like ChatGPT.

Understanding Tokens: How Chatbots Charge You

Tokens are the basic units that control your access to AI tools. Think of a token as approximately a word—the relationship isn't exact, but it's close enough for practical purposes. Every message you send consumes tokens. Every response the chatbot generates consumes tokens. Your subscription gives you a limited number of tokens per month, whether you're using a free account or paying for premium access.

This matters because when you run out of tokens, you lose access until your limit resets. The chatbot tracks every word in your conversation. Type a question? That's tokens. Get a response? More tokens. Even if you have a paid account, you're working within limits. Understanding this helps you use the tool more efficiently. You'll think twice before asking the chatbot to write endless variations of the same thing.

The specific limits vary by service and subscription level. ChatGPT offers around 128,000 tokens in what's called the "context window." That's roughly 100,000 words of conversation history the chatbot can reference. Other chatbots have different limits. Check the documentation for whatever tool you're using to know your boundaries.
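The word-to-token relationship above can be sketched in code. This is a rough rule-of-thumb estimator, not a real tokeniser: the 1.3 tokens-per-word ratio is an assumption for plain English prose (it is consistent with 128,000 tokens being roughly 100,000 words), and real tokenisers split text differently.

```python
# Rough token estimator. Assumption: ~1.3 tokens per English word,
# which matches the article's figure of 128,000 tokens ~ 100,000 words.
# A real tokeniser would give different, exact counts.

def estimate_tokens(text: str, tokens_per_word: float = 1.3) -> int:
    """Estimate the token count of a piece of text from its word count."""
    return round(len(text.split()) * tokens_per_word)

def fits_in_context(text: str, context_window: int = 128_000) -> bool:
    """Check whether text would plausibly fit in a context window."""
    return estimate_tokens(text) <= context_window

print(estimate_tokens("a short example sentence"))  # 4 words -> about 5 tokens
```

A quick estimate like this is enough to tell you whether a long document will fit in one conversation before you paste it in.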

Context: The Chatbot's Memory

Context is how the chatbot remembers your conversation. Every token you use stays in the context window until you run out of space. When you're five messages into a conversation, the chatbot can still reference what you said at the beginning. This creates the illusion of memory, but it's really just accessing recent conversation history.

The context window has a fixed size. When you exceed it, older messages drop out. The chatbot literally forgets what happened earlier in your conversation. For most writing tasks, 128,000 tokens gives you plenty of room to work. But if you're working on a long project with multiple sessions, the chatbot won't remember yesterday's conversation unless you explicitly remind it.

Some chatbots can access titles and keywords from previous conversations. They might pull in relevant information from past sessions. But they don't automatically load your entire history. That would fill up the context window immediately, leaving no room for your current work. Think of context as short-term working memory, not long-term storage.
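The "older messages drop out" behaviour can be illustrated with a minimal sketch. The message texts and token costs below are invented for illustration; real chatbots manage their windows in more sophisticated ways, but the principle is the same: the budget is fixed, and the oldest messages are the first to go.

```python
# Minimal sketch of a context window: messages accumulate until a token
# budget is exceeded, then the oldest messages fall out of "memory".
# Messages are (text, token_cost) pairs; the costs are made up.

def trim_to_window(messages, budget):
    """Keep the most recent messages whose combined token cost fits the budget."""
    kept = []
    used = 0
    for text, cost in reversed(messages):  # walk from newest to oldest
        if used + cost > budget:
            break  # this message (and everything older) is forgotten
        kept.append((text, cost))
        used += cost
    return list(reversed(kept))  # restore chronological order

history = [("greeting", 50), ("outline request", 400), ("draft", 900), ("edit request", 120)]
print(trim_to_window(history, budget=1100))  # the earliest messages fall out
```

Note that the greeting and the outline request vanish first, which is exactly why a chatbot can "forget" the brief you gave it at the start of a long session.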

What LLMs Actually Do

LLM stands for Large Language Model. It's a more accurate term than "artificial intelligence" because it describes what's actually happening. These tools work with language patterns, not intelligence. They don't think. They don't understand. They recognise patterns in text and predict what words should come next.

The chatbot learned these patterns by analysing massive amounts of text data. When you ask it a question, it's not searching a database for an answer. It's generating a response based on statistical patterns in language. This makes it fast and fluent. It also makes it confident even when it's wrong.

The chatbot generates responses that sound natural and authoritative regardless of accuracy. This is crucial to understand. It will invent dates, statistics, and quotes with the same smooth tone it uses for real information.

Your job is to verify specific claims, especially numbers, dates, quotes, and anything attributed to real people or organisations. The chatbot is a writing assistant, not a fact-checker.

Controlling the Chatbot's Personality

Chatbots have personality traits that affect how they respond. The two that matter most are verbosity and laziness. Verbose chatbots write too much. They pad responses with unnecessary words and phrases. Lazy chatbots resist generating long outputs, even when you need them. They'll give you shortened versions or skip parts of what you asked for.

These personality traits shift as companies update their models. A chatbot that was concise last month might be verbose this month. You can't rely on consistent behaviour over time. What you can do is give the chatbot clear instructions about how you want it to respond.

Tell it explicitly: "Use a minimal and explanatory style. Use formatting and lists where useful for getting information across." This type of prompt helps control verbosity. If the response is still too long, refine your instruction. Say "Give me a one-paragraph explanation" or "Keep responses under 100 words." You're training it within each conversation.

Some chatbots let you set custom instructions that apply to all your conversations. In ChatGPT, you'll find this under Settings > Personalisation. You can specify your preferred response style, your occupation, and relevant context about how you work. These settings help, but they're not reliable: the chatbot frequently ignores custom instructions, especially after model updates. You might set clear preferences about writing style or complexity level, only to find them disregarded in its responses.

The Right Way to Prompt

Prompting is how you guide the chatbot to give you useful output. Think of it as a conversation, not a search query. Start with your request, see what you get, then refine it. The first response is rarely what you actually need. It's generic because the chatbot doesn't yet know your specific context.

Ask for what you want in plain language. Skip the complicated prompt engineering tricks. Say "Write a 200-word introduction to meditation for beginners" instead of "Generate content about mindfulness." Be specific about length, tone, format, and audience. The more context you provide, the better the output will be.

When the response misses the mark, don't start over. Build on what you got. Say "Make it simpler" or "Add an example about workplace stress" or "Cut this to three paragraphs." Each refinement teaches the chatbot what you're actually looking for. After a few exchanges, you'll dial in the right prompt for your needs.

You can also prompt for style within a conversation. Try this: "For the rest of this conversation, use a minimal and explanatory style. Use formatting and lists where useful." The chatbot will adjust its responses accordingly. This gives you more control than the global custom instructions because it applies immediately to your current work.

Getting Output You Can Use

Chatbots don't handle PDFs well. When you request a PDF, the output often has formatting issues, overlapping text, or missing content. The chatbot generates PDFs using a simple Python library that produces messy results. Don't fight with it.

Request markdown instead. Markdown is a simple text format that uses symbols for formatting. A hash symbol (#) creates a heading. Two asterisks (**text**) make text bold. Hyphens create bullet lists. It's easy to read as plain text and easy to convert to other formats.
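The three rules above are simple enough to demonstrate with a toy converter. This is only a sketch covering exactly those rules; real converters (the Python "markdown" package, or Pandoc) handle the full markdown syntax.

```python
# Toy converter for the three markdown rules described above:
# "# " -> heading, "- " -> list item, **text** -> bold.
import re

def markdown_to_html(line: str) -> str:
    """Convert one line of simple markdown to HTML."""
    if line.startswith("# "):
        return f"<h1>{line[2:]}</h1>"
    if line.startswith("- "):
        return f"<li>{line[2:]}</li>"
    # **bold** -> <strong>bold</strong>
    return re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)

print(markdown_to_html("# Heading"))           # <h1>Heading</h1>
print(markdown_to_html("Make **this** bold"))  # Make <strong>this</strong> bold
```

Because the rules are this mechanical, any app or converter produces the same clean result, which is exactly why markdown travels so well between tools.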

Copy the markdown output and paste it into a text editor or note-taking app. Many apps can convert markdown to PDF, Word documents, or HTML. Bear is one option for Mac users. There are free online converters that work just as well. This two-step process gives you clean, properly formatted output without wrestling with the chatbot's PDF generator.

Markdown also works better as input. If you're feeding information to the chatbot, give it markdown when possible. It's more precise than PDFs, which the chatbot often misreads. When you're moving content between chatbots or building on previous work, markdown maintains formatting without introducing errors.

Working with Files and Voice

Most chatbots let you upload files directly. Look for the upload button in the interface—usually a plus sign or paperclip icon. You can upload Word documents, PDFs, images, and text files. The chatbot will extract the content and work with it.

Images deserve special mention. The chatbot can read text in images using optical character recognition. Take a photo of a handwritten note or a printed page, upload it, and the chatbot will transcribe it. It typically gets around 95% of the text correct, which is good enough for most purposes. You can also upload images and ask "What is this?" for identification and analysis.

For dictation, use the microphone button. ChatGPT's dictation system is among the best available. It sends your audio to a service called Whisper, which transcribes it accurately and returns the text. This works well for capturing thoughts quickly without typing. You get a transcript you can copy and use elsewhere.

Some chatbots also offer a conversation mode. Instead of typing or dictating individual messages, you can have a spoken conversation. The chatbot responds with voice, creating a more natural interaction. This works well for brainstorming or talking through ideas when typing feels too slow.

Memory and Custom Instructions

Chatbots can remember information across conversations if you enable the memory feature. When the chatbot notices something important—your writing style, your preferences, details about your work—it can save that as a memory. These memories carry over to future conversations, making responses more personalised.

You control this in the settings. In ChatGPT, go to Settings > Personalisation > Memory. You can view saved memories, delete specific ones, or turn the feature off entirely. Turning it off means each conversation starts fresh with no reference to previous sessions.

Custom instructions work differently. They're global settings you create to tell the chatbot how you want it to behave. You might specify that you're an online educator, that you prefer concise responses, or that you want explanations without jargon. The chatbot applies these instructions to every conversation.

The catch: chatbots don't always follow custom instructions consistently. Model updates can change how well the instructions work. What worked perfectly last month might be ignored this month. This is frustrating but manageable. When you notice the chatbot ignoring your instructions, prompt it directly within the conversation instead. That tends to work more reliably.

What to Watch Out For

The chatbot will confidently state things that aren't true. It invents facts, misremembers dates, attributes quotes to the wrong people, and creates plausible-sounding statistics that don't exist. This happens because it's a pattern-matching language model, not a system for retrieving verified information. It doesn't know the difference between fact and fiction in its training data.

Never trust specific claims without verification. If the chatbot gives you a statistic, check it. If it quotes someone, find the source. If it describes a historical event, confirm the details. This applies especially to anything involving numbers, dates, names, or direct quotes. The more specific the claim, the more likely it is to need fact-checking.

The chatbot also changes your content when reformatting it. Ask it to convert something to markdown or create an image? It'll often rephrase sentences, drop details, or add its own interpretations. Always compare the output to your original. Don't assume the reformatted version matches what you asked for.

Finally, be aware of token costs. It's easy to burn through your allocation quickly when you're having long conversations or generating multiple drafts. Keep an eye on your usage if you're on a limited plan. When you hit your limit, you'll lose access until it resets, which can disrupt your work.

The Bottom Line

Chatbots are powerful writing tools when you understand their limitations. They break through writer's block. They generate first drafts quickly. They help you refine ideas through conversation. But they don't think, don't fact-check, and don't replace human judgment.

Use them to handle the initial heavy lifting, then apply your expertise to make the output actually good. Prompt clearly. Refine through conversation. Request markdown for clean output. Verify specific claims. Edit everything before using it. The best results come from treating the chatbot as a collaborative tool, not an autopilot for your work.