According to the Internet, generative AI services like ChatGPT are inflating a massive economic bubble that’s about to pop. Simultaneously, Microsoft sees so much potential it’s resurrecting a whole-ass nuclear power plant, and OpenAI has a valuation in excess of 150 billion dollars.
Which take is right? I suppose we’ll find out.
But, like…what can ChatGPT do? What are you getting if you go to chatgpt.com and sign up for a $20/month subscription?
In my opinion: quite a lot. ChatGPT is now a core service I use every day. Frankly, I’d be willing to pay $50 a month, or more (but don’t tell OpenAI that).
Here’s how I’m using it, and how you can, too.
Brainstorming
Thinking is hard.
I imagine most people would intuitively agree with that. We’ve all had the experience of standing up after an eight-hour stretch sitting in front of a computer…and feeling utterly exhausted.
And it’s not just our minds playing tricks on us. Thinking hard really does tire us. There’s debate on what, exactly, that means (is willpower really a limited resource, or not…? how much energy does our brain consume, exactly…?) But, in general, thinkin’ ain’t easy.
ChatGPT can help.
Look: I’m not saying it’s “intelligent.” I find that debate tiring, since no one can even agree what counts as intelligence to begin with, so it all comes down to how you frame it.
What ChatGPT can do, though, is generate things in response to a question or prompt.
And I often find those things are better than nothing. It’s an icebreaker, a block stomper, a thought starter.
In fact, brainstorming with ChatGPT helped me kick off this newsletter.
In the past, I’d approach it by starting with an empty Word document. I’d think up some names, outline some post ideas, and come up with a basic strategy. The result would be a general outline of what the newsletter would cover.
But that’s not what I did this time. Instead, I asked ChatGPT: “I want to make a Substack about home office tech and trends. Do you have any suggestions?”
Here’s a Google Doc that shows its response.
Nothing it suggested was revolutionary. But nothing I might’ve outlined would’ve been, either, and I easily could’ve spent half a day hemming and hawing over it. It got my gears moving and helped me take a step towards turning an idea into reality.
ChatGPT did offer a title suggestion I liked, though, and it became the name of this newsletter.
More recently, I’ve also taken to using ChatGPT’s Advanced Voice Mode for brainstorming or outlining an idea while I take a walk. It’s a nice way to pass the time if the podcast I’m listening to turns out to be a bit dull.
How to do it: This doesn’t require a specific prompt. You can type or say whatever you’d like and ChatGPT will respond. But if you’ve a specific goal in mind for your brainstorming session, informing ChatGPT of your goal before you get started will help. The “context window” of modern ChatGPT is quite lengthy, so you can chat with it for a long time before it begins to forget the goal you set.
Dictation
My guide to buying a standing desk briefly covers the health benefits it provides. Put simply: standing up a bit throughout the day is correlated with positive health outcomes.
But it’s not a miracle solution. I still use a keyboard, mouse, and display all day. I suspect most people reading this newsletter do, too. Wrist pain and eye strain, among other problems, remain an issue.
That’s why I use ChatGPT for dictation. If my hands or wrists feel sore, or my eyes feel tired, I don’t need to type. I can talk.
I do this by prompting ChatGPT to, well, act like a dictation assistant. I used the “Create a GPT” feature with a prompt that “corrects dictation errors without altering the original meaning.”
I dictate to ChatGPT using the dictation feature built into macOS or Windows, submitting the dictation as a prompt for this GPT. It then corrects errors, removes pauses or filler words, and generally polishes up what I said.
What’s amazing about ChatGPT, and the big advantage it has over older dictation methods, is its ability to “know” specific terms or words that dictation tools struggle with.
I recently reviewed a gaming monitor from the Xiaomi brand. Most dictation software never gets “Xiaomi” right. ChatGPT does.
And if it didn’t, I could tell it to fix that with a simple, natural-language instruction, such as, “Actually, I’m talking about the Xiaomi.” Yes, older dictation software can make such corrections, too, but not as simply as typing “hey, I said Xiaomi, actually.”
Oh, and it gets better.
A particular trick that blows my mind is this: I sometimes ask ChatGPT to perform calculations in the middle of dictation.
For example, I might say something like, “Sharpness is a perk. The monitor’s 3,840 x 2,160 resolution, spread across a 32-inch panel, works out to, ChatGPT please calculate the pixel density and insert it at this point in the copy, which is excellent.”
It’s amazing, really, and helps me write when my hands are cramped and tired, or while I do a few stretches away from the keyboard.
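For the curious, the math ChatGPT does there is easy to check yourself: pixel density is just the diagonal resolution, in pixels, divided by the diagonal size, in inches. Here’s a quick sketch of the calculation in Python:

```python
import math

# Pixel density (PPI) = diagonal resolution in pixels / diagonal size in inches
width_px, height_px = 3840, 2160   # the monitor's resolution
diagonal_inches = 32               # the panel's diagonal size

diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
ppi = diagonal_px / diagonal_inches

print(f"{ppi:.0f} pixels per inch")  # prints "138 pixels per inch"
```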
Follow-up prompts, like “Please remove filler words,” can also cut out excessive wordiness and long pauses. Which is useful, because, uh, I tend to, you know, use a lot of filler words, and stuff.
How to do it: Use ChatGPT’s “Create a GPT” feature to pre-prompt it. Set up the GPT as a “friendly, helpful dictation assistant,” with the goal to “edit dictation to fix any errors in the text, but do not edit the style, tone, or substance of the text.” That’s important, as ChatGPT tends to edit dictation heavily by default.
Once set up, you can dictate using any method available on your computer, phone, or tablet. Mac, Windows, Android, and iOS all have built-in speech-to-text features.
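If you’d rather script this than click through the Create a GPT interface, the same pre-prompt works over OpenAI’s API. Here’s a minimal sketch, assuming the official openai Python package and the gpt-4o model; the prompt wording is just my setup, so adapt it to taste:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

DICTATION_PROMPT = (
    "You are a friendly, helpful dictation assistant. Correct dictation "
    "errors and remove filler words and false starts, but do not edit the "
    "style, tone, or substance of the text."
)

def clean_dictation(raw_text: str) -> str:
    """Send raw speech-to-text output to the model and return the cleaned copy."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": DICTATION_PROMPT},
            {"role": "user", "content": raw_text},
        ],
    )
    return response.choices[0].message.content

print(clean_dictation("so um the monitor is like really sharp you know"))
```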
Search
Google’s generative AI search snippets were savaged by memes when people discovered they could make ridiculous recommendations, like adding glue to your pizza so the cheese doesn’t slide off.
In most cases, though, generative AI tools, including ChatGPT, are great at search. Perplexity built a search engine around this fact. But I’ve not been using Perplexity much lately because ChatGPT often works as well.
At its most basic, ChatGPT is capable of search because it can browse the web. If you ask it a question where search might be useful, it will crawl several top-ranking websites (discovered through its partnership with Microsoft’s search engine, Bing) and summarize the information it finds.
Which, in most cases, turns up a decent answer.
But ChatGPT can go much deeper than that. I’ve learned to rely on it for finding academic sources when reporting on niche topics.
A moment ago, I described how thinking is, like, hard. But what studies back up that statement? ChatGPT had some suggestions, which led me to the links I included.
You might think you could just Google that. But these days, trying to find studies on Google is perilous. It’s not impossible, but boy, can it be difficult. Because, well, academic papers don’t run ads, so they’re rarely the top result.
AI search can get things wrong. It might cite a paper or academic who doesn’t exist. But that isn’t really a problem unless I just take ChatGPT’s reply and run with it. Apparently, a few lawyers did exactly that. Which really makes me think I need to start a consulting business, because if those folks are making $1,000 an hour…but, anyway.
In reality, I wouldn’t do that, any more than I’d trust the first search result off Reddit to diagnose a weird bit of skin on my elbow. I didn’t just log on yesterday.
So, give ChatGPT a try for search. You might be surprised by how good its results prove to be.
How to do it: Just ask ChatGPT a question. If you’d like it to search the web, include that in your prompt.
Follow-up questions are valuable, too. If the result isn’t what you expected, ask for a more specific answer. If you’re skeptical, challenge ChatGPT to defend the results of its search with more specific information and citations. “Cite that, please,” will usually help.
Keep in mind that ChatGPT can “hallucinate,” which means it will sometimes make up an answer to satisfy your query. This seems more likely to happen if you’re looking for extremely specific information. Make sure to double-check answers when accuracy is important.
Editing
ChatGPT is this newsletter’s editor.
The way this works is the same as how ChatGPT works for dictation. I provide it with some text and tell ChatGPT to look for any errors and make corrections. The only difference is that, when dictating, the text comes from an operating system’s built-in speech-to-text. When editing, the text I provide is my writing.
Generative AI tools are quite good at this. Their ability to predict what should come next allows them to find situations where the text is surprising. And surprising text is often wrong.
It’s not perfect, of course. Left to its own preferences, ChatGPT aggressively suggests correcting informal language to a more formal tone.
That’s probably a good idea if you’re writing an email to some work colleagues. A little bit of formality is a way to exercise caution and ensure that you’re being clear. But for here? I mean, we’re all friends here, right?
Still, ChatGPT frequently finds mistakes I fail to see. It’s usually no more than a few mistakes in an entire article of 1,000 to 2,000 words, but it’s enough to make a difference.
Pay attention to your prompt. ChatGPT is a good editor, but you need to be specific. Telling it to edit text may cause it to reproduce the text with the edits already included, which means you don’t get the chance to reject changes you don’t like. When I instruct ChatGPT to highlight and break out its edits, however, it does so. I can then look over its suggestions and adopt the ones I agree with.
You can also prompt it to be more or less severe. Telling it to be extremely picky and demanding, for example, can lead to some really harsh (and unnecessary!) edits. But it might be helpful if you really want to take yourself to task.
How to do it: Again, the “Create a GPT” feature is helpful. You can pre-prompt the GPT to edit in the style you’d prefer. For this newsletter, I tell ChatGPT to focus on grammar and spelling issues and allow informal tone, style, and word choices.
Also, make sure to inform the GPT that you want it to “highlight edits” and “avoid reproducing the entire text provided for edit.” Otherwise, it may reproduce your text with the edits included, which makes it hard to see what it changed!
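If you’d rather do this through the API instead of a custom GPT, the important part is the instruction to return a list of suggested edits rather than a rewritten draft. A rough sketch, again assuming the openai Python package and gpt-4o:

```python
from openai import OpenAI

client = OpenAI()

# Ask for a list of edits, not a rewrite, so each suggestion can be accepted or rejected.
EDITOR_PROMPT = (
    "You are a copy editor. Focus on grammar and spelling issues and allow an "
    "informal tone, style, and word choices. Do not reproduce the full text. "
    "Instead, list each suggested edit as 'original phrase -> suggested fix' "
    "with a one-line reason."
)

draft = "Their are a few typo's hiding in this sentance."
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": EDITOR_PROMPT},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)  # a reviewable list of suggested edits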
Problem-solving
ChatGPT might not be “intelligent” in a way we associate with a conscious, living entity, but it sure can solve some brain teasers.
I recently wrote an article about large language models with long context windows. The article references a paper in which researchers use a long context window to improve an AI model’s ability to retrieve specific text. I also spoke with Ziyan Jiang, one of the paper’s three authors.
But my editor at the publication, IEEE Spectrum, spotted an error in my draft. The Answer Recall rates I included, though found in Table 2, weren’t the ones the researchers highlighted in the abstract. My editor thought that was strange.
I agreed, but it led me to a problem. I now wasn’t sure I understood what Table 2 told me. I had interpreted it as listing three tasks alongside the Answer Recall rate for each. But, on second thought, that didn’t make sense.
So, I turned to ChatGPT for help. I uploaded the table and asked what it meant.
It turned out ChatGPT understood the paper better than I had. But, just to be sure, I presented it with my initial takeaway, which I presumed was mistaken, and told it to correct me if I was wrong. It did. Not only that, but it explained why. To quote ChatGPT:
The specific recall rate example you mention—improving from 52% to 91%—is slightly off. In the paper, the answer recall rate for passages improved from 52.24% to about 71.69% when increasing the retrieval unit size from passages to grouped documents. However, the example of improving to 91% seems to come from a different context in the paper related to increasing the number of retrieved short units to 200, not long units.
And yeah. That was my mistake. I didn’t understand the table’s data labels, which meant I had the context of the table’s data wrong, which led me to the wrong conclusion.
If you don’t see the mistake yourself, don’t worry. It’s a dense paper. I didn’t understand it properly, either. Not until I used ChatGPT to help me understand why the numbers my editor pointed out were wrong.
Having witnessed ChatGPT in action, I now use its reasoning model, o1-preview, quite often in these scenarios. I don’t look for it to provide the right answer, as I know it can be wrong (and you should always remember this, too). But I do use it to verify my understanding.
If I read a paper, ask o1-preview to summarize it, and then can’t decide whether o1-preview is right or wrong…well, clearly, I didn’t understand what I read.
How to do it: Using ChatGPT to solve problems works best if you have a good idea of what, exactly, you’re trying to solve. A poorly defined problem is a lot more likely to be met with an incorrect answer.
At the moment, o1-preview can’t accept image or PDF input, which is a big problem for summarizing and querying documents. But you can still copy-and-paste the text from a PDF into o1-preview. Or, if you’re working with an image, you can ask GPT-4o to convert the image to text, and then switch to o1-preview to solve your problem.
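If you want to script that two-step workaround, here’s a rough sketch using the openai Python package: a vision-capable model (gpt-4o) transcribes a screenshot, and o1-preview reasons over the extracted text. The file name and prompts here are hypothetical placeholders:

```python
import base64
from openai import OpenAI

client = OpenAI()

def image_to_text(image_path: str) -> str:
    """Step 1: have a vision-capable model (gpt-4o) transcribe the image."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Transcribe this table as plain text."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

def reason_over(text: str, question: str) -> str:
    """Step 2: hand the extracted text to the reasoning model."""
    response = client.chat.completions.create(
        model="o1-preview",
        messages=[{"role": "user", "content": f"{question}\n\n{text}"}],
    )
    return response.choices[0].message.content

table_text = image_to_text("table2.png")  # hypothetical screenshot of the paper's table
print(reason_over(table_text, "What does each column in this table report?"))
```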
What you definitely shouldn’t use ChatGPT for
So, look: I think ChatGPT is a pretty great tool. But it’s not always the best tool for the job, and in some situations, using it can get you in a lot of trouble. Don’t use it for…
Working with private, sensitive, and/or legally protected data: Avoid sharing sensitive or confidential information with ChatGPT, especially in the consumer version.
Relying on ChatGPT as your single source of truth: Don’t rely solely on ChatGPT for factual accuracy; always verify important information from trusted sources.
When use is specifically prohibited: Keep a lookout for company policies or contractual restrictions on using generative AI. It’s not uncommon for companies or contracts to specifically call out generative AI as prohibited: one of my clients, Ziff-Davis, has such a clause in their contract.
ChatGPT might not be useful for long
While I like ChatGPT, and I think it’s useful, I’m not sure how long ChatGPT—as in, the chatbot you can access online—will remain so.
I expect all five tasks described in this newsletter will soon be solved by AI-powered features in common software and operating systems.
Some, like AI-powered dictation, are already well on their way. Others, like brainstorming, may require a more specific and original UX design to break through, but I imagine will get there eventually. As these AI features become baked into the devices and software we use every day, they’ll gradually become invisible, fading into the background of our everyday work.
It won’t be long until most people don’t even think about using AI for writing, editing, programming, data analysis, or any other task.
Absent taking specific measures to avoid it—which, to be sure, is a path some will take, as was true with many technologies through history—generative AI will just be there, hanging out, without the user needing to know or think about it.
For now, though, ChatGPT is a great preview for an incredible constellation of upcoming features, services, and apps that will pick out specific tasks and enhance them with AI.
Building the UX to present these features and testing them to make sure they do what they say on the tin takes time. Years. Decades, even. But as major feature launches like Apple Intelligence have shown, it’s already happening.