The Most Important ChatGPT Prompt

The first time I introduced ChatGPT to my brother, it took him less than five minutes to declare, “This thing is stupid and useless.” This was about eight months ago. He asked it to write a lab report for him from the ground up, and I assured him it would do a great job. He pasted in his lab instructions, the report questions, his experimental data, and so on. To put it succinctly, the lab report ChatGPT produced was garbage: disconnected sections, hallucinations, and raw data stuffed in wherever it seemed to fit. It was so bad that it was useless even as a starting point. My brother gave up on ChatGPT and went back to writing his reports by hand. I had to admit I was disappointed too.

A contextual realization

Reflecting on this experience, I wondered why ChatGPT had performed so poorly when I had assured him it would not. I thought back over my own successes with GPT and suddenly realized that ChatGPT works best when it is given enough information. ChatGPT is very good at semantics: it understands the deep relationships between almost any pieces of text. But it has a big weakness: it cannot infer what you never told it. ChatGPT’s real job is to produce text that looks like what a real person would write, and it will do its best to make the output look convincingly human, even if that means making things up (and making you look like a liar). If you ask ChatGPT to produce something, especially sensitive and complex writing like a lab report, without giving it enough information, the result will be a disaster. And how do you know whether you have given it enough? You usually cannot tell from ChatGPT itself: most of the time it will proceed as if it has all the information it needs, even when it does not.
