24 Comments

This is a wonderful post, thank you for the deep dive.

I've had similar experiences with political correctness (PC) in AI, and I'm not confident it's an issue many AI companies are willing to tackle, at least at scale.

Readers interested in PC may be interested in this Munk debate featuring Stephen Fry, Jordan Peterson, Michael Eric Dyson, and Michelle Goldberg. I was particularly moved by Stephen Fry's performance: https://youtu.be/GxYimeaoea0?si=AFF-ZyG2PMBLALQ8

Author · Mar 26

Thanks for your kind feedback and the fantastic link. I'm glad you enjoyed this deep dive. Listening now :)


At the start of this post I was very much against your whole point, but by the time I got to the end I thought you provided some great examples and critical thinking to justify your position, and I enjoyed hearing your perspective!

Two comments I had:

"My answer is ‘no’. ChatGPT doesn't learn or retain information from interactions in the way humans do. It can’t update its knowledge or learn from interactions in real-time"

This can be done by uploading previous content and setting instructions via documents uploaded into the knowledge base of a custom GPT. Also, Inflection's Pi model seems to have a nearly infinite context window; it has learned a lot about me over the months and knows my preferences for how to respond (also, it's free!). Upload a few chess books into a GPT, then play chess and see how it does.

"We present ourselves as remarkably finely tuned machines. After all, we are the culmination of billions of years of evolution."

I think being the culmination of billions of years of evolution is a bug, not a feature. Evolution was a random walk of trial and error that led our species to higher intelligence, and it looks like that is not how AI will evolve, since engineers are focused on improving weak points with each model update.

I highly doubt these issues will show up in GPT-5, which is expected in the next quarter. GPT-4 is based on 2022 technology, so I think there will be a pretty significant upgrade in reasoning.


This is a fantastic deep dive, Nat!


Hey, Nat,

I was using GPT-4 last night for some higher level editing. It was a multi-pronged effort--managing voice, multiple-shot training, the whole kitchen sink. Then, in a moment of brilliance, GPT took things to the next level. An incredible synthesis, but on top of that it pushed things into a new creative direction that just amazed me. As I gasped with wonder, the whole system started glitching. GPT got locked into a circular refiring of an additional prompt. I decided to shut things down because I didn't want to use up my limited allotment of prompts per 3 hours. When I reopened GPT, I had no access to any particular model (3.5 vs. 4), search history, etc. Just a prompt window and all prompts timed out before yielding results. At that point, I decided to give things a break for a night. Today, GPT is up and running again on my end. Weird stuff. Thought you might find this story useful.

Author · Feb 27

Thanks for the update, Nick. I’m not sure, but my gut suggests that OpenAI is onto something significant. Over the past two days, the model, presumably GPT-4 Turbo, has repeatedly stated in separate sessions that its knowledge was last updated in September 2021. Such glitches seem to be commonplace nowadays. For instance, one of my chat sessions vanished entirely. I tried to recover it, but the chat simply disappeared. I anticipate the arrival of GPT-5 soon.

Feb 26 · Liked by Nat

Well, Nat. You never cease to surprise me. This was massive. I thought AI was good at reading research papers and now I realise I need to double-check every output :/

Author · Feb 26 (edited)

You should give it a try :) And thanks for reading, Sergio!


I honestly wonder how we're going to remember this so-called emergence of generative AI. I think I'm mostly going to remember the hype and how Nvidia's stock behaved.

Author · Feb 26

There is a big elephant in the room: I think the election will provide a perfect answer to this question.


I ask GPT4 to proofread most of the things I write, and then ignore 75% of its suggestions. Definitely collaborative, not hands-off!

Author · Feb 26

Hi, Andrew. The problem is not only proofreading; I faced some major issues with it. My main focus was on proofreading because it simply makes me sound 'neutral' and politically correct, and practically forces me to avoid personal opinions. That's a serious issue!


Oh, absolutely. I'm just sharing the way I use it personally.

There are some things the tool is great for, but all of it necessitates double-checking, even basic research. Knowing these limitations is really important, and I'm glad you're making sure folks know here.

Author · Feb 26

You're right, all of it requires double-checking. I must admit, however, it used to work better in the past.


To me, GPT4 hasn't really ever been better than it is right now, but I'm also clearly using the tool for different things than you are. I wonder if some features have improved or stayed static, while others have gotten much worse for various reasons. Just my 2 cents here as a daily user!


AI proofreading is incredibly frustrating because it should work better than it does. Maybe we just need to figure it out, but I agree that it tries to rewrite everything, often changing the tone and meaning.

Author · Feb 26

Thanks for the feedback, Logan. Actually, Bing is the only AI I recommend to everyone for proofreading. Look at Bing in action:

“Influencing” implies a positive or neutral impact, while “affecting” implies a negative or undesirable impact. Since you are talking about a problem or issue, “affecting” would be more appropriate.


I'll have to check this out. I'm at the point where I'm debating training my own model specifically for proofreading. 🤣

Feb 26 · Liked by Nat

This was phenomenal!!!!

Author · Feb 26

Thanks for your kind words, Nana 🙌


Great read, Nat. It's astonishing that learning seems to be a static process locked to a point in time rather than a dynamic, progressive, and potentially remedial process.

Author · Feb 26

Thanks for reading, Paul. Glad you loved it. Unfortunately, that's what we have now


An enjoyable read as always, Nat.

Could it be that you've experienced ChatGPT during the brief "Gone Wild" phase several days ago, before it was hastily patched up by OpenAI? (https://www.thedailybeast.com/openais-chatgpt-went-completely-off-the-rails-for-hours)

As for the "Powers of 10" video, it's great. One of my favorite YouTube channels, Kurzgesagt, did an interesting take that's a bit reminiscent of that approach: https://www.youtube.com/watch?v=Z_1Q0XB4X0Y

Author · Feb 26

Thanks for the valuable feedback, Daniel. Something serious might have happened, since GPT-4 Turbo has claimed several times that its knowledge was cut off in September 2021.
