Full disclosure: I have only skimmed the article you linked, not the original paper. That said, I find it to be the usual highly publishable, slightly dramatic, click-generating, almost-sensible-but-not-quite semi-nonsense about AI, aimed at an audience that is not already knee-deep in the topic.
To answer your question: no, writing ChatGPT prompts is not a viable strategy for solving the P vs. NP problem, or any other unresolved question. Nothing under the hood is even remotely related to the issue at hand.
- The GPT "knows" nothing about any kind of semantic content regarding the questions. It has no knowledge, it knows no facts(*), it has no logic engine or anything like that. It is literally, only, simply blabbering words. "GPT" means "Generative Pre-Trained Transformer". It transforms words and sentences, it does not have the singlest clue what it is talking about. So if you ask it something related to a topic, it will - due to its unfathomably vast amount of training data - simply spew responses that have been recombobulated from some input text that was parsed during training. And not because the training stage somehow "understands" the input, but simply because a statistical algorithm concludes which words are fitting best. It is not, conceptionally, much different from the YouTube algorithm generating a list of videos on the right-hand side of your screen, to induce you to watch more.
- Even if, by some magical intervention or by incredibly unlikely coincidence, the GPT did stumble across the correct answer (e.g. because the training data contained the answer without any human having recognized it so far), it would be impossible to extract the reasoning behind it. That is, even if the GPT says "the answer is THIS", there is no way for it to lay out a formal proof that would make us believe it. Worse, no matter how well you write your prompts, GPTs "hallucinate" by design and on principle - their answers always sound 100% confident, even when they are blatantly wrong.
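To make the "statistical algorithm" point concrete, here is a deliberately tiny sketch in Python (my own toy example, nothing from the article): a bigram model that picks the next word purely from counts of which words followed it in its training text. A real GPT is a transformer over subword tokens with vastly more context and parameters, but the underlying principle - predict the next token from training statistics, with no comprehension involved - is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" standing in for the Internet-scale corpus of a real GPT.
training_text = (
    "the proof is left as an exercise . "
    "the proof is trivial . "
    "the question is open . "
    "p versus np is open ."
)

# Count, for each word, which words followed it and how often.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling a statistically likely next word."""
    out = [start]
    for _ in range(length):
        candidates = follow_counts.get(out[-1])
        if not candidates:
            break
        tokens, counts = zip(*candidates.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
# Possible output: "the proof is trivial . the question is open"
# Fluent-looking, yet produced with zero understanding of proofs or P vs. NP.
```

The output can look superficially sensible, which is exactly the trap: the fluency comes from word statistics, not from any grasp of the subject.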
To put it completely untechnically: imagine a GPT as a person sitting in their basement, consuming some social media site (Reddit, Facebook, etc.) day in, day out, and learning everything they can about a certain topic from that - a topic they have zero previous experience or knowledge of, and which they never encounter even once in real life. That person will soon be an expert at repeating information about the topic, and will then happily repeat said information on the social media site or in real life (i.e., the classic echo chamber). The GPT is very much like that, except with a "brain" that has zero thought processes going on; it only babbles what it has heard. Its basis, though, is not just a single Reddit sub, but a good chunk of the entire textual content of the Internet as of a few years ago, in all forms (scientific documents, social media, forums, and so on).
It is still fair to call it an AI (artificial intelligence) in the sense that the technology has become so good at seeming or simulating intelligence that lay people have a hard time believing it is not actually intelligent. If you split the concept of "intelligence" into many small parts, you may attribute a few of them to the GPT (e.g. being able to parse and generate language, and to memorize ungodly amounts of data). But beyond that, there is not an iota of intelligence in the sense required to solve any kind of open problem. As of 2023, it is simply a tool for humans to wield - an awesome tool, very powerful and incredibly fascinating, but still a tool.
Some people argue that because the GPT is based on ML (machine learning), with the usual caveat that we don't really know how that particular black box works, there could be some kind of "spark" inside the ML part which somehow, spontaneously, creates something else altogether - a SciFi-movie kind of AI that is suddenly much, much more than what we thought. This is not the case. Yes, we cannot trace what is going on inside the model itself (which is exactly why we can never fully trust its reasoning), but the scientists implementing GPTs absolutely do know how everything works. Yes, the size of the LLM inside the GPT is unfathomable, but we know what it is: basically a humongous pile of statistics about words, nothing more and nothing less. The wonder is that even though it is only that, we get these great results. The real achievement belongs to the human programmers who figured out how to build all the machinery surrounding it to make it seem so real.
(*) This is slightly simplified; there are techniques to inject actual facts into the process by embedding them in prompts. This is an area of ongoing research and is very useful, e.g. for uploading a long document to a GPT and asking meaningful questions about it ("what statement does this document make about X?"), but it is definitely not something that would magically lead to solving the P=NP question.
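For what it's worth, the basic idea behind that technique is not magic either. Here is a hypothetical sketch (the function name and prompt wording are my own invention, and the actual call to a model API is left out): the document's text is simply pasted into the prompt, and the model then predicts likely words with that text sitting right in front of it.

```python
def build_prompt(document_text, question):
    """Embed a document's text directly in the prompt so the model can
    'answer' questions about it by predicting words in that context."""
    return (
        "Answer the question using only the document below.\n"
        "If the document does not contain the answer, say so.\n\n"
        f"--- DOCUMENT ---\n{document_text}\n--- END DOCUMENT ---\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    document_text="(the full text of the uploaded document goes here)",
    question="What statement does this document make about X?",
)
# 'prompt' would then be sent to whatever language model you are using.
print(prompt)
```

In practice this is usually combined with retrieving only the relevant passages (prompts have a length limit), but the principle is the same: the facts come from the document you supply, not from any understanding inside the model.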