I have been experimenting with ChatGPT and, like many of you, I am highly critical of the way it sounds overly confident while being completely wrong. I am slowly making my peace with this. First, it is important to note that a high percentage of humans suffer from the same malady. I also have to remind myself that results returned from our favorite search engines need a healthy dose of skepticism too. Over the years we have learned to manage sources that might contain information of dubious veracity.
I am also seeing a lot of developers admit to a certain level of fear and trepidation in the face of this technology, so let me say this:
If you think of software development purely as an exercise in generating lines of code, then ChatGPT, or some future version of it, will most certainly do that. But if software development is an expression of your values, your employer's values, your business goals, and your perspective, then ChatGPT will not, in any near future, independently do that in your place.
Contrary to popular opinion, ChatGPT is not actually intelligent. For that, it would need an internal model of the world, and the words it uses would reflect that internal model. ChatGPT, however, has a model of the "words" we use, which is incredibly impressive but should not be confused with our own brand of intelligence.