I have been experimenting with ChatGPT and, like many of you, I am super critical of the way it sounds overly confident while being completely wrong. It is important to note that a high percentage of humans suffer from the same malady. I also remind myself that the results returned from our favorite search engine need a healthy dose of skepticism. We have learned to manage sources that might contain information of dubious veracity.
A lot of developers have admitted to a certain level of fear and trepidation in the face of this technology, so let me say this:
If you think of software development purely as an exercise in generating lines of code, then ChatGPT, or some future version of it, will most certainly do that. If software development is an expression of your values, or your employer's values, your business goals, and perspectives, then ChatGPT will not, in any near future, independently do that in your place.
Contrary to popular opinion, ChatGPT is not actually intelligent. For that, it would need an internal model of the world, and the words it uses would reflect that internal model. ChatGPT, however, has a model of the "words" we use, which is incredibly impressive but should not be confused with our own brand of intelligence.