I'm a bit of a dinosaur when it comes to software development. I've been on the rollercoaster, chasing the highs of working with a new language or toolset. I've ridden through the lows when a technology I really enjoyed working with ended up abandoned. (Silverlight, don't get me started. For a website? Never. For intranet web apps? Chef's kiss.)
I'm honestly not that worried about AI tools in software development. As an extension of existing development tools, if they make your life simpler, more power to you. Personally, I don't see myself ever using them, for a few reasons. The first is the same reason I don't use tools like ReSharper, with its 50,000 different hotkey combinations for inserting templated code and so on: for me, coding is about 75% thinking and 25% actual writing. I don't want to write code faster, because I don't need to. Often, while thinking about code, I realize better ways to do it, or in some cases, that I don't actually need that code at all. Having moar code fast can be overwhelming. Sure, AI tools are (hopefully) trained on best practices and should theoretically produce better code the first time around, needing less refactoring, but that time to think and tweak is valuable to me.

It's a bit like the tortoise and the hare. Someone with AI assistance will probably produce a solution far faster than someone without it, but at the end of the day, what good is speed if you're zipping along producing the wrong solution? Call me selfish, but any developer should also see the writing on the wall: if a tool saves them 50% of their time, employers are going to expect 100% more work out of them in a day.
The second main reason I don't see myself using AI: when it comes to stuff I don't know, or need to brush back up on, I want to be sure I fully understand the code I'm responsible for, not just request something from an LLM. Impostor syndrome is already a problem in many professions, and I don't see the situation getting anything but worse when a growing portion of what you consider "employment" is feeding and changing the diapers on a bot. I have the experience behind me to look at the code an LLM generates and determine whether it's fit for purpose or the model's been puffing green dragon. What somewhat scares me is "vibe coding," where people who don't really understand code use LLMs in a form of trial and error until something appears to work. Building a prototype? Great idea. Something you're going to convince people or businesses to actually use, with sensitive data or decisions with consequences? Bad, bad idea.
Personally, I see the usefulness of LLM-based code generation plateauing rather quickly. It will get better, to a point, as it continues to learn from samples and corrections written and reviewed by experienced software developers. However, as GitHub fills up with AI-generated code, and sites like Stack Overflow die off because the new generation of developers consults LLMs for "get this working for me" rather than "explain why this doesn't work," the models will increasingly be trained on their own output, and the overall quality of generated code will start to slip. With luck, the slide will be noticeable before major employers dive all-in, giving up on training new developers to understand code and solve problems, and before all of us dinosaurs retire.
Until then, I look forward to lucrative contracts sorting out the messes that ChatGPT-powered greenhorns get themselves into. ;)