With the rapid advancement of AI, and with new tools and software now being built from scratch with AI, we’re seeing a huge shift in the types of jobs available. I believe AI will create more high-quality jobs in AI-related fields while eliminating many repetitive, manual ones. That shift could make for a difficult transition in some sectors.
What do you think? Will AI be a net job creator or a job destroyer in the long run? And how can we best prepare for these changes?
No, AI will not be replacing developers. It will continue to fabricate incorrect information, because that is how language models are designed.
Learn to code without it and complete tasks without its input. If you don’t, it’ll become an addiction and you won’t be able to finish anything yourself.
I understand where you’re coming from, and I agree with some of your points. However, as AI models continue to be trained and fine-tuned, they will get better at generating correct answers and make fewer errors over time. Accuracy still depends on the quality of the input data and on how developers use these tools.
To me, AI functions more like a personal tutor: it can break complex code down into simpler concepts, which helps with learning. But as with any new technology, I think it’s best to embrace AI and learn to use it effectively while maintaining a strong foundation in coding skills.
The LLMs behind current “AI”, as that term is commonly understood, are merely statistical token combiners. They don’t actually have any ability to reason. If the job requires thinking, it’s still safely in the realm of humans for now.
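To make “statistical token combiner” concrete, here’s a toy bigram sketch in Python. The tokens and counts are made up for illustration, but the mechanism, sampling the next token from learned co-occurrence statistics, is the whole trick:

```python
import random

# Toy bigram "language model": made-up counts of which token follows which.
# A real LLM learns billions of such statistics from text; there is no step
# anywhere in generation where it checks facts or reasons about the answer.
bigram_counts = {
    "the": {"cat": 3, "dog": 2, "code": 5},
    "cat": {"sat": 4, "ran": 1},
    "code": {"compiles": 2, "fails": 3},
}

def next_token(token: str) -> str:
    """Sample the next token in proportion to how often it followed `token`."""
    candidates = bigram_counts[token]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Generate a "sentence": every step is just weighted sampling.
token = "the"
output = [token]
while token in bigram_counts:
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```

Scale that idea up enormously and you get fluent text, but the output is still only ever “what plausibly comes next”, not “what is true”.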
Agreed. Beyond reasoning and logic, many LLMs are trained on specific datasets and rely heavily on their memory. The real test is how these models respond to unfamiliar questions; they definitely need to become more robust. But I think this will improve gradually as the technology progresses. Check out the article “Reasoning skills of large language models are often overestimated”; it’s really interesting.
LLMs don’t have a memory. LLMs have training data.
These models do not handle data outside of their training set very well, because statistics is inherently good at interpolation and bad at extrapolation. That’s just how the math works, and it’s not something ‘technology’ can simply bypass.
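You can see the interpolation/extrapolation point in a dozen lines. This is nothing LLM-specific, just a NumPy toy curve fit I picked to illustrate the statistical claim: the model is accurate inside its training range and badly off outside it:

```python
import numpy as np

# Fit a cubic to noisy samples of sin(x) on [0, 6] (the "training set").
rng = np.random.default_rng(0)
x_train = np.linspace(0, 6, 50)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=x_train.size)
coeffs = np.polyfit(x_train, y_train, deg=3)

# Inside the training range (interpolation) the fit tracks sin(x) closely...
x_in = np.linspace(0, 6, 100)
err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).mean()

# ...outside it (extrapolation) the same model diverges without bound.
x_out = np.linspace(6, 12, 100)
err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).mean()

print(f"mean error inside training range:  {err_in:.3f}")
print(f"mean error outside training range: {err_out:.3f}")
```

Run it and the out-of-range error dwarfs the in-range error. Bigger models and more data widen the region you can interpolate over; they don’t change the math of what happens outside it.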