New thinking in 2026

No, human translation is going to remain the norm for freeCodeCamp Programming Tutorials: Python, JavaScript, Git & More (we do have an AI-assisted workflow for people who want that too, but its output has to be treated as a pre-translation and reviewed quite heavily - at least for the Italian contributions I saw, volunteer translators seem to prefer translating on their own). With the curriculum translations, though, we have never been able to keep pace with the released material using volunteer translations alone. Every time a challenge is updated it needs to be translated again, and every time new material is released we need to translate that too… (I worked on Italian translations for a good while; it’s Sisyphean.)

We are starting with Spanish and Portuguese. So far I have seen the same complaints about the translated parts that can arise with volunteer translations (for example, text that should stay in English getting translated accidentally, like text meant to go into the code - that happened quite often even when we had volunteer translations only), but more people are completing the courses.


According to you, which new idea or trend will have the biggest impact in 2026?

The realization that AI isn’t actual intelligence.

Continuing the thread of thought above while actually addressing the topic: AI is only “artificially” intelligent. LLMs can know a lot of stuff, but they aren’t what most people would consider “smart”, nor can they perform the role that a “smart” person would be able to perform.

Something like mental health is so nuanced that even experts don’t have all the answers. An LLM-based AI will have the same shortcomings as experts, along with its own shortcomings, which are vast.

Going beyond mental health, LLMs fall apart even on basic tasks, simply because they are machines imitating thought (see the vending machine project). There is also a lot of indication that most companies are using AI completely incorrectly and seeing huge numbers of “failed projects” (ref). This is probably due to the sheer inability of LLMs to solve real-world problems by themselves, combined with widespread attempts at adoption even without justification.

Ultimately, I believe 2026 will be the year a divergence occurs between those who understand how to use AI correctly and those who do not. This has always been true, but the hype train powered both sides equally; with clearer research being done into how and why AI has more or less failed to live up to the hype, actual work will get done in the realms where it actually works. This goes for companies and consumers alike.

Specific companies will see pushback from investors, as all signs point in the wrong direction with regard to “AI taking over jobs”. Others will see increased investment as they figure out how to correctly leverage this technology, along with stabilizing finances (see the recent price changes/increases).

Consumers themselves are a little trickier, as they follow the hype, but even your average “AI user” already knows some of the limitations and sees the “seams”. No longer does anyone who uses AI expect it to get substantially better quickly, or expect some exponential explosion of capability. It might be in the headlines, but practically speaking, the “smarts” of AI are at their peak.

Which leaves me at what I believe the average new developer (the target audience of this forum/platform) should focus on:

It’s understanding the underpinnings, advantages, and disadvantages of LLM-powered AI. The future might still be AI-powered, but it will be written in code, with an LLM being only one aspect of the stack. Even today, the most successful AI-native systems are ones where the LLM plays only a small but key part, with humans and software code there to keep things deterministic.
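As a rough sketch of what that architecture can look like (entirely hypothetical code; `call_llm` is a stand-in for whatever model API you use, and the ticket-classification task is just an example I made up):

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call;
    returns a canned reply so the sketch runs end to end."""
    return '{"label": "billing"}'

def classify_ticket(ticket_text: str, max_retries: int = 2) -> str:
    """The LLM handles the one fuzzy step (classification);
    everything around it is plain, deterministic code."""
    allowed = {"billing", "bug", "feature_request", "other"}
    prompt = (
        f"Classify this support ticket as one of {sorted(allowed)}. "
        'Reply with JSON: {"label": "..."}\n\n' + ticket_text
    )
    for _ in range(max_retries + 1):
        raw = call_llm(prompt)
        try:
            label = json.loads(raw)["label"]
        except (json.JSONDecodeError, KeyError, TypeError):
            continue  # malformed output: retry deterministically
        if label in allowed:
            return label  # validated against a fixed set of labels
    return "other"  # deterministic fallback (or escalate to a human)

print(classify_ticket("I was charged twice this month."))  # billing
```

The non-deterministic model call is boxed in by validation, retries, and a fallback, so the overall system behaves predictably even when the model does not.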

Finally, keep questioning everything and stay curious. AI/LLMs are here to stay, perceptions will change, and the hype train will continue to chug, but at the end of the day all of this is built on the backs of human software engineers. So keep learning, keep building, and keep questioning!


I had written a post but didn’t want to get into this endless debate. However, this is exactly what I had written, so thanks for sharing that!

I think calling this technology “AI” is extremely premature and has led many people to treat it as if it’s Data from Star Trek or a holographic AI from a movie. The term has been heavily primed by Hollywood, and what we’re working with now couldn’t really be considered “intelligent” in that sense at all.

It only seems intelligent because we appear to converse with it. We’ve now even had to invent the term AGI because what we’ve been calling “AI” falls short.

People assume it has some kind of superior way to process data that allows it to draw correct conclusions from advanced pattern recognition by analyzing petabytes of data. The problem is that it kind of does, sometimes, in a way, so it’s misleading.

It’s useful, but not intelligent, so it’s not AI. But it is something.

This was extremely eye-opening about how ML/LLMs (I cannot call it AI for the above reasons) are used correctly by someone who understands them well:

https://x.com/bcherny/status/2007179832300581177

It seems quite different to how most people think of it and use it. Presumably more professional coders have it integrated into their workflows in a similarly useful way?

According to Wikipedia:

The Turing test, originally called the imitation game by Alan Turing in 1949,[2] is a test of a machine’s ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. The results would not depend on the machine’s ability to answer questions correctly, only on how closely its answers resembled those of a human. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).[3]

Saying something that is incorrect/wrong is not considered valid grounds for failing a Turing test. An AI could pass a Turing test, hallucinations and all, as long as the evaluator cannot distinguish between the AI and the human.
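To make “cannot reliably tell them apart” concrete, here is a minimal toy scoring sketch (my own framing, not from the Wikipedia article; a real test would use a proper statistical threshold rather than a bare 0.5 cutoff):

```python
import random

def run_imitation_game(evaluate, transcripts):
    """Score an evaluator on labeled transcripts.

    `evaluate` maps a transcript to a guess ("human" or "machine");
    `transcripts` is a list of (transcript_text, true_author) pairs.
    """
    correct = sum(evaluate(text) == author for text, author in transcripts)
    accuracy = correct / len(transcripts)
    # The machine "passes" if the evaluator does no better than chance.
    # The correctness of the machine's answers never enters into it,
    # only their distinguishability from a human's.
    return accuracy, accuracy <= 0.5

# Toy usage: an evaluator guessing at random cannot beat chance,
# so any machine "passes" against it.
trials = [(f"transcript {i}", random.choice(["human", "machine"]))
          for i in range(1000)]
accuracy, machine_passes = run_imitation_game(
    lambda text: random.choice(["human", "machine"]), trials)
print(f"evaluator accuracy: {accuracy:.2f}, passes: {machine_passes}")
```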

If the content / output gets past the prompter, QC checks, QA checks, the legal team, and finally makes its way to the customer, then maybe it is an ‘artificial’ intelligence, perhaps the fifth type of intelligence? Though not for 2026.

I agree most people will begin to realise the limitations of AI, LLM models, and chatbots.

There is also a trend of job openings with higher salaries requiring AI skills, across various industries.

This is kind of like saying “I prefer the horse and cart method”, but those using cars and trucks can overtake you because they go further and faster and can carry larger, heavier loads. Sure, there will be crashes, but that is called progress. Perhaps we will see laws limiting daily screen time, regardless of age.


Wouldn’t it be unethical to sell an intelligent being to a customer? :sweat_smile:

The Turing test is for sure interesting, but I don’t think it’s really a good test of intelligence. I guess it depends on whether the human evaluator is trained in some way, and on how you define intelligence. I think the Turing test was more of a thought experiment than a real benchmark.

In any case, chatbots have been passing Turing-test competitions for years (PC Therapist in 1991, Eugene Goostman in 2014): https://en.wikipedia.org/wiki/Eugene_Goostman . I don’t think those were referred to as “AI”?

It’s only the current crop of machine learning models that are referred to as “AI”. Maybe because neural networks are inspired by the human brain.

Here’s some interesting info about PC Therapist: http://www.cis.umassd.edu/~ivalova/Spring09/cis412/Old/therapist.pdf

"The Talking PC Therapist from Thinking Software, Inc Woodside, N.Y. is an Artificial Intelligence program which demonstrates Natural Language processing, speech synthesis, and machine learning on any PC or compatible. It employs AI sentence parsing and knowledebase technology, plus a 70,000 word vocabulary

They do use the phrase AI and machine learning here!

An LLM can write something in twenty seconds, but I doubt you’d want to publish it. I think it might be a good research assistant, with a lot of back and forth with a good writer and someone to do the fact-checking. Similar to the workflow described by Boris Cherny.

For sure a useful tool if used correctly, but I don’t think it’s intelligent, and it should not be called “AI”.

Now that I think of it, we’ve also called video game enemy logic “AI”, but I don’t think we mean the same thing. No one has ever expected a video game enemy to write a program or a report.

I think it’s only recently that people have been fooled into thinking it’s real intelligence.

Smartphones existed before AI, but they did not have the same processing and computational power as modern phones, which integrate AI. Modern smartphones still do not think, though they can learn gestures, predict user behaviour (based on data, sensors, and usage patterns), monitor health (heart rate, steps), act as virtual assistants, and give advice on how to take photos. Deep Blue beat the world chess champion last millennium and is considered symbolic AI.

I think the goal posts keep changing. The smartphone of ten years ago had more advanced technology than the computer systems NASA used for the Apollo moon landing.

AI-generated content is sold to customers. I read an article where gamers demanded their money back for the slop they were sold.

Newspapers do use AI. I saw a picture of an article where the last few paragraphs were the agent making suggestions about formatting and adding some additional info.

Academia is currently engaged in redefining intelligence and defining AGI (Artificial General Intelligence), due to advancements in AI. The one after that could be ASI (Artificial Superintelligence).

For programmers in 2026: less hype (discovering the limits and benefits of AI for those that choose to use the technology), more type (writing quality code).


The term AI is not new; LLMs are a new technique that falls entirely under AI, so the label is valid in itself.

That said, “AI” is a loaded term, as AI could be used to describe any of the following:

  • skynet/terminator - existential threat of AI
  • A* algorithm - a deterministic pathfinding algorithm (see the sketch after this list)
  • LLM powered chatbots/AI native systems/deeply trained systems/etc
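To underline how different these meanings are, here is a minimal sketch of grid-based A* (my own toy example, not something from the thread): a fully deterministic “AI” with no model behind it at all.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2D grid of 0 (free) / 1 (wall).
    Returns the list of cells from start to goal, or None."""
    def h(cell):
        # Manhattan-distance heuristic, admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    best_g = {start: 0}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        frontier,
                        (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Run it a thousand times and you get the same path every time, which is exactly what you cannot say about an LLM.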

LLMs are what most people describe “AI” to be today; more practically and accurately, “AI” today is software that has an LLM/deep-learning model behind it to handle arbitrary input and provide non-deterministic output.
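A toy way to see the “non-deterministic output” part, with no real model involved (the tiny token distribution below is made up purely for illustration):

```python
import random

# Made-up next-token distribution; a real LLM produces something like
# this (over tens of thousands of tokens) at every step of generation.
NEXT_TOKEN_PROBS = {"cat": 0.5, "dog": 0.3, "ferret": 0.2}

def sample_token(probs, temperature=1.0):
    """Temperature sampling: raising probabilities to 1/temperature and
    renormalizing matches softmax(logits / temperature), so higher
    temperatures flatten the distribution and runs diverge more."""
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = random.uniform(0, sum(weights.values()))
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against floating-point rounding

# The same "input" gives different outputs on different runs:
print([sample_token(NEXT_TOKEN_PROBS) for _ in range(5)])
```

That single sampling step is the source of the non-determinism that the deterministic software around the model has to contain.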

Something like “AGI”, or artificial general intelligence, is what skynet falls under, and it is what people may think of when they think of “AI”. That doesn’t mean any of the three examples above are actually equal.

The lack of distinction is semantic, but that difference and vagueness can be lost or outright ignored for all sorts of reasons, from marketing to misunderstanding.

Anyone who claimed/said/described something like chatGPT as nearing AGI is trying to equate the terms, but that doesn’t change the fact that an LLM comes with a set of core limitations that will prevent it from ever being close to what most consider AGI. It does sound sexy and good for investors, though, and this is also part of my prediction for 2026.

To quote Buffett:
“Only when the tide goes out do you discover who’s been swimming naked.”
