“Hello everyone!
As the title suggests, this topic focuses on ‘New thinking in 2026’.
As we step into 2026, it’s the perfect time to think about what new ideas or approaches we can adopt to improve the world, our work, or our personal lives. Whether it’s advances in AI, new solutions for climate change, or new ways of working, everyone might have an idea to share.
In your opinion, which new idea or trend will have the biggest impact in 2026?
Share your thoughts in the comments below and join the discussion!”
I often look at research, and it shows increasing loneliness and depression, so I think (and hope) the tech world will adopt AI methods to prevent those affected from harming themselves or anyone else.
Like teaching an AI chat to recognise signs of depression and gently suggest a path toward resolving them.
For example: if this AI chat is on a job-seeking web portal and the user shows symptoms of depression, the chat could suggest a community (maybe an online one) where people with the same or similar problems gather to help each other, or some specific resource from that portal.
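To make the idea concrete, here is a minimal sketch of what such a portal feature might look like. Everything in it is hypothetical: the phrase list, the `SUPPORT_COMMUNITY_URL`, and the simple keyword check are placeholders for whatever properly validated screening method a real product would need.

```python
# Hypothetical sketch: a portal chat helper that watches for possible signs of
# low mood and gently points the user toward a peer-support community.
# The phrase list and URL are placeholders, not a validated screening tool.

SUPPORT_COMMUNITY_URL = "https://example-portal.example/community/peer-support"

LOW_MOOD_PHRASES = [
    "hopeless",
    "no point",
    "can't sleep",
    "so alone",
    "worthless",
]


def suggest_support(message: str) -> str | None:
    """Return a gentle suggestion if the message contains a low-mood phrase."""
    text = message.lower()
    if any(phrase in text for phrase in LOW_MOOD_PHRASES):
        return (
            "Job searching can be really draining. If you'd like, there is a "
            f"community of people in the same situation here: {SUPPORT_COMMUNITY_URL}"
        )
    return None  # stay quiet unless a signal is detected


if __name__ == "__main__":
    print(suggest_support("I feel so alone since I lost my job"))
    print(suggest_support("Thanks, the CV upload worked"))
```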
It has been a problem for years already, so I hope the industry starts to do something about it this year.
LLMs have generated text encouraging suicidal people to kill themselves, so they really shouldn’t be used in any mental health capacity. That’s a really dangerous idea.
The way to help with loneliness is for humans to connect with humans.
Anyone else experiencing code problems?
If you have a question about a specific freeCodeCamp challenge as it relates to your written code for that challenge and need some help, click the Get Help > Ask for Help button located on the challenge.
The Ask for Help button will create a new topic with all code you have written and include a link to the challenge also. You will still be able to ask any questions in the post before submitting it to the forum.
Thank you.
Please read my last post and do what it says.
The same happens when new technologies come out: companies try to implement them as fast as possible, and then the negative effects kick in, like how algorithms changed political life (the Cambridge Analytica story) or created new harmful behaviours on the internet (like doom-scrolling).
My idea is not to replace real psychiatrists; it's to adapt to the situation as much as possible, like Apple did when they made deleting an icon from the home screen delete the app itself, the way many boomers assumed it worked on a PC.
Instead of forcing them to learn how to handle a computer, they adapted the operating system. (I know that isn't taking responsibility, just adapting to user behaviour.)
LLM-generated text is different because we can never really be sure how it will respond to certain inputs. That is why your comment rings true for me: we can't fully control it, so it can't be the final solution. The final solution is human-to-human interaction, which is clearly not something LLMs are for.
But looking at the big picture: companies implementing AI everywhere possible contributes to declining mental health, which is now an international concern, so the moral obligation should be to take as much responsibility for it as possible. Even if they can't fix it, they should try, like how tobacco companies print help for quitting smoking on their packaging; even very expensive cigars carry that information. They can't force anyone to quit smoking (just as an LLM can't force people to meet other people). This is how we pay for our freedom.
Just think about how rushed neurotransmitter research is. Nobody knows for sure what kind of damage it will cause to mankind. That will be something we have to adapt to as well.
Sorry for the very long response; my thoughts are just that long and complex in my mind.
I just don't see any ethical way to use LLMs in a mental health context, considering the real harm they have caused to people in mental health crises.
LLMs are already used in clinical settings.
https://www.nature.com/articles/s41746-025-01611-4
They can supplement, but do not replace human therapists.
I think that machine learning models can be used by trained humans to help with diagnostic tasks. An LLM handed directly to a patient seems like a terrible idea.
As current evidence does not fully support their use as standalone interventions, more rigorous development and evaluation guidelines are needed for safe, effective clinical integration.
Seems like the study doesn’t recommend it either.
Me too, that's why I gave the tobacco example.
When somebody is already smoking, the company can only point toward a healthier life with information on the package. Anything more would go against personal freedom.
I see the same with users who already use ChatGPT as a psychologist. (Maybe they simply can't afford a real one.)
I think you are right, but in Austria, for example, the healthcare system lacks proper doctors and specialists, both in hospitals and in private practice. In the next years it will get worse, because many will retire.
This country is not alone in that, so I think it's logical to look for ways to reduce the need for a human workforce.
Well, the whole thing doesn't sound good to me either, but maybe that will be our future and we should get used to it. We can't clone doctors if there simply are no more available, but "trained humans", as you said, could at least somehow reduce the need for them.
I really don’t see ‘replace humans with LLMs in jobs that should be done by humans’ as the logical outcome, honestly. I want a world with more human connection, not less. I don’t want to just accept companies trying to replace the human touch with statistically generated responses.
I suppose my ‘new thinking’ for the year is that we should stop accepting low quality LLM and genAI generated replacements for actual human efforts that matter.
I too want more real-life connections; we can solve our problems (and live more happily) best if we all talk eye to eye with one another.
I see a huge trust issue among mankind because of the lack of good connections.
It's something very hard to fix, especially in poor regions.
As I see it, what you describe has happened historically, like when tractors replaced horses or machinery replaced human workers.
Those shifts had seriously bad impacts; just look at the 20th century, at how bad it is when many people lose their jobs and political tension is high. Similar to today, in my eyes.
The worst is when users themselves want to replace the human touch, because that way they have no responsibility to others, or they simply can't afford specialists and then blindly accept everything from the output.
I don't accept it either; that's why I wanted to talk about looking for better solutions.
Research like this at least tries to find answers:
What I'm saying is that we should accept the fact that history is happening to us, but we should not accept the situation as it is now; we have to adapt the best way possible.
Sticking to the main theme: this realisation happening globally is what I expect from this year, along with what the last comment described.
What are your expectations and solutions?
(For example, in Austria basic psychological therapy became free, a good start for making new connections or fixing old ones, leading to better mental health without LLMs.)
Honestly, I think the biggest thing we can all do is refuse to accept LLM generated replacements that don’t actually meet the same standards as human generated work.
The argument against low-quality LLM-generated content won't last long. For specific, sandboxed applications like bug hunting, testing, and ideation, LLMs are becoming the go-to option. For example, founders no longer have to spend the time, months or years, or the money it can take to build an MVP.
LLMs trained on other LLMs could produce better quality content with human involvement.
A well-structured prompt can produce a quality newspaper article based on a local council's annual report in less than twenty seconds. A seasoned reporter can take hours, and will not be as thorough.
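As a rough illustration of what a "well-structured prompt" could mean here, this is a sketch using the OpenAI Python SDK; the model name, the file name, and the prompt wording are all placeholders I chose, not a recommendation.

```python
# Rough sketch: drafting a local-news style article from a council report.
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder source document; in practice this would be the council's report text.
with open("council_annual_report.txt", encoding="utf-8") as f:
    report_text = f.read()

prompt = (
    "You are a local newspaper reporter. Using only the report below, write a "
    "500-word news article: lead with the most consequential budget change, "
    "quote figures exactly as they appear, and flag anything you are unsure about.\n\n"
    f"REPORT:\n{report_text}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The constraints in the prompt (use only the report, quote figures exactly) are what the "structure" buys you; they don't guarantee accuracy, so the draft would still need checking against the source.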
freeCodeCamp is already using AI on GitHub for productivity. They suspended all the translation contributors last year.
freeCodeCamp also teaches how to ethically and safely use AI:
On the health side, since AI, LLMs, and chatbots can reduce the time needed to accomplish tasks, perhaps companies could advocate spending the saved time on wellness programmes.
There are already memes for AI usage and token limits.
They tend not to; models trained on other models' output tend to poison themselves.
Sure, if you don’t care about the article being actually true.
… Is fCC planning on abandoning human translations? That sucks.
Yes, fCC is chasing the hype. The harder fCC pushes, the more I ask myself whether I'm ultimately value-aligned with the org.
Seems like the discussion is getting really interesting… very informative, keep going, fellas!


