I started using FCC in my freshman year of high school, only got serious with it my sophomore year, but later dropped it for who knows what reason. I picked it back up to begin learning Python and machine learning, along with APIs. I just did a simple workshop, and oh my god, the requirements were so tedious; I felt that, when doing the workshop, I was forced to code exactly the way they wanted. I feel like (in my opinion) having AI grade the submission against a basic answer key would give programmers a little more leeway. I understand that FCC is free, which is amazing! I’m not complaining, but having a little more freedom to code however you like seems great. However, if you were to add AI, please limit what it can do. For example, make it so the user cannot ask the AI for help, since that is what the forum is for. The AI should only be used to grade the code, and maybe, at the end, give suggestions for writing better or more fluent code. What do you guys think?
Welcome to the forum @alexnredplayz
At the moment AI is not a great learning tool. Without proper training, someone may not be able to tell whether the output is good quality or not. Most, if not all, of the advice on the forum is to avoid AI while learning to code.
If you need help with coding, that is what this forum is for.
Happy coding
fCC, being a non-profit org, has a pretty lean budget.
Running an LLM is not free, and running one at the scale fCC operates at would be a significant challenge with the limited income fCC has. Even a basic local model would have some costs associated with running it.
Now, who pays for that could be a way around this, similar to how this forum software is provided free by Discourse (its developers). If you could find a way to get an LLM to essentially “be free” for this use-case, it’s financially possible.
From a technical standpoint it could be done a multitude of ways.
Practically though, AI would only be useful in a niche scenario where what you have is actually correct but the validators are too inflexible. Your code will run on a machine, and a machine (with code) is what validates your input; there is no room for error here.
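To make that concrete, here is a minimal sketch (in Python, not fCC’s actual JavaScript test harness; the function names and assertions are hypothetical) of the kind of deterministic grading a machine does: run the submitted code, then check exact assertions against it.

```python
# Minimal sketch of deterministic grading. The grader executes the
# learner's submission and checks hard assertions against it; there is
# no judgment involved, only pass/fail. Names here are hypothetical.

def grade(submission_source: str) -> bool:
    """Run the learner's code and verify it defines add(a, b) correctly."""
    namespace = {}
    try:
        exec(submission_source, namespace)   # run the submitted code
        add = namespace["add"]               # the required function must exist
        # Exact checks: any deviation in behaviour fails, regardless of style.
        return add(2, 3) == 5 and add(-1, 1) == 0
    except Exception:
        return False                         # any error means the check fails

# A behaviourally correct submission passes; a wrong one fails,
# no matter how the code is styled.
print(grade("def add(a, b):\n    return a + b"))   # True
print(grade("def add(a, b):\n    return a - b"))   # False
```

Note how the grader cannot distinguish “correct but written differently” from “correct” — it only checks behaviour — which is why overly specific assertions (e.g. on exact output strings) are what make validators feel inflexible.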
A “vibe coding version” of fCC wouldn’t be very useful either, since the goal is for you to learn how to do things yourself, not how to tell an LLM to do them for you. Of course, there is the argument that you may never need to know how it’s actually done (actual vibe coding), but that isn’t fCC’s goal.
What exactly makes AI code low quality, though? As a beginner I cannot really tell, but whenever I see something obviously made by AI I can immediately tell, and I don’t like how it works or its aesthetics at all.
The quality of AI responses depends on the quality of the prompt(s).
Their output is based on statistical models and methods, so they can sometimes hallucinate or get confused when they need to fill in the blanks.
Without a solid knowledge foundation, you may not recognise or realise when AI is not giving you correct information.
AI does have use cases, but at the moment it isn’t so good for learning to code.