What is your hint or solution suggestion?
Entropy(word) = -Sum((count/length) * log_2(count/length)), summed over each unique character of word.
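As a sketch, that formula could be implemented in JavaScript like this (the function name `entropy` is mine, not part of the challenge):

```javascript
// Shannon entropy of a string: -Σ p * log2(p) over each unique character,
// where p = count / word.length.
function entropy(word) {
  const counts = {};
  for (const ch of word) counts[ch] = (counts[ch] || 0) + 1;
  return Object.values(counts).reduce((sum, count) => {
    const p = count / word.length;
    return sum - p * Math.log2(p);
  }, 0);
}
```

For example, `entropy("aaaa")` is 0 (one symbol, no uncertainty) and `entropy("ab")` is 1 (two equally likely symbols).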
However, I get a different answer for this test case:
fibWord(5) expecting N:5 with entropy of
Challenge: Fibonacci word
Link to the challenge:
It’s unclear to me if you are offering a hint or asking for a hint.
Yes, your formula for Entropy gives a different answer than FCC gives.
Upon changing how I compute my log2, it works. Remember that
log2(x) = log(x)/log(2). With decimals that long, there are going to be rounding errors. I think Math.log2 entered JS in ES6; that code may have been written before then, or by someone who wasn’t familiar with it.
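To illustrate the point, here are the two ways of computing a base-2 log side by side (the value 0.75 is just an arbitrary probability for demonstration); because each `Math.log` call rounds independently, the two results can disagree in the last few bits:

```javascript
// Two ways to compute log base 2 in JavaScript.
const p = 0.75; // an arbitrary probability, just for illustration

const direct = Math.log2(p);                     // ES6+ built-in
const viaNaturalLog = Math.log(p) / Math.log(2); // pre-ES6 workaround

// The results agree to many decimal places but are not guaranteed
// to be bit-for-bit identical.
console.log(direct, viaNaturalLog, direct === viaNaturalLog);
```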
Personally, I hate challenges that are “too clever by half” or that rely on obscure math. Logarithms are especially tricky, and any challenge that relies on irrational numbers should allow for rounding errors. Not only should they explain what they mean by “Entropy” in this context, they should probably also provide the formula - this is a class in coding, not number theory.
Number theory is actually about integers but I agree that this challenge should allow for some minor rounding.
I would be offended if that wasn’t the kind of mocking pedantry of which I am all too guilty myself.
Yeah, I should have said “advanced math tricks”. Sure, we need math to do programming, but the vast majority of us don’t need to do gymnastics with logarithms.
I’ve just run into too many algorithm problems where someone is just trying to show off that they know some math trick or whatever. Logarithms are especially abused in this manner. Sure, they are a neat trick and can do some cool things, but any long floating point number is going to have rounding errors on a computer (and some of the short ones, as we know). It is unfortunate when people that design algo challenges don’t realize that and expect 16 decimal places of accuracy. Again, “too clever by half”. Especially since they didn’t even specify the formula to use, apparently not realizing that that would have a big effect.
I’ll make an issue later this morning. We should accept a ‘fuzzy equals’ for these algorithms; a tolerance of around 10 * epsilon should probably work.
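A minimal sketch of what such a ‘fuzzy equals’ check could look like (the name `fuzzyEquals` and the magnitude scaling are my assumptions, not the actual FCC test code):

```javascript
// Tolerance-based float comparison: treat two numbers as equal when
// their difference is within a small multiple of Number.EPSILON,
// scaled by the magnitude of the larger value.
function fuzzyEquals(a, b, tolerance = 10 * Number.EPSILON) {
  return Math.abs(a - b) <= tolerance * Math.max(1, Math.abs(a), Math.abs(b));
}

// Classic example: 0.1 + 0.2 !== 0.3 exactly, but it is "fuzzy equal".
console.log(fuzzyEquals(0.1 + 0.2, 0.3)); // true
```

Scaling the tolerance by the magnitude matters because epsilon-sized errors grow with the size of the numbers involved.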
Sorry for the late reply, I’ve been busy with work so only just getting around to it now.
“It’s unclear to me if you are offering a hint or asking for a hint.”
I was proposing the Entropy formula as a hint, while also pointing out that the solution seemed wrong compared to what the formula I found suggested. I hadn’t considered the inaccuracy that different implementations of that formula could introduce.