What is your hint or solution suggestion?
Entropy(word) = -Sum((count/length) * log_2(count/length)), summed over each unique character of word.
However, fibWord(5) gives me 0.9709505944546686,
while the test for N: 5 expects an entropy of 0.9709505944546688.
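For reference, assuming the usual construction F(1) = "1", F(2) = "0", F(n) = F(n-1) + F(n-2), fibWord(5) is "01001": three 0s and two 1s in five characters, so Entropy = -(3/5)*log_2(3/5) - (2/5)*log_2(2/5) ≈ 0.9709505944546686, which matches the first value above down to the last digit or two.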
Challenge: Fibonacci word
Link to the challenge:
It's unclear to me if you are offering a hint or asking for a hint.
Yes, your formula for Entropy gives a different answer than FCC gives.
Upon changing how I compute my log2, it works. Remember that log2(x) = log(x)/log(2). With decimals that long, there are going to be rounding errors. I think Math.log2 entered JS in ES6 - that code may have been written before that or by someone who wasn't familiar with it.
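To make the difference concrete, here is a minimal sketch of the entropy formula above in JavaScript; the function name and structure are illustrative, not FCC's reference solution. Swapping the active line for the commented ES5-era alternative can change the last digit or two of the result.

```js
// Shannon entropy of a string, per the formula above (a sketch, not FCC's code).
function entropy(word) {
  const counts = {};
  for (const ch of word) counts[ch] = (counts[ch] || 0) + 1;

  let sum = 0;
  for (const count of Object.values(counts)) {
    const p = count / word.length;
    sum -= p * Math.log2(p);
    // The pre-ES6 equivalent below can round differently in the final bits:
    // sum -= p * (Math.log(p) / Math.log(2));
  }
  return sum;
}

console.log(entropy("01001")); // ~0.97095059445466xx; the last digits depend on which log2 is used
```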
Personally, I hate challenges that are "too clever by half" or that rely on obscure math. Logarithms are especially tricky, and any challenge that relies on irrational numbers should allow for rounding errors. Not only should they explain what they mean by "Entropy" in this context, but they should probably offer the formula - this is a class in coding, not number theory.
Number theory is actually about integers, but I agree that this challenge should allow for some minor rounding.
I would be offended if that wasn't the kind of mocking pedantry of which I am all too guilty myself.
Yeah, I should have said "advanced math tricks". We need math to do programming, but the vast majority of us don't need to know gymnastics with logarithms.
I've just run into too many algorithm problems where someone is just trying to show off that they know some math trick or whatever. Logarithms are especially abused in this manner. Sure, they are a neat trick and can do some cool things, but any long floating point number is going to have rounding errors on a computer (and some of the short ones, as we know). It is unfortunate when people that design algo challenges don't realize that and expect 16 decimal places of accuracy. Again, "too clever by half". Especially since they didn't even specify the formula to use, apparently not realizing that that would have a big effect.
I'll make an issue later this morning. We should accept a "fuzzy equals" for these algorithms. Within 10*epsilon or something like that should probably work.
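Something along these lines, as a sketch - the name fuzzyEquals and the factor of 10 are just the suggestion above, not whatever the FCC tests actually use:

```js
// Hypothetical "fuzzy equals": treat two doubles as equal if they are
// within a small multiple of machine epsilon of each other.
// (For values far from magnitude 1, the tolerance should scale with the inputs.)
function fuzzyEquals(a, b, factor = 10) {
  return Math.abs(a - b) <= factor * Number.EPSILON;
}

// The two entropy values from this thread differ only in the last bits:
fuzzyEquals(0.9709505944546686, 0.9709505944546688); // true
```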
Hi Kevin,
Sorry for the late reply - I've been busy with work, so I'm only just getting around to it now.
"It's unclear to me if you are offering a hint or asking for a hint."
I was proposing giving the Entropy formula as a hint, but also pointing out that the solution seemed wrong compared to what the formula I found suggested. I didn't even consider the inaccuracy that different implementations of that formula could introduce.