# Hint for Fibonacci Word Entropy

What is your hint or solution suggestion?

Entropy(word) = -Sum of (count/length) * log2(count/length) over each unique character of word.

However, I get `0.9709505944546686`, while the test for `fibWord(5)` expects N:5 with an entropy of `0.9709505944546688`.

Challenge: Fibonacci word

Itâs unclear to me if you are offering a hint or asking for a hint.

Yes, your formula for Entropy gives a different answer than FCC gives.

Upon changing how I compute my log2, it works. Remember that `log2(x) = log(x)/log(2)`. With decimals that long, there are going to be rounding errors. I think log2 entered JS in ES6 - that code may have been written before that or by someone that wasn't familiar with it.

Personally, I hate challenges that are "too clever by half" or that rely on obscure math. Logarithms are especially tricky, and any challenge that relies on irrational numbers should allow for rounding errors. Not only should they explain what they mean by "Entropy" in this context, but should probably offer the formula - this is a class in coding, not number theory.

Number theory is actually about integers but I agree that this challenge should allow for some minor rounding.

I would be offended if that wasn't the kind of mocking pedantry of which I am all too guilty myself.

Yeah, I should have said "advanced math tricks". Yeah, we need math to do programming, but the vast majority of us don't need to know gymnastics with logarithms.

Iâve just run into too many algorithm problems where someone is just trying to show off that they know some math trick or whatever. Logarithms are especially abused in this manner. Sure, they are a neat trick and can do some cool things, but any long floating point number is going to have rounding errors on a computer (and some of the short ones, as we know). It is unfortunate when people that design algo challenges donât realize that and expect 16 decimal places of accuracy. Again, âtoo clever by halfâ. Especially since they didnât even specify the formula to use, apparently not realizing that that would have a big effect.

Iâll make an issue later this morning. We should accept a âfuzzy equalsâ for these algorithms. Within 10*epsilon or something like that should probably work.


Hi Kevin,

Sorry for the late reply, I've been busy with work so only just getting around to it now.

âItâs unclear to me if you are offering a hint or asking for a hint.â

I was proposing giving the entropy formula as a hint, but also pointing out that the solution seemed wrong compared to what the formula I found suggested. I hadn't considered the inaccuracy that different implementations of that formula could introduce.

This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.