Is the utf-8 - 2 byte sequence correct?

In the article "What is UTF-8? UTF-8 Character Encoding Tutorial" it says:

For a two-byte sequence, the code point value is equal to ((leader byte - 194) * 64) + (trailing byte - 128).

Shouldn't it be 192, which is 11000000b, whereas 194 is 11000010b?
For 3- and 4-byte sequences it uses 224 (11100000b) and 240 (11110000b), which is logical: 3 and 4 leading 1s and the rest zeroes. Why would it be 11000010b for the 2-byte sequence?
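For what it's worth, here is a small Python sketch (my own, not from the article) that compares the two constants against Python's built-in decoder, using U+00E9 as an example:

```python
# Check the article's two-byte formula with both constants, assuming the
# standard UTF-8 layout 110xxxxx 10xxxxxx for two-byte sequences.

def decode_two_byte(leader_base: int, leader: int, trailing: int) -> int:
    """Apply the article's formula with a configurable leader constant."""
    return (leader - leader_base) * 64 + (trailing - 128)

# U+00E9 'é' encodes as 0xC3 0xA9 in UTF-8.
leader, trailing = "é".encode("utf-8")

print(decode_two_byte(192, leader, trailing))  # 233 == 0xE9, matches ord('é')
print(decode_two_byte(194, leader, trailing))  # 105, off by 128
print(ord("é"))                                # 233
```

With 192 the result matches the actual code point; with 194 it is off by 128, which is what made me suspect the article's constant.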