The point of a function is to be able to define a reusable operation. So like

```
function addTwo(num) {
  return num + 2;
}
```

This is a function, `addTwo`, that has one parameter. That parameter is called `num` (what it's called is arbitrary), but it'll be a number, and the function returns that number plus two.

The parameter has to be a name, not a value: it's a variable. When you actually run the function, whatever number you pass as an argument gets assigned to that variable.

So if I run `addTwo(2)`, then `num` is 2. So `num + 2` is `2 + 2` is 4, and the function returns the number 4. If I run `addTwo(100)`, the function returns the number 102. If I run `addTwo(1.23)`, the function returns 3.23.
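Putting those calls into runnable form (this just restates the walkthrough above):

```
function addTwo(num) {
  return num + 2;
}

console.log(addTwo(2));    // 4
console.log(addTwo(100));  // 102
console.log(addTwo(1.23)); // 3.23
```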

What you've done is try to define a function with values instead of parameters, which can't work. The values are used when the function is *called*, not when it's defined.

So when you *run* `functionWithArgs(7, 9)`, that's fine. But the function can't be *defined* like that. The example shown is what you need to look at again.
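A sketch of the difference (the parameter names here are my own choice; `functionWithArgs` and the values 7 and 9 come from your exercise):

```
// Values can't go where the parameters go — this definition is a SyntaxError:
// function functionWithArgs(7, 9) { ... }

// Parameters must be names; the values only appear when the function is *called*:
function functionWithArgs(param1, param2) {
  console.log(param1 + param2);
}

functionWithArgs(7, 9); // logs 16
```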

You also hardcoded a value in the function body. So even if you fix the parameters, what would happen if you ran `functionWithArgs(10, 10)`? The value it should log is 20, but instead it will log 16. If you ran `functionWithArgs(64, 36)`, it should log 100, but instead it will log 16. And so on.

The point is that you can give the function *any* two numbers, and it will log the sum of those two, not just the numbers 7 and 9.
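So the fixed version would look something like this sketch (again, the parameter names are arbitrary; only the structure matters):

```
function functionWithArgs(a, b) {
  // Log the sum of whatever two numbers were passed in — nothing hardcoded
  console.log(a + b);
}

functionWithArgs(7, 9);   // logs 16
functionWithArgs(10, 10); // logs 20
functionWithArgs(64, 36); // logs 100
```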