# Issue

*This content is from Stack Overflow. Question asked by tsninja.*

So, we’ve a function `foo` that does not specify any type for `this`, which means the `this` parameter for `foo` is `unknown` (`ThisParameterType<typeof foo>`).

Next, we have a `wrapper` function that accepts two arguments and **there’s a generic type argument T that’s used at three places.**

Now, when we call `wrapper` with `foo` and a `string`, `T` is `unknown` for the `this` parameter but `string` for the `arg0` parameter, yet the **final inference for T is string.**

```
function foo() {}
function wrapper<T>(cb: (this: T, ...args: any[]) => any, arg0: T): T {
  console.log(cb, arg0);
  return "" as any as T;
}
let bar = "bar"
wrapper(foo, bar)
```

Let’s now see a similar example, where it behaves differently. Here also `T` is `unknown` at one place and `number` at another, **but the resultant inference this time is unknown and not number.**

```
function someFunc<T>(a: T, b: T): [T, T] {
  return [a, b];
}
let num = 10
someFunc(num as unknown, num);
```

I want to understand the reason behind this inconsistency.

# Solution

First let’s look at the simple case:

```
let num = 10
let unk: unknown = num
function f<T>(a: T, b: T) { }
f(unk, num);
// function f<unknown>(a: unknown, b: unknown): void
```

In the call to `f()`, the type checker needs to infer the generic type parameter `T` from the types of the values you’ve passed in for `a` and `b`. Another way of saying this is that the appearances of `T` in the `a` type and the `b` type are *inference sites*.

You have one value of the `unknown` type, TypeScript’s top type, and one value of the primitive `number` type, corresponding to just numbers. So there are two candidates from which it can choose.

What happens if it chooses `number`? Well, that wouldn’t work, because while `num` is of type `number`, `unk` is *not* (even though we know it’s actually a `number` at runtime, we’ve intentionally widened `unk` to the `unknown` type). Since `unknown` is not assignable to `number`, that inference would fail to type check.

What happens if it chooses `unknown`? That works just fine, because `unk` is already of type `unknown`, and `num` can *also* be treated as having type `unknown`. Indeed, the whole point of the top type `unknown` is that all types are assignable to it.

And so the compiler chooses `unknown`.
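You can check both choices yourself by instantiating the type parameter explicitly. Below is a minimal sketch using a hypothetical variant `fPair` (it mirrors `f` but returns a tuple, purely so the result is observable at runtime; inference behaves the same way):

```typescript
// Hypothetical variant of f with a tuple return type so the result
// is observable at runtime; inference behaves the same as for f.
function fPair<T>(a: T, b: T): [T, T] {
  return [a, b];
}

let num = 10;
let unk: unknown = num;

// Left to inference, T is unknown:
const both = fPair(unk, num); // both: [unknown, unknown]

// Forcing T = number fails to type check, since unknown is not
// assignable to number:
// fPair<number>(unk, num); // error

// Forcing T = unknown compiles fine:
fPair<unknown>(unk, num); // okay

console.log(both); // [10, 10] at runtime
```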

Now compare to the following variation:

```
function unkFunc(x: unknown) { }
function numFunc(x: number) { x.toFixed() }
function g<T>(a: (x: T) => void, b: (x: T) => void) { }
g(unkFunc, numFunc)
// function g<number>(a: (x: number) => void, b: (x: number) => void): void
```

In the call to `g()`, the appearances of `T` in the types of `a` and `b` are still inference sites, but now they appear in the position of a function parameter. Again you have one value where `T` should be inferred as `unknown` and another where `T` should be inferred as `number`. So there are two candidates.

What happens if the compiler chooses `unknown`? That *wouldn’t work*, because while `unkFunc` is definitely of type `(x: unknown) => void`, `numFunc` is *not* of type `(x: unknown) => void`. If it were, you could call it with any value you want, like `numFunc("")` or `numFunc({})` or `numFunc(null)`. If you actually do any of those you’ll get a runtime error, and even if you didn’t (say I left out the `x.toFixed()` line), it would still be wrong, because `numFunc`’s call signature requires that its parameter be of type `number`. And since `unknown` is not assignable to `number`, that inference would fail to type check.


What happens if it chooses `number`? That works just fine, because `numFunc` is already of type `(x: number) => void`, and `unkFunc` can *also* be treated as having type `(x: number) => void`. It is perfectly safe to treat a function that accepts anything as one that only accepts numbers. The call `unkFunc(10)` is fine.

And so the compiler chooses `number`.
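As before, you can verify both choices with explicit instantiations (a self-contained sketch restating the definitions from the block above):

```typescript
function unkFunc(x: unknown) {}
function numFunc(x: number) { x.toFixed(); }
function g<T>(a: (x: T) => void, b: (x: T) => void) {}

// Left to inference, T is number:
g(unkFunc, numFunc); // okay

// Forcing T = unknown fails, because numFunc is not assignable
// to (x: unknown) => void:
// g<unknown>(unkFunc, numFunc); // error

// Forcing T = number compiles, because unkFunc can safely be
// treated as (x: number) => void:
g<number>(unkFunc, numFunc); // okay
```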

Note how the direction of inference and type checking for function parameters is opposite to that of plain values. In other words, `((x: A) => void) extends ((x: B) => void)` if and only if `B extends A`. The type of a function varies *counter* to the type of its parameter. In other words, functions are **contravariant** in their parameter types. The inference sites in `g()` are in "contravariant positions", whereas those in `f()` are in **covariant** positions (because they vary the same way as the type you’re trying to measure… they co-vary).
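The contravariance rule can be checked directly with plain assignments; here is a minimal sketch (the hypothetical names `acceptsAnything` and `acceptsNumbers` are mine), with the rejected direction left commented out:

```typescript
const log: unknown[] = [];

// A function that accepts anything...
const acceptsAnything: (x: unknown) => void = (x) => { log.push(x); };

// ...can safely stand in where a number-only function is expected,
// because B extends A implies (x: A) => void extends (x: B) => void:
const acceptsNumbers: (x: number) => void = acceptsAnything; // okay

// The reverse direction is rejected under --strictFunctionTypes:
// const bad: (x: unknown) => void = (x: number) => x.toFixed(); // error

acceptsNumbers(10);
console.log(log); // [10]
```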

See *Difference between Variance, Covariance, Contravariance and Bivariance in TypeScript* and the Wikipedia entry on variance for a general discussion of variance, and the release notes for `--strictFunctionTypes`, which also discuss contravariance of function parameters.

Your `foo()` example is essentially this, although there are some details that make it harder to see. One detail is that one inference site is contravariant and the other is covariant. That means it would turn out fine no matter which of `number` or `unknown` gets inferred:

```
function h<T>(a: (x: T) => void, b: T) { }
h<number>(unkFunc, num); // okay
h<unknown>(unkFunc, num); // okay
```

Of course, when you actually call it, you get `number` and not `unknown`, and you still might want to know why:

```
h(unkFunc, num);
// function h<number>(a: (x: number) => void, b: number): void
```

That’s because inference sites have different **priority** (see microsoft/TypeScript#14829 for a related issue in which inference-site priority is discussed). Roughly speaking, because `(x: T) => void` is a more complex type than `T`, the compiler gives more priority to the simpler inference site, so it will tend to infer from `b` and not from `a`. Since the `number` candidate from `b` works, that’s what you get.

Another detail is that you’re using a virtual `this` parameter instead of a regular parameter, but functions are still contravariant in their `this` context:

```
function unkThisFunc(this: unknown) { }
function i<T>(a: (this: T) => void, b: T) {}
i(unkThisFunc, num);
// function i<number>(a: (this: number) => void, b: number): void
i<number>(unkThisFunc, num); // okay
i<unknown>(unkThisFunc, num); // okay
```

And thirdly, you are not actually specifying the `this` parameter. It is tricky to say whether we should treat this as implicitly `unknown`, or whether there isn’t actually a candidate present for that inference site, in which case inference would fall back to the implicit `unknown` generic constraint. But again, you get the same behavior:

```
function implicitUnkThisFunc() { }
i(implicitUnkThisFunc, num);
// function i<number>(a: (this: number) => void, b: number): void
i<number>(implicitUnkThisFunc, num); // okay
i<unknown>(implicitUnkThisFunc, num); // okay
```

But backing way up, I think the important bit to understand here is that you can assign `number` to `unknown` but not vice versa, and you can assign `(x: unknown) => void` to `(x: number) => void` but not vice versa. Armed with that, it makes sense that `number` is a valid inference candidate when the parameter type is `unknown`. You might still wonder why `number` is chosen over `unknown`, but the fact that `number` is valid should no longer be a concern.
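Both assignability facts can be summarized in one small sketch (the commented lines are the directions the compiler rejects):

```typescript
// Plain values: covariant. A number goes into unknown, not vice versa.
const n: number = 10;
const u: unknown = n; // okay
// const n2: number = u; // error: unknown is not assignable to number

// Function parameters: contravariant. The arrows flip.
const takesUnknown: (x: unknown) => void = () => {};
const takesNumber: (x: number) => void = takesUnknown; // okay
// const takesUnknown2: (x: unknown) => void = (x: number) => {}; // error

takesNumber(10);
```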

This question was asked on Stack Overflow by tsninja and answered by jcalz. It is licensed under the terms of CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.