hat matrices #2

@ljwolf

Description

Wei's suggestion, from the back-and-forth we had back in Marchish of 2017, was:

We know that a simple definition of a hat matrix is \hat{y} = S y for hat matrix S.

If \hat{y} = \sum_j^p \hat{f}_j, then maybe we can get S from expanding the estimators of \hat{f}_j, given that each is \hat{f}_j = S_j( y - \sum_{k \neq j}^p \hat{f}_k) for process-specific hat matrix S_j.
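
Writing that substitution out (just combining the two definitions above):

\hat{y} = \sum_j^p S_j \left( y - \sum_{k \neq j}^p \hat{f}_k \right) = \sum_j^p S_j y - \sum_j^p S_j \sum_{k \neq j}^p \hat{f}_k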

In one line:

[screenshot of the one-line expression]
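
My reconstruction of that one-liner (the screenshot isn't reproduced here, so the exact grouping may differ): to write the expansion above as \hat{y} = S y, you have to "factor out" y from the second term, which gives something like

S = \sum_j^p S_j - \sum_j^p S_j \left( \sum_{k \neq j}^p \hat{f}_k \right) y^{-1}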

The immediate question I have is: what's y^{-1}, given that it's a vector?

Strategies I've looked into include (a rough code sketch of these follows the list):

- [screenshot] which is just 1/y diagonalized.
- [screenshot] inspired by the adjoint-determinant definition of the inverse.
- [screenshot] where that cross-times is an elementwise product, which is about as literal an interpretation of the factor-out logic as I can see.
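
For concreteness, here's a rough numpy sketch of those three readings of y^{-1}. The names and the random stand-in vector are just for illustration (this is not the code from the actual fit), and the second strategy is my guess at what the adjoint-style expression means:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=1.0, size=10)    # stand-in response vector
n = y.shape[0]

# Strategy 1: 1/y "diagonalized" -- an n x n matrix with 1/y_i on the diagonal.
y_inv_diag = np.diag(1.0 / y)

# Strategy 2: an adjoint/determinant-style inverse; my reading of this is the
# Moore-Penrose pseudoinverse of y treated as an n x 1 matrix, i.e.
# y^T / (y^T y), the unique row vector with (y_inv_pinv @ y) == 1.
y_inv_pinv = np.linalg.pinv(y[:, None])        # shape (1, n)

# Strategy 3: the elementwise ("cross-times") reading -- applying y^{-1} to a
# vector v just means dividing entry by entry.
def apply_y_inverse_elementwise(v, y=y):
    return v / y

# Quick sanity checks on how each candidate behaves:
print(y_inv_diag @ y)                          # all ones, length n
print(y_inv_pinv @ y)                          # the scalar 1, as a 1-vector
print(apply_y_inverse_elementwise(y))          # all ones, length n
```

Each candidate satisfies some version of y^{-1} y = 1, but they disagree on shape, which is exactly where plugging them into the factored expression above gets ambiguous.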

None of this yields a hat matrix. In most cases the second term is larger than the first at nearly every element, so you end up with a "hat matrix" whose values sit somewhere between -4 and 0. Taking the dot product of that with y then gives numbers that are far too large. BUT their general pattern looks sort of like the predicted values.

I'll post the code I'm using to generate these values here, as well as track further ruminations.
