Wei's suggestion, from the back-and-forth we had around March 2017, was:
We know that a simple definition of a hat matrix is \hat{y} = S y for hat matrix S.
If \hat{y} = \sum_{j=1}^p \hat{f}_j, then maybe we can get S by expanding the estimators \hat{f}_j, given that each one is \hat{f}_j = S_j \left( y - \sum_{k \neq j}^p \hat{f}_k \right) for a process-specific hat matrix S_j.
In one line: S = \sum_{j=1}^p S_j \left( I - \big( \sum_{k \neq j}^p \hat{f}_k \big) \, y^{-1} \right), from factoring y out of each y - \sum_{k \neq j}^p \hat{f}_k.
The immediate question I have is: what's y^{-1}, given that y is a vector?
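Before getting to that, here's a minimal sketch of the setup, assuming two toy linear smoothers of my own choosing (the data and names like `running_mean_hat` are illustrative, not anything from the library): backfit until each \hat{f}_j satisfies the fixed-point equation above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + 0.5 * x + rng.normal(0.0, 0.1, n)

def running_mean_hat(n, width=5):
    # Hat matrix of a running-mean smoother: row i averages a window
    # of observations around observation i.
    S = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - width), min(n, i + width + 1)
        S[i, lo:hi] = 1.0 / (hi - lo)
    return S

def through_origin_hat(x):
    # Hat matrix of no-intercept linear regression on x.
    return np.outer(x, x) / np.dot(x, x)

S_list = [running_mean_hat(n), through_origin_hat(x)]
f = [np.zeros(n) for _ in S_list]

# Backfitting: cycle f_j = S_j (y - sum_{k != j} f_k) until it settles
for _ in range(500):
    for j, S_j in enumerate(S_list):
        r_j = sum(f[k] for k in range(len(f)) if k != j)
        f[j] = S_j @ (y - r_j)

y_hat = sum(f)  # \hat{y} = \sum_j \hat{f}_j
```

The second smoother is kept intercept-free only so the two smoothers don't fight over the constant direction, which lets the plain, uncentered backfitting update above settle without extra bookkeeping.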
Strategies I've looked into include (sketched in code below):

- y^{-1} = \mathrm{diag}(1/y), which is just 1/y diagonalized.
- A pseudoinverse-style y^{-1}, inspired by the adjoint-determinant definition of the inverse.
- Treating the product with y^{-1} as an elementwise product with 1/y, which is about as literal an interpretation of the factor-out logic as I can see.
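As a sketch of how these candidates could slot into the one-line expression: the exact constructions aren't spelled out above, so the `pinv` branch (the Moore-Penrose pseudoinverse of y) is only a stand-in for the adjoint-determinant idea, and the names here (`candidate_S`, `M_j`) are mine.

```python
import numpy as np

def candidate_S(S_list, f, y, strategy="diag"):
    # Assemble S = sum_j S_j (I - M_j), where M_j is one reading of
    # (sum_{k != j} f_k) y^{-1}:
    #   "diag": y^{-1} = diag(1/y), so M_j = diag(r_j / y); a literal
    #           elementwise r_j * (1/y) placed on the diagonal lands
    #           here too
    #   "pinv": y^{-1} = y^T / (y^T y), so M_j = outer(r_j, y) / (y . y)
    n = len(y)
    I = np.eye(n)
    S = np.zeros((n, n))
    for j, S_j in enumerate(S_list):
        r_j = sum(f[k] for k in range(len(f)) if k != j)
        if strategy == "diag":
            M_j = np.diag(r_j / y)
        elif strategy == "pinv":
            M_j = np.outer(r_j, y) / np.dot(y, y)
        else:
            raise ValueError(strategy)
        S += S_j @ (I - M_j)
    return S

# e.g. S = candidate_S(S_list, f, y, strategy="diag"), then compare
# S @ y against y_hat from the first sketch
```

Note that both branches here satisfy M_j y = \sum_{k \neq j} \hat{f}_k by construction, so this particular reading reproduces \hat{y} exactly at the backfitting fixed point; the blow-up described below presumably comes from a different reading of the product.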
None of this yields a hat matrix. In most cases, the second term is larger than the first at nearly all elements, so you end up with a candidate S whose entries sit somewhere between -4 and 0. Taking the dot product of that with y then gives values that are massively too large. BUT their general pattern looks sort of like the predicted values.
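A quick way to look at the two terms separately under the diag reading (continuing from the objects in the first sketch; whether this reproduces the exact numbers above depends on the construction used, so treat it as a diagnostic sketch):

```python
import numpy as np

def inspect_terms(S_list, f, y, y_hat):
    # Split S = (sum_j S_j) - (sum_j S_j diag(r_j / y)) into its two
    # terms, report entry ranges, and compare S @ y against y_hat.
    n = len(y)
    first = np.zeros((n, n))
    second = np.zeros((n, n))
    for j, S_j in enumerate(S_list):
        r_j = sum(f[k] for k in range(len(f)) if k != j)
        first += S_j
        second += S_j @ np.diag(r_j / y)
    S = first - second
    print("fraction of entries where |second| > |first|:",
          (np.abs(second) > np.abs(first)).mean())
    print("range of S entries:", S.min(), S.max())
    print("max |S @ y|:", np.abs(S @ y).max(),
          " max |y_hat|:", np.abs(y_hat).max())

# e.g. inspect_terms(S_list, f, y, y_hat)
```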
I'll post the code I'm using to generate these values here, and track further ruminations as they come.