Generalized inverse Gaussian #41
Conversation
Commits:
- Use conditional expression with shape
- Correct typo in test
- Update RealField to work with arrays
- Add the inverse Gaussian distribution
- Bump
- Split structure classes into 3 files
- Polish parameter_supports mechanism
- Factor out triangular number functions
- Skip sampling test for inverse Gaussian
- Correct issue with scalar support bound shapes
- Split parameter package
- Correct docstrings
- Bump
- Eliminate Jax initialization on import
- Update examples
- Bump
- change sample using NP directly
- resolve conflicts
- Remove commented-out code related to Inverse Gaussian in pytest_generate_tests
- resolve conflicts
- resolve conflicts
Awesome!! Very cool.
Have you looked in TensorFlow Probability? I don't know much about Bessel functions, but is this close to what you're looking for? If not, I suggest you request it there. Also, this issue is worth taking a look at, and maybe asking there. What do you think?
Yes
Yes, I think so. You can see how I imported other Bessel functions into …
…aussianNP to use log_kve for calculations
Added the
Wonder if there is any plan to
The other tests in test_distributions.py pass.
Instead of adding constraints, the trick is to ensure that the flattened parametrization is over the entire plane. It may be a bit hard to understand, but the beta distribution also has constrained parameters, yet it has no problem with … I see you used this in your distribution, so I wonder why it's not working. I can look at it later this week if you get stuck. Just let me know.
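As an illustration of that trick, here is a minimal sketch with an assumed softplus mapping (not the library's actual flattening API): a constrained positive parameter is represented by an unconstrained real value, so flattening and optimization happen over the whole real line and the constraint never needs to be enforced explicitly.

```python
import jax.numpy as jnp
from jax.nn import softplus

def to_unconstrained(a: jnp.ndarray) -> jnp.ndarray:
    # Inverse softplus: maps a positive parameter onto the whole real line.
    return jnp.log(jnp.expm1(a))

def to_constrained(x: jnp.ndarray) -> jnp.ndarray:
    # Softplus: any real value maps back to a valid positive parameter.
    return softplus(x)

a = jnp.array(2.5)                          # e.g. a beta-distribution shape parameter (> 0)
x = to_unconstrained(a)                     # unconstrained representation used for flattening
print(jnp.allclose(to_constrained(x), a))   # True: the round trip recovers the parameter
```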
FYI, these are the errors I get running this PR
You may want to squash and rebase onto main since it's running with some old dependencies.
Thanks! I will take a look at this over the weekend.
Cool, just checked and it looks like you forgot the constraint on …
This value should not have a constraint, see https://en.wikipedia.org/wiki/Generalized_inverse_Gaussian_distribution
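For reference, the parametrization on that Wikipedia page gives the density

$$f(x \mid p, a, b) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})}\, x^{p-1} \exp\!\left(-\tfrac{1}{2}\,(a x + b/x)\right), \qquad x > 0,$$

so the natural parameters are $(p-1,\ -a/2,\ -b/2)$ with sufficient statistics $(\log x,\ x,\ 1/x)$. Assuming the names in this PR follow that mapping, $p \in \mathbb{R}$ is indeed unconstrained, while $a, b > 0$ means the other two natural parameters are negative.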
I should have the latest updates from main; I see your last change 6a058e6 in my branch. This seems to be a numerical issue with the log Bessel function when its derivative is taken by finite differences. I changed the eps from 1e-6 to 1e-10 and Newton's method now converges. However, the natural parameters are not the same:

```python
import jax.numpy as jnp

p_minus_one = jnp.array(0.9891)
negative_a_over_two = jnp.array(-3.5979)
negative_b_over_two = jnp.array(-0.4638)
gig_np = GeneralizedInverseGaussianNP(
    p_minus_one=p_minus_one,
    negative_a_over_two=negative_a_over_two,
    negative_b_over_two=negative_b_over_two
)
gig_ep = gig_np.to_exp()
gig_np_from_ep = gig_ep.to_nat()
print(gig_np_from_ep)
print(gig_np)
print(gig_np_from_ep.to_exp())
print(gig_ep)
```
Small difference in …; the pdfs are not the same. I compared the results to the scipy pdf to make sure the pdfs are correct:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import geninvgauss

# Recover (p, a, b) from the natural parameters of the original distribution.
p = gig_np.p_minus_one + 1
a = -2 * gig_np.negative_a_over_two
b = -2 * gig_np.negative_b_over_two
gig_sp = geninvgauss(p=p, b=np.sqrt(a * b), scale=np.sqrt(b / a))

# Same for the distribution after the natural -> expectation -> natural round trip.
p = gig_np_from_ep.p_minus_one + 1
a = -2 * gig_np_from_ep.negative_a_over_two
b = -2 * gig_np_from_ep.negative_b_over_two
gig_sp_from_ep = geninvgauss(p=p, b=np.sqrt(a * b), scale=np.sqrt(b / a))

x_values = np.linspace(0.001, 10, 1000)
plt.figure(figsize=(8, 6))
plt.plot(x_values, gig_np.pdf(x_values), label='jax pdf original')
plt.plot(x_values, gig_sp.pdf(x_values), '--', label='scipy pdf original')
plt.plot(x_values, gig_np_from_ep.pdf(x_values), label='jax pdf NatToExp')
plt.plot(x_values, gig_sp_from_ep.pdf(x_values), '--', label='scipy pdf NatToExp')
plt.legend()
```

When I sample random numbers from the two distributions, the sufficient statistics are highly similar:

```python
n = 100000
x_sample = gig_sp.rvs(size=n)
x_sample_from_ep = gig_sp_from_ep.rvs(size=n)
print(f"mean log: {jnp.log(x_sample).mean()}, {jnp.log(x_sample_from_ep).mean()}")
print(f"mean: {x_sample.mean()}, {x_sample_from_ep.mean()}")
print(f"mean inv: {(1/x_sample).mean()}, {(1/x_sample_from_ep).mean()}")
```

Will do more research on this numerical instability.
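On the step-size question, here is a minimal, self-contained sketch of how the finite-difference derivative of a log Bessel function reacts to eps, using SciPy's kve as a stand-in for the JAX implementation (the values and the choice of differentiating with respect to the order are my assumptions, not the PR's code):

```python
import numpy as np
from scipy.special import kve

def log_kv(v, x):
    # log K_v(x) via the exponentially scaled Bessel function: K_v(x) = kve(v, x) * exp(-x).
    return np.log(kve(v, x)) - x

def d_log_kv_dv(v, x, eps):
    # Central finite difference of log K_v(x) with respect to the order v.
    return (log_kv(v + eps, x) - log_kv(v - eps, x)) / (2 * eps)

v, x = 1.9891, 2.58  # illustrative values, roughly p and sqrt(a * b) from the example above
for eps in (1e-6, 1e-8, 1e-10):
    print(eps, d_log_kv_dv(v, x, eps))
```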
…erseGaussianNP for improved precision
Hi Neil,
I am trying to add the generalized inverse Gaussian distribution. However, there is an issue with the log of the modified Bessel function of the second kind used in the log normalizer and the `to_exp` method: https://en.wikipedia.org/wiki/Generalized_inverse_Gaussian_distribution. There is no JAX implementation of this function (jax-ml/jax#17038), so I use this package: https://github.com/tk2lab/logbesselk
However, it prevents me from using jvp in the log normalizer. In addition, it does not support `jax.grad` (tk2lab/logbesselk#33), so in the current version I simply use a finite difference, which is not accurate.

Please let me know if you have any suggestions on this issue. TensorFlow Probability has an implementation of the log Bessel function (https://www.tensorflow.org/probability/api_docs/python/tfp/math/log_bessel_kve); I am not sure if there is a way to use it in the current framework.
Besides that, the pdf and sampling should work well.
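One possibility, if the extra dependency is acceptable: TensorFlow Probability ships a JAX substrate, so the function linked above may be callable directly from JAX code. A hedged sketch follows (I have not verified that `log_bessel_kve` is exported by the substrate, nor how complete its gradients are; the values are illustrative):

```python
import jax.numpy as jnp
from scipy.special import kve  # only used as an optional cross-check
# The JAX substrate of TensorFlow Probability mirrors much of tfp.math.
from tensorflow_probability.substrates import jax as tfp

v = jnp.array(1.9891)
x = jnp.array(2.58)

# kve(v, x) = K_v(x) * exp(x), so log K_v(x) = log_bessel_kve(v, x) - x.
log_k = tfp.math.log_bessel_kve(v, x) - x
print(log_k)
print(jnp.log(kve(1.9891, 2.58)) - 2.58)  # should agree if the substrate exposes the function
```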