
Conversation

@stevertaylor (Collaborator)

No description provided.

@AaronDJohnson (Collaborator) left a comment

This could really save us time down the road when people ask us questions about how to use the models. I highly recommend that we do this sort of thing. Let's discuss the implementation a bit, though.

common=['gw_log10_A']+gamma_common_name, name='gw')
]) for psr in psrs))
elif common_type == 'hd':
gl = ds.GlobalLikelihood((ds.PulsarLikelihood([psr.residuals,

Collaborator

See above. I think this extra Likelihood block could be folded into a single block if we add a small if/else choosing between makegp_fourier and makegp_fourier_global.
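
Roughly what I have in mind (just a sketch; I haven't checked the exact positional arguments of the two factories, so the call below only carries over the keyword arguments visible in this diff):

make_gw_gp = ds.makegp_fourier if common_type == 'curn' else ds.makegp_fourier_global

gl = ds.GlobalLikelihood((ds.PulsarLikelihood([psr.residuals,
                                               # ... same noise terms as in the current blocks ...
                                               make_gw_gp(psr,
                                                          common=['gw_log10_A'] + gamma_common_name,
                                                          name='gw')])
                          for psr in psrs))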

Collaborator Author

I'm happy to do that. But I'm confused about ArrayLikelihood being adopted as the standard. The README describes it as still experimental. If we should adopt it as the standard, then why does GlobalLikelihood exist at all?

from PTMCMCSampler import PTMCMCSampler as ptmcmc


class JumpProposal(object):

Collaborator

These JumpProposals should really go in the sampler instead. @vhaasteren, can we put them there?

Collaborator Author

I asked this below, but do you mean in PTMCMCSampler itself? We could do that, and it would certainly be a cleaner way for both discovery and enterprise to access these custom jump-proposal schemes. The issue is that the sampler package then becomes highly specific to PTA data analysis and loses its generality. If we do go that route, I'd advocate for retaining PTMCMCSampler's generality, with some additional features that PTA data analysts can access without them being intrusive or obvious to the more general user.
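
To make that concrete, here's roughly how the non-intrusive option looks today (a sketch: it assumes the ported JumpProposal keeps e_e's draw_from_prior method, and it reuses param_names and the lnlikelihood/lnprior callables from this module). PTMCMCSampler only ever sees plain callables registered through its existing addProposalToCycle hook, so all the PTA-specific logic stays on our side:

import numpy as np
from PTMCMCSampler import PTMCMCSampler as ptmcmc

ndim = len(param_names)
sampler = ptmcmc.PTSampler(ndim, get_lnlikelihood, get_lnprior,
                           np.diag(np.ones(ndim) * 0.1), outDir='./chains')

# PTA-specific proposals live in JumpProposal; the sampler just gets callables and weights
jp = JumpProposal(param_names)
sampler.addProposalToCycle(jp.draw_from_prior, 10)  # cycle weight is illustrative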

from jax.tree_util import Partial
import discovery as ds

def lhood_maker(psrs, noisedict=None, gamma_common=None,

Collaborator

I think make_likelihood would be a better name for this.

gamma_common_name = ['gw_gamma']

if common_type == 'curn':
gl = ds.GlobalLikelihood((ds.PulsarLikelihood([psr.residuals,

Collaborator

We should exclusively use ArrayLikelihood in these functions. It is faster, and these models typically assume that the noise model is the "same" across pulsars. If it isn't, these e_e-style convenience functions won't save us anyway, and the full GlobalLikelihood functionality will need to be used directly.
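
Schematically, something like this (a sketch only; I haven't double-checked ArrayLikelihood's call signature here):

# same per-pulsar pieces as above, just handed to ArrayLikelihood,
# which is the faster path when every pulsar's noise model has the same structure
psls = [ds.PulsarLikelihood([psr.residuals,
                             # ... same noise and GW terms as in the current block ...
                            ]) for psr in psrs]
gl = ds.ArrayLikelihood(psls)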


class JumpProposal(object):

def __init__(self, param_names, empirical_distr=None,

Collaborator

empirical_distr isn't used here

class JumpProposal(object):

def __init__(self, param_names, empirical_distr=None,
save_ext_dists=False, outdir='./chains'):

Collaborator

save_ext_dists and outdir aren't used here

Collaborator Author

Not yet. It's basically ported over from e_e, anticipating when we may want to use empirical distributions as proposals.

self.get_lnlikelihood = lambda x: float(jlogl(self.x2p(x)))
self.get_lnprior = lambda x: float(jlogp(self.x2p(x)))

def x2p(self, x):

Collaborator

x2p should be defined outside of this class, because all samplers may have use for it.

# does not handle vector parameters
return {par: val for par, val in zip(self.param_names, x)}

def p2x(self, p):

Collaborator

p2x should be defined outside of this class, because all samplers may have use for it.
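
Concretely, I'm imagining module-level helpers that any sampler wrapper could import (a sketch that just mirrors the two methods above; vector parameters are still not handled):

import numpy as np

def x2p(param_names, x):
    # map a flat coordinate array to a {parameter_name: value} dict
    return {par: val for par, val in zip(param_names, x)}

def p2x(p):
    # map a parameter dict back to a flat float64 array
    return np.array(list(p.values()), 'd')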

return q, float(lqxy)


class InferenceModel(object):

Collaborator

I'm not sold on this class being necessary. It seems to me that this could just be a function (setup_ptmcmc_sampler or similar) that returns the sampler object.
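
Something along these lines is what I'm picturing (a sketch; the name and defaults are mine, and it leans on the module-level x2p helper suggested above):

import numpy as np
from PTMCMCSampler import PTMCMCSampler as ptmcmc

def setup_ptmcmc_sampler(logl, logp, param_names, outdir='./chains'):
    # wrap the discovery log-likelihood/log-prior for PTMCMC and return the sampler
    ndim = len(param_names)
    cov = np.diag(np.ones(ndim) * 0.1)  # initial jump covariance (illustrative)
    return ptmcmc.PTSampler(ndim,
                            lambda x: float(logl(x2p(param_names, x))),
                            lambda x: float(logp(x2p(param_names, x))),
                            cov, outDir=outdir)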

Collaborator Author

There's more to this class though. The initialization takes in the discovery likelihood and the pulsar names, then prepares things for PTMCMC (like get_lnlikelihood and get_lnprior) in a way that feels familiar to enterprise users. Plus you have the separate function that prepares parameter groupings. We may be able to split some of this up, but we do need most of it.

"""
return np.array(list(p.values()), 'd')

def get_parameter_groups(self):

Collaborator

This function also seems like something that should live in the sampler package, since it gets used in multiple places (here and in e_e).

Collaborator Author

Do you mean in the PTMCMCSampler package itself? If so, that's certainly possible, but it would make that package highly specific to PTA data analysis in a way that it currently is not, and it would lose some of its generality as a sampling package. We could go that route, but it's a bigger conversation.
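
If we did explore it, one way to keep the sampler side generic (a rough sketch; the pattern strings are only illustrative) would be for PTMCMCSampler to own a pattern-based grouping helper while discovery/e_e supply the PTA-specific patterns:

def get_parameter_groups(param_names, patterns):
    # one group containing every index, plus one group per matching name pattern
    groups = [list(range(len(param_names)))]
    for pattern in patterns:
        idx = [i for i, name in enumerate(param_names) if pattern in name]
        if idx:
            groups.append(idx)
    return groups

# the PTA-specific patterns would then be passed in from our side, e.g.
# groups = get_parameter_groups(param_names, ['gw_', 'red_noise'])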
