"sean" <jaymoseley@xxxxxxxxxxx> writes:
> Craig Markwardt wrote:
> > "sean" <jaymoseley@xxxxxxxxxxx> writes:
> > ...
> > > Note from my quotes that I have stated that rapid
> > > variations in the optical lightcurve should be
> > > seen if the time scale of the sampling were short
> > > enough. One of my theoretical criticisms of current
> > > procedures and beamed theory was that the power law
> > > fitting of optical data employed by the astrophysics
> > > community was in fact a misguided procedure
> > > that disguises the true variability of magnitude within
> > > the optical lightcurves by incorrectly smoothing out
> > > the observed variability in the optical lightcurve,
> > > particularly in early time lightcurves where
> > > fluctuations are more easily seen due to the relative
> > > brightness of the GRB afterglow.
> Note that Craig seems to be in denial that power
> law fitting takes variable brightness observations
> and fits them to a straight-line power law,
> especially considering that Craig himself has insisted
> many times that the decay rate is best described by
> a smooth power law decay with at most one break.
> Here is Craig's quote from years ago, trying desperately
> to show how my non-'power law' approach is just not scientific:
> "..Also, as has been pointed out, trying to infer that
> something is a "peak" when the data are as noisy
> and as sparse as they are, is in my view a dubious
> practice. Simply "connecting the dots" will lead
> to *a possible* solution, but ultimately a very
> *low probability* one a priori.
> A more appropriate approach would be to start with
> a featureless model of the decline (say, a power law),
> then add a gaussian or some other simple
> parameterization of the putative peak. By computing
> the F-statistic, one can then find out how significant
> the additional peak is, statistically speaking..."
> And here is Stanek's reply that this method, as endorsed
> by Craig, is basically, well, unscientific as regards GRBs:
> "....but with clear short-timescale variations, as
> reported before by Morris et al. (2006b). Trying to
> describe these erratic events with smooth power-law
> fits is often a dubious statistical proposition..."
> Note how Stanek clearly criticises Craig's approach
> and endorses my argument that the small time scale
> variations in optical data have to be accepted
> without ANY power law smoothing.
... much snipped ...
"Sean," I am not sure what prompted this strange post. It appears
that in many cases you have completely misinterpreted my words or
taken my posts out of context. Examples: (1) the Feb 2001 post that
you cite several times, was a discussion of a *single burst* afterglow
(GRB 970508), not all afterglows in general. (2) It was a discussion
of ways to *admit* (not reject!) variability interpretations in a case
of a noisy and poorly sampled light curve. (3) You are fixated on a
"power law." However, I gave that as an *example,* "(say, a power
law)," but not an exclusive example. I have always been in favor of
letting the data drive the analysis. Sometimes the "flares" are
obvious and no statistical test is needed. However, in the 2001
discussion of GRB 970508 this was not the case. Certainly in the pre-Swift
era almost all detected afterglows were quite smoothly decaying
(Berger et al 2005). There are other examples of your bizarre
misreading of my posts (discussion of the "flat part" of GRB
060210??). I encourage you to read the words that I actually wrote,
and interpret them as a normal human being would.
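For concreteness, the kind of F-test model comparison I described in 2001 (fit a featureless power-law decline, then a power law plus a Gaussian "flare," and ask whether the extra component is statistically warranted) can be sketched as follows. The light curve below is synthetic, and the noise level, seed, and starting parameters are all illustrative, not taken from any real burst:

```python
# Sketch of an F-test comparison: bare power-law decay vs. power law
# plus a Gaussian "flare".  All data here are synthetic/illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

rng = np.random.default_rng(42)

# Synthetic afterglow: flux ~ t^-1.1 with 5% Gaussian noise, no real flare.
t = np.linspace(1.0, 30.0, 40)        # days since burst (invented sampling)
sigma = 0.05 * t**-1.1                # per-point 1-sigma errors (invented)
flux = t**-1.1 + rng.normal(0.0, sigma)

def power_law(t, A, alpha):
    return A * t**-alpha

def power_law_plus_gauss(t, A, alpha, B, t0, w):
    return A * t**-alpha + B * np.exp(-0.5 * ((t - t0) / w)**2)

def chi2(model, params):
    return np.sum(((flux - model(t, *params)) / sigma)**2)

# Fit the featureless model, then the model with a putative peak.
p1, _ = curve_fit(power_law, t, flux, p0=[1.0, 1.0], sigma=sigma)
p2, _ = curve_fit(power_law_plus_gauss, t, flux,
                  p0=[1.0, 1.0, 0.1, 10.0, 2.0], sigma=sigma, maxfev=10000)

chi2_1, chi2_2 = chi2(power_law, p1), chi2(power_law_plus_gauss, p2)
dof_1, dof_2 = len(t) - 2, len(t) - 5

# F-statistic for the 3 extra parameters of the flare component.
F = ((chi2_1 - chi2_2) / (dof_1 - dof_2)) / (chi2_2 / dof_2)
p_value = f_dist.sf(F, dof_1 - dof_2, dof_2)
print(f"F = {F:.2f}, p = {p_value:.3f}")
```

The point is not the power law itself, it was only an example of a featureless baseline. The point is that the "peak" must buy a large enough improvement in chi-square, given the error bars, before one claims it is real.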
It would probably be quite convenient to you if you did not have to
provide any quantitative measure of the success or failure of your
"theory," which you claim works for all bursts(!). However, being
objective usually involves being quantitative. Your manner of
"analysis" suggests a large amount of cherry-picking and *sub*jective
interpretation, which is prone to huge biases. I can only encourage
you now, as I did in 2001, to try to remove your personal biases
and stake in this issue so that you can be objective.
I suspect that you don't really understand the nature of error bars
and uncertainty, much less what data modeling is. These are basic
issues in science and analysis that can't be ignored. Measurement
uncertainties (both statistical and systematic) fundamentally limit
how we can interpret the data. I encourage you to learn about
probability and statistics. A text like Bevington (2002) is great.
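To illustrate what I mean about error bars, here is the most basic case, treated in Bevington's early chapters: combining repeated measurements with different uncertainties. The numbers are invented for illustration:

```python
# Illustration: per-point error bars propagate into the uncertainty of
# a combined estimate (inverse-variance weighting).  Values are invented.
import numpy as np

flux = np.array([1.02, 0.95, 1.10, 0.98])   # repeated measurements
err  = np.array([0.05, 0.04, 0.10, 0.05])   # 1-sigma uncertainties

w = 1.0 / err**2                            # inverse-variance weights
mean = np.sum(w * flux) / np.sum(w)         # weighted mean
mean_err = 1.0 / np.sqrt(np.sum(w))         # propagated uncertainty

print(f"{mean:.3f} +/- {mean_err:.3f}")     # prints "0.986 +/- 0.026"
```

Any claimed "variation" smaller than that propagated uncertainty is simply not distinguishable from noise, and no amount of connecting the dots changes that.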
It's not possible to have a "discussion" with you, if you insist on
warping words and taking postings out of context. Assertions need to
be substantiated. You are certainly free to do what you wish, but it
won't involve me.
Berger, E., et al. 2005, ApJ, 634, 501
Bevington, P. R., & Robinson, D. K. 2002, *Data Reduction and Error
Analysis for the Physical Sciences*, McGraw-Hill