
Re: (meteorobs) Visual magnitude

Thanks for the reply, Rainer! The first part of my answer is a little off-topic 
for meteorobs, but there's also a question for y'all at the end...

There are a variety of formulae that deep-sky nuts (like Brian Skiff, Steve 
Waldee, Mel Bartels) have developed for describing how easily a particular deep-
sky object can be spotted against the sky background under given conditions.

One of the most interesting of these is the set of algorithms for figuring out 
the "optimal detection magnification" for an object - the power to use on a given 
telescope, under a sky of given brightness, to MAXIMIZE the likelihood of 
spotting that object in the eyepiece. The more sophisticated of these formulae 
actually take into account the spectral characteristics of the objects involved, 
and the effects of using various filters too.

I'm not that familiar with the details of these algorithms, but they seem to 
revolve around the logarithm of the relative contrast between the object and 
the sky.
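
I can't reproduce their actual formulae here, but just to make the contrast part 
concrete, here's my own back-of-the-envelope sketch in Python (the galaxy 
magnitude, size, and sky brightness numbers are made-up examples, not anyone's 
real algorithm):

import math

def log_contrast(obj_mag, obj_area_arcsec2, sky_mag_per_arcsec2):
    # Spread the object's integrated magnitude over its apparent area to get
    # a surface brightness in magnitudes per square arcsecond.
    obj_sb = obj_mag + 2.5 * math.log10(obj_area_arcsec2)
    # One magnitude is a factor of 10**0.4 in brightness, so the log of the
    # object-to-sky surface-brightness ratio is:
    return 0.4 * (sky_mag_per_arcsec2 - obj_sb)

# A made-up mag 9.0 galaxy, 10' x 5' (treated as an ellipse), under a dark
# (21.0 mag/arcsec^2) sky and a suburban (18.0 mag/arcsec^2) sky:
area = math.pi * (10 * 60 / 2) * (5 * 60 / 2)
for sky in (21.0, 18.0):
    print(f"sky {sky}: log contrast = {log_contrast(9.0, area, sky):+.2f}")

As I understand it, the "optimal magnification" part then comes in because higher 
power dims object and sky equally (so this contrast stays put) while enlarging the 
object's apparent size, and the eye's contrast threshold depends on that apparent 
size - but that's exactly the part of the algorithms I'm fuzzy on.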


You also wrote:
>If the sky is lit up, lm for stars and meteors may separate significantly.

Is this referring to the guideline that meteor data collected under skies with 
LMs below 5.0 (or so) aren't useful for rate analysis, because of the large 
correction factors involved? I had always assumed that this 5.0 limit had more to 
do with small sample size (you just don't see many meteors from a parking lot), 
and maybe with the use of LM=6.5 as the "standard sky" reference point, than with 
any inherent divergence between stellar and meteor detectability!
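
Just to illustrate what I mean by "large correction factors": the limiting-
magnitude term in the usual ZHR formula is, as I understand it, r^(6.5 - LM), 
where r is the population index, and it grows fast once the LM drops much below 
6. A quick sketch, using r = 2.5 purely as a typical illustrative value:

# How the limiting-magnitude correction factor r**(6.5 - lm) blows up as the
# sky gets brighter. r = 2.5 is just a typical population index, picked here
# for illustration only.
r = 2.5
for lm in (6.5, 6.0, 5.5, 5.0, 4.5, 4.0):
    print(f"LM {lm:.1f}: correction factor = {r ** (6.5 - lm):5.2f}")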

What are the factors that contribute to this divergence? Is it because meteors 
are slightly extended objects, or because of some property of their light 
emission that makes them on average less detectable than a sample of stars 
spanning many spectral types? This one really has me curious!

Thanks, and clear skies y'all!
Lew
