IMHO, even 7 duplicates per 15 million values (~0.00005% chance of collision)
is a [more than] acceptable result for most, if not all, applications. :)
Besides, using time as the seed value doesn't and can't guarantee uniqueness
of the generated values. And no matter what you use to generate GUIDs,
increasing system load increases the chance of getting duplicates.
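To illustrate why time-seeding is the weak point: if two processes (or two
servers) happen to seed a PRNG with the same clock reading, they will emit
identical "unique" IDs. A minimal Python sketch (the generator name and the
timestamp are made up for illustration, not any real GUID implementation):

```python
import random

def naive_guid(seed):
    # Hypothetical time-seeded generator: deterministic given the seed,
    # so two machines sharing a clock value produce the same 128 bits.
    rng = random.Random(seed)
    return rng.getrandbits(128)

t = 1276538765  # the "same" timestamp observed on two busy servers
print(naive_guid(t) == naive_guid(t))  # identical IDs, guaranteed collision
```

Under heavy load, more requests land in the same clock tick, which is exactly
why collisions become more likely as load increases.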
I agree with one of the posters from that discussion:
"chances are so low that you really should stress about something else--like
your server spontaneously combusting or other bugs in your code. That is,
assume it's unique and don't build in any code to "catch" duplicates--spend
your time on something more likely to happen (i.e. anything else)."
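For comparison, here is what the birthday-paradox math says truly random
version-4 UUIDs (122 random bits) should give you. This is a rough sketch of
the standard approximation, not a statement about any particular GUID library:

```python
def expected_collisions(n, bits):
    # Birthday-paradox approximation: among n values drawn uniformly from
    # 2**bits possibilities, the expected number of colliding pairs is
    # roughly n*(n-1) / (2 * 2**bits).
    return n * (n - 1) / (2 * 2 ** bits)

# 15 million version-4 UUIDs (122 random bits):
print(expected_collisions(15_000_000, 122))  # on the order of 1e-23
```

So for genuinely random UUIDs, 7 duplicates in 15 million would be
astronomically unlikely; seeing that many is itself evidence the generator
was not using its full random width (e.g. it was time-seeded).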
On Monday 14 June 2010 20:06:05 [email protected] wrote:
> Konrad Rosenbaum wrote on Monday, June 14, 2010 3:41 PM:
> > ...
> > With the algorithms given and the width of the value you don't get
> > anything better. The correct name for this thing would be QLUID -
> > quite likely unique ID, but UUID with the first "U" being translated
> > as "universally" was better marketing... ;-)
> This apparently matches with this guy's experience:
> "It should not happen. However, when .NET is under a heavy load, it is
> possible to get duplicate guids. I have two different web servers using two
> different sql servers. I went to merge the data and found I had 15 million
> guids and 7 duplicates."
> So to me 7 duplicates out of 15 million seems quite a lot of "not so
> universally unique instances". Of course he was using .NET, whereas we use
> Qt... ;)
> Cheers, Oliver
Qt-interest mailing list