On Wed, 2005-04-20 at 14:24 +0200, Robert Klemme wrote:
> > Unfortunately the current ODBC driver has a weak point: mass inserts.
> > Assuming that your application uses mass inserts, this is exactly
> > what you observe.
> > Meanwhile a new ODBC driver is on the way (based on SQLDBC) which
> > supports mass inserts as your application requires.
> > So, the solution for you is not there, but coming soon...
> He could use loadercli instead - I guess those restrictions do not apply
> there, do they?
I do need control over the import. The program I use can handle
replication; it will add new records, update old ones and ignore
unchanged ones.
I also need to perform fast inserts and queries from my own
applications. One application I'm working on is an Open Source NNTP
(news) server. These see > 1 TB of new data every day, and the volume
normally doubles every year or so. The server will have to support
several databases (I'm thinking of Oracle, MSSQL, MySQL, PostgreSQL,
SQLite and MaxDB, and maybe some more) through a generic database
interface. Most installations will be low-volume (< 100 MB each
day), but the server must be able to handle high volumes on the
databases that can support it.
I'm not sure if I can use the C++ precompiler API for this. I briefly
read the documentation yesterday, and two issues immediately strike
me:
1) There is a limit of 8 connections per application instance(?)
   I will probably need hundreds of connections, as the servers
   I work on use threads and queued async IO with potentially tens
   of thousands of concurrent users.
2) The use of a dedicated linker breaks the design of my
   database interfaces, as these are loadable modules (or
   DLLs under Windows).
> Btw, Jarle you said tables are not yet indexed but there is a primary key:
> the PK is in fact backed by an index - so you actually do have an index. :-)
I know. That's why I mentioned it ;) It does however take _a lot_ more
time to insert records into a database with extra indexes. MaxDB takes
only 10 - 15 minutes to import the 135 million rows in my test dataset
with the loader, but then spends almost 100 minutes creating an extra
index. With InnoDB (where the indexes are created prior to the import),
each extra index adds about 80 - 100 minutes to the import. This is not
all that much overhead, but when the row counts increase to several
billions, the extra time becomes significant.
Jarle Aase email: jgaa@xxxxxxxx
Author of freeware. http://www.jgaa.com
War FTP Daemon: http://www.warftp.org
War FTP Daemon FAQ: http://www.warftp.org/faq/warfaq.htm
Jgaa's PGP key: http://war.jgaa.com/pgp
NB: If you reply to this message, please include all relevant
information from the conversation in your reply. Thanks.
<<< no need to argue - just kill'em all! >>>
MaxDB Discussion Mailing List
For list archives: http://lists.mysql.com/maxdb
To unsubscribe: http://lists.mysql.com/maxdb?unsub=mailarch@xxxxxxx