On 22 Sep, 02:58, vieviu...@xxxxxxxxx (Luke) wrote:
> I am looking for a proper, fastest and most reasonable way to insert
> data from pretty big file (~1,000,000 lines) to database. I am using
> Win32::ODBC (ActiveState Perl) module to connect with Access/MSSQL
> database and inserting line after line. I was wondering if there is a
> better way to do it...
> Maybe create hash with part of data (maybe all of it - what are the
> limitations ?)
> What is other way to do it instead 'INSERT INTO...' statement after
> reading each line ?
DBI has an execute_array method that can allow DBD drivers to optimize
bulk inserts. Unfortunately, AFAIK, none of the DBD drivers I've
encountered does anything smarter with it than DBI's default, which
just loops over the tuples and calls execute for each one.
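Even so, execute_array keeps the code tidy and lets a smarter driver speed things up later. A minimal sketch, using an in-memory SQLite database (DBD::SQLite) as a stand-in for the Access/MSSQL connection in the post; the table and column names are invented for the example:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# In-memory SQLite database as a placeholder for the real connection.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 0 });

$dbh->do('CREATE TABLE lines (id INTEGER, text VARCHAR(255))');

my $sth = $dbh->prepare('INSERT INTO lines (id, text) VALUES (?, ?)');

# Bind one array per placeholder; execute_array pairs them up row by row.
my @ids   = (1, 2, 3);
my @texts = ('first line', 'second line', 'third line');

my $tuples = $sth->execute_array({ ArrayTupleStatus => \my @status },
                                 \@ids, \@texts);
$dbh->commit;

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM lines');
print "inserted $count rows\n";
```

Turning AutoCommit off and committing once per batch, as above, is itself a big win over committing every row.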
For efficient bulk inserts I usually fall back on writing a file and
using the underlying database's bulk insert tool. This, of course,
sacrifices portability between databases.
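A sketch of the write-a-file-then-bulk-load approach. The Perl side just writes a tab-delimited file; the filename, table name, and sample input are invented for the example, and the server-side load command is shown only as a comment (MSSQL's BULK INSERT, or the bcp utility, would do the actual work):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $datafile = 'bulk_load.txt';
open my $out, '>', $datafile or die "open $datafile: $!";

# Stand-in for reading the real ~1,000,000-line input file.
my @input = ('1,first line', '2,second line', '3,third line');

for my $line (@input) {
    my ($id, $text) = split /,/, $line, 2;
    print {$out} "$id\t$text\n";      # one row per line, tab-delimited
}
close $out or die "close $datafile: $!";

# The load itself then runs server-side, e.g. (T-SQL, not executed here):
#   BULK INSERT dbo.lines FROM 'C:\data\bulk_load.txt'
#   WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');
```

Take care that the field and row terminators you write match the ones you tell the bulk tool to expect, and that none of the data contains them.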
If you shell out to the bulk tool, note this from the Perl documentation for system:
This is not what you want to use to capture the output from a command, for
that you should use merely backticks or qx//, as described in `STRING` in
the perlop manpage. Return value of -1 indicates a failure to start the
program or an error of the wait(2) system call (inspect $! for the reason).
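That return-value convention can be sketched as follows: -1 means the program never started (check $!); otherwise $? holds the wait(2) status, with the child's exit code in the high byte. The child command here is a throwaway perl one-liner chosen so the example is self-contained:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Run a child perl ($^X is the current interpreter) that exits with code 3.
my $rc = system($^X, '-e', 'exit 3');

if ($rc == -1) {
    # system() could not start the program at all.
    die "failed to start: $!";
}

# $? is the raw wait(2) status; the exit code lives in the high byte.
my $exit_code = $? >> 8;
print "child exited with $exit_code\n";
```

A real script would compare $exit_code against the bulk tool's documented exit codes rather than just printing it.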