After disappointment with how Oracle developer edition runs on my PC, I decided to give an old acquaintance a whirl. I downloaded the latest MySQL source and built it in the Cygwin environment (using the instructions in the DBD::mysql Perl package).
It all installed nicely, and I downloaded TOAD to help set up my tables - another great (and free) tool.
My requirement is a simple lookup, so I set up a simple table and a Perl script to loop and insert values into it. I was reading a bit about optimising inserts, and found that dropping the index before all the inserts and then adding it back afterwards is significantly quicker than leaving it enabled throughout.
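A minimal sketch of the drop-index-then-bulk-insert pattern, using Python's built-in sqlite3 as a stand-in for MySQL (the table and index names are made up for illustration; the MySQL DDL is near-identical):

```python
import sqlite3

# Hypothetical three-column lookup table; SQLite stands in for MySQL
# here because it ships with Python.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup (a INTEGER, b INTEGER, c INTEGER)")
conn.execute("CREATE INDEX idx_lookup ON lookup (a, b, c)")

rows = [(i, i * 2, i * 3) for i in range(10000)]

# Drop the index, do all the inserts, then rebuild the index once
# at the end instead of updating it on every insert.
conn.execute("DROP INDEX idx_lookup")
conn.executemany("INSERT INTO lookup VALUES (?, ?, ?)", rows)
conn.execute("CREATE INDEX idx_lookup ON lookup (a, b, c)")
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM lookup").fetchone()[0]
```

The saving comes from paying the index-maintenance cost once, in a single sorted build, rather than once per inserted row.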
Then I discovered that it is possible to insert multiple records at once. So I started to play around with this, and made the number of rows inserted at a time a configuration variable. The SQL syntax is like "insert into table_name values (?,?,?), (?,?,?)" and so on, for a table with three columns.
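Generating that placeholder string for an arbitrary batch size is simple string work; here is a small helper (the function and table names are my own, not from the original script):

```python
def multi_row_insert(table, n_cols, n_rows):
    """Build a multi-row INSERT with n_rows groups of n_cols placeholders,
    e.g. "INSERT INTO t VALUES (?,?,?), (?,?,?)" for n_cols=3, n_rows=2."""
    group = "(" + ",".join("?" * n_cols) + ")"
    return f"INSERT INTO {table} VALUES " + ", ".join([group] * n_rows)
```

Prepare this statement once per batch size and reuse it, binding one flat list of n_cols * n_rows values each time.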
Not surprisingly, inserting 10 rows at a time rather than one sped things up from about 2,000/sec to 5,000/sec, with significantly fewer trips to the database. Continuing the trend, I found that around 10,000 rows at a time was quite a good number, giving around 130,000 inserts per second! Now that's pretty good!
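To feed a multi-row statement you need to chunk the data and flatten each chunk into one bind-parameter list. A sketch of those two pieces (the batch size is the tunable knob discussed above; ~10,000 was the sweet spot in my tests):

```python
def batches(rows, size):
    """Yield successive chunks of `size` rows from a list of row tuples."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def flatten(batch):
    """Flatten a batch of row tuples into one flat bind-value list,
    matching the (?,?,?), (?,?,?)... placeholder order."""
    return [value for row in batch for value in row]
```

Note the last batch may be short, so the insert statement has to be rebuilt (or the batch padded) when len(batch) differs from the configured size.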
After adding a single composite index on the five columns I needed to search on, I was able to run well over 1,000 queries per second, which is pretty good when there are over 11 million records in the table.
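The idea of one composite index serving the lookup can be sketched like this, again with sqlite3 standing in for MySQL and made-up column names (three columns here rather than five, to keep it short):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup (a INTEGER, b INTEGER, c INTEGER)")
conn.executemany("INSERT INTO lookup VALUES (?, ?, ?)",
                 [(i % 10, i % 7, i) for i in range(1000)])

# One composite index covering all the searched columns.
conn.execute("CREATE INDEX idx_abc ON lookup (a, b, c)")
conn.commit()

# A WHERE clause on the index's leading columns can be answered
# from the index instead of scanning 11 million rows.
row = conn.execute(
    "SELECT c FROM lookup WHERE a = ? AND b = ? ORDER BY c LIMIT 1",
    (3, 3)).fetchone()
```

The column order in the index matters: queries that constrain the leading columns benefit most.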
Another notable difference with MySQL is that there seems to be no slowdown in inserts as the dataset grows.