Quote from NovaZyg on 08/31/07 at 08:02:24:
There is a DDF for every file in DBA. Some of the files share the same DDF; they are schema files of the same FD, for example BKARINV and BKARHINV. They have the exact same layout, so they share the FD of BKARINV. And yes, Option Three would be your best bet.
I figured you'd be the font of knowledge on this subject.
Yes, I think that if this works out, option 3 is the way to go. I forgot to mention in my description that there would also need to be an update trigger that unpacks the binary blob into separate fields again, but it's sort of obvious it would have to work that way. I'm not sure whether to do it entirely with triggers or views or some other database concept, or to pack and unpack records directly in the DLL and still use triggers for when an update is done outside of Btrieve. It would probably be most consistent to do it all in the database. After all, PostgreSQL can use C code triggers, so I'm not limited there...
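Roughly what I'm picturing for the DLL side of the pack/unpack (the field names, sizes, and layout below are made up for illustration; the real layout would come from the BKARINV DDF):

    /* Sketch of packing a fixed-width Btrieve record into the blob
       column and back.  Layout here is hypothetical -- see the DDF. */
    #include <string.h>
    #include <stdint.h>

    #pragma pack(push, 1)
    typedef struct {
        char    inv_num[8];    /* hypothetical invoice number */
        char    part_num[10];  /* hypothetical part number */
        int32_t price_cents;   /* hypothetical price, in cents */
    } InvRecord;
    #pragma pack(pop)

    /* Pack a record into the opaque blob stored in the PostgreSQL row. */
    static void pack_record(const InvRecord *rec, unsigned char *blob)
    {
        memcpy(blob, rec, sizeof(InvRecord));
    }

    /* Unpack the blob back into fields before handing the buffer
       to the Btrieve caller. */
    static void unpack_record(const unsigned char *blob, InvRecord *rec)
    {
        memcpy(rec, blob, sizeof(InvRecord));
    }

The DB-side trigger would just be the mirror image of this: when something updates the individual columns, repack them into the blob, and vice versa.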
If this works it could be fairly neat. No more Pervasive licenses (and the server is sort of expensive), better data consistency with multiple users, full transaction support, and faster reports (I'd imagine they could be faster because I could cache tables locally in the DLL, greatly reducing network latency).
Unfortunately many of the real advantages would be out of reach, since the Btrieve data access paradigm is totally different from the SQL paradigm. For instance, in a report SQL can say 'select * from InvMaster where InvMaster.PartNum < 10000;' and the engine returns the whole result in one shot (one to three function calls), whereas Btrieve would require one call per record, so MANY function calls. Or take 'update InvMaster set InvMaster.Price = InvMaster.Price * 1.05' to adjust prices for inflation (or something...). Which would run faster, the single SQL statement or the equivalent Btrieve read/update loop? My money is on SQL.
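To make the call-count difference concrete, here's roughly what that select looks like through the Btrieve API (op codes 12 and 6 are the standard Get First / Get Next operations; the BTRV declaration is simplified from the vendor header, and the filter test is left as a comment):

    #define B_GET_FIRST 12
    #define B_GET_NEXT   6

    /* Provided by the Btrieve interface library (e.g. wbtrv32.dll). */
    extern int BTRV(int op, void *pos_block, void *data_buf,
                    unsigned short *data_len, void *key_buf, int key_num);

    void scan_inv_master(void *pos_block)
    {
        char data_buf[512];
        char key_buf[255];
        unsigned short len = sizeof(data_buf);
        /* One BTRV call per record: this is the whole problem. */
        int status = BTRV(B_GET_FIRST, pos_block, data_buf, &len, key_buf, 0);
        while (status == 0) {   /* status 9 = end of file */
            /* ...test PartNum < 10000 and process the record here... */
            len = sizeof(data_buf);
            status = BTRV(B_GET_NEXT, pos_block, data_buf, &len, key_buf, 0);
        }
    }

Ten thousand records means thousands of calls into the engine, versus a single statement on the SQL side.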
Even so, local chunk caching (fetching a table in chunks) and improved concurrency could make this worthwhile.
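Something like this with libpq is what I mean by chunk caching: declare a cursor and fetch a few hundred rows per round trip into a local cache, instead of one network hop per record (table and column names are from the example above; error handling omitted):

    #include <libpq-fe.h>

    void cache_inv_chunks(PGconn *conn)
    {
        /* Cursors only live inside a transaction. */
        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "DECLARE inv_cur CURSOR FOR "
                             "SELECT * FROM InvMaster WHERE PartNum < 10000"));
        for (;;) {
            /* One round trip buys 500 records. */
            PGresult *res = PQexec(conn, "FETCH 500 FROM inv_cur");
            int n = PQntuples(res);
            if (n == 0) { PQclear(res); break; }
            for (int i = 0; i < n; i++) {
                /* ...copy row i into the DLL's local cache, e.g.
                   const char *v = PQgetvalue(res, i, 0); ... */
            }
            PQclear(res);
        }
        PQclear(PQexec(conn, "CLOSE inv_cur"));
        PQclear(PQexec(conn, "COMMIT"));
    }

Then the DLL could answer most Get Next calls straight out of the local cache.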