ISTech Support Forum
http://www.istechforum.com/YaBB.pl
Evo-ERP and DBA Classic >> Suggestions for Updates >> Better DE-A
http://www.istechforum.com/YaBB.pl?num=1188217205

Message started by kkmfg on 08/27/07 at 05:20:05

Title: Better DE-A
Post by kkmfg on 08/27/07 at 05:20:05

To be quite blunt I hate DE-A. I use it all of the time for exports out of DBA but it's slower moving than frozen molasses. The other real joy in my life is that the export filename field is so short. Sometimes it's possible that I really might want to type c:\reports\inv_dump.txt or something of that sort. Can I do that? No way! It's limited to something like 8 characters. That thing is lucky I don't have a gun here at work! ;-)

Title: Re: Better DE-A
Post by GasGiant on 08/27/07 at 05:55:30

Agreed, it sucks. Let's write something better! Oh, wait. The "Hello World" program is better. I guess what I meant is, let's write something that works better than DE-A..... done yet?

Title: Re: Better DE-A
Post by kkmfg on 08/27/07 at 07:17:15

Actually, I do already have part of this done. I hate BTrieve so much that I toyed with a BTrieve -> PostgreSQL converter. I've got source code from that project that extracts the field names and types from each DBA file. It wouldn't be too difficult to make it snag the data too. And from there a GUI to select which table to open, what to save it as, which fields, output headers? and we're home. I don't remember whether the thing was written in VB6 or VB.NET... And, as a bonus, it uses ODBC so maybe, just maybe, it'll be able to get more than one record at a time over the network. If there is one thing I hate the most about BTrieve it's working with only one record at a time. What a worthless way to access the data!

Title: Re: Better DE-A
Post by GasGiant on 08/27/07 at 09:05:12

Oooh, good idea. Down with Btrieve! Port the whole shebang. Just imagine Evo with referential integrity and MVCC. Plus, not having to horse around with spotty ODBC support. I could use XPath/XSLT for snappy Intranet apps. {sigh} Too bad Pervasive gave up on PostgreSQL.

Title: Re: Better DE-A
Post by kkmfg on 08/27/07 at 10:50:52


GasGiant wrote:
Oooh, good idea. Down with Btrieve! Port the whole shebang. Just imagine Evo with referential integrity and MVCC. Plus, not having to horse around with spotty ODBC support. I could use XPath/XSLT for snappy Intranet apps. {sigh} Too bad Pervasive gave up on PostgreSQL.


I really honestly was going to do it too. I found code from MS for Btrieve -> SQL Server. It basically replaces the wbtrv32.dll file with a custom-made one that queries SQL Server instead of Btrieve. I thought that maybe I could modify that to work with PostgreSQL. My intention was just as you said: to get referential integrity, transactions, and MVCC. The problem, of course, is that DBA doesn't know how to take advantage of such things. I could add automatic transactions, but referential integrity is a bit harder to do automatically. MVCC could more or less be taken advantage of automatically, though.

Unfortunately though, Btrieve really, really, really sucks. All records are passed back and forth in packed binary format and it's difficult to figure out what its database handles actually mean. MS got around this by basically hard-coding C code to pack each table's data from SQL format to Btrieve format. They also required a recompile of the calling program to get around the fact that nobody outside of Pervasive understands what the Btrieve handles encode. Obviously I'm not going to be able to recompile TAS so that it plays nice... So, it's a little tougher if you have to try to figure that out on the fly. I thought about writing a parser that took as input the table definitions for DBA and output C code that could be compiled to hard-code things like MS did. This might be doable with a lot of work. The biggest thing is being able to recognize the handles passed to Btrieve so that one knows which file is being accessed and which record. So far I'm striking out on that.

I might still toy with this a bit as it will be of benefit for many DBA users. Not to mention postgresql is free and allows unlimited users. ;-)

Update: Apparently I've been a bit dense... The reason I can't figure out what the handles are encoding is that I've been using my wbtrv32.dll as a shim between DBA and the real wbtrv32.dll. If I cut Btrieve out entirely, I could use the handle to store my own info rather than try to decode the one Pervasive SQL makes... I might actually be able to do this after all.

Title: Re: Better DE-A
Post by kkmfg on 08/28/07 at 19:44:28

For anyone interested in the progress of these things, you may keep abreast of the situation on the wiki.

http://www.evoerpwiki.com/index.php?title=C/C%2B%2B_Hacks

Title: Re: Better DE-A
Post by kkmfg on 08/31/07 at 05:50:12

Ok, for anyone interested in this or anybody with a high level of technical expertise (read: sadomasochism) please comment on this:

Btrieve stores all records as binary blobs. It inherently has no idea what the record format of any file is. That's why DDFs are necessary for third-party reading of the data. Programs written specifically for Btrieve could be hard-coded with the record formats.

I can use the DDFs to get the record format and extract the relevant data to PostgreSQL that way. However, not all DBA files have DDF info generated (as far as I can tell... at least I suspect that a few files used internally by DBA may not). Even if I did extract the record info, I'd still have to pack it into a blob again to send it to DBA, where it would be internally unpacked again by TAS. This will be sort of slow. There would still be some advantages though (MVCC, transactions, could be faster on large reports due to less network overhead, more stable than Btrieve).

Another option is to store each Btrieve record as a binary blob the size of the record and just feed DBA the blobs. This is essentially what Btrieve is already doing for DBA. We'd still get MVCC and transaction support, but you'd be unable to interface directly with PostgreSQL to get at the record contents. Since it takes Pervasive SQL out of the mix, it also breaks ODBC support.

I suppose a third option, for people with lots of hard drive space and somewhat fast processors, is to unpack the records using the DDFs (so long as I'm wrong and all files are specified in the DDF) into postgres and add a binary blob field. Add an update trigger so that every record update to postgres also repacks the record into a blob. That way you have the data both ways: on record retrieval from DBA I could just serve the blob, but you'd also have the data in fields if you wanted to play with it outside of DBA. And the update trigger will then update DBA's blob every time you modify a field. Obviously there would need to be an insert trigger too. This could all be done even if the DDF doesn't have absolutely every file. It defines the important ones you might want to get at externally, and the rest (if there really are undefined ones) can still be blobs and that should work fine.

So, whatta ya say? Option three?

Title: Re: Better DE-A
Post by NovaZyg on 08/31/07 at 08:02:24

There is a DDF for every file in DBA. Some of the files share the same DDF, they are schema files of the same FD.  Example BKARINV and BKARHINV.  They have the exact same layout so they share the FD of BKARINV.  And yes Option Three would be your best bet.

Title: Re: Better DE-A
Post by kkmfg on 08/31/07 at 09:10:36


NovaZyg wrote:
There is a DDF for every file in DBA. Some of the files share the same DDF, they are schema files of the same FD.  Example BKARINV and BKARHINV.  They have the exact same layout so they share the FD of BKARINV.  And yes Option Three would be your best bet.


I figured you'd be the font of knowledge on this subject. ;-)

Yes, I think that if this works out then option 3 is the way to go. I forgot in my description that there would need to be an update trigger that unpacked the binary blob into separate fields again, but it's sort of obvious that it would have to be that way. I'm not sure whether to use triggers entirely, or views, or some other database concept, or to pack and unpack records directly in the DLL and still use triggers for when an update is done outside of Btrieve. It would probably be most consistent to do it all in the DB. After all, PostgreSQL can use C code triggers, so I'm not limited there...

If this works it could be fairly neat. No more Pervasive licenses (and the server is sort of expensive), better data consistency with multiple users, full transaction support, faster reports (I'd imagine they could be faster because I could locally cache tables in the DLL, thus greatly reducing network latency).

Unfortunately many of the real advantages would be out of reach, as the Btrieve data access paradigm is totally different from the SQL paradigm. For instance, in a SQL report you could say 'select * from InvMaster where InvMaster.PartNum < 10000;' and the SQL engine would return the whole result in one shot (and 1-3 function calls), whereas Btrieve would require MANY function calls. Or 'update InvMaster set InvMaster.Price = InvMaster.Price * 1.05' to update for inflation (or something...). Which would run faster, the SQL query or Btrieve? My money is on SQL.

Still, local chunk caching (caching a table by chunks) and improved concurrency could still make this worthwhile.

ISTech Support Forum » Powered by YaBB 2.1!
YaBB © 2000-2005. All Rights Reserved.