Maybe I should explain:
already in the past, when the "old" system was in effect, the programs didn't
do normal READ and WRITE calls; they used site-specific macro calls
instead, which hid the technical peculiarities of the access method from the
applications. So it was fairly easy to change those macros so that, instead
of reading from the old DB system, DB2 SELECTs are now issued.
That's the very short version of the story, of course ...
Programming languages are PL/1 and ASSEMBLER.
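To illustrate the idea of the unchanged access interface, here is a minimal sketch (Python standing in for PL/1/ASSEMBLER, sqlite3 standing in for DB2; the table and column names are invented for illustration, not the actual layout):

```python
import sqlite3

# sqlite3 stands in for DB2 here; "custtab"/"custkey"/"data" are invented names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE custtab (custkey TEXT PRIMARY KEY, data TEXT)")
conn.execute("INSERT INTO custtab VALUES ('K1', 'payload')")

def read_record(key):
    """Same interface the old READ macro offered to the applications;
    only the body changed from a flat-file read to an SQL SELECT."""
    row = conn.execute(
        "SELECT data FROM custtab WHERE custkey = ?", (key,)).fetchone()
    return row[0] if row else None
```

The point is that the callers never see the change: they keep calling `read_record` (the macro) exactly as before.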
On 05.03.2014 17:40, Bernd Oppolzer wrote:
> The company I am working with did a similar kind of migration
> some ten years ago, replacing an old home-grown database system
> based on flat files with DB2, but only the physical layer of the DB system
> was migrated; the access services above the physical layer stayed the
> same (that is, the access macros were replaced by SQL statements).
> The data was treated in much the same way as you describe it, that
> is: key columns plus 1 large varchar for the rest.
> The idea behind that was: minimal impact on the existing applications,
> but benefit from the DB2 infrastructure, utilities, prefetch etc.
> We also did some research on what it would cost to move the whole
> application to "true relational". We stopped this because it turned
> out to be some 200 person-years ... we simply could not afford that.
> After all these years, I have come to the conclusion that this project
> was a big success. The data is still processed in the same way; the access
> services are still in place as they were 20 years before, but all physical
> data is in DB2 now.
> But: beware of BLOBs etc.; LONG VARCHARs should be sufficient.
> In some places we have records which can be up to 200k; we then split
> them into several VARCHARs and put sequence numbers before them.
> The joining and splitting is done by the access layer, before the
> records are presented to the application as a whole.
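A minimal sketch of such a split/join access layer (Python for illustration; the 32,000-byte chunk size and the (key, seqno, chunk) row layout are assumptions, not the actual implementation):

```python
CHUNK = 32000  # assumed maximum VARCHAR length per row

def split_record(key, record):
    """Split one logical record into (key, seqno, chunk) rows."""
    return [(key, seqno, record[i:i + CHUNK])
            for seqno, i in enumerate(range(0, len(record), CHUNK))]

def join_record(rows):
    """Rebuild the logical record from its rows, ordered by seqno."""
    return "".join(chunk for _, seqno, chunk
                   in sorted(rows, key=lambda r: r[1]))
```

A 200k record thus becomes seven rows sharing the same key, and the application still sees one whole record.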
> Kind regards
> On 05.03.2014 09:22, M.K. wrote:
>> We are migrating VSAM (KSDS) to DB2. Instead of a 1-to-1 or
>> normalized approach, we want to check the feasibility of storing the
>> entire VSAM record (say 800 bytes) in one column as a BLOB or
>> VARCHAR(800), except for the key fields. Basically, we want to keep the
>> changes to the application code minimal. We understand that DB2 features
>> will not be used; keeping those aside, what is the viewpoint w.r.t.
>> performance?
>> 1. What are the performance implications of retrieval? Will the
>> amount of data add CPU overhead and I/Os for the application,
>> or is the CPU cost mainly in locating the row/column to be retrieved?
>> 2. Inserts/updates will have a performance impact, as DB2 needs to
>> handle the length change and log it for VARCHAR.
>> 3. Will performance improve if the data is handled as a BLOB?