OE Deployment

Code, discussions, and whitepapers related to deploying ABL applications, including version changes, performance, etc.


AppServers

Tools, techniques, and discussions related to the use of AppServers


Deployment Utilities

Miscellaneous utilities for managing a deployed database.


Fathom

Tools and whitepapers related to Fathom


Performance Measurement

Tools, code samples, and whitepapers related to measuring performance in deployed systems.


OESNMP: Monitoring your OpenEdge infrastructure from any monitoring solution

Expose OpenEdge infrastructure to any monitoring platform

OESNMP is a set of utilities that exposes information about OpenEdge AppServers, databases, and related infrastructure to monitoring systems that support SNMP. This allows you to notice unusual behaviour, verify that performance tuning has the desired results, and be warned before problems result in downtime.


Table Block Mapping For Type I Areas

Many people know of the perils of record scatter and fragmentation with Type I areas. This tool provides an easy way to determine how scattered your data is and presents that information in an easy-to-view HTML format. Primarily I use this tool to explain to non-technical clients and managers why we need to migrate data from Type I to Type II areas.

You specify which tables are to be mapped by passing the areas or table names to the tool. CAN-DO style lists are supported so that larger tables can easily be excluded, as in the sketch below.
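
For example (a minimal sketch, not code from the tool itself; the variable name and selection list are made up), a CAN-DO list can exclude tables ahead of a catch-all wildcard:

/* Minimal sketch: pick user tables with a CAN-DO style selection list.
   The list value is illustrative only; exclusions ("!") must appear
   before the catch-all "*" because CAN-DO matches left to right. */
DEFINE VARIABLE cTableList AS CHARACTER NO-UNDO INITIAL "!order*,!history*,*".

FOR EACH _File NO-LOCK WHERE _File._Tbl-Type = "T":
    IF NOT CAN-DO(cTableList, _File._File-Name) THEN NEXT.
    DISPLAY _File._File-Name FORMAT "x(32)".
END.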


OpenEdge Database Advisor

The OpenEdge Database Advisor is intended to provide a quick checkup for common database configuration issues. Obviously, proper tuning for an application requires much more than any tool can provide, but the advisor should highlight some of the most common low-hanging fruit.

For best results you will need a recent database analysis file (proutil -C dbanalys), and you should run the advisor against your production database. A large portion of the suggestions are based on VST information, which will differ greatly between your production and test environments.
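
For example, the analysis file can be produced ahead of time and kept for the advisor to read (the database and output file names here are only placeholders):

> proutil proddb -C dbanalys > proddb.dba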


Determine whether Progress is 32- or 64-bit

This is a bash script that determines whether the Progress version installed on a *nix system is 32-bit or 64-bit. It also determines whether the O/S is 32- or 64-bit, though this has not been widely tested. It can be conveniently called from other scripts by running with the -s switch and checking the return value (32 or 64, or 0 on error).

The script needs to be able to find the Progress installation in order to run. It will check $DLC, $PATH, and /usr/dlc.

Typical usage:

> 32or64.sh
Progress is 64 bits; OS is 64 bits

> ./32or64.sh -s
> echo $?
64


DBA Group

Issues related to management of the database: design and deployment, including performance

Use this group to associate any content that relates to database development and deployment, including performance, configuration, and testing, and to ask questions that are not linked to specific content.


Secondary broker parameter spreadsheet

Attached you will find a spreadsheet I use to calculate the -n, -Mn, and other startup parameters when using a secondary broker.


DFCheck

A simple tool to help you remove all Schema Area information from a .df file. You just dump your database .df file, which contains a complex (or


DataHack

A simple tool to watch and/or update your data. Everything is read with dynamic objects. It is possible to copy the table and field names in
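
As a rough illustration of that dynamic-object approach (a sketch only, not the tool's code; the table and field names come from the sports demo database):

/* Minimal sketch: read any table by name using dynamic buffer and query objects. */
DEFINE VARIABLE hBuffer AS HANDLE NO-UNDO.
DEFINE VARIABLE hQuery  AS HANDLE NO-UNDO.

CREATE BUFFER hBuffer FOR TABLE "customer".   /* example table name */
CREATE QUERY hQuery.
hQuery:SET-BUFFERS(hBuffer).
hQuery:QUERY-PREPARE("FOR EACH customer NO-LOCK").
hQuery:QUERY-OPEN().
REPEAT:
    hQuery:GET-NEXT().
    IF hQuery:QUERY-OFF-END THEN LEAVE.
    MESSAGE hBuffer:BUFFER-FIELD("name"):BUFFER-VALUE.   /* example field name */
END.
hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.
DELETE OBJECT hBuffer.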


VSTHack

At the moment there are only two simple tools: one to watch what is happening in the lock table of your Progress DB and one to watch index reads. Those tools are
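
For reference, the lock table can be read through the _Lock virtual system table; a minimal sketch, assuming a connected database:

/* Minimal sketch: list current entries in the lock table via the _Lock VST.
   Unused slots have an unknown user number, so stop at the first one. */
FOR EACH _Lock NO-LOCK:
    IF _Lock._Lock-Usr = ? THEN LEAVE.
    DISPLAY _Lock._Lock-Usr   LABEL "Usr"
            _Lock._Lock-Name  LABEL "Table"
            _Lock._Lock-Type  LABEL "Type"
            _Lock._Lock-Flags LABEL "Flags".
END.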


Database .st and .df generation utility for conversion and migration

Sample ABL code to assist with generation of a new .st and .df based on an earlier version of the database .df and the output of a database analysis. The code generates a new .st file that defines or re-defines storage areas based on data taken from the database analysis; this is one possible suggestion for storage area layout and may not be optimal in all cases. A new database definition file (.df) is created to go along with the generated .st file.
The procedure also builds scripts to assist with dumping, loading, or tablemoving data to the new storage areas.
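
Purely as an illustration (the area name, area number, extent paths, and sizes below are made up, not output from the utility), the generated .st defines storage areas with lines along these lines:

# Type II area: 64 records per block, 8 blocks per cluster,
# one fixed extent followed by a variable extent
d "Order_Data":20,64;8 /db/prod/proddb_20.d1 f 1024000
d "Order_Data":20,64;8 /db/prod/proddb_20.d2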

