Archive for the ‘hcOPF’ Category

hcOPF Dialog Validation

Monday, September 30th, 2019

Reminder to self: when presenting a dialog for editing an hcOPF object, a generic OK button click (or action Execute) handler looks like this:

procedure TfrmObjectDialog.btOKClick(Sender: TObject);
var
  ValidationErrorList: ThcValidationErrorList;
begin
  //switch focus to another TWinControl to ensure the currently focused editor
  //updates its Subject
  SelectNext(ActiveControl as TWinControl,True,True);

  ValidationErrorList := ThcValidationErrorList.Create();
  try
    if hcUIObjectBinder.BoundObject.IsValid(ValidationErrorList) then
    begin
      hcUIObjectBinder.BoundObject.Write(osRDBMS,False);
      hcUIObjectBinder.BoundObject := nil;
      ModalResult := mrOk;
      Close;
    end
    else
    begin
      MessageDlg(Format('Please Correct the Following Error(s)'#13#10#13#10'%s',
        [ValidationErrorList.Text]),mtWarning,[mbOk],0);

      //focus editor for first invalid attribute
      if (ValidationErrorList.Count > 0) and
         Assigned(ValidationErrorList.Items[0].Attribute) then
        hcUIObjectBinder.FocusControlForAttribute(
          ValidationErrorList.Items[0].Attribute);

      ModalResult := mrNone;
    end;
  finally
    ValidationErrorList.Free;
  end;
end;

And for the Cancel button:

procedure TfrmObjectDialog.btCancelClick(Sender: TObject);
begin
  hcUIObjectBinder.BoundObject.UndoChanges;
  hcUIObjectBinder.BoundObject := nil;
  ModalResult := mrCancel;
  Close;
end;

Or, if you are using the VCL, you can inherit from or copy the hcDialog object found in the \Source\UI\VCL folder. I would suggest adding it to the Object Repository to make that easier. There are comments in the unit suggesting how to use either design-time or run-time bindings.

Adding a New Attribute to hcOPF

Friday, September 13th, 2019

In order to support a client using SQL Server with replication I needed to add GUID support to hcOPF.  This post is a chronicle of my efforts.

If you’re unfamiliar with ThcAttribute you can breathe a sigh of relief.  It’s analogous to a barebones TField implementation which contains two native Delphi scalar fields used to store the original value of the attribute as read from the object store, and the current value as manipulated by the user.  It also contains two booleans to track whether the value of either field is actually NULL.  Using native Delphi fields of a type corresponding to the database field type minimizes the amount of memory required (a variant is 16 bytes), and still provides functionality similar to the Nullable types found in .NET.  If you’re talking to a database, NULL <> '' and NULL <> 0, so you need to provide full support for database NULLs.

First I created a new unit, hcGUIDAttribute, and a corresponding class descending from ThcAttribute.  Since a GUID doesn’t have many valid forms (you can’t convert it to a boolean, integer, or float, for example) the number of methods that needed to be overridden is quite small.  Perhaps this is not the best example of what needs to be done when implementing a new attribute type, but it may provide some insight into the framework, since ThcAttribute is a fundamental building block.

Since a GUID can really only be represented by a string, a variant, or a TGUID, we only have to override SetAsString and SetVarValue, and their symmetric equivalents GetAsString and GetAsVariant.
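
To make that concrete, here is a hedged sketch of what the declaration could look like.  The field names, method signatures, and the decision to store the value as a TGUID are my own assumptions; the shipping hcGUIDAttribute unit may differ in detail.

type
  ThcGUIDAttribute = class(ThcAttribute)
  private
    FValue: TGUID;          //current value, as manipulated by the user
    FOriginalValue: TGUID;  //value as read from the object store
  protected
    function GetAsString: string; override;
    function GetAsVariant: Variant; override;
    procedure SetAsString(const Value: string); override;
    procedure SetVarValue(const Value: Variant); override;
  end;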

There are three abstract virtual methods on ThcAttribute.  All of these methods are declared as abstract because they must access the native Delphi fields used for the storage of values.  Since only descendants of ThcAttribute declare the private storage fields, these methods must be implemented in descendants, and the code is always the same:

SetOrigAsVariant():  This method sets the original value of the attribute as it was loaded from the database (assuming it ever was).  This method is used by the OriginalValue property to populate the native field FOriginalValue.  Since the framework populates the objects from the database, you may wonder why we need an OriginalValue property in the first place.  The answer is so we can load single objects we know exist from the datastore, by populating its values and calling ThcObject.Read().  The framework also needs to be able to set an object’s OriginalValue as it propagates Primary Keys to child objects, without knowing the native datatypes involved.

There are also two virtual methods that must be overridden in all descendants, but are not declared as abstract: ResetModified and UndoChanges.  These methods reset change tracking after the database has been updated, or reset the values back to those read from the database, respectively.
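
As a rough illustration, the GUID attribute overrides might look something like the following.  This is a hedged sketch: it assumes the inherited methods handle the base-class null/modified flags, and the field names match the declaration sketched above rather than the actual hcOPF source.

procedure ThcGUIDAttribute.ResetModified;
begin
  inherited ResetModified;    //assumed to reset the base-class null/modified flags
  FOriginalValue := FValue;   //the object store now holds the current value
end;

procedure ThcGUIDAttribute.UndoChanges;
begin
  inherited UndoChanges;      //assumed to restore the base-class null flags
  FValue := FOriginalValue;   //discard edits made since the last read/write
end;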

Then I added code to ThcAbstractFactory.GetParameterValue to handle GUIDs, and to the ThcAbstractFactory.PopulateAttributes method to populate the attribute from a database TField.  With a few miscellaneous support methods in the hcGUIDUtils unit to generate new GUIDs and strip/add brackets on GUID strings, it was ready.
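
For illustration, helpers of that sort could be as simple as the following.  The function names and exact behaviour here are my own assumptions, not the actual hcGUIDUtils API; only SysUtils routines are used.

uses
  SysUtils;

function NewGUIDString: string;
var
  GUID: TGUID;
begin
  if CreateGUID(GUID) <> 0 then
    raise Exception.Create('Unable to create a new GUID');
  Result := GUIDToString(GUID);   //'{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}'
end;

function StripGUIDBrackets(const Value: string): string;
begin
  //remove the surrounding braces so the value can be stored/compared as a bare string
  Result := StringReplace(Value, '{', '', [rfReplaceAll]);
  Result := StringReplace(Result, '}', '', [rfReplaceAll]);
end;

function AddGUIDBrackets(const Value: string): string;
begin
  //re-add the braces expected by the GUID string conversion routines
  if (Value <> '') and (Value[1] <> '{') then
    Result := '{' + Value + '}'
  else
    Result := Value;
end;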

Unlike most ORM/OPFs, hcOPF supports complex PK/FK constraints, including keys of types other than INT.  The SQL Server system I wrote that required GUID support has now been running for 10 years with minimal issues.

FireDAC TFDScript component - Bug or Feature?

Sunday, September 8th, 2019

When switching a project from the Spring4D ORM to hcOPF I was forced to change the implementation of the CreateDatabase functionality, which was so easy to implement using Spring4D.  Now I am using a TFDScript component with multiple scripts, which naturally have to match the MetaData definitions used by hcOPF but cannot easily be derived from that MetaData.

I dropped a FireDAC TFDScript component in my datamodule and added 3 named scripts:

TFDScript with 3 SQL Scripts

Then in my CreateDatabase method I added code to execute each script like the following:

  //look up the named script and hand its SQL contents to ExecuteScript
  Script := FDScript.SQLScripts.FindScript('CreateDatabaseContent');
  FDScript.ExecuteScript(Script.SQL);

Much to my surprise only the first script was executed, followed by an AV, so I started tracing into the code only to find that the very first line of code in the ExecuteScript() method calls SQLScripts.Clear!  This means you cannot use the SQLScripts collection with more than a single SQLScript at a time if you make this method call.  So why then is it a public method?  Why does it even exist?  Why isn’t there an FDScript.ExecuteScriptByName('CreateDatabaseContent') method?

procedure TFDScript.ExecuteScript(const AScript: TStrings);
begin
  SQLScripts.Clear;
  SQLScripts.Add.SQL := AScript;
  SQLScriptFileName := '';
  ValidateAll;
  if Status = ssFinishSuccess then
    ExecuteAll;
end;

According to my interpretation of the documentation, ExecuteAll should execute all scripts in the SQLScripts collection in the order in which they are defined. Using the FireDAC Monitor and tracing through the code showed it only executed the first script (index 0).  As a result, my ugly hack to make the component work the way it should (correct me if I am wrong here) is:

  //ExecuteAll only runs the script at index 0, so execute it and then delete it,
  //repeating until every script in the collection has been run
  for I := FDScript.SQLScripts.Count - 1 downto 0 do
  begin
    FDScript.ExecuteAll;
    FDScript.SQLScripts.Delete(0);
  end;
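
Another workaround, sketched below, is to snapshot each script's SQL up front and then feed the copies back to ExecuteScript one at a time, accepting that each call clears the collection.  The helper name is hypothetical, and it needs Classes, Generics.Collections and FireDAC.Comp.Script in the uses clause.

procedure ExecuteScriptsInOrder(AFDScript: TFDScript);
var
  Snapshots: TObjectList<TStringList>;
  SQL: TStringList;
  I: Integer;
begin
  //copy the SQL of every script before ExecuteScript gets a chance to clear SQLScripts
  Snapshots := TObjectList<TStringList>.Create(True);
  try
    for I := 0 to AFDScript.SQLScripts.Count - 1 do
    begin
      SQL := TStringList.Create;
      SQL.Assign(AFDScript.SQLScripts[I].SQL);
      Snapshots.Add(SQL);
    end;
    //each call clears the collection and re-adds a single script before running it
    for I := 0 to Snapshots.Count - 1 do
      AFDScript.ExecuteScript(Snapshots[I]);
  finally
    Snapshots.Free;
  end;
end;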

Either I am missing something fundamental, or the TFDScript component is badly broken and/or documented. I have created RSP-26131 in case I haven’t lost my mind, so please vote for it.

hcOPF now supports FireDAC

Saturday, September 7th, 2019

In my last post I talked about how, unfortunately, Spring4D does not provide change notifications that would let developers easily determine whether an object graph has been modified and needs to be persisted.  It was primarily for this reason that I decided to change the persistence layer in my latest project to hcOPF.  Obviously, I’m more familiar with this ORM, and I know it’s viable because systems I authored 10 years ago are still being used without issue.  Even though I am using Delphi Rio 10.3.2, with lots of new language features, I don’t have time to thoroughly investigate and learn other alternatives. Change notifications will allow me to implement data auditing, and the validation and binding framework functionality will make any data editing easy to craft.

Unfortunately, IBX no longer works well with Firebird 3.0.4, even if you can actually find a version for the latest Delphi editions.  I had already run into issues with UIB supporting larger varchar columns, so that was not an option.  Instead, I decided to implement support for FireDAC, in part because it had been requested and because my Spring4D implementation with FireDAC proved to be performant.  The implementation was basically a copy/paste/modify of another DAL layer.  It currently does not support StoredProcs, but queries work. More testing and additional unit tests are needed, but it should allow developers to get started using FireDAC.

I also added support for quoted column names, which is a Firebird requirement if you use mixed-case names.  I’ve been dogfooding it for about a week now, ironing out the few issues I encountered.  I’m pleased with the performance, as hcOPF seems to be faster than my Spring4D ORM implementation.  If I get time I will try to put together some benchmark comparisons.  My only explanation for the difference would be Spring’s use of RTTI.

FireDAC support was added for Rio and then Tokyo. The projects should be easy to back-port to other recent Delphi versions. There is one demo specific to FireDAC in the Demos folder, with projects for Rio. That should show you how to get started, and it’s essentially the same for all DALs: create a datamodule, drop a connection and all the hcOPF components on it, link them, and configure your database connection.

FireDAC DataModule Components for a Firebird Database

hcOPF now supports XE4

Friday, July 5th, 2013

I just updated the SourceForge repo with VCL projects for XE4, with the exception of HengenOPFJVValidators (I don’t have JVCL installed at the moment). Simply define an environment variable “hcOPF” pointing to the root folder for hcOPF and you should be able to compile the packages. Right-click on all dcl-prefixed packages and choose Install from the context menu.

Enjoy!

hcOPF ReadOnly Object Attributes

Monday, October 29th, 2012

For those of you looking at upgrading to XE2 for live bindings, or XE3 to get visual live bindings, I thought I would mention that hcOPF supports object binding with earlier versions of Delphi (D7 and up).  Not only that, but you’re not reliant on a black-box expression engine.  Bindings in hcOPF are written in Delphi and debuggable in Delphi.  In fact, it’s quite easy to write your own mediators to support any non data-aware control you want to use, and best of all, since hcOPF is open source, you can modify it to suit your own needs, and there are no undocumented 'secrets'.

hcOPF automatically handles ReadOnly attributes ‘out of the box’ by assuming that the domain object is the source of truth.  That means if an attribute on the domain object is readonly, the mediator informs the UI control that it should also be readonly.  There are some situations, though, where this is not desirable.  For instance, suppose you want to display some boolean values on a form when a client is selected, but editing the client information requires the user to go into the client profile form; you don’t want users inadvertently changing client information by clicking on the checkboxes.  In this scenario you can toggle the AutomaticReadOnly property of the mediator (assuming it’s implemented) to False.  AutomaticReadOnly is True by default, and means that the mediator will ensure the UI control mirrors the domain object’s ReadOnly attribute, or will behave as if the attribute is ReadOnly if the object itself is marked as ReadOnly.  By changing AutomaticReadOnly to False, you can control the UI control’s ReadOnly behaviour yourself.  In the example given, the checkboxes would be disabled.
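
In code, that scenario boils down to something like the snippet below.  The mediator and checkbox variable names are hypothetical; only the AutomaticReadOnly property is taken from the description above.

  //stop the mediator from mirroring the attribute's ReadOnly state...
  IsActiveMediator.AutomaticReadOnly := False;
  //...and manage the control state ourselves so the value is display-only here
  chkIsActive.Enabled := False;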

I recently added a new ReadOnly field variable to ThcAttribute since I ran into a scenario where I wanted to use AutomaticReadOnly but wanted to make certain object attributes ReadOnly.  Since the ThcAttribute.ReadOnly property is determined by its MetaData, changing it for one attribute in one object instance effectively changed it for that attribute in all object instances.  Adding the ReadOnly field variable and initializing it from the attribute definition (ThcAttributeDef) effectively solved this issue.

hcOPF - using Attribute OnChange Events

Monday, October 22nd, 2012

Although hcOPF implements automatic calculations via a ThcCalcObject registered with the object metadata, it’s not the most efficient implementation.  Since the framework has no idea of the attribute dependencies in the calculation, it calls the CalcObject whenever an attribute of the object changes.  Of course it avoids doing so during mass object attribute changes, such as when initializing the object or reading it from the objectstore.  Nevertheless, you may encounter situations in which you want to optimize the calculation, such as when you make a database call.

Just like a TField object, a ThcAttribute implements an OnChange event that can be used to implement calculations.  This is a much more efficient mechanism, but unfortunately it does not benefit from the framework’s knowledge about the calculation, and therefore cannot automatically avoid triggering the calculation event during object reads or other mass object changes, such as object initialization or resetting object attributes after writing them to the objectstore.  It also suffers from the disadvantage that the code for multiple calculations is spread out across different event handlers instead of being in one place.  That said, any good framework does not box you in, so hcOPF allows you to use either method.

If you use the ThcAttribute event to perform the calculation, make sure to subscribe to the event early enough in the lifecycle of the object, in an overridden method.  For instance, subscribing to the event for each object processed in the ThcObjectList.Load() method may be sufficient for most cases, but if you create individual objects for consumption, you should subscribe to the event in the ThcObject.Initialize method instead (recommended).  Also, be sure to check the ObjectState before performing the calculation.  Avoid trying to access attributes while they’re being initialized or populated.  In other words, make sure the ObjectState = osNone and remember to fire the event in any ThcObject.Read() or ThcObject.ReadAttribute() override.
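
A subscription in an overridden Initialize might look roughly like this.  It is only a sketch: the exact way OnChange accepts subscribers (a plain method pointer assignment versus an Add() call on a multicast event) depends on hcOPF's event implementation, so treat the subscription line as an assumption.

procedure TMyself.Initialize;
begin
  inherited Initialize;
  //subscribe before any values are read so later changes trigger the calculation
  HairColor.OnChange.Add(HairColorChanged);  //assumed multicast-style subscription
end;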

For example here is a possible event handler:

procedure TMyself.HairColorChanged(Sender :TObject);
begin
  if (ObjectState = osNone) then
  begin
    FQuery.SQL.Text := Format('select HairColor from fnRandomHairColor(%d)',
            [GetTickCount()]);
    FQuery.Open;
    try
      HairColor.Assign(FQuery.Fields[0]);
    finally
      FQuery.Close;
    end;
  end;
end;

and in the Read() override:

procedure TMyself.Read(Source :ThcObjectStore; WithChildren :boolean = True);
begin
  inherited Read(Source,WithChildren);
  HairColor.OnChange.FireEvent(HairColor);  //Sender should always be the attribute
end;

hcOPF - Configuring XE2 for Compilation

Friday, May 4th, 2012

It is not necessary to change the DCP output folder because the defaults automatically take into account compiling for different platforms.  In the Tools - Options - Library settings if you select Win32 you will notice that the Package output directory is set to:

$(BDSCommonDir)\bpl

and the DCP output directory is set to:

$(BDSCommonDir)\dcp

which works great since the IDE is a 32 bit EXE and this folder will be on the search path so the IDE can load the design-time packages.  This provides backwards compatibility, but the moment you start compiling the same package for additional targets it becomes cumbersome.

If you select Win64 or OSX you will notice that the package output directory changes to:

$(BDSCommonDir)\bpl\$(Platform)

It’s interesting that you cannot modify the Library Path globally, so if you have a product that compiles for multiple platforms you have to add the necessary bits into each Platform’s version of the Library Path.  This is an enhancement I have suggested in QC #105378.  Personally I have always preferred explicit specification rather than implicit, and as such I think the default Package output directory should be:

$(BDSCommonDir)\bpl\$(Platform)

and likewise the default DCP paths should be:

$(BDSCommonDir)\dcp\$(Platform)

This is sort of like defining a class as

TMyObject = class
end;

vs.

TMyObject = class(TObject)
end;

I believe consistency in usage promotes more readable and thus more maintainable code and IDE environments.  From my experience it’s also easier to manually purge your output folders, and confirm the appropriate units are being generated if a consistent directory structure is used.  If you agree, please vote for QC #105377.

In the case of hcOPF the Library Path needs to contain the following:

$(hcOPF)\Lib\D16\$(Platform)\$(Config)
$(hcOPF)\Source\Resources
$(hcOPF)\Source\Include

If you happen to notice that the Path is greyed out when you add it to the dialog, don’t panic.  For some reason, the Directories dialog has problems validating paths that contain $(Platform), as is evident from the first path in the list

$(BDSLIB)\$(Platform)\release

also appearing in grey.  I have entered a QC report (#105375) for this, so please vote for it.

So DCP and BPL output folders are handled by default in a suitable fashion by the IDE, unless you’re like me and prefer a more uniform directory structure, in which case you can change the Win32 DCP and BPL path defaults in the Tools - Options - Library dialog.  If you change these paths, packages which do not have an override value specified in their Project - Options will output to the new default directories.

At a minimum, developers need to make sure their Unit output path does not collide, which means using a structure something like .\Lib\D16\$(Platform)\$(Config) as I alluded to in my previous post.  This is also handled by default if you’re creating new packages in XE2.  If you’re upgrading existing packages, make sure to set the unit output path to use $(Platform)\$(Config) as well.

One thing I find intriguing is that under Project - Options for ‘DCP output directory’ there is an entry for the Target ‘Debug Configuration - All Platforms’, yet there is not one for ‘Release Configuration - All Platforms’.  Maybe someone can explain this one to me…

hcOPF - Time to Start Monkeying Around

Monday, April 30th, 2012

I’ve put quite a bit of effort of late into getting hcOPF ready for Win64 compilation, and as part of that effort, re-factoring it to support FireMonkey.  The actual code and package changes were relatively minor when compared with trying to understand what was required.  To me this validates the design of the framework, in that it can be adapted rather easily to support new frameworks and platforms.

There are no new recommended guidelines for DesignTime and RunTime packages AFAIK, now that XE2 supports both FMX and the VCL.  To complicate things further, add in Unit Scope Names, the fact that the IDE automatically renames FMX package projects when you target OS X, and all the conditional directive permutations, and it can drive you bananas!

On top of that, the IDE does not provide default guidance when upgrading packages from earlier IDE versions.  It does not default the unit output directory to .\$(platform)\$(config) as it does for new ones, so when you compile for one platform you overwrite your DCUs for another, and really confuse things.  For this reason, you might notice that for All configurations - All platforms hcOPF uses a unit output directory of $(hcOPF)\Lib\D16\$(Platform)\$(Config) where $(hcOPF) points to the root directory of the framework.  It would be great if there was an environment variable for the IDE version.  Then you could use something like $(hcOPF)\Lib\$(DelphiVersion)\$(Platform)\$(Config).

I am pleased to announce that hcOPF now supports FireMonkey (Win32/64 and OSX32) as well as VCL Win32/64 targets with a few caveats:

1) When compiling for the Mac or any 64 bit target you must skip compilation of the design-time packages.

2) The HengenOPFValidatorsFMX/VCL packages do not support 64 bit targets since they use the open source PerlRegEx component instead of the RegEx support present in XE and above.

3) Certain packages, such as ADO and the Validators, of course cannot be used on certain platforms like the Mac, since they have Windows-specific implementations.

4) As always, some packages require third-party commercial libraries, such as the HengenDevExpressXXX packages, which require the Developer Express Quantum Grid suite (highly recommended).

5) Win32 BPLs go into the default $(BPLDir), and those for OSX32 and Win64 go into their respective subdirectories.

In a future version I hope to add support for iOS as well as providing validator support for FMX.  Currently hcOPF will not compile using FPC because it does not support the implements interface delegation syntax that EMB’s compiler does.  Eventually I plan to support FPC and SQLite.

If you want to start Monkeying around with hcOPF, check out the FireMonkey SVN branch.

FireMonkey vs. VCL

Monday, March 26th, 2012

I have finally started re-factoring hcOPF to support FireMonkey and Win64.  Win64 was a breeze, but supporting FMX is proving to be a bit of a challenge.

If I were EMB, I would be trying to make FireMonkey a write-once, compile-for-many-platforms solution, and AFAIK it is across Linux, Mac OS X and Windows if you build an FMX application.  However, I would venture that most customers are looking to port their existing VCL codebase to access more markets, or at least leverage their existing knowledge, and might be a little hesitant to bet it all on a newly released UI framework sold and supported by a single vendor (especially after CLX).

All developers know that changing code tends to break what once worked, especially when you introduce more conditional compilation (check out my QC request 94287 to make conditional compilation easier to use).  To that end, the API available to FireMonkey applications needs to have as much in common as possible with the VCL for Windows applications, because at some point a developer will use code to manipulate the controls.  If it’s possible to use the same code for both frameworks, you save yourself from maintaining functionally duplicate code and from dealing with conditional compilation.

The ideal scenario would be to be able to specify either a FireMonkey or a VCL form DFM in a conditional directive, and have all the UI code shared (or minimal conditional compilation).  Of course, if you want to use the advanced functionality available in FireMonkey, perhaps this approach isn’t viable.  If you just want to target Mac OS/X without having to re-write your VCL application forms, and are using FMX for basic presentation of data, this would be ideal!  You wouldn’t have to invest the same effort to determine the merits of FireMonkey as a UI replacement for the VCL because you wouldn’t need to keep two separate UI source code trees in sync while FireMonkey matures.

So far, with my brief exposure to FireMonkey in XE2, I can see a number of problems in achieving the Write Once, Compile for Many Platforms concept.  FireMonkey uses .Text instead of .Caption for some control window captions (e.g. TGroupBox).  So while the control class may be the same, even some simple UI code cannot be shared with a VCL application.
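
For example, even setting a group box caption ends up wrapped in a conditional directive.  The FMX_BUILD define and the grpAddress control below are hypothetical; the point is simply that the same line cannot be shared as-is.

  {$IFDEF FMX_BUILD}
  grpAddress.Text := 'Mailing Address';     //FireMonkey TGroupBox exposes Text
  {$ELSE}
  grpAddress.Caption := 'Mailing Address';  //VCL TGroupBox exposes Caption
  {$ENDIF}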

Even non-visual code may be a challenge to re-use.  For instance, I use GetTickCount() to time activity in hcOPF, and I want to keep hcOPF compilable for users of D7 and above.  I personally feel that my coding productivity was higher in D7 with CodeRush than in Delphi XE with GExperts and CnPack.  Part of the reason for this is the code parsing the IDE performs in the main thread.  Type begin incorrectly, and you can be waiting 10 seconds while the IDE tries to figure out what is going on.  That’s pretty sad on a 6 core system…but I digress.

GetTickCount() is implemented as an inline function in the implementation section of the System unit for the Mac and Linux, and the Windows implementation is in Winapi.Windows.  It is not a big deal to add a few {$ifdefs} to handle that, but GetTickCount() as defined in System is not accessible to other units.  So in order to use it, you have to copy the implementation from System and expose it yourself.
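
A wrapper along the following lines is one way to keep the call sites clean.  This is only a sketch under my own assumptions: the unit and function names are made up, and the non-Windows branch uses a simplified millisecond counter that wraps at midnight rather than a copy of System's private implementation.

unit hcTickCount;

interface

function hcGetTickCount: Cardinal;

implementation

uses
  {$IFDEF MSWINDOWS}
  Windows,   //Winapi.Windows with XE2 unit scope names
  {$ENDIF}
  SysUtils;  //System.SysUtils with XE2 unit scope names

function hcGetTickCount: Cardinal;
begin
  {$IFDEF MSWINDOWS}
  Result := Windows.GetTickCount;
  {$ELSE}
  //simplified fallback: milliseconds since midnight (wraps once a day)
  Result := Cardinal(Round(Frac(Now) * MSecsPerDay));
  {$ENDIF}
end;

end.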

From what I have seen of XE2 Update 4, FireMonkey is still not usable from a performance standpoint as a replacement for a normal VCL form. On my 6 core system with a Radeon 4250, the main form appears and you can visibly see the form background and content being rendered.  In this case, the form is just 4 edit controls in 2 separate group boxes, using hcOPF Object Binding to populate the controls.

I think FireMonkey is a great concept, but adoption would be faster with an API more consistent with the VCL.  FMX has a long way to go before it becomes a viable replacement for the VCL, and before cross-platform Delphi becomes a true reality for anything more than the most trivial application written from the ground up for FireMonkey.