Archive for May, 2012

Why Managed Code isn’t Managed very Well

Wednesday, May 30th, 2012

Today I was listening to the 50th podcast at Delphi.org, where Jim McKeeth again brought forth the idea that managed code is the future, and that JITting for the CPU of the device is better than native compilation in advance.  While I agree with the latter to some extent, I certainly don’t agree with the former.  I think it’s safe to say the vast majority of OS X and iOS applications are written in ObjC and compiled to native code.  Native is still king in many camps, and even Microsoft is bringing sexy back.

IMHO managed code is not very well managed.  What I mean by that is: every time you load a managed EXE, the code contained therein is JITted for your CPU.  This increases startup time and CPU load every time you run it.  Even today, .NET applications start up slower than a Delphi application compiled for a native target.  Most businesses have standard, relatively current hardware, so you don’t have to worry so much about whether to target Pentiums with floating-point issues.  You can target current-generation CPUs and still get decent optimization; perhaps not as good as some IL JITters, but good enough, and you don’t need the JITters to already be installed on the target machine.  What native compilation without garbage collection does is force the developer to think more about managing lifetimes and making efficient use of CPU cycles.  The more abstract the language and framework, the harder it is to truly understand the cost of implementation decisions, which only adds to the memory bloat and performance problems that are the scourge of software development.  The resurgence of native compilation seems to indicate that .NET has come full circle, just as Java did.  It’s great for interoperability, and on the server side, but it is not the be-all and end-all for cross-platform or client applications.  Native toolkits still have their place, which is great news for EMBT, and that has been reflected in their recent sales figures.
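To make the lifetime point concrete, here is a minimal Delphi sketch (the program and its contents are purely illustrative) of the kind of explicit, deterministic cleanup a native developer writes instead of relying on a garbage collector:

program LifetimeDemo;
{$APPTYPE CONSOLE}

uses
  Classes;

var
  Lines: TStringList;
begin
  Lines := TStringList.Create;   // explicit allocation
  try
    Lines.Add('deterministic cleanup, no GC pauses');
    WriteLn(Lines.Text);
  finally
    Lines.Free;                  // explicit release at a known point
  end;
end.

The try/finally pattern costs a little typing, but the memory is back the instant Free runs, with no collector deciding when (or whether) to reclaim it.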

Back in the days of the DEC Alpha and Windows NT 3.51, you could choose to convert an EXE compiled for an Intel x86 CPU into an EXE native to the Alpha.  That was done once, and from that point forward you saw the true performance of the Alpha chip.  Unless the EXE changes, or the user changes their CPU, there is no point in re-compiling the EXE; therefore JITting should really be a deployment activity.  IL is merely a method of delivering application code in a CPU-agnostic way, much as Google’s PNaCl does for NaCl.  If the CPU market were more fragmented, and there were OSes that ran on multiple platforms using different CPUs, I could see a real advantage to JIT compilation.  As it is now, in the worst-case scenario you generate a couple of different versions of your EXE for the popular instruction sets, and you’re good to go.
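Incidentally, .NET itself concedes this point: the NGEN tool can pre-compile an assembly’s IL to native code at install time, so the JIT cost is paid once at deployment rather than on every launch.  A hypothetical invocation (the EXE name is made up) would look like:

ngen install MyApp.exe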

As for whether the UI should be native for the platform, that really depends on the platform.  Sometimes having a unique or non-standard UI helps differentiate your product.  Sometimes it’s plainly not acceptable to the users on that platform because it doesn’t adhere to platform conventions.  That’s just another choice a developer has to make when choosing a toolset.  What’s nice is to have choices that don’t necessarily box you in.  That’s why it would be great if EMBT would support ObjPas on the Mac without FireMonkey.  If you don’t want to learn ObjC, prefer Pascal syntax, or have a lot of Pascal code you want to leverage, you would have another option to produce a native UI on the platform.  I don’t know that this would make sense for EMBT financially, but they could certainly help the open source community make Pascal a first-class citizen again on the Apple platform.

Quality is Job #1?

Wednesday, May 9th, 2012

Today I got frustrated by XE2 rewriting the DPR source file when switching between the 64-bit and 32-bit Windows targets.  EurekaLog doesn’t compile yet for Win64, so I added a conditional directive around that unit in the DPR source.  When XE2 undid my change, I turned to Quality Central to see if one of my numerous reports regarding conditional compilation issues had been addressed.  I use the Windows QC Client even though it’s a very old-style MIDAS-type interface that isn’t the most intuitive; at least it works (including voting).
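For reference, the kind of conditional I mean looks something like the following sketch (the project and unit names are illustrative, not my actual ones); the problem is that when the IDE regenerates the DPR’s uses clause, directives like this get silently dropped:

program MyProject;

uses
  Forms,
  {$IFNDEF WIN64}
  ExceptionLog,  // EurekaLog unit; skip it until Win64 is supported
  {$ENDIF}
  MainForm in 'MainForm.pas' {frmMain};

{$R *.res}

begin
  Application.Initialize;
  Application.CreateForm(TfrmMain, frmMain);
  Application.Run;
end.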

The Windows QC Client has been around forever, yet strangely, it’s still version 1.

QC for Windows About Box

This got me thinking about how many updates I’ve produced over the last year on my current project alone.  We started the year with version 3.0.0.0 and are up to 3.2.0.7.  That means some significant changes to an already mature product.  To get an idea of how my reports were progressing, I took a look at my user stats:

My User Stats

The submission information really doesn’t give much insight.  For instance, it doesn’t indicate the average age of an open or closed report, and it doesn’t even give counts of the number of reports marked as not reproducible, duplicate, or fixed, or still languishing in the Reported state.  A lot more could be done for this dialog alone (charts, anyone?).

Since I didn’t understand the difference in the Voting group between Total and Total Votes (they are, after all, in the same group box, so what else could Total refer to but the total number of votes), I thought I would check out the help.  Choosing Help - Quality Central Web Site from the menu takes you to the main QC website, where it says Embarcadero Developer Network on the banner graphic, but there are still numerous references to CodeGear.  How long has it been since EMBT bought CodeGear?  Oh yeah… thanks, Google… 8 May 2008… about 4 years ago now.

I decided to check out the general help for the QC Windows client, even though it has not been updated since John Kaster did so on December 15, 2006.  Its contents are a meager one page, and there is a comment from Kris Kumler on January 22, 2009 politely asking for the missing content to be added.  Of course, the User Stats information I was looking for isn’t present.

It’s been my experience that both people and companies spend their time on things that are important to them.  They often say one thing and do another, so watching what they do gives the most accurate picture of what they feel is important.  It’s also been my experience that once an application hits the streets, users find issues that even the best internal testing misses.  In order to produce a better product, you need as much accurate feedback from users as possible, and using state-of-the-art tools to collect as much information as promptly as possible is paramount.

We use EurekaLog to capture error reports, and a ticketing system for users to submit enhancement requests.  I’ve found EurekaLog reports to be indispensable in figuring out how to reproduce a problem, because users are often unable to communicate what they were doing that led to the problem, sometimes due to a lack of technical knowledge or of an understanding of what information is useful.  Sometimes they simply don’t care enough to spend their time to help you.  That’s why I envy EMBT’s position: highly technical people provide bug reports and are willing to spend their time trying to get issues fixed, because they use the tool daily and it’s important to them.

So why is it, then, that the oldest of my QC reports has been sitting in the Reported state since 1/24/2007?  My oldest Delphi-related report still in the Reported state with no comments dates from 7/22/2011.

I used to oversee a support department of three technical support representatives that supported five different medical billing and practice management solutions.  I personally reviewed the support call logs, acted as second-level support, engaged development to address issues, and even called random clients to inquire about the quality of the support they received.  If I had let support issues get five years, or even one year, old without addressing them, I would have been let go.

I would rather pay a support fee than a maintenance fee that merely guarantees a new product version.  Support fees mean issues get fixed in a timely fashion, instead of more features dreamed up by marketing getting packed into the box.  How do you feel about QC?  Do you think your reports get the timely attention they deserve?  Do you think the QC processes are well documented and transparent?

BTW, Ford’s mantra is probably what saved them from needing a bailout and resulted in their recovery from hard times…  The economic value of investing in quality has been repeatedly proven.

hcOPF - Configuring XE2 for Compilation

Friday, May 4th, 2012

It is not necessary to change the DCP output folder, because the defaults automatically take into account compiling for different platforms.  In the Tools - Options - Library settings, if you select Win32 you will notice that the Package output directory is set to:

$(BDSCommonDir)\bpl

and the DCP output directory is set to:

$(BDSCommonDir)\dcp

which works great, since the IDE is a 32-bit EXE and this folder will be on the search path so the IDE can load the design-time packages.  This provides backwards compatibility, but the moment you start compiling the same package for additional targets it becomes cumbersome.

If you select Win64 or OSX, you will notice that the Package output directory changes to:

$(BDSCommonDir)\bpl\$(Platform)
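To see the asymmetry concretely: on a default XE2 installation, where $(BDSCommonDir) normally resolves to the public RAD Studio 9.0 documents folder, the same package (MyPackage is a placeholder name) lands in:

C:\Users\Public\Documents\RAD Studio\9.0\bpl\MyPackage.bpl          (Win32)
C:\Users\Public\Documents\RAD Studio\9.0\bpl\Win64\MyPackage.bpl    (Win64)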

It’s interesting that you cannot modify the Library Path globally, so if you have a product that compiles for multiple platforms you have to add the necessary bits into each platform’s version of the Library Path.  This is an enhancement I have suggested in QC #105378.  Personally, I have always preferred explicit specification over implicit, and as such I think the default Package output directory should be:

$(BDSCommonDir)\bpl\$(Platform)

and likewise the default DCP paths should be:

$(BDSCommonDir)\dcp\$(Platform)

This is sort of like defining a class as

TMyObject = class
end;

vs.

TMyObject = class(TObject)
end;

I believe consistency in usage promotes more readable, and thus more maintainable, code and IDE environments.  In my experience it’s also easier to manually purge your output folders and confirm the appropriate units are being generated if a consistent directory structure is used.  If you agree, please vote for QC #105377.

In the case of hcOPF, the Library Path needs to contain the following (where $(hcOPF) is an IDE environment variable pointing to your hcOPF installation folder):

$(hcOPF)\Lib\D16\$(Platform)\$(Config)
$(hcOPF)\Source\Resources
$(hcOPF)\Source\Include

If you happen to notice that the path is greyed out when you add it in the dialog, don’t panic.  For some reason, the Directories dialog has problems validating paths that contain $(Platform), which is evident from the first path in the list

$(BDSLIB)\$(Platform)\release

also appearing in grey.  I have entered a QC report (#105375) for this, so please vote for it.

So DCP and BPL output folders are handled by default in a suitable fashion by the IDE, unless you’re like me and prefer a more uniform directory structure, in which case you can change the Win32 DCP and BPL path defaults in the Tools - Options - Library dialog.  If you change these paths, packages that do not have an override value specified in their Project - Options will output to the new default directories.

At a minimum, developers need to make sure their Unit output paths do not collide, which means using a structure something like .\Lib\D16\$(Platform)\$(Config), as I alluded to in my previous post.  This is also handled by default if you’re creating new packages in XE2.  If you’re upgrading existing packages, make sure to set the Unit output path to use $(Platform)\$(Config) as well.
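With that scheme, each target gets its own folder, so DCUs from one configuration can never overwrite another’s.  For hcOPF, for example, the Unit output path expands to directories along the lines of:

.\Lib\D16\Win32\Debug
.\Lib\D16\Win32\Release
.\Lib\D16\Win64\Debug
.\Lib\D16\Win64\Release
.\Lib\D16\OSX32\Debug
.\Lib\D16\OSX32\Release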

One thing I find intriguing is that under Project - Options, the ‘DCP output directory’ setting has an entry for the target ‘Debug Configuration - All Platforms’, yet there is not one for ‘Release Configuration - All Platforms’.  Maybe someone can explain this one to me…