Why Managed Code isn’t Managed very Well

Today I was listening to the 50th podcast @ Delphi.org, where Jim McKeeth again brought forth the idea that managed code is the future, and that JITting for the CPU of the device is better than native compilation in advance.  While I agree with the latter to some extent, I certainly don’t agree with the former.  I think it’s safe to say the vast majority of OS X and iOS applications are written in ObjC and compiled to native code.  Native is still king in many camps, and even Microsoft is bringing sexy back.

IMHO managed code is not very well managed.  What I mean by that is: every time you load a managed EXE, the code contained therein is JITted for your CPU.  This increases startup time and CPU load every time you run it.  Even today, .NET applications start up slower than a Delphi application compiled for a native target.  Most businesses have standard, relatively current hardware, so you don’t have to worry so much about targeting Pentiums with floating point issues.  You can target current generation CPUs and still get decent optimization - perhaps not as good as some IL JITters, but good enough, and you don’t need the JITters to already be installed on the target machine.

What native compilation without garbage collection does is force the developer to think more about managing lifetimes and making efficient use of CPU cycles.  The more abstract the language and framework, the harder it is to truly understand the cost of implementation decisions, which only adds to the memory bloat and performance problems that are the scourge of software development.  The resurgence of native compilation seems to indicate that .NET has come full circle just as Java did.  It’s great for interoperability and on the server side, but it is not the be-all and end-all for cross-platform or client applications.  Native toolkits still have their place, which is great news for EMBT, and that has been reflected in their recent sales figures.
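
To make the lifetime point concrete, here is a minimal C# sketch (purely illustrative): a GC reclaims memory eventually, but non-memory resources still need an explicit, deterministic lifetime - the thinking doesn’t go away, it just moves.

    // Deterministic cleanup in C#: Dispose() runs at the closing brace,
    // much like try/finally + Free in Delphi -- not whenever the GC runs.
    using System.IO;

    class LifetimeDemo
    {
        static void Main()
        {
            // "log.txt" is just an example resource
            using (var writer = new StreamWriter("log.txt"))
            {
                writer.WriteLine("file handle released deterministically");
            } // handle closed here, GC or no GC
        }
    }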

Back in the days of the DEC Alpha and Windows NT 3.51, you could choose to convert an EXE compiled for an Intel x86 CPU into an EXE native to the Alpha.  That was done once, and from that point forward you saw the true performance of the Alpha chip.  Unless the EXE changes, or the user changes their CPU, there is no point in re-compiling the EXE; therefore JITting should really be a deployment activity.  IL is merely a method of delivering application code in a CPU-agnostic way, much like Google’s PNaCl as part of their NaCl.  If the CPU market were more fragmented, and there were OSes that ran on multiple platforms using different CPUs, I could see a real advantage to JIT compilation.  As it is now, in the worst case you generate a couple of different versions of your EXE for the popular instruction sets, and you’re good to go.

Whether the UI should be native for the platform really depends on the platform.  Sometimes having a unique or non-standard UI helps differentiate your product.  Sometimes it’s plainly not acceptable to the users on that platform because it doesn’t adhere to platform conventions.  That’s just another choice a developer has to make when choosing a toolset.  What’s nice is to have choices that don’t box you in.  That’s why it would be great if EMBT would support ObjPas on the Mac without FireMonkey.  If you don’t want to learn ObjC, prefer Pascal syntax, or have a lot of Pascal code you want to leverage, you would have another option to produce a native UI on the platform.  I don’t know that this would make sense for EMBT financially, but they could certainly help the open source community make Pascal a first class citizen again on the Apple platform.

21 Responses to “Why Managed Code isn’t Managed very Well”

  1. Sebastian Gingter Says:

    You do know that it’s as easy as calling NGen.exe (the Native Image Generator, part of the .NET Framework) once to compile an assembly to the ‘native image’ that is actually executed, eliminating any further JIT’ing?

    That leaves you with a native binary image in the global assembly cache that is loaded and executed directly every time you run your application. You won’t be able to tell the difference between a pre-compiled assembly and a native application anymore, because everything is compiled (and optimized) ahead of time for you.

    Of course, more application developers should add that step to their setup (like Paint.NET does, for instance), but as a user you’re free to do that yourself too if you like.
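
    A minimal sketch of that install-time step (assuming a .NET Framework machine; the assembly name MyApp.exe is hypothetical, and error handling and elevation checks are omitted) - a setup program can simply shell out to ngen.exe once:

        // Install-time NGen step, as Paint.NET's setup does.
        // ngen.exe ships in the runtime directory of the .NET Framework.
        using System.Diagnostics;
        using System.IO;
        using System.Runtime.InteropServices;

        class NgenStep
        {
            static void Main()
            {
                string ngen = Path.Combine(
                    RuntimeEnvironment.GetRuntimeDirectory(), "ngen.exe");

                // Compile MyApp.exe to a native image in the native image
                // cache; subsequent runs skip the JIT for that code.
                Process.Start(ngen, "install MyApp.exe").WaitForExit();
            }
        }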

  2. Michael Thuma Says:

    The demand for JIT does not come from speed; that is more of an additional benefit, especially in the Java corner on IBM machines. Don’t assume that companies are interested in business applications that run into problems because of memory leaks … from their perspective, the corner we are in is more or less what the ‘UNIX’/C world used to be…

    Simply think of a shrinking number of employees and support-only IT organizations; this applies to layered IT too. Assume that for such organizations a standardized, ECMA-compatible runtime and the possibility to ‘recompile’ count for a lot … in the case of SAP/ABAP JIT the benefits are more obvious, because the whole system is integrated in a different way, without the need for an external IDE….

    Of course, in the case of embedded devices ‘Java’ has been an enormous enabler; Java was designed for embedded/mobile (more or less a Smalltalk for such devices)… the world is not only the PC. The switch could come, of course, if the number of PCs keeps shrinking … but that’s a lot of ifs.

  3. Michael Thuma Says:

    It is a false idea that the tradition of everlasting rewriting, or the everlasting backward compatibility the Windows/MS ecosystem still benefits from, is the normal case - it is the exception. All these managed technologies are one step in the direction of the data center, and maybe of a shift in processor technology. This can become very urgent the moment Intel servers are perhaps no longer of interest and other processors are used … see the mobiles as a test run. This is one argument for JIT; another argument is a transparent way to handle concurrency … - the evolution is on they, but the fact that the ultimate solution has not been found does make ‘native’ a much better alternative to managed code in general, now and especially in the future.

  4. Michael Thuma Says:

    sry - “the evolution is on they,” should read:
    “The evolution is on the way, but the fact is that the ultimate solution ….”

  5. Michael Thuma Says:

    Generating code and modifying the program at runtime is another proven practice that works well… OK, not in a 3GL. That is what I meant when talking about SAP ABAP - something I really miss in .NET and Java, and a point I wanted to clarify.

  6. LDS Says:

    The truth is you don’t need a “managed” language at all to JIT. LLVM, for example, is capable of “jitting” while avoiding the overhead of a “managed” language. A “managed” language VM has a far broader scope than executing IL code: it has to “sandbox” the application to enforce its security boundaries, and offer services like GC, reflection, etc. That’s why they are “managed” in the first place (I call it “encapsulate the developer”). You can have the benefits of JITs delivering pure native code that doesn’t need the “managed” layer.
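
    To make LDS’s point concrete, here is a hedged, Windows-x64-only C# sketch (illustrative, not LLVM itself) of the bare mechanism every native-emitting JIT bottoms out in: writing instruction bytes into executable memory and calling them - no IL, no “managed” services inside the generated code:

        using System;
        using System.Runtime.InteropServices;

        class TinyJit
        {
            // MEM_COMMIT|MEM_RESERVE = 0x3000, PAGE_EXECUTE_READWRITE = 0x40
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern IntPtr VirtualAlloc(IntPtr addr, UIntPtr size,
                                              uint allocType, uint protect);

            delegate int AddFn(int a, int b);

            static void Main()
            {
                // x64 machine code: lea eax,[rcx+rdx]; ret  => returns a + b
                byte[] code = { 0x8D, 0x04, 0x11, 0xC3 };

                IntPtr mem = VirtualAlloc(IntPtr.Zero, (UIntPtr)(uint)code.Length,
                                          0x3000, 0x40);
                Marshal.Copy(code, 0, mem, code.Length);

                var add = (AddFn)Marshal.GetDelegateForFunctionPointer(mem, typeof(AddFn));
                Console.WriteLine(add(2, 3)); // prints 5 - pure native code
            }
        }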

  7. Zenon Says:

    @Sebastian Gingter

    NGen has its drawbacks - for example, inferior execution-time performance of the code it generates compared to the code generated by the JIT compiler.
    There are also many other problems - if you are curious what they are, you can find the relevant information in Jeffrey Richter’s “CLR via C#”, page 18.

    Regards,
    Zenon

  8. Larry Hengen Says:

    @ Sebastian,

    Yes, I thought about mentioning NGEN. When I last read the documentation about NGEN, it was unclear to me under what circumstances the generated image became invalid and had to be re-JITted. IIRC, security changes were one such situation. I have also never seen a .NET developer NGEN their application on delivery, even when users complained about the speed.

  9. Stefan Glienke Says:

    This “Native vs Managed” argument seriously reminds me of the “FreeAndNil” thing that was going on for a while….

    I wonder if any of the people who claim that one or the other is faster (be it startup time, execution or whatever) or better have created two identical applications (one in a native language like Delphi and another in a managed one like C#) to back up their bold statements.

    Apart from that, to me (YMMV) all this sounds like saying: “Use pointers, that’s faster.”

  10. Thronging Says:

    LDS Says: “….”
    Absolutely, exactly that!

    You never know how much your code can and will be examined by third parties - i.e., maybe (probably) you have produced your highly sophisticated algorithms for your competitors. Not something you really want.

    Therefore managed code could be necessary in some very rare special cases, but in no other case.

  11. Panagiotis Says:

    I think someone on the Embarcadero forums created an OS X application with a native UI (in code) using Delphi XE2.

  12. Larry Hengen Says:

    @Panagiotis,

    Are you referring to Phil Hess? If not, do you have a link?

  13. Larry Hengen Says:

    @Stefan,

    I actually ported a Delphi application to C# on the .NET 2.0 FCL. Although the C# version was slower on startup, the EXEs were indistinguishable in performance once the main form loaded. I didn’t do any specific timings though, and there were some implementation differences. Also, the Delphi version was written in D7, and XE2 generates much faster code now. I’m not sure how much better performance .NET 4 provides over 2.0.

    I am surprised someone in marketing at EMBT hasn’t gotten a gear head to write some tests and provide some indication of how RAD Delphi is compared with VS.NET. I’m sure it would spark some interesting debates….

  14. Yogi Yang Says:

    “This increases startup time and CPU load every time you run it. Even today, .NET applications start up slower than a Delphi application compiled for a native target.”

    This statement is false.

    When a .NET app is run for the first time it gets JITted, and the binary image thus generated is cached and stored on the HDD, so after this step the pre-compiled binary image is loaded on every run and the program is not JITted again.

  15. Kevin G. McCoy Says:

    Want to bring a .NET app to its knees? Do this:

    Run another service process that uses 100% of the CPU *at low priority*. All other unmanaged apps will work fine, as they run at normal priority and override this service when they need the CPU’s attention.

    The .NET garbage collector runs at low priority and politely waits for the 100% process to quit. This never happens, and the garbage collector fails to… collect the garbage.

    We all know what happens in NYC when the human garbage collectors go on strike. :-)
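
    For anyone who wants to reproduce the environment Kevin describes (whether it actually starves the GC is exactly what is disputed below), a minimal C# sketch of such a low-priority CPU burner:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class CpuHog
        {
            static void Main()
            {
                // Drop the whole process to the lowest priority class...
                Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.Idle;

                // ...then spin one busy thread per core, forever.
                for (int i = 0; i < Environment.ProcessorCount; i++)
                    new Thread(() => { while (true) { } }).Start();

                Thread.Sleep(Timeout.Infinite);
            }
        }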

  16. Stefan Glienke Says:

    @Kevin: You are wrong. See http://stackoverflow.com/questions/830822/net-garbage-collector-what-is-its-thread-priority

    @Larry: I wonder what exactly might be faster in XE2 than in D7 for a regular VCL application - but that is probably a question for the compiler cracks. I would also say that .NET 4.0 made a huge step up from 2.0 performance-wise.

  17. Kevin G. McCoy Says:

    @Stefan Then why does the .NET app hang/crash in such an environment? If I stop the service in question, the .NET app works as advertised. If I restart the service, the .NET app dies.

    The service uses CPU resources, very occasional network resources and very little disk time. The service and the .NET app have no other common resources.

  18. Panagiotis Says:

    I cannot pinpoint the thread in the forums, but you can get the code from http://stackoverflow.com/questions/7442131/delphi-xe2-is-it-possible-to-create-mac-gui-applications-without-firemonkey

  19. Stefan Glienke Says:

    @Kevin: If I could answer such a question with the given information, I would open a consulting company and make lots of money ;) I honestly don’t know. But as you can read in Jeff Richter’s book, your statement about the GC is wrong.

  20. Kevin G. McCoy Says:

    @Stefan,

    Well, if this information is in a book, then it *must* be true. ;-)

  21. Ciprian Mustiata Says:

    If you work with SQL, you have a managed language; if you read your configs from XML and use a library that presents them as a tree of data, you work with another managed language. And many abstractions have costs: SQL does not necessarily work faster than native iteration, and XML does not work faster than a native binary config.
    If you restrict the definition of a managed language to one that does not work directly with the OS/platform, so that it has an extra layer of indirection (I include the JIT here), then I think we all as programmers understand the abstraction cost.
    To radically improve startup performance simply means doing less: IO, code, JIT and so on. There is no silver bullet, but simply by profiling you can get fast applications on the .NET side. Sometimes even faster than their equivalent counterparts: WPF used the GPU from the Vista/VS 2008 era, freeing the CPU so the application could respond faster - not by doing repaints, but by solving the user’s problems.
    And “native platform” is loosely defined: if we think of Metro/WinRT, is defining the UI in XAML managed? Is Android’s Java the native platform? If so, Mono for Android is certainly faster than the default Java. .NET on WinRT would certainly be faster than the JavaScript version of WinRT.
    In the end, I think we should focus on results: there are applications which are “managed” and faster than their C++ counterparts. Paint.NET is faster than Paint, and SharpDevelop is faster than Visual C# Express 2010. Obviously Photoshop is faster than Paint.NET (excluding startup time), but the point is that without giving realistic metrics, most of the time you can reach any conclusion you want.
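
    Ciprian’s “measure before you conclude” point, as a minimal C# sketch (the phase names are made up for illustration):

        using System;
        using System.Diagnostics;

        class StartupTiming
        {
            static void Main()
            {
                var sw = Stopwatch.StartNew();
                LoadConfig();                      // hypothetical startup phase
                Console.WriteLine("config: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                BuildMainWindow();                 // hypothetical startup phase
                Console.WriteLine("ui: {0} ms", sw.ElapsedMilliseconds);
            }

            static void LoadConfig() { /* e.g. read settings */ }
            static void BuildMainWindow() { /* e.g. construct forms */ }
        }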

Leave a Reply