Try/Catch not allowed to recover from a native ClrFunction call

Jan 14, 2011 at 11:31 AM

During the implementation of a native .NET <--> JScript interop model (something I already did in the past for Small Basic), I ran into a nasty problem with your code: a ClrFunction call does not catch an error thrown by the native function call and pass it to the first JScript 'catch' block. Instead, the exception goes uncaught and crashes the whole application.

As I'm not confident enough in the way exceptions are handled in your code, I was wondering how I should modify it to catch the error and convert it to a 'JScript' error.

        public object Call(ScriptEngine engine, object thisObject, params object[] arguments)
        {
            BinderDelegate delegateToCall;

            // Create a delegate or retrieve it from the cache.
            if (arguments.Length <= MaximumSupportedParameterCount)
            {
                // Save the delegate that is created into a cache so it doesn't have to be created again.
                if (this.delegateCache == null)
                    this.delegateCache = new BinderDelegate[MaximumSupportedParameterCount + 1];
                delegateToCall = this.delegateCache[arguments.Length];
                if (delegateToCall == null)
                    delegateToCall = this.delegateCache[arguments.Length] = CreateBinder(arguments.Length);
            }
            else
                delegateToCall = CreateBinder(arguments.Length);

            // Execute the delegate.
            return delegateToCall(engine, thisObject, arguments);
        }

The last line (the delegate invocation) is the one that may throw an exception. There may be more than one reason:
- The function call has thrown an exception as part of its normal operation.
- The arguments passed to the function are of the wrong type (which naturally leads to an InvalidCastException).


Jan 14, 2011 at 9:37 PM

The way it is designed right now is that only Jurassic.JavaScriptException is catchable from within javascript code.  This was for a number of reasons, but mainly because I wanted Jurassic to crash if unexpected exceptions were thrown (these are part of the API contract after all).  Therefore, if you need to catch an exception from within javascript, you need to wrap it within a Jurassic.JavaScriptException.

Something like this:

  try
  {
      // code that may throw
  }
  catch (Exception ex)
  {
      throw new JavaScriptException(engine, "Error", ex.Message);
  }

You need a reference to the ScriptEngine - you can get this from the Engine property of ObjectInstance or some other method.

Jan 14, 2011 at 9:39 PM

You can preserve the stack trace as well by using the innerException parameter:

  try
  {
      // code that may throw
  }
  catch (Exception ex)
  {
      throw new JavaScriptException(engine, "Error", ex.Message, ex);
  }

Jan 15, 2011 at 12:19 PM

Thanks for the insight.

Jan 15, 2011 at 9:03 PM

In case you want to test drive it, here's an online version of a fork of Jurassic that supports .NET interop. (It's based on the build you posted two days ago (more recent patches are not included), but the problems I outlined are naturally fixed):

It runs only in console mode (there are security-related problems on Silverlight that I haven't looked at yet, but they should be resolvable). The interesting code is in the ScriptEngine constructor, where I perform some tests to check the progress of the engine. The code is not very clean at this time and lacks comments, but it works great.

I don't think the performance of the overall engine has been affected. Using .NET objects is slow because a lot of objects need to be created when you start using new types, but I don't think we could do much faster (at least, not while keeping the current behavior, which is to treat .NET objects like host objects in IE9, i.e. freely modifiable).

The biggest problem at this time is the lack of support for late binding when multiple overloads of the same function exist. It previously threw an exception (which wasn't good behavior), but now it uses a 'random' overload, which is still wrong. Some code is needed to handle multiple (ambiguous) versions of the same function properly.

Events are not supported, either.

Jan 16, 2011 at 1:04 AM

Interesting!  I've been slowly working on something similar for a while - as you say, the tricky part is resolving method overloads.  The main difference is that I'm dynamically generating stubs to access the fields and methods, rather than using reflection.  This is faster but significantly more complex.  I'm tempted to throw my code away and use something similar to how you are doing it.

A few comments:

  • You seem to be exposing private methods to javascript (BindingFlags.NonPublic).  This is probably what causes Silverlight to fail.
  • You are exposing your properties as PropertyAttributes.FullAccess - this means javascript code can delete and modify those properties.  I would have chosen Sealed instead.
  • Also, inherited members are not exposed (BindingFlags.DeclaredOnly).  This does not match the normal .NET model.
Jan 16, 2011 at 8:46 AM

Exposing private methods is not something I wanted. If that's the case, I'll change it.

BTW, I think exposing properties as FullAccess is the best thing to do. It's the way DOM objects are exposed in IE9/FF4 (which are my references, in case you were still wondering :D). This allows a script to replace a property in order to fix a bug, or to perform some operation before setting the "true" property (for example, innerHTML has bugs in IE, and several scripts exist that modify innerHTML on HtmlSelectElement to make it work properly). Giving more power to the script language is always the best thing to do. Host objects are going to become more and more like native ones in every ECMAScript implementation over time.

Anyway, inherited members are exposed. This is because I recreate a full prototype chain. For example, a MessageDisplayer instance has a prototype, MessageDisplayerPrototype. MessageDisplayerPrototype contains references for DeclaredOnly members, but inherits from ObjectPrototype (the prototype of System.Object). This means no properties are set on a new MessageDisplayer instance by default, which makes instantiation much faster because only the first use of a type is costly. It also means that if I import a class that inherits from MessageDisplayer, only its declared members are analysed, the others having already been added to MessageDisplayerPrototype (or ObjectPrototype). It also allows adding an "extension method" to all objects inheriting from a certain type.
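The prototype-chain idea can be sketched outside of Jurassic like this (TypePrototype and PrototypeCache are illustrative names, not the fork's actual types; the real implementation stores property descriptors on ObjectInstance prototypes rather than raw MemberInfos):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// A per-Type "prototype" holding only DeclaredOnly members, falling back to
// the base type's prototype, mirroring the chain described above.
class TypePrototype
{
    public TypePrototype Parent;   // prototype of the BaseType, or null for object
    public Dictionary<string, MemberInfo> Own = new Dictionary<string, MemberInfo>();

    // Walk the chain, just like a javascript property lookup.
    public MemberInfo Lookup(string name)
    {
        for (var p = this; p != null; p = p.Parent)
        {
            MemberInfo member;
            if (p.Own.TryGetValue(name, out member))
                return member;
        }
        return null;
    }
}

static class PrototypeCache
{
    static readonly Dictionary<Type, TypePrototype> cache = new Dictionary<Type, TypePrototype>();

    // Only the first use of a type pays the reflection cost; derived types
    // reflect over their declared members only.
    public static TypePrototype For(Type type)
    {
        TypePrototype proto;
        if (cache.TryGetValue(type, out proto))
            return proto;
        proto = new TypePrototype();
        if (type.BaseType != null)
            proto.Parent = For(type.BaseType);
        foreach (var member in type.GetMembers(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
            proto.Own[member.Name] = member;   // note: overloads collapse here; real code keeps a list
        cache[type] = proto;
        return proto;
    }
}
```

Note that GetType lives only on the object prototype, yet is still reachable from any type's prototype through the chain, which is exactly the behaviour described above.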

Also, I wanted to use dynamic compilation too, but I think I'll introduce it on a place-by-place basis (for example, it would be easy to modify InteropFieldGetter / InteropFieldSetter to use dynamic IL instead of pure reflection). There are other places where it's much more complex... BTW, I'm wondering whether we should fall back to a reflection-based ClrFunction call when an ambiguous function definition occurs.

Jan 16, 2011 at 12:04 PM

> BTW, I think exposing properties as FullAccess is the best thing to do.

My gut instinct is always to restrict the power of APIs to only what is needed, since a larger API surface is generally detrimental (in terms of flexibility and future maintainability).  However, in this case I don't really see any downsides... so you've convinced me.  I notice it's quite inconsistent:

IE9, Firefox DOM objects - functions are full access
Chrome DOM objects - functions are non-configurable
ECMAScript built-in objects - functions are non-enumerable

> Anyway, inherited members are exposed.

I didn't notice that reading through the code... interesting!  I currently put all the inherited members on a single object but the way you are doing it seems much more "javascripty".  I assume you handle constructors the same way (i.e. inherited statics)?

> Also, I wanted to use DynamicCompilation, too

Performance is actually pretty low-priority for this feature, in my opinion, because it won't show up in SunSpider, etc.  The reason I'm using it is because I'm trying to extend the existing method binding code (FunctionBinder, etc) to handle arbitrary .NET APIs.  This class obviously needs to be high performance since it is used when calling all the built-in methods.  But what I'll probably end up doing is bypassing all that for dynamic overload resolution - it's too complex to try to generate code to do it.

Jan 16, 2011 at 1:20 PM

Well, constructors are different. This is because the Activator class already does the job for us (and finds the best constructor based on the arguments we send it). I used Reflector to see if that code was adaptable, but it uses too many non-public APIs that would require importing via reflection. The same applies to Microsoft.VisualBasic, which already implements LateCall, LateGet, etc. The problem is that it also uses non-public APIs, so it's not practical. However, it's maybe a good idea to have a look at it to see how it could be implemented.

Please also note that I actually *use* ClrFunction across my implementation, and I used no other function-calling mechanism. Only fields and non-indexed properties use pure reflection at this time, and only because it was convenient. The current ClrFunction model is not what I would have done at first glance (I'm more used to pure reflection, and the cancelled InteropFunctionInstance was an attempt to move in that direction), but it turned out to be perfectly adaptable to all .NET functions with very few modifications. Boxing and unboxing value types has been the most complex part of the work, because it required more context than your initial EmitConversion module was able to provide; that's why I added fromDotNetType and toDotNetType as arguments to some of its methods, based on real-life situations that were not working properly. The only problem I see with ClrFunction is that overloads are not correctly supported, because the types of the arguments sent to the function are not used when choosing the overload. This is, however, perfectly fixable.

Another problem is Silverlight. I fixed a first security problem (some of my classes were defined as internal and thus not accessible from dynamically generated stubs), but I now get another one, which is more enigmatic: a Security.VerificationException, "This operation could destabilize the runtime". Some research leads me to think one of the casts I emit isn't done "the right way", but I wonder which one is causing the problem. Maybe the "this" loading. I'll investigate.

Jan 16, 2011 at 2:09 PM

The Silverlight problems are now fixed. (I updated the zip.) However, reflection is not available from the script engine. That means GetType("System.String").getReference() works (so we can pass it to a native function), but .getReference().FullName or .Name returns a security error. This is not a big deal, since .NET interop is not the biggest use case of Silverlight-based JavaScript, and reflection is even less likely to be used. The remaining problem was indeed due to the fact that I didn't emit a Castclass instruction to convert the "this" type in some cases. It wasn't a problem in console mode because the code wasn't verified and worked fine.
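For reference, the kind of fix involved can be shown with a small DynamicMethod (this illustrates the verification rule, not the fork's actual code): when "this" comes in typed as object, a Castclass must be emitted before the member access, otherwise the IL is unverifiable. Full-trust console code runs unverifiable IL anyway, which is why the bug only surfaced on Silverlight.

```csharp
using System;
using System.Reflection.Emit;

static class ThisCastDemo
{
    // Builds a getter that takes an untyped "this" and reads string.Length.
    public static Func<object, int> BuildStringLengthGetter()
    {
        var method = new DynamicMethod("GetLength", typeof(int), new[] { typeof(object) });
        var il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        // Without this Castclass, the stack holds an object where a string is
        // expected, and the verifier rejects the method.
        il.Emit(OpCodes.Castclass, typeof(string));
        il.Emit(OpCodes.Callvirt, typeof(string).GetProperty("Length").GetGetMethod());
        il.Emit(OpCodes.Ret);
        return (Func<object, int>)method.CreateDelegate(typeof(Func<object, int>));
    }
}
```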


Jan 17, 2011 at 4:24 AM

You've obviously put a lot of effort into this - I'm impressed.  Clearly I need to examine your changes in more detail.  Once I get some time I'll sit down with something like Beyond Compare and see exactly what has changed :-)

Jan 17, 2011 at 10:10 AM

Well, it's not the first time I've worked on something like this; I've ended up being quite good in this domain.

Here's a summary of the current issues:

- There may still be problems related to the assumptions you made in your emit-conversion layer. I've been forced to make a few changes to support value types like Byte and Char in a less type-specific manner... Those types are problematic because they need special operations in some cases (boxing, unboxing, and even byref wrapping when used as the "this" reference with native functions). It seems clear I'll need to build more and more samples to find out where your assumptions are broken by the new types to convert from and to (and the different contexts where those conversions may occur).

The current fixes are based not on prediction but on bug fixing, which is not a good way to do things (but do we have any alternative?).

- ClrFunction should have better support for overloads. How it should be done is an open question, but it should be done.

- Events are not supported. I'm still wondering how I should add support for them. From my past experience, I know we need to dynamically create a delegate converter which wraps FunctionInstance.Call for the delegate type used by the event. To add or remove handlers, I think I'm going to add two methods to the System.Object prototype: addEventListener and removeEventListener, based on the current HTML5 specification (the last argument will be ignored, however, as it has no equivalent in the .NET event model).

Jan 17, 2011 at 11:41 AM

> ClrFunction should have better support for overloads.  How it should be done is an open question, but it should be done.

My idea is to first (ahead of time) determine the set of methods that match a given number of arguments, taking into account optional parameters and ParamArray parameters (some of this is already done).  Then, at runtime, compute the number of type conversions required to match each method (this is zero if the arguments match exactly and +max if the conversion cannot be performed).  Then, if there is a single method with the lowest score, it gets called.  Otherwise an exception is thrown (ambiguous overload error).  One problem with this scheme is that it treats the conversion from int -> double the same as int -> string.  The latter should maybe be counted as a double conversion - not sure.
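A hedged sketch of that scoring scheme (OverloadPicker is an illustrative name, not Jurassic's code, and the conversion rules are simplified to exact matches, reference conversions, and a small numeric set):

```csharp
using System;
using System.Linq;
using System.Reflection;

static class OverloadPicker
{
    const int Impossible = int.MaxValue;

    // Number of argument conversions this method would need; Impossible if
    // the argument count differs or a conversion cannot be performed.
    static int Score(MethodInfo method, Type[] argTypes)
    {
        var parameters = method.GetParameters();
        if (parameters.Length != argTypes.Length)
            return Impossible;
        int conversions = 0;
        for (int i = 0; i < parameters.Length; i++)
        {
            if (parameters[i].ParameterType == argTypes[i])
                continue;                                        // exact match: free
            if (parameters[i].ParameterType.IsAssignableFrom(argTypes[i]))
                conversions++;                                   // widening reference conversion
            else if (IsNumeric(parameters[i].ParameterType) && IsNumeric(argTypes[i]))
                conversions++;                                   // numeric conversion
            else
                return Impossible;
        }
        return conversions;
    }

    static bool IsNumeric(Type t)
    {
        return t == typeof(int) || t == typeof(double) || t == typeof(float)
            || t == typeof(long) || t == typeof(short) || t == typeof(byte);
    }

    public static MethodInfo Pick(MethodInfo[] candidates, Type[] argTypes)
    {
        var scored = candidates
            .Select(m => new { Method = m, Score = Score(m, argTypes) })
            .Where(x => x.Score != Impossible)
            .OrderBy(x => x.Score)
            .ToArray();
        if (scored.Length == 0)
            throw new MissingMethodException("No overload matches.");
        if (scored.Length > 1 && scored[1].Score == scored[0].Score)
            throw new AmbiguousMatchException("Ambiguous overload.");
        return scored[0].Method;
    }
}
```

A weighted variant (counting int -> string as two conversions, say) only changes what Score returns; the unique-minimum selection stays the same.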

> Events are not supported.  I think I'm going to add two methods in the System.Object prototype :

FYI, JScript.NET supports events by exposing two methods, add_<event name> and remove_<event name>, both of which take a function as the only parameter.  This is basically identical to how .NET represents events internally (try using ILDasm to view a class with an event).
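A quick way to confirm this without ILDasm is to reflect over a class that declares an event; the compiler-generated accessors show up as ordinary methods (the Publisher class below is purely illustrative):

```csharp
using System;

// Declaring an event makes the C# compiler emit add_Changed and
// remove_Changed methods, each taking the delegate as its only parameter.
class Publisher
{
    public event EventHandler Changed;

    public void RaiseChanged()
    {
        var handler = Changed;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
```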

Jan 17, 2011 at 1:05 PM
Edited Jan 18, 2011 at 12:47 PM

Well, it seems like a good idea. I agree with you that a number conversion should count as "nearly" no conversion. We could resolve this the CSS-priority way:

(0) The selected method must be callable with the given number of arguments.

If there's more than one possibility, compute the following values:

(1) TypeConversions: the number of arguments converted (reading the "me" field is not counted as a conversion). However, if an argument's type is PrimitiveTypes.Other and its runtime type doesn't match (i.e. isn't, doesn't implement the interface, or doesn't inherit the class required for the parameter), the method is not selectable. (This means no conversion is done between a Char and a Byte, because neither is supported by your runtime and they're different types, so the type resolver would not recognize that a conversion is possible.) If the runtime type doesn't match but the argument's type is not PrimitiveTypes.Other, a traditional ObjectInstance -> ValueType conversion can be performed instead, and the method should not be rejected; this allows Number(b) to run if b is an InteropObjectInstance whose "me" field contains a Byte.

(2) ValueTypeConversions: the number of arguments converted from an ObjectInstance or ValueType to a ValueType that is *not* represented by the same ECMAScript class (Number -> String is a ValueType conversion, but Double -> Integer is not). If an ObjectInstance is converted to a ValueType, it is counted as represented by the ECMAScript class of the value returned by valueOf(); if valueOf() throws an error or returns an object, String is used as its representative class.

(3) InternalConversions: the number of arguments converted from an ObjectInstance or ValueType to a ValueType that is represented by the same ECMAScript class (Double -> Integer).

The selected method is the one that minimizes (1); if more than one is possible, the one that minimizes (2); then the one that minimizes (3); if more than one method is still possible, any of them can be selected at the discretion of the implementer. (This is useful, for example, when Console.WriteLine(obj) is called with obj being a new Object(): many overloads minimize (1), (2) and (3), but each of them would do nearly the same thing. Choosing one of them is reasonable, I think, instead of throwing an error. My proposed solution would be to prefer String to Double, Double to Integer, Integer to Short, Short to Byte...)

In cases where the algorithm doesn't satisfy the script author, he could still use a special function implemented by all ClrFunctions: apply(object this, object[] arguments, string[] typeNames), which would select the only method that has typeNames[0] as its first parameter type, typeNames[1] as its second, and so on. If no such method exists, it would return null. This allows Console.WriteLine.apply(null, [1.5], ["System.Int32"]) to output "1" to the console.

Jan 17, 2011 at 9:40 PM

Wow, replying by mail isn't great-looking. I'll switch to text-based mail next time.

Anyway, I've completed the code needed to convert any FunctionInstance to an arbitrary delegate type. I also added some fixes. I'm not publishing the code yet because I need to sleep before my exam tomorrow, and because it still lacks a way to expose the new features to the script engine. It should not take long to add, but it's too late now.


Jan 18, 2011 at 12:53 PM

Code updated. The compiler now accepts a FunctionInstance as an argument everywhere a delegate is required.

The only exception is that a FunctionInstance returning an InteropInstance is currently not supported. (In fact, it can return one, but if the delegate has a return value and the expected return value is contained in the "me" field of the InteropInstance, it won't get unwrapped and will thus throw an InvalidCastException.)

Apart from that, add_Event1 is a normal ClrFunction. The conversion from a FunctionInstance to a Delegate is unique (meaning that if you pass a function to add_Event1 and then to remove_Event1, you get the expected result: the Delegate is removed from the handlers). It also means the delegate stays alive as long as the function it references is alive, even if the delegate is no longer used. I could have used a WeakReference, but I don't think it's necessary here. Any function can be converted, even a native one.
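The "unique conversion" behaviour described above can be sketched roughly like this (a plain Func<object[], object> stands in for Jurassic's FunctionInstance, and only EventHandler is handled, so this is an assumption-laden sketch rather than the fork's actual implementation):

```csharp
using System;
using System.Collections.Generic;

static class DelegateConverter
{
    // Cache keyed by (function, delegate type): the same pair always yields
    // the same Delegate instance, so a later remove (-=) matches the add (+=).
    static readonly Dictionary<Tuple<Func<object[], object>, Type>, Delegate> cache =
        new Dictionary<Tuple<Func<object[], object>, Type>, Delegate>();

    public static Delegate ToDelegate(Func<object[], object> function, Type delegateType)
    {
        var key = Tuple.Create(function, delegateType);
        Delegate result;
        if (cache.TryGetValue(key, out result))
            return result;

        // Real code would emit a stub matching any delegate signature; here
        // we only handle EventHandler to keep the sketch short.
        if (delegateType == typeof(EventHandler))
            result = new EventHandler((sender, e) => function(new object[] { sender, e }));
        else
            throw new NotSupportedException("Sketch only supports EventHandler.");

        cache[key] = result;
        return result;
    }
}
```

As noted above, the cache also keeps the delegate alive as long as the function is reachable; a WeakReference-keyed cache would avoid that at the cost of extra bookkeeping.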

Delegates are not converted back to function instances when they are the return value of a function. They're instead wrapped in an InteropInstance. Their DynamicInvoke function can be used if needed.
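As a minimal illustration of that escape hatch (DynamicInvokeDemo is a made-up helper, not part of the fork):

```csharp
using System;

static class DynamicInvokeDemo
{
    // DynamicInvoke performs late-bound invocation on any delegate without
    // knowing its signature at compile time; arguments go in and the result
    // comes back as object. It is slow, but fine as a fallback for delegates
    // returned from native calls.
    public static object InvokeWrapped(Delegate wrapped, params object[] args)
    {
        return wrapped.DynamicInvoke(args);
    }
}
```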

Jan 19, 2011 at 7:07 AM

I've had a better review of your changes.  Some comments:

  • In ReflectionHelpers.cs there is a static constructor where all the required MethodInfos are obtained.  You should follow the existing examples.
  • The test code in ScriptEngine.cs belongs in the Unit Tests project.
  • Publicly exposed members should be capitalized. Also, fields should not be made public. (See the style guide I use.)
  • InteropInstance.meField should be moved to ReflectionHelpers.cs - I have a cunning plan to speed up this process and it helps if it is all centralized in one place.
  • Documentation is lacking in some places.  My goal is to have XML doc comments for everything in the Jurassic project.
Jan 19, 2011 at 8:20 AM

Yes, I agree with all your points. I hadn't noticed ReflectionHelpers.cs when I started the project, but I've used it for the latest method definitions. I promise I'll continue the move when I have time to work on it. I have an exam tomorrow, but I should have some free days before the following one. It's on my "todo list".

Another thing: I've never used a unit test project before. I always use a "Test" method that I call at application startup (#if DEBUG). I know it's not good practice, but I never learnt about unit test projects. I'll have a look at this.

BTW, exposing the "me" field is a good idea, because it's used many times in the conversion code. (If the field is kept private, Silverlight throws a security exception when it's read.) If I have to change it to a property, we'll notice a visible slowdown. Maybe I could rename it to "Me", if you prefer.

Jan 19, 2011 at 9:21 AM

> If I need to change it to a property, we'll notice a visible slowdown.

Are you sure? Have you timed it? I'm pretty sure that simple methods like property getters will be inlined and thus run as fast as a field lookup.  But don't take my word for it; time it yourself (just make sure to do it in release mode).

> Maybe could I rename it to "Me", if you prefer that.

Yes, that is better, though I actually prefer something more descriptive like "WrappedInstance".

Jan 19, 2011 at 10:52 AM

Yeah, I'm sure of it. I didn't time it, but I'm sure it won't get inlined, because I'm the one emitting the code :-). And if I inline it myself, I'm forced to emit a ldfld on a private field, which Silverlight refuses to execute in a dynamic method (unless the field is public, but then it's not worth wrapping it in a property, because the property won't get used).

BTW, "WrappedObject" seems great to me as a name.

Jan 19, 2011 at 11:15 AM

Just timed it in VB (compiled in release mode) and the difference is quite visible (for a 10,000-iteration loop).

Mean (11th-20th loop executions): 173 vs 378 ticks. It seems that when a method is called often, it gets quicker after some time. Maybe some optimization under the hood by the CLR.
Mean (11th-20th loop executions): 20 vs 29 ticks (no debugger attached to the process here).

Jan 19, 2011 at 2:06 PM
Edited Jan 19, 2011 at 2:18 PM

I didn't believe you so I tried it myself.  Try this program:

class Program
{
    public class TestClass
    {
        public int A;
        private int b;
        public int B { get { return this.b; } set { this.b = value; } }
    }

    static void Main(string[] args)
    {
        // Up the thread priority so nothing gets in the way of the benchmarking.
        System.Threading.Thread.CurrentThread.Priority = System.Threading.ThreadPriority.AboveNormal;

        // Warm up.
        int total = 0;
        var instance = new TestClass() { A = 39, B = 39 };
        for (int i = 0; i < 1000000; i++)
        {
            total += instance.A;
            total += instance.B;
        }

        for (int j = 0; j < 10; j++)
        {
            // Start timing.
            var timer = System.Diagnostics.Stopwatch.StartNew();

            total = 0;
            for (int i = 0; i < 100000000; i++)
                total += instance.A + instance.A;

            // Stop timing.
            double elapsed = timer.Elapsed.TotalMilliseconds;

            // Output the result to the screen.
            Console.WriteLine("A {0:n1}ms", elapsed);

            // Start timing.
            timer = System.Diagnostics.Stopwatch.StartNew();

            total = 0;
            for (int i = 0; i < 100000000; i++)
                total += instance.B + instance.B;

            // Stop timing.
            elapsed = timer.Elapsed.TotalMilliseconds;

            // Output the result to the screen.
            Console.WriteLine("B {0:n1}ms", elapsed);
        }
    }
}

I can't see any significant difference between the two in release mode.  And it doesn't matter that you are emitting the code - the inlining is not done by the compiler; it happens when the execution engine converts CIL into x86 code.  And yes, I checked Reflector to make sure I was right about that.

This is what I get in release mode:

Field access     107.6ms
Property access  109.9ms
Field access     97.9ms
Property access  95.3ms
Field access     97.5ms
Property access  102.7ms
Field access     94.7ms
Property access  97.1ms
Field access     112.9ms
Property access  103.3ms
Field access     107.1ms
Property access  90.5ms
Field access     96.3ms
Property access  78.7ms
Field access     82.0ms
Property access  81.7ms
Field access     77.2ms
Property access  87.8ms
Field access     79.4ms
Property access  77.2ms 

 And this is what I get in debug mode:

Field access     462.0ms
Property access  1,342.7ms
Field access     425.4ms
Property access  1,345.1ms
Field access     440.4ms
Property access  1,321.8ms
Field access     438.1ms
Property access  1,346.3ms
Field access     436.4ms
Property access  1,337.5ms
Field access     435.9ms
Property access  1,345.7ms
Field access     434.8ms
Property access  1,340.1ms
Field access     424.0ms
Property access  1,325.4ms
Field access     439.2ms
Property access  1,331.2ms
Field access     425.1ms
Property access  1,300.2ms

Clearly, without inlining, property access is many times slower (it's hard to tell how much slower because the overhead is so high).

Jan 19, 2011 at 4:34 PM

I tested your app, and it produces the result you posted: no difference. However...

Module Module1

    Sub Main()

        Threading.Thread.CurrentThread.Priority = Threading.ThreadPriority.Highest

        Dim X As Integer = -1
        Dim t As Stopwatch

        While True

            t = Stopwatch.StartNew()
            For X = 0 To 10000000
                B = 1 + CType(B, Integer)
            Next
            t.Stop() : Console.WriteLine(t.ElapsedTicks)

            t = Stopwatch.StartNew()
            For X = 0 To 10000000
                A = 1 + CType(A, Integer)
            Next
            t.Stop() : Console.WriteLine(t.ElapsedTicks)

            t = Stopwatch.StartNew()
            For X = 0 To 10000000
                A = 1 + CType(A, Integer)
            Next
            t.Stop() : Console.WriteLine(t.ElapsedTicks)

            t = Stopwatch.StartNew()
            For X = 0 To 10000000
                B = 1 + CType(B, Integer)
            Next
            t.Stop() : Console.WriteLine(t.ElapsedTicks)

        End While
    End Sub

    Private A As Object = 0
    Private _B As Object = 0
    Public Property B() As Object
        Get
            Return _B
        End Get
        Set(ByVal value As Object)
            _B = value
        End Set
    End Property

End Module

Could you try the previous code and see if you get the same result as me?

On my computer, I usually get the two "inner" being smaller than the two "outer".

Jan 20, 2011 at 1:40 AM

> On my computer, I usually get the two "inner" being smaller than the two "outer".

I get that too, but only in debug mode.  In release mode there is no consistent pattern.


Jan 20, 2011 at 8:51 PM

Well, I got a bit of time to clean up the code this afternoon. I've merged my reflection code with your own ReflectionHelpers, but some work is still needed to find out whether there are duplicates, and to move initialization into the Init method. I've commented most of the methods and properties, and I removed the test code from ScriptEngine.cs. I created a unit test, but it doesn't return anything useful at this time (it only reports an error in case of an unexpected exception). It's not attached in the updated code.

* I've added a new global object: SysArray (it allows converting an ArrayInstance to a typed .NET array). The Array constructor now accepts an InteropInstance containing a native array as an argument, and converts it to a new ArrayInstance. Sample: GetType("System.Array").CreateInstance(GetType("System.String").getReference(), 2)

* I've added a new system to find a method match in case where there's a conflict. It's proven to work better than the current one (I for exemple can call System.Array.CreateInstance, System.Array.prototype.GetValue and SetValue, which all didn't work before) but it may still need some work to find the "best" match, and not only the first one. I should also think off a way to create a delegate cache for this method (currently, it don't use any).