Using Interop in the .NET Micro Framework V3.0

The .NET Micro Framework provides a rich level of support for embedded systems development, from handling interrupts on GPIO pins to talking to hardware on an SPI or I2C bus. Unfortunately, sometimes that's not quite enough. For example, an A/D converter built into the chip and memory mapped to the processor core is unreachable from managed code in V2.5 and earlier. For this, and many other reasons, the .NET Micro Framework V3.0 Porting Kit supports extending the .NET runtime libraries with custom interop code that calls into native (C/C++) code.

This article covers the process of creating and using custom interop libraries through an example. The sample library gives managed applications access to OEM-defined named memory windows in the system, providing direct access to memory mapped devices in a safe, bounds-checked manner.

 using System;
 using Microsoft.SPOT;
 using Microsoft.SPOT.Hardware;
  
 namespace TestAddressSpace
 {
     public class Program
     {
         public static void Main()
         {
             // get safe access to the display framebuffer
             AddressSpaceUInt16 frameBuffer = new AddressSpaceUInt16("Framebuffer");
             Debug.Print("test: " + frameBuffer.BitWidth + "; " + frameBuffer.Length + "; " + frameBuffer[0]);
  
             // fill the framebuffer with blue (R5:G6:B5)
             for (uint i = 0; i < frameBuffer.Length; ++i)
                 frameBuffer[i] = (UInt16)0x001F;
  
             // fill the framebuffer with red (R5:G6:B5)
             for (uint i = 0; i < frameBuffer.Length; ++i)
                 frameBuffer[i] = (UInt16)0xF800;
         }
     }
 }

As you can see from the sample code, AddressSpaceUInt16 provides access to a named region of memory (in this case the frame buffer for the display). The indexer ([] operator) is overloaded to allow accessing the memory window with standard array indexing syntax. This makes it relatively easy to access a variety of memory mapped devices and peripherals.

Prerequisites

Porting Kit

The steps outlined in this article require the .NET Micro Framework Porting Kit (PK). Thus, you must install the PK in order to follow these instructions and adequately evaluate the interop support.

Install the SDK

You will need Visual Studio 2008 (or Visual C# Express Edition) and the .NET Micro Framework SDK to create the managed portions of your interop extensions. Thus, VS2008 with SP1 and the Microsoft .NET Micro Framework V3.0 SDK must be installed.

Note:

The SDK version MUST match the PK build so that the matching MetadataProcessor and assembly versions are used (e.g., don't mix different beta versions, or beta and RTM bits).

Creating the Interop Enabled Assembly

1. From Visual Studio, create a new .NET Micro Framework library project. It is usually simpler to create the project in a directory under the %SPOCLIENT% root of the porting kit build system; placing all your code there makes it easier to integrate with the build system. This example assumes the location %SPOCLIENT%\DeviceCode\Interop\AddressSpace.

Note:

Theoretically any .NET Micro Framework project type should work, including a Windows application; however, this is not recommended, not tested and, therefore, not supported. Interop assemblies should be kept as small as possible, with few or no dependencies on other assemblies. This makes managing them easier and, more importantly, avoids having to re-merge the stub code with every change to the managed code.

The managed assembly and its corresponding native code are paired with a checksum computed over the "mangled" native names of all the methods in the entire assembly. The CLR will fail to load the assembly if the checksum stored in the assembly's .PE file doesn't match the checksum for the registered native code. The mangling, like C++ name mangling, uses the class name, return type and argument types to create a unique name for each function. Since the checksum covers the mangled names of the methods in the entire assembly, ANY signature change to any class in the assembly changes the checksum and requires regenerating the stubs and recompiling the native code to produce a new run-time image.

2. Add at least one method or property to a class, marked as extern and decorated with the System.Runtime.CompilerServices.MethodImplAttribute using the System.Runtime.CompilerServices.MethodImplOptions.InternalCall option.

 using System;
 using Microsoft.SPOT;
 using System.Runtime.CompilerServices;
  
 namespace Microsoft.SPOT.Hardware
 {
     /// <summary>Base class for access to a memory address space</summary>
     /// <remarks>
     /// An Address space consists of a base address, bit width and length
     /// for some Memory or I/O Mapped region. Derived classes provide
     /// access to the memory with type safe accessors that overload the
     /// index (this[]) operator.
     /// </remarks>
     public class AddressSpace
     {
         /// <summary>Per instance Address Space ID stored by Native code</summary>
         protected UIntPtr ASID;
  
         /// <summary>Creates a new AddressSpace instance</summary>
         /// <param name="Name">Name of the address space</param>
         /// <param name="Width">Expected bit width of the address space</param>
         /// <remarks>
         /// The HAL defines the address space
         /// </remarks>
         [MethodImpl(MethodImplOptions.InternalCall)]
         protected extern AddressSpace(string Name, int Width);
  
         /// <summary>Bit width for the address space</summary>
         /// <value>number of bits per word for the address space</value>
         /// <remarks>
         /// The number of bits is determined by the HAL and cannot
         /// be changed from managed code. 
         /// </remarks>
         public extern int BitWidth
         {
             [MethodImpl(MethodImplOptions.InternalCall)]
             get;
         }
  
         /// <summary>Word Length for the address space</summary>
         /// <value>Number of address space words (BitWidth Wide) in the address space</value>
         /// <remarks>
         /// The length is a word length and not a byte length. This is done to support
         /// indexing in derived classes like an array. The byte length can be computed
         /// as follows: int byteLength = ( this.Length * this.BitWidth ) / 8;
         /// </remarks>
         public extern UInt32 Length
         {
             [MethodImpl(MethodImplOptions.InternalCall)]
             get;
         }
     }
  
     /// <summary>8 bit wide address space</summary>
     public class AddressSpaceUInt8 : AddressSpace
     {
         public AddressSpaceUInt8(string Name)
             : base(Name, 8)
         {
         }
  
         /// <summary>Accesses a byte in the address space</summary>
         /// <param name="Offset">byte offset from the base of the address space (e.g. 0 based index)</param>
         /// <value>New Value for the byte at the specified offset</value>
         /// <returns>Value of the byte at the offset</returns>
         public extern byte this[UInt32 Offset]
         {
             [MethodImpl(MethodImplOptions.InternalCall)]
             get;
             [MethodImpl(MethodImplOptions.InternalCall)]
             set;
         }
     }
  
     /// <summary>16 bit address space</summary>
     public class AddressSpaceUInt16 : AddressSpace
     {
         public AddressSpaceUInt16(string Name)
             : base(Name, 16)
         {
         }
  
         /// <summary>Accesses a 16 bit value in the address space</summary>
         /// <param name="Offset">16 bit word offset from the base of the address space (e.g. 0 based index)</param>
         /// <value>New Value for word at the specified offset</value>
         /// <returns>Value of the word at the specified offset</returns>
         public extern UInt16 this[UInt32 Offset]
         {
             [MethodImpl(MethodImplOptions.InternalCall)]
             get;
             [MethodImpl(MethodImplOptions.InternalCall)]
             set;
         }
     }
  
     /// <summary>32 bit address space</summary>
     public class AddressSpaceUInt32 : AddressSpace
     {
         public AddressSpaceUInt32(string Name)
             : base(Name, 32)
         {
         }
  
         /// <summary>Accesses a 32 bit value in the address space</summary>
         /// <param name="Offset">32 bit word offset from the base of the address space (e.g. 0 based index)</param>
         /// <value>New Value for word at the specified offset</value>
         /// <returns>Value of the word at the specified offset</returns>
         public extern UInt32 this[UInt32 Offset]
         {
             [MethodImpl(MethodImplOptions.InternalCall)]
             get;
             [MethodImpl(MethodImplOptions.InternalCall)]
             set;
         }
     }
 }

This example provides a base class called AddressSpace and several small derived classes that provide type- and bounds-safe access to memory regions of the CPU. The AddressSpace constructor is protected so it can't be called directly. Instead, the type-safe derived classes call the AddressSpace constructor with an address space name and width. The constructor is marked with the InternalCall option to indicate that it exists only in the native code of the runtime. The derived classes pass a name for the address space and the expected width for the space as appropriate. As we'll see later, the native code implementation checks that the address space name is valid and that the width is correct for the named address space.

The AddressSpace base class also stores an address space ID (the ASID field), an opaque identifier used by the native code to quickly identify a specific address space. (This also shows how a managed code field can be accessed and used by the native code.)

The data types supported for fields, parameters, array parameters and return types are:

CLR Type      | C/C++ Type      | C/C++ Ref Type (C# ref) | C/C++ Array Type
System.Byte   | UINT8 (or BYTE) | UINT8*                  | CLR_RT_TypedArray_UINT8
System.UInt16 | UINT16          | UINT16*                 | CLR_RT_TypedArray_UINT16
System.UInt32 | UINT32          | UINT32*                 | CLR_RT_TypedArray_UINT32
System.UInt64 | UINT64          | UINT64*                 | CLR_RT_TypedArray_UINT64
System.SByte  | CHAR            | CHAR*                   | CLR_RT_TypedArray_INT8
System.Int16  | INT16           | INT16*                  | CLR_RT_TypedArray_INT16
System.Int32  | INT32           | INT32*                  | CLR_RT_TypedArray_INT32
System.Int64  | INT64           | INT64*                  | CLR_RT_TypedArray_INT64
System.Single | float           | float*                  | CLR_RT_TypedArray_float
System.Double | double          | double*                 | CLR_RT_TypedArray_double
System.String | LPCSTR          | Not supported           | Not supported

 

NOTE: Arrays as return types or ref parameters are not supported.
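As a hedged illustration of these mappings (the class, method and parameter names below are hypothetical, not part of the sample), a managed declaration that uses a ref parameter and an array parameter surfaces to the native implementer roughly as follows:

 // Hypothetical illustration of the type mappings above. Given a managed declaration such as:
 //
 //     [MethodImpl(MethodImplOptions.InternalCall)]
 //     public static extern void Transform(ref UInt32 seed, byte[] data);
 //
 // the generated stub asks you to implement a native function shaped roughly like this:
 static void Transform( UINT32*                 seed   // C# 'ref UInt32' -> UINT32*
                      , CLR_RT_TypedArray_UINT8 data   // C# 'byte[]'     -> CLR_RT_TypedArray_UINT8
                      , HRESULT&                hr     // error-reporting parameter added by the generator
                      );

Instance methods additionally receive a CLR_RT_HeapBlock* for the managed object, as you will see in the generated code later in this article.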

 

3. Right click on the interop assembly project in the Solution Explorer

4. Select the “.NET Micro Framework” tab on the project page

[Figure UsingInterop_1: the .NET Micro Framework tab of the project properties page]

5. Check the “Generate native stubs for internal methods” check box

6. Set a root name for the generated stub files (RawIO in this sample). This name must be unique in your build, as it is used as the name of a .CPP file that is compiled to an object file and bound with other object files into a library. Some of the linkers/librarians supported by the .NET Micro Framework build system do not handle multiple object files with the same name in a single library.

7. Save and close the project file

8. Specify an output location for the generated stubs. Usually this can just be "Stubs". This is where the C++ stub code files are generated; setting it to "Stubs" places them in a folder called "Stubs" in the same directory as the project file. Ultimately you will need to copy this code somewhere else before editing it, so the normal convention of "Stubs" makes sense.

9. Build the assembly

10. The build should generate the stub and skeleton files in a new "Stubs" folder under the project's folder. When the "Generate native stubs for internal methods" check box is set, the build performs two additional steps.

a. Generate the skeleton stub files

b. Mark the <AssemblyName>.PE file with a checksum of the method signatures for ALL of the methods in the assembly. The same checksum is written into the generated .CPP code as a means of validating, at run time, that the managed code and the native code implementations match. If the "Generate native stubs for internal methods" checkbox is not checked, the checksum in the .PE file remains 0, it will never match the native code, and the application will get an exception at run time whenever it tries to call one of the native methods.

Reviewing the generated code

 

The "Generate native stubs for internal methods" checkbox on the project's property page instructs the build system to generate a set of files containing marshaling stubs and proxies. The build also generates a skeleton implementation of the code you will need to fill in to handle the HAL side of the interop functionality. The following table lists the generated files and their purpose:

<StubName>.cpp
    Table definitions for the native methods. [Internal details not documented and edits not supported for PK users]

<StubName>.h
    Header file for the native code. [Internal details not documented and edits not supported for PK users]

<StubName>_<SafeFullClassName>.cpp
    Skeleton class implementation file (one for each class in the assembly). This is normally where you place the bulk of your native code functionality.

<StubName>_<SafeFullClassName>.h
    Skeleton class header file (one for each class in the assembly). You will often need to add additional methods and static variables for your interop support.

<StubName>_<SafeFullClassName>_mshl.cpp
    Marshaling class implementation file (one for each class in the assembly). [Internal details not documented and edits not supported for PK users]

<StubName>_<SafeFullClassName>_mshl.h
    Marshaling class header file (one for each class in the assembly). [Internal details not documented and edits not supported for PK users]

dotnetmf.proj
    Porting kit project file for the native code. (You should not need to modify this unless you add additional code files to your project.)

<AssemblyName>.FeatureProj
    Feature project file for including your interop support in the Solution Wizard. (This must be customized, as explained later in this article.)

 

<StubName> is the root stub name provided on the project's ".NET Micro Framework" property page.

<SafeFullClassName> is the full class name, including all namespaces, with "_" as the separator instead of ".". Thus Microsoft.SPOT.HAL becomes Microsoft_SPOT_HAL. (Some toolchains do not handle "." in file names.)

<AssemblyName> is the assembly name as set on the project's Application property page.

Copying the generated files

As previously mentioned, the native method checksum for the output .PE file is only written when the stubs are generated, so you always need to generate stubs for any assembly with interop support. It is therefore necessary to copy the generated stubs into a new directory to prevent a future build of your C# managed code from overwriting any additional code you have written. I normally prefer to use a folder arrangement like this:

[Figure UsingInterop_2: suggested folder layout for the managed interop project and the copied native code]
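In outline, the arrangement looks something like this (a sketch; the <InteropName> placeholder follows the same convention as <StubName> and <AssemblyName> above, and the featureproj example later in this article uses Microsoft_SPOT_HAL for these names):

 %SPOCLIENT%\DeviceCode\Interop\<InteropName>\
     ManagedCode\
         <InteropName>\      (C# interop assembly project; the generated "Stubs" output lands under here)
         TestApp\            (optional managed test application; the name is arbitrary)
     NativeCode\
         <InteropName>\      (your edited copy of the generated stubs plus its dotnetmf.proj)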

This arrangement allows a VS solution file for the interop assembly and a test application to live alongside a separate folder for the native code in a sensible tree. It also makes it relatively easy to perform diff/merge operations between the code generated in the "Stubs" folder and the actual native code you have filled in with real functionality. If you change the C# methods, fields or classes, you will need to merge those changes over to your native code. (The tool does not attempt to auto-merge changed code; it simply re-writes the files every time. This is why you need to make a copy.)

Customizing the FeatureProj file

The generated .featureproj file must be edited and placed in the proper location for the Solution Wizard and build system to use it correctly. In particular, you must specify the location of some of the files you have copied and the location of the assembly's .PE file. The following example shows the feature project generated for the sample Microsoft.SPOT.HAL assembly:

 <Project ToolsVersion="3.5" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
   <PropertyGroup>
     <FeatureName>Microsoft.SPOT.HAL</FeatureName>
     <Guid>{ba7724ff-bcff-4eb1-95c9-ebe0a41dd857}</Guid>
     <Description>
     </Description>
     <Groups>
     </Groups>
   </PropertyGroup>
   <ItemGroup>
     <InteropFeature Include="Microsoft_SPOT_HAL" />
     <DriverLibs Include="Microsoft_SPOT_HAL.$(LIB_EXT)" />
     <MMP_DAT_CreateDatabase Include=" $(BUILD_TREE_CLIENT)\pe\Microsoft.SPOT.HAL.pe" />
     <RequiredProjects Include="Stubs\Stubs\dotnetmf.proj" />
   </ItemGroup>
 </Project>

 

The MMP_DAT_CreateDatabase and RequiredProjects items are the ones you will need to update. The MMP_DAT_CreateDatabase tag lists a PE file that should be included in the device's ER_DAT file (the area for built-in assemblies). It should be set to the location of the PE file generated by your Visual Studio build.

NOTE: The MMP_DAT_CreateDatabase tag is only needed if you want to include the managed assembly's PE file in the "built-in" area of the device ROM/FLASH (e.g. ER_DAT). This is often desirable for interop assemblies, as it minimizes the risk of a developer deploying a version of the managed PE file that is incompatible with the native code portion in the HAL. During development of the interop library, however, it is often more convenient to let VS deploy the interop PE and a test application as you go. Once you are releasing your interop assembly beyond your own office/cubicle, it is best to use the MMP_DAT_CreateDatabase tag to automatically include the pre-built managed image.

At this time there is no support for building the Visual Studio project file from the porting kit build environment, so your C# interop assemblies must be built with Visual Studio before building the run-time image from the Porting Kit.

 

The RequiredProjects tag should point to the dotnetmf.proj file of your copied native code (i.e. NOT the "Stubs" location you set in VS for stub generation, but the location you copied those files to before editing them). In my sample folder layout that becomes:

 <Project ToolsVersion="3.5" DefaultTargets="Build"
          xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
 >
     <PropertyGroup>
         <FeatureName>Microsoft.SPOT.HAL</FeatureName>
         <Guid>{391e4194-55c7-4114-bba2-ceba432f0173}</Guid>
         <Description>
         </Description>
         <Groups>
         </Groups>
     </PropertyGroup>
  
     <ItemGroup>
         <InteropFeature Include="Microsoft_SPOT_HAL" />
         <DriverLibs Include="Microsoft_SPOT_HAL.$(LIB_EXT)" />
         <MMP_DAT_CreateDatabase
          Include=" $(SPOCLIENT)\DeviceCode\Interop\Microsoft_SPOT_HAL\ManagedCode\Microsoft_SPOT_HAL\bin\Release\Microsoft.SPOT.HAL.pe" />
         <RequiredProjects Include=" $(SPOCLIENT)\DeviceCode\Interop\Microsoft_SPOT_HAL\NativeCode\Microsoft_SPOT_HAL\dotnetmf.proj" />
     </ItemGroup>
 </Project>

 

NOTE: The $(SPOCLIENT) macro is the current value of the SPOCLIENT environment variable set by the setenv_xxx command used for the porting kit build command prompt. Using this macro instead of a hardcoded path allows other developers on your team to use your interop code even if they installed the PK to a different location than you did. As long as the projects are placed in the same location relative to the build tree root, everything just works.

 

After you have customized the .featureproj file, save it (or a copy of it) into the %SPOCLIENT%\Framework\Features directory. Placing the .featureproj file in this folder is required for the Solution Wizard to offer this feature for inclusion in a device run-time image.

Adding the feature to your solution

1. Start the Solution Wizard.

2. Ensure that the Porting Kit directory is set to the correct location. (If you start the Solution Wizard from a PK command prompt after running the setenv_xxx command, it will default to the correct setting.)

3. Click Next, select "Edit an Existing Solution" and click Next again.

4. Select the solution you want to add the interop feature to and click Next.

5. Leave the Processor Properties settings unaltered and click Next.

6. Leave the Project Selection settings unaltered and click Next.

7. On the Feature Selection page, select your interop feature as shown:

[Figure UsingInterop_3: the interop feature selected on the Solution Wizard's Feature Selection page]

8. Click Next two more times, then click Finish to complete the Solution Wizard and save your changes.

Sanity check build

As a sanity check, you can now build the run-time image including the generated interop skeleton code. (The default generated native interop code doesn't do anything except set all return values and out parameters to 0, but it should at least compile and the image should still run.)

Writing the actual native code functionality

Now that you know things are set up correctly and all the one-time actions are complete, it's time to get down to the real work. The first place to start is the header file for the AddressSpace class. We'll need to define a structure to store the information for a named memory region; the HAL will define a global array of these structures in order to expose a set of named memory regions. There are a few other additions needed to make the implementation work that are hopefully self-explanatory when you look at the code. One key item to notice is that the parameter names of all the generated methods are of the form paramxx, where xx is a number. This is because the code generation process operates on the post-processed PE file to ensure the method table exactly matches the code on the device, and by the time the PE file is generated the parameter name information (along with other information not needed at run time) has already been removed. It is usually desirable to rename the parameters at this point so that it's clearer what they are.

 //-----------------------------------------------------------------------------
 //
 //                   ** WARNING! ** 
 //    This file was generated automatically by a tool.
 //    Re-running the tool will overwrite this file.
 //    You should copy this file to a custom location
 //    before adding any customization in the copy to
 //    prevent loss of your changes when the tool is
 //    re-run.
 //
 //-----------------------------------------------------------------------------
  
  
 #ifndef _RAWIO_MICROSOFT_SPOT_HARDWARE_ADDRESSSPACE_H_
 #define _RAWIO_MICROSOFT_SPOT_HARDWARE_ADDRESSSPACE_H_
  
 namespace Microsoft
 {
     namespace SPOT
     {
         namespace Hardware
         {
             struct AddressSpace
             {
                 // Helper Functions to access fields of managed object
                 static UINT32& Get_ASID( CLR_RT_HeapBlock* pMngObj )
                 { 
                     return Interop_Marshal_GetField_UINT32( pMngObj
                          , Library_RawIO_Microsoft_SPOT_Hardware_AddressSpace::FIELD__ASID );
                 }
  
                 // Declaration of stubs. These functions are implemented by Interop code developers
                 static void _ctor( CLR_RT_HeapBlock* pObj, LPCSTR Name, INT32 Width, HRESULT& hr);
                 static INT32 get_BitWidth( CLR_RT_HeapBlock* pObj, HRESULT& hr);
                 static UINT32 get_Length( CLR_RT_HeapBlock* pObj, HRESULT& hr);
  
                 struct TableEntry
                 {
                     LPCSTR pName;       // Unique name of the address space
                     void*  pBase;       // Base address of the address space
                     UINT32 BitWidth;    // Bit width for "words" in the address space
                     UINT32 Length;      // Length of the address space in "words"
                 };
 
                 // Converts an AddressSpace ID from the managed code into a
                 // TableEntry*. If the ASID is not valid (e.g. doesn't point
                 // to an entry in the table) an error code is returned.
                 static HRESULT GetValidEntry( CLR_RT_HeapBlock* pObj, const TableEntry*& pEntry );
 
                 // Verifies that the ASID is in the table and the offset is
                 // within the allowed range; if all is OK it fills in the address.
                 // Errors: index out of range or invalid argument.
                 static HRESULT GetAddress( CLR_RT_HeapBlock* pObj, UINT32 offset, void*& pAddress );
 
                 // Definitions for these are provided by the HAL...
 
                 // Table of HAL-specific address spaces to expose to managed code
                 static const TableEntry Spaces[];
 
                 // Number of entries in the Spaces table
                 static const size_t NumSpaces;
             };
         }
     }
 }
 #endif  //_RAWIO_MICROSOFT_SPOT_HARDWARE_ADDRESSSPACE_H_

 

Notice that all of the methods are static. This is because the object they manipulate is not a real C++ object instance but a .NET CLR managed object. You must not store the CLR_RT_HeapBlock* provided to the interop methods beyond the call itself: the CLR garbage collector is free to move the data and has no way to update your stored reference.
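To make that rule concrete, here is a minimal sketch (the cached pointer and the method are hypothetical, not part of the generated code) of what not to do, and what the AddressSpace sample does instead:

 // Sketch only: why the CLR_RT_HeapBlock* must not outlive the interop call.
 static CLR_RT_HeapBlock* g_cachedObj;                 // WRONG: the GC may move or collect the object
 
 void AddressSpace_BadExample( CLR_RT_HeapBlock* pObj )
 {
     g_cachedObj = pObj;                               // WRONG: may be stale once this call returns
 
     UINT32 asid = AddressSpace::Get_ASID( pObj );     // OK: copy plain data out of the object while still
                                                       // inside the call (this is what the ASID field is for)
 }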

Another item to notice is that all of the generated methods have an extra parameter (HRESULT&) that is not in the C# code. This is the mechanism for indicating errors from the interop code. If you set the provided reference to an error code before returning, the marshaling code will return that error code to the CLR, which converts it into an exception. The marshaling code sets it to S_OK before calling your implementation, so you only need to set it on an error condition.
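A minimal sketch of the pattern (a hypothetical getter, not part of the sample) looks like this; the table below shows how the error codes map to managed exceptions:

 // Sketch of the error-reporting pattern for interop implementations.
 // 'hr' arrives set to S_OK by the marshaling layer, so touch it only on failure.
 UINT32 SomeClass::get_Value( CLR_RT_HeapBlock* pObj, HRESULT& hr )
 {
     if( NULL == pObj )                   // hypothetical validation check
     {
         hr = CLR_E_NULL_REFERENCE;       // surfaces as System.NullReferenceException in managed code
         return 0;                        // return a dummy value on failure (the sample code returns 0)
     }
 
     return 42;                           // success path: leave hr alone, it is already S_OK
 }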

The CLR header files define a set of pre-defined error codes as follows:

Error Code                     | Managed Code Exception
CLR_E_INVALID_ARGUMENT         | System.ArgumentException
CLR_E_ARGUMENT_NULL            | System.ArgumentNullException
CLR_E_OUT_OF_RANGE             | System.ArgumentOutOfRangeException
CLR_E_INDEX_OUT_OF_RANGE       | System.IndexOutOfRangeException
CLR_E_INVALID_OPERATION        | System.InvalidOperationException
CLR_E_NOT_SUPPORTED_EXCEPTION  | System.NotSupportedException
CLR_E_NOTIMPL                  | System.NotImplementedException
CLR_E_NULL_REFERENCE           | System.NullReferenceException
CLR_E_OUT_OF_MEMORY            | System.OutOfMemoryException
All other values               | System.Exception

 

The C++ implementation of the AddressSpace class looks like this:

 //-----------------------------------------------------------------------------
 //
 //                   ** WARNING! ** 
 //    This file was generated automatically by a tool.
 //    Re-running the tool will overwrite this file.
 //    You should copy this file to a custom location
 //    before adding any customization in the copy to
 //    prevent loss of your changes when the tool is
 //    re-run.
 //
 //-----------------------------------------------------------------------------
  
  
 #include "RawIO.h"
 #include "RawIO_Microsoft_SPOT_Hardware_AddressSpace.h"
  
 using namespace Microsoft::SPOT::Hardware;
  
 void AddressSpace::_ctor( CLR_RT_HeapBlock* pObj
                         , LPCSTR Name
                         , INT32 Width
                         , HRESULT& hr
                         )
 {
     // scan the table to find the named address space
     const AddressSpace::TableEntry* pSpace = NULL;
     for( size_t i = 0; NULL == pSpace && i < AddressSpace::NumSpaces; ++i )
     {
         if( strcmp( Name, AddressSpace::Spaces[i].pName ) == 0 )
             pSpace = AddressSpace::Spaces + i;
     }
 
     // if not found, complain about it
     if( NULL == pSpace )
     {
         hr = CLR_E_INVALID_ARGUMENT;
         return;
     }
 
     // make sure the managed code asked for the width the HAL actually provides
     if( Width != (INT32)pSpace->BitWidth )
     {
         hr = CLR_E_INVALID_ARGUMENT;
         return;
     }
 
     UINT32& ASID = AddressSpace::Get_ASID( pObj );
     ASID = reinterpret_cast<UINT32>( pSpace );
 }
  
 INT32 AddressSpace::get_BitWidth( CLR_RT_HeapBlock* pObj, HRESULT& hr)
 {
     const AddressSpace::TableEntry* pEntry;
     hr = AddressSpace::GetValidEntry( pObj, pEntry );
     if( FAILED(hr) )
         return 0;
 
     return pEntry->BitWidth;
 }
  
 UINT32 AddressSpace::get_Length( CLR_RT_HeapBlock* pObj, HRESULT& hr)
 {
     const AddressSpace::TableEntry* pEntry;
     hr = AddressSpace::GetValidEntry( pObj, pEntry );
     if( FAILED(hr) )
         return 0;
 
     return pEntry->Length;
 }
  
 // group = Internal implementation support
 
 HRESULT AddressSpace::GetValidEntry( CLR_RT_HeapBlock* pObj
                                     , const AddressSpace::TableEntry*& pEntry
                                     )
 {
     UINT32& ASID = AddressSpace::Get_ASID( pObj );
 
     // get the ID as a table entry pointer
     pEntry = reinterpret_cast<AddressSpace::TableEntry*>( ASID );
 
     // make sure it's in the table
     if(  pEntry >= (AddressSpace::Spaces + AddressSpace::NumSpaces)
       || pEntry < AddressSpace::Spaces
       )
     {
         return CLR_E_INVALID_ARGUMENT;
     }
 
     return S_OK;
 }
  
 HRESULT AddressSpace::GetAddress( CLR_RT_HeapBlock* pObj
                                  , UINT32 Offset
                                  , void*& pAddress
                                  )
 {
     const AddressSpace::TableEntry* pEntry;
     HRESULT hr = AddressSpace::GetValidEntry( pObj, pEntry );
     if( FAILED(hr) )
         return hr;
 
     // bounds check the offset for safety
     if( Offset >= pEntry->Length )
         return CLR_E_INDEX_OUT_OF_RANGE;
 
     // compute the actual address, accounting for the bit width of each "word"
     pAddress = reinterpret_cast<void*>( UINT32( pEntry->pBase ) + Offset * pEntry->BitWidth / 8 );
     return S_OK;
 }
  

AddressSpace::_ctor() is the native code constructor for the AddressSpace class. It scans the OEM-provided table of entries to find a matching name, verifies that the requested width matches the HAL definition, and then stores a pointer to that entry into the ASID field of the object. To access the fields of the managed object you must use the Get_xxx methods generated in the header file. These functions return a reference to the field value that you can use to read or write the field, and the managed object is correctly updated when the interop call returns.

NOTE:

It is important to point out that when the CLR calls into any custom interop code, ALL managed code threads are suspended. Thus the native code can be considered single threaded and generally does not need to deal with synchronization (except when sharing data with an ISR). This has performance implications: don't put a huge processing loop in your native code, as everything else stops until it is done.
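If your interop code does share state with an interrupt handler, the usual porting kit pattern is to mask interrupts around the shared access. A sketch (assuming the standard GLOBAL_LOCK macro from the PK HAL headers; the class and the shared counter are hypothetical):

 // Sketch: reading state shared with an ISR from inside an interop call.
 static volatile UINT32 g_IsrEventCount;             // hypothetical counter incremented by an ISR
 
 UINT32 SomeClass::get_EventCount( CLR_RT_HeapBlock* pObj, HRESULT& hr )
 {
     GLOBAL_LOCK(irq);                                // interrupts masked until 'irq' goes out of scope
     UINT32 snapshot = g_IsrEventCount;               // safe to read/update ISR-shared state here
     return snapshot;
 }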

The three derived AddressSpaceUIntXX classes each get their own header/cpp file pair. They all look pretty much the same except for the type of the values used, so we'll only look at the 8 bit address space here and leave the other two as an exercise for the reader:

 //-----------------------------------------------------------------------------
 //
 //                   ** WARNING! ** 
 //    This file was generated automatically by a tool.
 //    Re-running the tool will overwrite this file.
 //    You should copy this file to a custom location
 //    before adding any customization in the copy to
 //    prevent loss of your changes when the tool is
 //    re-run.
 //
 //-----------------------------------------------------------------------------
  
  
 #include "RawIO.h"
 #include "RawIO_Microsoft_SPOT_Hardware_AddressSpaceUInt8.h"
  
 using namespace Microsoft::SPOT::Hardware;
  
 UINT8 AddressSpaceUInt8::get_Item( CLR_RT_HeapBlock* pObj, UINT32 Offset, HRESULT& hr)
 {
     void* pTemp;
     hr = AddressSpace::GetAddress( pObj, Offset, pTemp );
     if( FAILED(hr) )
         return 0;
 
     return *reinterpret_cast<UINT8*>( pTemp );
 }
  
 void AddressSpaceUInt8::set_Item( CLR_RT_HeapBlock* pObj, UINT32 Offset, UINT8 value, HRESULT& hr)
 {
     void* pTemp;
     hr = AddressSpace::GetAddress( pObj, Offset, pTemp );
     if( FAILED(hr) )
         return;
 
     *reinterpret_cast<UINT8*>( pTemp ) = value;
 }
  

The implementations are pretty straightforward. They simply use the GetAddress() method defined earlier in the AddressSpace class to get a pointer to the memory location described by the given offset into the address space. After getting the address, the code checks the returned HRESULT with the FAILED() macro, which returns true if the code indicates a failure and false for a non-failure code such as S_OK. You'll use this macro regularly throughout your code in a pattern much like the one shown here. Once the code has a valid pointer it simply dereferences it to read or write the value.

Completing the run-time image

For many interop library scenarios the work would be done at this point: just run the Solution Wizard to hook your interop code into the HAL, and the build system takes care of the rest. In this case, however, the AddressSpace native code class has a static TableEntry array that must be defined for each solution that uses the AddressSpace interop support. Since this is specific to a particular solution, the code should be located in the solution directory along with the other configuration data. For this sample I'll use the i.MXS platform to include the AddressSpace interop library, but I'll keep things as platform independent as possible so you can re-use the bulk of the project files. (NOTE: The steps here are manual. There is a way to further leverage the Solution Wizard to provide a dependency for selecting existing implementations or generating a skeleton implementation from a "template". Most of the work in setting that up is similar to what we'll do here, but fully leveraging it is a large enough diversion from the main topic that I won't use it in this article.)

The first thing we’ll need to do is create a new folder in the solution for the configuration. By convention config libraries go into a folder structure like this:

%SPOCLIENT%\Solutions\<Solution Name>\DeviceCode\<config group>

Where <Solution Name> is the name of the solution you are adding the AddressSpace support to and <config group> is a meaningful grouping of the configuration libraries, in this case we’ll use AddressSpace. So the full path will be:

%SPOCLIENT%\Solutions\iMXS\DeviceCode\AddressSpace

NOTE: There may be additional folders below the <config group> as needed by the configuration data. (For instance, if you have multiple block storage devices you'll typically see additional folders, one for each type of storage.)

 

We'll need an AddressSpace.cpp file and a dotnetmf.proj file to build the library. The AddressSpace.cpp file is fairly simple:

 #include <tinyHal.h>
 #include "RawIo.h"
 #include "RawIo_Microsoft_SPOT_Hardware_AddressSpace.h"
  
 using namespace Microsoft::SPOT::Hardware;
  
 // This sample will expose the LCD display framebuffer
 // as an AddressSpaceUInt16, which allows Managed applications
 // direct access to the frame buffer.
 // In this sample the framebuffer is 320x240x16
  
 // This must be defined somewhere in the HAL.
 // The display driver is a logical place for that.
 extern void* const pFrameBuffer;
  
 const AddressSpace::TableEntry AddressSpace::Spaces[] = {
 //  |  Name        | Base address | Word Width | Word Length |
 //  +--------------+--------------+------------+-------------+
     { "Framebuffer", pFrameBuffer , 16         , 320*240}
 };
  
 const size_t AddressSpace::NumSpaces = ARRAYSIZE_CONST_EXPR(AddressSpace::Spaces);

There's not a lot of code needed, just a table of data structures that the AddressSpace implementation will scan. In this case it contains a single entry exposing the LCD frame buffer, but it could list any internal or external peripheral you want to allow raw access to. The separation of data in this fashion (driver code plus a small config library) is a central design pattern used in the .NET MF HAL and the solution samples provided in the porting kit. I strongly encourage you to follow this separation to allow greater re-use of your code across multiple solutions and hardware platforms.
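For completeness, the HAL-side definition that the config file's extern declaration expects might look something like the sketch below. The statically allocated buffer is an assumption for illustration; a real display driver would typically point pFrameBuffer at whatever memory the LCD controller is already scanning out.

 // Hypothetical HAL-side definition matching 'extern void* const pFrameBuffer;'.
 // In a real port this belongs in (or next to) the display driver that owns the framebuffer.
 static UINT16 g_FrameBuffer[320 * 240];             // assumed 320x240 at 16 bits per pixel
 void* const pFrameBuffer = g_FrameBuffer;           // symbol consumed by the AddressSpace config table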

Creating the project file

The project file is a standard MSBuild project file for the .NET MF native code porting kit build system. It looks like this:

 <Project ToolsVersion="3.5" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
     <PropertyGroup>
         <AssemblyName>AddressSpace_Config</AssemblyName>
         <!--
         You should set this GUID to a unique GUID whenever creating a new library. So if you copy
         this sample use GUIDGEN.Exe to create a new GUID and paste it here instead of the one shown
          -->
         <ProjectGuid>{YOUR-GUID-HERE}</ProjectGuid>
         <Description>Address space configuration library</Description>
         <Level>HAL</Level>
         <LibraryFile>$(AssemblyName).$(LIB_EXT)</LibraryFile>
         <!-- 
         Location of the project file; this assumes a location relative to the solution being built 
         NOTE: in V3.0 the solution name is in the $(Platform) variable for legacy reasons
         -->
         <ProjectPath> $(SPOCLIENT)\Solutions\$(PLATFORM)\DeviceCode\AddressSpace\dotNetMF.proj</ProjectPath>
         <ManifestFile>$(LibraryFile).manifest</ManifestFile>
         <PlatformIndependent>False</PlatformIndependent>
         <CustomSpecific>
         </CustomSpecific>
         <Required>False</Required>
         <IgnoreDefaultLibPath>False</IgnoreDefaultLibPath>
         <IsStub>False</IsStub>
         <Directory>Solutions\$(PLATFORM)\DeviceCode\AddressSpace</Directory>
         <OutputType>Library</OutputType>
         <PlatformIndependentBuild>false</PlatformIndependentBuild>
         <Version>3.0.0.0</Version>
     </PropertyGroup>
     <Import Project="$(SPOCLIENT)\tools\targets\Microsoft.SPOT.System.Settings" />
     <ItemGroup>
         <Compile Include="AddressSpace.cpp" />
  
         <!-- The IncludePaths tag tells the build system to add this path to the compiler command line
              so the compiler can find the AddressSpace header files. If you put them into a different
              location than the one described in this sample you will need to update this path.
         -->
         <IncludePaths Include="DeviceCode\Interop\Microsoft_SPOT_HAL\NativeCode\Microsoft_SPOT_HAL" />
     </ItemGroup>
     <Import Project="$(SPOCLIENT)\tools\targets\Microsoft.SPOT.System.Targets" />
 </Project>

 

To keep things simpler for re-use in this manual process I've used the $(Platform) variable wherever the solution name would normally go. The bulk of the project file is pretty generic; the only parts that might need customization are the GUID, the project path and the IncludePaths entry, as explained in the comments in the project file.

Hooking it all together

Now that the configuration library is ready, you need to tell the build system to include it in the image so you won't get linker errors about the missing table. To do that you must edit the solution's TinyCLR.proj file, normally found at %SPOCLIENT%\Solutions\<Solution Name>\TinyCLR\TinyCLR.proj. Open the TinyCLR.proj file and scroll down to the end. It should look something like this:

 [...]
 <ItemGroup>
     <RequiredProjects Include="$(SPOCLIENT)\Support\WireProtocol\dotNetMF.proj" Condition="'$(PORT_BUILD)'==''" />
     <PlatformIndependentLibs Include="WireProtocol.$(LIB_EXT)" />
   </ItemGroup>
   <Import Project="$(SPOCLIENT)\tools\targets\Microsoft.SPOT.System.Targets" />
 </Project>

 

NOTE: Your specific TinyCLR.proj may have different values at the end depending on which solution you are using and what you selected in the Solution Wizard. The important part for this step is what you add, not what is already there.

 

Just above the closing </Project> tag there is an <Import> tag. You need to add a new ItemGroup just before that Import tag. In the ItemGroup you specify a RequiredProjects tag pointing to the configuration library's dotnetmf.proj file, which tells the build system to build the library whenever TinyCLR.proj is built. You also specify a DriverLibs tag. (In contrast to the PlatformIndependentLibs entry shown in the previous snippet, DriverLibs entries are hardware-platform specific.)

     <ItemGroup>
         <RequiredProjects Include="$(SPOCLIENT)\Solutions\$(Platform)\DeviceCode\AddressSpace\dotNetMF.proj" />
         <DriverLibs Include="AddressSpace_Config.$(LIB_EXT)" />
     </ItemGroup>
     <Import Project="$(SPOCLIENT)\tools\targets\Microsoft.SPOT.System.Targets" />
 </Project>

 

NOTE: This last step can be made automatic by setting up custom categories and dependency information for the Solution Wizard. As previously mentioned, doing that is a broader subject for a later article.

Summary

Well, there you have it: a complete, re-usable mechanism for giving C# applications direct access to memory mapped devices in the .NET Micro Framework. That's a pretty cool addition to the capabilities of the platform, made possible by the interop functionality in the V3.0 Porting Kit. The sample illustrates how to define C# classes that will have native code implementations and how to fill in those implementations, including access to fields of the object a method is called on. There's still more to interop, especially the asynchronous notification mechanisms, but I'll leave that for a future article. I hope this tour, complete with a re-usable sample, helps you understand interop and build the next great embedded device.