More Fun with Interop

Now let’s say we have a “C” exported function that looks something like this:

void MyFunction(void* myData);

This, of course, is inherently unsafe, because the void* could represent anything. In the wonderful world of “C”, however, such a construct is frequently used.
We must assume that the caller knows exactly what the called function is expecting the void* to point to. Knowing that, then, how would we call this function from C#? Good question.
I puzzled over it a bit before realizing that a void* is allowed in C#. No way, you say. That’s what I thought.

Using P/Invoke for this example, in your DllImport declaration, just put the “unsafe” keyword like so:

        [DllImport("MyDll.dll")]  //substitute your DLL's name here
        public static unsafe extern void MyFunction(void* myData);

Using “unsafe” allows the code to compile and the compiler to be happy with a void* in C#. Cool. So how do you call it from C#?
This is one of those instances where you need to create a “pinned” address so the garbage collector doesn’t move the memory around during the call. For the sake of simplicity in this example we are just going to use an array of bytes for the void*. Put everything in an “unsafe” block so the C# compiler is happy with a void* :

public unsafe void AMethod()
{
    void* vaddr = null;
    Array arr = Array.CreateInstance(typeof(byte), 10);
    GCHandle handle = GCHandle.Alloc(arr, GCHandleType.Pinned);
    vaddr = (void*)handle.AddrOfPinnedObject();

    //Call the DllImport function
    MyFunction(vaddr);

    //Unpin the memory when we're done with it
    if (handle.IsAllocated)
        handle.Free();
}
Assuming that MyFunction “knows” that it is going to get an array of 10 bytes (in this case), the call will work and both the C++ and C# sides will be happy. Of course, you probably should put something interesting in the array before sending it across. I’ll leave that to you.
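As an aside, when the pinned lifetime only needs to span the call itself, you can let the compiler do the pinning for you with C#'s fixed statement instead of GCHandle. Here is a minimal sketch, assuming the same MyFunction import as above:

```csharp
public unsafe void AMethodWithFixed()
{
    byte[] arr = new byte[10];

    //"fixed" pins arr for the duration of the block - no GCHandle bookkeeping
    fixed (byte* p = arr)
    {
        MyFunction(p);  //byte* converts implicitly to void*
    }
    //arr is automatically unpinned when the block exits
}
```

The GCHandle approach is still the right one when the pinned address has to outlive a single block.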



Fun with Interop

Coding with C# is fun (most of the time) and really takes a lot of the work out of stuff, like memory allocation/deallocation, that was pretty grungy in C++. Unfortunately, sometimes a new .NET component has to talk to a legacy C++ API and that’s where things can get interesting.
I want to look at a hypothetical example where we are creating a managed wrapper around an old-style extern “C” exported interface.
This example will use the method known as P/Invoke (or Platform Invoke) to call from C# to the C++ interface. Passing primitive types is easy, since there is a pretty straightforward correlation between the managed and unmanaged types. But what about if we want to pass a wchar_t* (pointer to 16-bit UNICODE character) to an Extern “C” exported function as an embedded structure member? How could we do that from C#?

Assume the C structure looks like this:

typedef struct
{
  const wchar_t *Address;
  //other stuff
} MyStruct;

and the exported C function looks like this:

void foo(MyStruct* mystruct);

Assume that the C++ implementation is just going to examine the contents of the Address wchar_t* in the structure; it is not going to change it in any way.
Calling that from C++ is easy since it knows about pointers. C#, however, does not know about pointers, only references to objects. It turns out to be a bit of a pain to call from C#.
What I did was define a C# structure that mimics the “C” one:

internal unsafe struct _MyStruct
{
  //IntPtr acts as placeholder for char*
  public IntPtr Address;

  //Managed memory handle
  private GCHandle hAddress;
}

What we are doing here is creating a wrapper around a raw GCHandle memory handle and managing the address of the bytes ourselves in the IntPtr. Yuck. Keep reading, it gets better.

Let’s add to the struct:

internal unsafe struct _MyStruct
{
  //IntPtr acts as placeholder for char*
  public IntPtr Address;

  //Managed memory handle
  private GCHandle hAddress;

  internal _MyStruct(string str)  //c'tor
  {
    UnicodeEncoding ue = new UnicodeEncoding();
    byte[] arr = ue.GetBytes(str);

    //extra 2 for null terminator char on end that "C" expects
    Array arr0 = Array.CreateInstance(typeof(byte), arr.Length + 2);
    Array.Copy(arr, arr0, arr.Length);

    hAddress = GCHandle.Alloc(arr0, GCHandleType.Pinned);
    Address = hAddress.AddrOfPinnedObject();
  }
}
We get the raw bytes of the incoming string, create a byte array of that size (+ 2 extra for the null char that “C” expects), then GCHandle.Alloc a “Pinned” address that we remember. Since “C” expects an address of a “string” that is not going to move around, we have to “Pin” the memory and get its address with GCHandle.AddrOfPinnedObject and remember that too.
Yuck. Keep reading, it gets better.

Let’s add to the struct again:

public unsafe struct _MyStruct : IDisposable
{
  //IntPtr acts as placeholder for char*
  public IntPtr Address;

  //Managed memory handle
  private GCHandle hAddress;

  internal _MyStruct(string str)  //c'tor
  {
    UnicodeEncoding ue = new UnicodeEncoding();
    byte[] arr = ue.GetBytes(str);

    //extra 2 for null terminator char on end that "C" expects
    Array arr0 = Array.CreateInstance(typeof(byte), arr.Length + 2);
    Array.Copy(arr, arr0, arr.Length);

    hAddress = GCHandle.Alloc(arr0, GCHandleType.Pinned);
    Address = hAddress.AddrOfPinnedObject();
  }

  public void Dispose()
  {
    //Unpin the memory so the GC can collect it
    if (hAddress.IsAllocated)
      hAddress.Free();
  }
}

Here, we are deriving from IDisposable so we can free the memory handle manually in our Dispose method. Isn’t managed code supposed to free us from this kind of grunge? Guess not.

So, to call this puppy, we could do something like this:

[DllImport("MyDll.dll")]  //substitute your DLL's name here
public static extern void foo(ref _MyStruct mystruct);

string strHello = "hello";
_MyStruct mystruct = new _MyStruct(strHello);
try
{
  //The "C" function gets "hello" in the struct Address member
  foo(ref mystruct);
}
finally
{
  mystruct.Dispose();  //free the pinned handle
}

Sadly, we CANNOT do the following elegant call:

//Won't compile ***
using (_MyStruct mystruct = new _MyStruct(strHello))
{
  foo(ref mystruct);
}

Why won’t it compile? Because, as the compiler complains, you “Cannot pass ‘mystruct’ as a ref or out argument because it is a ‘using variable'”. Hmmm… Oh well. Another thing we live with.

One caveat here, the _MyStruct structure has to mirror the “C” struct, so if the “C” struct has “const wchar_t *Address” as its first member, _MyStruct has to have “public IntPtr Address” as its first also. If there were a second wchar_t* in the “C” struct, _MyStruct would have to have its second “public IntPtr” come immediately after the first and BEFORE any of the GCHandles. The reason here is that the memory layouts have to be the same if you are going to pass a _MyStruct with IntPtr members to an unmanaged function that expects a struct* with wchar_t* members. Make sense?
Oh, and don’t try to make _MyStruct a class. Bad things will happen.

Got a better solution? Lemme hear it.
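One candidate, sketched here but not battle-tested in this exact scenario: for an in-only wchar_t* member, you can let the P/Invoke marshaler do the string work for you with MarshalAs. Declaring the field as a string marked UnmanagedType.LPWStr makes the runtime copy and null-terminate it for the duration of the call, so the GCHandle and Dispose machinery disappears (the DLL name is a placeholder as before):

```csharp
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
internal struct MyStructManaged
{
    //The marshaler converts this to a null-terminated wchar_t* for the call
    [MarshalAs(UnmanagedType.LPWStr)]
    public string Address;

    //other stuff
}

[DllImport("MyDll.dll")]
internal static extern void foo(ref MyStructManaged mystruct);

//Usage - no pinning, no Dispose:
//MyStructManaged s = new MyStructManaged();
//s.Address = "hello";
//foo(ref s);
```

This only works cleanly because the C++ side promises not to hold onto or modify the pointer; if it did, the manual pinning approach above would still be necessary.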



Sometimes working with a managed language like C# can make you lazy. Having cut my programmer's teeth on C++, I learned very early that if you "new" something you must also "delete" it. Forget, and you get a memory leak.

After moving to C#, however, I occasionally become complacent and begin to think that the garbage collector (GC) will solve all wrongs with my coding style. I re-learned that lesson the hard way with database connections recently, and almost had a site shut down because of it. Perhaps that experience will save you some pain.

Managed code is awesome. You can "new" and allocate all kinds of resources however you want and the GC will just clean up for you. Right? Well, not exactly. It is true that the GC will free "new"ed memory once all references to it are released. The problem (especially with a web app on a busy site) is that by the time the finalizer queue runs, the server box could be out of a scarce resource like database connections, and further requests will fail, often causing the site to block for extended periods of time.

Let me give you an example. Let’s take a class derived from DbConnection like SqlConnection (for SQL Server databases) or OdbcConnection (for generic connections to databases like MySQL). We’ll use it by creating our own derived class and holding an instance of the connection as a class member. Just to be safe, we’ll let the finalizer close the DB connection for us (since database connections don’t tend to close themselves) so we don’t leak any resources.
Good idea? Let’s see:

public class DBUtilities
{
    private SqlConnection m_dbSqlConn;

    ~DBUtilities()
    {
        // Our safety net?
        if (m_dbSqlConn != null)
            m_dbSqlConn.Close();
    }

    public SqlConnection GetSqlConnection()
    {
        try
        {
            m_dbSqlConn = new SqlConnection();
            m_dbSqlConn.ConnectionString = "MyConnectionString";
            m_dbSqlConn.Open();
            if (m_dbSqlConn.State != ConnectionState.Open)
                return null;
        }
        catch (Exception) { return null; }

        return m_dbSqlConn;
    }
}

//Use the class:

void UseDatabase()
{
    DBUtilities dbu = new DBUtilities();
    SqlConnection sqlConn = dbu.GetSqlConnection();
    SqlCommand cmd = new SqlCommand("Select * from aTable", sqlConn);
    SqlDataReader dr = cmd.ExecuteReader();
    //Do something with the data
    //Note: no Close() call anywhere in sight...
}
Cool. The finalizer for DBUtilities will close the database connection for us, won’t it?
Well, sure, but the important question is “when?”. The answer is “whenever the GC gets around to it.” That could be several seconds later, or longer. In my case, the games web site (The best online games) where I was using the code above would simply start blocking after a few database requests. Bad news. It gets worse.

The hosting company (CrystalTech) I was using at the time called me with a problem and an ultimatum. The problem was that my site was hogging resources (I had to figure out that it was database resources) and the ultimatum was that they were shutting off my site until I could prove to them that it was well-behaved again.

Scramble. Scramble.

Fortunately it didn’t take too long to dawn on me that the scarce resource was the database connections, and doing something as simple as calling


when done with the SqlDataReader reads and taking the Close() call out of the class finalizer solved the entire problem. Hard to believe? It was for me, and it was a hard lesson in non-deterministic finalization.
I think calling Dispose on the connection would have the same effect, but have not proven that yet.
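In fact, Dispose on a SqlConnection does close it, which is what makes the using statement the cleanest fix of all. Here is a sketch of how UseDatabase could be restructured for fully deterministic cleanup (the connection string and query are placeholders, as above):

```csharp
void UseDatabaseDeterministic()
{
    using (SqlConnection sqlConn = new SqlConnection("MyConnectionString"))
    using (SqlCommand cmd = new SqlCommand("Select * from aTable", sqlConn))
    {
        sqlConn.Open();
        using (SqlDataReader dr = cmd.ExecuteReader())
        {
            while (dr.Read())
            {
                //Do something with the data
            }
        }   //reader closed here
    }       //connection released here - not "whenever the GC gets around to it"
}
```

With this shape there is nothing left for a finalizer to do, so the connection goes back to the pool the moment the block exits.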

In the end, I switched the site to Godaddy because of CrystalTech’s non-helpful attitude and now everything is humming along smoothly for the happy games players.



Continuation of Part 1

AJAX sample code – calling from a .NET page with C#

Now the good stuff.  How do we implement AJAX in our .NET pages?  The answer is surprisingly and pleasantly simple, as it turns out, and is not actually particular to any one scripting language, be it C#, php, perl or whatever.

The secret to AJAX is the XMLHttpRequest object.  Unfortunately, it has not been implemented in a totally consistent manner across browsers (what else is new?), but as it turns out, the code branching logic needed to overcome this problem is not bad at all.  All we have to do is test for the existence of the appropriate object with an “if” statement and we are ready to roll.

For the purposes of this simple AJAX tutorial, when my .aspx page loads, I am going to request data from a mythical web service (could be Amazon, Ebay, or any number of others), and present it in an asynchronous manner to the user when it comes in.  If the service is slow or if I am querying many web services in this page, the user will not be left waiting for them all to come back with a response.  The data will show up as it comes in.

Pieces to the puzzle

The first thing we need is a JavaScript function to call on some kind of event.  I have chosen an “onload” event for the page, but it could be on a button click, edit box “blur,” image load or whatever.  For the example here, all we need to do is put the following in our HTML page body tag:

<body onload="AsyncDataRequest(1)">

When the page loads, the browser will make sure that AsyncDataRequest gets called with a parameter of 1.  For the following functions, make sure all the code is in a <script language="JavaScript"> </script> tag in the <head> section of your .aspx page.  Also, put a <span id="ProductInfo1">  </span> in your page where you want the response data to go.

The data request function simply looks at the page parameter passed in (optional), formats a request url and calls the function that actually makes the Http Request:

function AsyncDataRequest (page)
{
    var varProduct;
    var prodSpan;
    var url;
    if (1 == page)
    {
        prodSpan = 'ProductInfo1';
        varProduct = 'Product1';
    }
    url = 'http://www.mysite.com/DataRequest.aspx?Query=' + varProduct;
    loadXMLDoc (url, processRequest, prodSpan);
}

We pass in a “page” parameter so we can more easily branch our logic based on what object is making the request.  Here, for page “1” we specify the name of a <span> tag (“ProductInfo1”) that will be filled in with our data later on and a product tag (“Product1”) that lets us return more specific information from our query.

We simply formulate the URL with the desired parameters and call another JavaScript function “loadXMLDoc.”  That function looks like this:

function loadXMLDoc (url, rsFunc, prodSpan)
{
    var reqHTTP;
    if (window.XMLHttpRequest)
    {
        reqHTTP = new XMLHttpRequest();
        reqHTTP.onreadystatechange = function()
        {
            rsFunc(reqHTTP, prodSpan);
        };

        reqHTTP.open("GET", url, true);
        reqHTTP.send(null);
    }
    else if (window.ActiveXObject)
    {
        reqHTTP = new ActiveXObject("Microsoft.XMLHTTP");
        if (reqHTTP)
        {
            reqHTTP.onreadystatechange = function()
            {
                rsFunc(reqHTTP, prodSpan);
            };

            reqHTTP.open("GET", url, true);
            reqHTTP.send();
        }
    }
}

This function tests for the existence of the XMLHttpRequest object and makes our asynchronous request for data.
Notice how we have to branch our XMLHttpRequest creation with the if/else if so both IE and Firefox are happy.  With Firefox, XMLHttpRequest is a built-in object, but with IE, it is an ActiveXObject.  Such is life;  just accept it and go on.

The interesting thing here is the assigning of the “onreadystatechange” event of the request object to the function we want to be called when the event fires.  In case you haven’t seen it before, the onreadystatechange = function () syntax allows us to define what is known in JavaScript as an “anonymous function.”   The cool thing is that we can pass parameters to our callback function, and the script code will take care of preserving those parameters for the callback, even though it occurs later.

Note that “rsFunc” is the parameter we passed in to loadXMLDoc, which we called “processRequest.”  That is the function that will be called when the onreadystatechange event of the request fires (basically just when the request gets more data or an error condition).  More on that in a second.

The only things left to do here are call “Open” with the URL and “send.”

What does “processRequest” look like? I was hoping you would ask:

function processRequest(reqHTTP, prodSpan)
{
    if (reqHTTP.readyState == 4)
    {
        if (reqHTTP.status == 200)
        {
            var response = reqHTTP.responseText;
            document.getElementById(prodSpan).innerHTML = response;
        }
    }
}
This is simple, boiler-plate code that looks for hardcoded numbers in the response. The readyState of the request has to be 4 (request complete) and the status has to be 200 (the HTTP "OK" code). Only under those conditions will the request give you good data.
Notice that we passed the request object that we created in loadXMLDoc to this function along with the <span> id originally specified in AsyncDataRequest.
Here, we get the response text from the request object, get the element of the document specified by id (prodSpan) and set its innerHTML to the response text. Voila. The text magically appears in the browser in the place where the <span> tag is.  Very cool.

Last piece to the puzzle — data!

So what does the code in DataRequest.aspx look like?  That’s mostly up to you.  That file does the dirty work of retrieving data from a database, 3rd party web service, validating user input against business rules, etc., etc.  Here’s a sample of what it might look like:

      String strQueryString = Request.QueryString["Query"];
      switch (strQueryString)
      {
        case "Product1":
            //Formats the data and writes it to the Response stream
            MyObject.GetAmazonData();
            break;

        //Other product feeds responses here
      }

Here we are just branching our logic based on the Request string sent in the URL we formatted in the AsyncDataRequest function. Feel free to substitute whatever Request QueryStrings suit your needs.
In the GetAmazonData function you would request your data from the appropriate web service, format it (HTML would be desired here) and write it to the Response stream. Everything else is automatic. Awesome.  You have to supply the logic in MyObject, however. :)

It’s OK for the GetAmazonData function to block for as long as it needs to, because the user is no longer waiting for the page to load.  This is the sweet part.   Here you could also do secret calculations or password validation, etc.

If you have done everything correctly, whatever HTML you put in Response.Write will show up in the ProductInfo1 <span> tag.

This should be a complete example that you can slice and dice to fit your own needs. If not, please post and let me know. Also, if you have found this helpful, please post your experience for others to read.
As you can see, this could easily be adapted to use in a php environment. The only thing that would be different is the trivial differences in syntax for the DataRequest page.

As an example, this link leads to a games site that has implemented Amazon Web Services as a monetizing tool to present games for sale:
Time Killing Games




AJAX, (which stands for “Asynchronous JavaScript and XML”) is a really cool technology.  Although not a perfect solution, AJAX has solved some difficult problems related to creating a “rich” user experience in a web browser.

With a traditional web page, the browser makes an http request to the server.  The web server delivers the HTML content to the browser and the connection between the browser and the server is then broken.  If the user performs some action on the page (e.g. fills out a form), the browser has to do another full round trip to the server, resulting in a page refresh, in order to get any information back to do something as simple as data validation.   This is a somewhat less than satisfying experience for the user, especially if the connection is slow.

Enter AJAX.

In this posting, I intend to cover the following subjects relating to AJAX:

-What AJAX is
-Reasons you would use AJAX
-Sample AJAX code that you can use on a .NET (.aspx) web page

A brief perspective

When I was first introduced to dynamic web page creation some time ago, I was very excited about the possibilities.  No more static .html pages and no more stale content.  Through the magic of the .php or .NET server pages, I could now introduce logic, database query results, 3rd party content and many other things on my sites.  This immediately led me to try integrating web service content from such vendors as Amazon in an effort to monetize my traffic.

While the integration was successful, the results of doing so were less satisfying — especially when querying more than one remote web service from a single web page at a time.  If there was any latency in one of the query connections, the whole page would block until all of the data was returned.  From a user's perspective, this often meant that they could be left staring at a blank web browser (especially noticeable under IE) for as much as 20 seconds — just waiting for the multiple query results to come back.

How AJAX can help

What is AJAX?

As mentioned above, AJAX stands for “Asynchronous JavaScript and XML.”  This means that it is JavaScript, executed in the browser, called in a manner that returns asynchronous results, frequently formatted as XML.  Most modern browsers (IE, Firefox, Safari, Opera) support JavaScript, so it should be a seamless experience for your site users.

The bad thing about using traditional JavaScript is that all your business logic is delivered to the client browser.  This makes it hard to manipulate any kind of passwords or trade secrets such as formulas with it.  The cool thing about what AJAX adds, is that it lets you keep all your business logic, password validation, secret calculations, etc. where they belong — on the server.

Reasons you would use AJAX

Here are a few reasons for using AJAX on your web pages:

-You want to integrate more than one 3rd party data source from typical Web service suppliers and want to avoid the multiple connection latency problem.
-You want to do some kind of data validation of user input (e.g. filling out a registration form) without either delivering all the logic to the client in JavaScript or making a round trip to the server with the resulting full page refresh.
-You want your web application to have the more natural feel and responsiveness of a traditional GUI desktop application.

More in Part 2


Amazon web services are one of the most feature and information-rich sources
of queryable product data available. They even have user and editorial reviews that
you can present to your customers to induce them to buy through your site. The
best part is that it’s free for developers (Get it at Amazon.com).
Here I want to present a short tutorial on how to access the data in AWS with
C#. It’s not really hard to use at all from .NET, and in fact is quite a bit
easier than some of the feeds I have dealt with before (e.g. Apple iTunes feed).
There are no namespaces in the nodes to worry about so our job is a bit simpler.

I know using XSLT might be preferable, but someone just might be looking for a
DOM-centric way to parse an XML feed.
In order to query the Amazon information database, you need an AWSAccessKeyId.
In order to get paid for any sales originating from your site, you need an
Amazon AssociateTag.
So the query URL would look something like this (substitute your AccessKeyId for
“12345” and your AssociateTag for “mytag-20”):


For this operation we will search for some items in a category (Mario VideoGames
in this case), so we append to the URL:

We also want to know about the items and get some prices and images so we tack
this on to the URL:

Lastly, we desire to see some reviews done by customers and editorial staff so
we append Reviews,EditorialReview

So the final URL looks like this:


The response is big, so I'll just deal with the relevant parts here. Let's
suppose we want to get the following information from the first item in the
response:
-DetailPageURL (The page you direct your users to go to buy the item)
-SmallImage (Graphical representation of the product)
-Title of the game
-LowestNewPrice (lowest price for the item on Amazon)
-Text of a customer review of the item

The simplified feed looks like this:

<Item>
  <DetailPageURL>http://www.amazon.com/gp.......more stuff</DetailPageURL>
  <SmallImage>
    <URL>http://ec1.images.... more stuff</URL>
  </SmallImage>
  <ItemAttributes>
    <Title>Super Mario Bros.</Title>
  </ItemAttributes>
  <CustomerReviews>
    <Review>
      <Content>Bla Bla Bla. This is the greatest game ever.</Content>
    </Review>
  </CustomerReviews>
</Item>

What to do first? Let's open the XmlDocument:

String strURL = "http://webservices.amazon.com.....<see above>";
XmlDocument doc = new XmlDocument();
doc.Load(strURL);

//See if the response is valid - Amazon sends this info in a
//node of the XML tree:

XmlNodeList lstIsValid = doc.GetElementsByTagName("IsValid");
//This is nice because .NET automatically drills down a couple
//of levels to find the "IsValid" node tag

if (lstIsValid.Count > 0 && lstIsValid[0].InnerXml == "True")
{
  //InnerXml holds the actual data from the node
  //Ready to party on the data because the response is valid

  XmlNodeList lstItems = doc.GetElementsByTagName("Item");
  //Also automatically getting info a couple of levels
  //deep - sweet
  //Get a NodeList of the "Item" nodes - they hold our info

  if (lstItems.Count > 0)
  {
    //could use a foreach loop here to get them all
    XmlNode nItem = lstItems[0];
    //all children of the first "Item" node

    foreach (XmlNode nChild in nItem.ChildNodes)
    {
      //Clunky way to find a child node.
      //This is the best I could
      //come up with from the MSFT docs.
      //Anybody have a better way?
      if (nChild.Name == "DetailPageURL")  //string compare !
      {
        String strDetailURL = nChild.InnerXml; //our data !
      }
      else if (nChild.Name == "SmallImage")
      {
        //Now we have to look through the children of the
        //SmallImage node:
        foreach (XmlNode nURLImg in nChild.ChildNodes)
          if (nURLImg.Name == "URL")
          {
            String strImage = nURLImg.InnerXml;  //our data !
          }
      }
      else if (nChild.Name == "ItemAttributes")
      {
        //Look through each "ItemAttributes" to find the one
        //we want again
        foreach (XmlNode nIA in nChild.ChildNodes)
          if (nIA.Name == "Title")
          {
            String strTitle = nIA.InnerXml;  //our data !
          }
      }
      else if (nChild.Name == "OfferSummary")
      {
        //looking again
        foreach (XmlNode nOS in nChild.ChildNodes)
          if (nOS.Name == "LowestNewPrice")
            foreach (XmlNode nLNP in nOS.ChildNodes)
              if (nLNP.Name == "FormattedPrice")
              {
                String strPrice = nLNP.InnerXml;  //our data !
                break; //done looking here.
              }
      }
      else if (nChild.Name == "CustomerReviews")
      {
        foreach (XmlNode nCR in nChild.ChildNodes)
          if (nCR.Name == "Review")
            foreach (XmlNode nRev in nCR.ChildNodes)
              if (nRev.Name == "Content")
              {
                //Our review text!
                String strReview = nRev.InnerXml;
              }
      }
    }
  }
}

You get the idea. Do what you need to do with the data, but this is how to
get it out. I liked the fact that C# would “drill down” a couple of levels to
the relevant child nodes without me precisely specifying like I would have
had to do in XSLT. I did NOT like having to iterate through the nodes and do
string compares to find child nodes. Perhaps there is a better way, but it is
the only one I could find in a lot of searching.
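For what it's worth, there is one shortcut that cuts down on the string compares: XmlElement exposes a string indexer that returns the first child element with a given name, so the nested loops collapse to direct lookups. A sketch, with null checks omitted for brevity (each indexer returns null if the node is missing):

```csharp
XmlElement elItem = (XmlElement)lstItems[0];

String strDetailURL = elItem["DetailPageURL"].InnerText;
String strImage     = elItem["SmallImage"]["URL"].InnerText;
String strTitle     = elItem["ItemAttributes"]["Title"].InnerText;
String strPrice     = elItem["OfferSummary"]["LowestNewPrice"]["FormattedPrice"].InnerText;
String strReview    = elItem["CustomerReviews"]["Review"]["Content"].InnerText;
```

It still only grabs the first matching child at each level, so the loop version is the one to use if you need every Review rather than just the first.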

If you want to see the results of parsing the feed in a live site, check out
this games web site that offers game reviews, courtesy of AWS:



What with the big push to have XML adopted as a cross-everything data format, it seems like it should be fairly straightforward to parse an XML file in .NET.  For the most part, I would say that it is, but I ran across a situation recently that was a bit difficult to solve — mostly because of the lack of available information on the subject.
The problem is parsing an XML file that has namespaces embedded in the nodes.  Unusual, you say?

That’s what I thought until I came across the RSS iTunes feed from Apple.
It’s not totally clear to me why they want to embed namespaces in some of the nodes, but it is the way they do it, nonetheless.
RSS, as you probably know, is just a standard for formatting data delivered by content providers (e.g. Web services).  It has some basic, standard node names, but is also extensible so that a consumer of the data can extract the extra information if it is aware of it.
I won’t go into the details of RSS here, but basically, each item in the feed is sandwiched between <item> node tags (this is RSS 2.0), with associated <title>, <description> and other nodes.
In the case of the Apple feed, a number of the proprietary nodes are formatted with namespaces like so: <itms:artist>, <itms:artistLink>, etc.  I had never seen a node with a namespace before (don’t get around much, I guess), but it didn’t seem like it should be too hard to extract the data.  Boy, was I wrong !
Just finding out how to do this took hours of searching around on the web. I finally found a small example on a non-English web site or I would probably still be looking.  All the Microsoft examples of XML parsing I came across didn’t have any nodes with namespaces on them. PHP doesn’t have any trouble with the namespace qualified nodes, but C# is a very different story :)

I am going to present some small code samples to fix this problem that might save you a lot of trouble trying to figure out how to do the same thing. (I realize XSLT would be easier, but you might need to do it this way for some reason.)

So the Apple RSS feed (simplified) looks something like this:

<rss version="2.0" other stuff...>
  <channel>
    <item>
      <title>Title here</title>
      <itms:artist>Artist name</itms:artist>
      <itms:album>Album name</itms:album>
      <itms:releasedate>Release Date here</itms:releasedate>
      <itms:coverArt height="100" width="100">theImage.jpg</itms:coverArt>
    </item>

    ...More items here
  </channel>
</rss>


The nodes with the <itms: > namespace are the proprietary ones.

I’m just trying to get the “theImage.jpg” string from the <itms:coverArt> node.  Simple?  Let’s see…

Loading the document is easy:

XmlDocument doc = new XmlDocument();
doc.Load(strFeedURL);  //strFeedURL holds the address of the feed

Getting the item nodes is easy:
//RSS 2.0
XmlNodeList items = doc.SelectNodes("/rss/channel/item"); 

Now I have a collection of the “item” nodes.

Looping through them is easy:

foreach (XmlNode itm in items)
{
  try
  {
       //Here's the tricky part.
       //To get the data in the nodes with namespaces,
       //I have to jump through some hoops!

       XPathNavigator nav = itm.CreateNavigator();

       //Here I am setting up IXmlNamespaceResolver
       //to pick nodes that have a namespace of "itms"
       //Don't ask me to explain the syntax.

       IXmlNamespaceResolver nsResolverImg =
           nav.SelectSingleNode("//*[namespace::itms]");

       //Picking a node that is a "coverArt" node, in the itms namespace, with an
       //attribute of height=100

       XPathNavigator img =
           nav.SelectSingleNode("itms:coverArt[@height='100']", nsResolverImg);

       String strImage = img.InnerXml;  //retrieves "theImage.jpg"
  }
  catch (Exception ex)  {  }
}


Simple?  Yes, once you know the trick.  Intuitive it is definitely NOT.
What’s with all this //*[namespace::itms] stuff anyway?  Why can’t I just call this:
XPathNavigator img = nav.SelectSingleNode("itms:coverArt[@height='100']");
without all the IXmlNamespaceResolver stuff?  Isn’t that just as clear?  Well, apparently the designers at Microsoft didn’t think so.
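There is a somewhat more conventional route that avoids the //*[namespace::itms] incantation: register the prefix with an XmlNamespaceManager and pass that as the resolver. A sketch follows; the itms namespace URI must match whatever the feed's xmlns:itms attribute declares, so treat the one below as an assumption and copy yours from the actual feed:

```csharp
XmlNamespaceManager nsMgr = new XmlNamespaceManager(doc.NameTable);
nsMgr.AddNamespace("itms", "http://phobos.apple.com/rss/1.0/modules/itms/");

//Now SelectSingleNode understands the itms: prefix directly
XmlNode cover = doc.SelectSingleNode(
    "/rss/channel/item/itms:coverArt[@height='100']", nsMgr);
if (cover != null)
{
    String strImage = cover.InnerXml;  //"theImage.jpg"
}
```

The key point in either approach is the same: the XPath engine refuses to guess what a prefix means, so something has to map "itms" to its URI before the query runs.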

Hope this has helped you in solving a similar issue. If you run across any variations of this, please post them for the benefit of all.  I would especially like to see some details on how the “//*[” works with other scenarios in the SelectSingleNode method.

Microsoft guys: How about some better docs – with examples – on this stuff?



Having recently purchased a new Gateway desktop computer, I was pleased to learn it had Vista Ultimate loaded on it.   This is Microsoft's top-of-the-line OS, so I was expecting great things.
Having also heard that Vista really sucks up the RAM, I purchased way more than recommended — 4 GB.
After firing up the PC and booting into the OS, however, I was disappointed upon checking Task Manager and learning that Vista was only recognizing 3 GB of my RAM!  Bummer.
BIOS told me there were 4 GB present, so it must be a 32 bit Vista issue with the OS only seeing 3.
Being a programmer, I concluded that 32 bit Vista could only handle 3 GB of RAM.  (Seemed to make sense at the time but I started thinking, why should it be a problem for this cool new OS to see all the memory in my machine?)
Because I was at Tech Ed in Orlando this week, I decided to ask the friendly folks at the Microsoft Vista booth about my dilemma in losing the use of 1 GB of RAM.
The guy in the booth didn’t really understand why it wasn’t working and thought Vista might need some kind of /3GB entry in the boot.ini file.
 He looked around for awhile on Live Search and came up with a couple of articles on MSDN about modifying the boot.ini that looked a bit dated on the subject and not totally relevant.
In the end, he wasn’t much help at all.
I’m having a hard time believing I have to spring for a 64 bit OS just to get the use of all 4 GB of RAM (Couldn’t even NT 4 use more than that?)
Would love to hear from anyone with ideas on this.


This article is going to carry on some ideas from earlier posts. Those posts dealt with how to customize an Outlook Contact by adding a custom tab. If you missed the earlier information, you can catch up on it here: http://www.midniteblog.com/?p=6.

Now we are going to do something with the blank tab we added. Let's add an Active-X control to the tab. Assume you already have an Active-X control with a Progid of "MYCONTROL.MyControlCtrl.1" and it does some super special thing that no one else in the world can do, so you want to add it to your custom Outlook tab.

Easy enough. We are working with C# here, so add the following two using directives at the top of your Connect.cs file:

using Outlook = Microsoft.Office.Interop.Outlook;
using System.Reflection;

Add project references to Microsoft.Office.Interop.Outlook.dll and Microsoft.Vbe.Interop.Forms.dll.
Add the following 2 members to your Connect class:
Outlook.Application m_outlook;
Outlook.Inspectors m_inspectors;

Add the following to your OnConnection method:

m_outlook = application as Outlook.Application;
m_inspectors = m_outlook.Inspectors;
m_inspectors.NewInspector +=
    new Outlook.InspectorsEvents_NewInspectorEventHandler(
        Inspectors_NewInspector);

Here we are telling Outlook to notify us when a “NewInspector” event fires. This happens basically every time someone double-clicks to open up an Outlook Contact.

Add the following class method to Connect class:

public void Inspectors_NewInspector(Outlook.Inspector inspector)
{
    if (inspector != null && inspector.CurrentItem is Outlook.ContactItem)
    {
        object obTemp = ((Outlook.Pages)(inspector.ModifiedFormPages)).Add("My Page");
        Microsoft.Vbe.Interop.Forms.UserForm userform =
            (Microsoft.Vbe.Interop.Forms.UserForm)obTemp;

        Microsoft.Vbe.Interop.Forms.Control mc;
        try
        {
            // See if our control already exists on the page.
            mc = userform.Controls._GetItemByName("myControl");
        }
        catch (Exception)
        {
            mc = null;
        }

        if (null == mc)
        {
            mc = (Microsoft.Vbe.Interop.Forms.Control)userform.Controls.Add(
                "MYCONTROL.MyControlCtrl.1", "myControl", true);
        }
        inspector.ShowFormPage("My Page");
    }
}

So in this method we are adding our custom “My Page” to the Outlook inspector window (as we did before), and now we are asking the “Controls” collection of the userform to add an ActiveX control by ProgID (“MYCONTROL.MyControlCtrl.1”). This will CoCreate the ActiveX control and parent it to the Outlook custom tab! There’s your custom control hosted in Outlook – cool.
Note that if you do not check for an existing control with _GetItemByName (e.g. if this is not a new contact), you will get an additional control created every time you open the contact.
This seems very counterintuitive, but apparently Outlook persists the state of each contact page, and when you “Add” a page on a subsequent open of the contact, it loads any controls that were there before.

Now let’s suppose you want to tell the ActiveX control to do something. Let’s further suppose that the control exposes a method called “ShowMyData” that takes a BSTR [in] parameter. Continuing on with the example, add the following just before the ShowFormPage call we added above:

Type ty = mc.GetType();
object[] paramList = new object[1];
paramList[0] = "My Data";
ty.InvokeMember("ShowMyData",
    BindingFlags.Default | BindingFlags.InvokeMethod,
    null, mc, paramList);

Here we are using reflection to bind to the control at run time and then invoking on it. The exposed method is called “ShowMyData” and the parameter string is hardcoded here to “My Data.” When the ActiveX control gets the string, it can do with it what it likes. Amazing that with the big push to .NET, we still have to use interop to customize Office, but if you want to host a legacy ActiveX control on an Outlook custom tab, that’s the way to do it.
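If you happen to be writing the control side too, the method being invoked would be declared on the control’s dispatch interface along these lines. This is a hypothetical IDL fragment matching the ShowMyData/BSTR example above, not a real control:

```
// Hypothetical IDL: dispatch method on the example control's
// interface; the name and parameter mirror the text above.
[id(1), helpstring("method ShowMyData")]
HRESULT ShowMyData([in] BSTR bstrData);
```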


The target machine is a clean install of Windows XP Tablet PC Edition that never had any development tools on it. After doing an XCopy deploy of the .exe and associated .dlls, trying to run the application produces this error message box:
“The application failed to initialize properly (0xc0150002). Click on OK to terminate the application.”
From the Microsoft posting “How to: Deploy using XCopy” (http://msdn2.microsoft.com/en-us/library/ms235291(VS.80).aspx), you might think that running vcredist_x86.exe (which comes with the Visual Studio setup) would solve the problem. It doesn’t.
Going to try the dependency walker tool to see if I can determine what else might be needed.

It turns out that the dependency walker (“Depends”) tool was very informative and pointed out missing dependencies msvcr80.dll, msvcp80.dll and winmm.dll. However, that was only half of the story.
My .exe still wouldn’t run – it kept giving the same “The application failed to initialize properly (0xc0150002).” error. Hmmmm.

Time to pull out the file system monitor, otherwise known as Filemon (http://www.microsoft.com/technet/sysinternals/FileAndDisk/Filemon.mspx). This is a truly amazing tool that monitors all file system activity. It has saved my rear more than once.
After working with it for a while, I kept noticing a NOT FOUND result in Filemon when csrss.exe was trying to access C:\WINDOWS\WinSxS\Policies\x86_policy.8.0.Microsoft.VC80.DebugCRT…. Suddenly a light bulb came on: Policies…Debug…Hmmm. Sure enough, I recompiled in Release and redeployed the app, and now it runs fine on the target machine.
Looks like there are policies that need to be set up in order to run debug code on a machine.
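For reference, the dependency that kept failing comes from the manifest Visual Studio embeds in a debug build. It contains a section along these lines (the version number here is illustrative and will match whatever VC80 toolset compiled the binary):

```xml
<!-- Fragment of the manifest embedded in a VC80 *debug* build.
     A release build references Microsoft.VC80.CRT instead, which is
     what vcredist_x86.exe actually installs. -->
<dependency>
  <dependentAssembly>
    <assemblyIdentity type="win32"
                      name="Microsoft.VC80.DebugCRT"
                      version="8.0.50727.762"
                      processorArchitecture="x86"
                      publicKeyToken="1fc8b3b9a1e18e3b" />
  </dependentAssembly>
</dependency>
```

That also explains why vcredist_x86.exe didn’t help earlier: it deploys the release CRT assemblies, not the debug ones.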
Live and learn.