ObjectARX Wizard for Visual Studio 2010

If you’re an ObjectARX programmer, you may still be using Visual Studio 2008 because you’ve been told that you must. It’s true that you need Visual Studio 2008 installed to compile native ObjectARX code, but with Daffodil you can do all your work in the Visual Studio 2010 IDE.

Until now, the one thing that didn’t work in VS 2010 was the ObjectARX Wizard. I’m happy to report that the latest wizard seems to work fine in both VS 2008 and VS 2010. Thanks to Alexander Rivilis for revealing the secret URL:
http://images.autodesk.com/adsk/files/objectarx_2012_wizards.zip

Simplifying the problem: plain vanilla configuration

One of the stages in simplifying an AutoCAD software problem is to rule out (or in) third party add-ons as the cause. To do that, you’ll need a “plain vanilla” test bed configuration that consists of only the out-of-the-box components with no add-ons loaded.

The best way to test in a plain vanilla AutoCAD configuration is to install it on a virtual machine. I use VMware Workstation for this purpose (Microsoft’s Windows Virtual PC is another popular choice). Virtual machines are completely separate from the host system, so they provide an ideal test bed for isolating software problems. Unfortunately, virtual machines take time and resources to build and manage, so they won’t work for everyone.

For most common cases, it’s sufficient to start AutoCAD with a “vanilla” profile that loads only the standard AutoCAD menu and no third party add-ons, ideally by using the /p <profile_name> argument in a shortcut target (for example, something like "C:\Program Files\Autodesk\AutoCAD\acad.exe" /p Vanilla, with the path and profile name adjusted to suit your installation). This can be a bit tricky, though: some third party add-ons are determined to load, and it’s difficult to stop them.

Lisp add-ons can be thwarted by ensuring that your “vanilla” profile uses a clean path (i.e. a folder structure containing only unmodified out-of-the-box files) for all support files. In addition, you must ensure that only the standard unmodified menu files are loaded. Finally, you can set the DEMANDLOAD system variable to zero to prevent object enablers and other demand-loaded applications from loading. Note that AutoCAD must be closed and restarted after any changes to see the effect.
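
Here’s a minimal AutoLISP sketch that can help verify the vanilla state of the current session (the command name c:vanillacheck is my own invention, and the checks shown are illustrative rather than exhaustive):

  ;; Print the current DEMANDLOAD value and the support file search path
  ;; so you can verify that the "vanilla" profile is actually in effect.
  (defun c:vanillacheck ()
    (princ (strcat "\nDEMANDLOAD = " (itoa (getvar "DEMANDLOAD"))))
    (princ (strcat "\nSupport path: " (getenv "ACAD")))
    (princ)
  )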

One last tip: you can enter (arx) at the AutoCAD command prompt to see a list of loaded ARX modules. Some ARX modules load automatically at startup; those can be disabled by tweaking the registry, but I don’t recommend that unless you really know what you’re doing.
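
When many modules are loaded, the raw list returned by (arx) is hard to read at the command prompt. A tiny wrapper can print one module per line (again, the command name is just an illustration):

  ;; Print each loaded ObjectARX module on its own line.
  (defun c:listarx ()
    (foreach app (arx) (princ (strcat "\n" app)))
    (princ)
  )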

In the end, the goal is to determine whether the problem can be reproduced on a “plain vanilla” configuration. If the problem goes away, you can start adding things back one at a time to see when the problem resurfaces. If the problem still occurs on a clean configuration, you can focus attention on other possible causes.

Simplifying the problem: divide and conquer

When a customer reports a drawing-specific issue, the next stage in simplifying the problem is to eliminate all drawing content that has no bearing on the problem. Unless the drawing object that causes the problem is immediately obvious, I use the divide-and-conquer method to zero in on the cause.

To divide and conquer an AutoCAD drawing, I start by getting a quick feel for what is in the drawing and how it is structured (I use Periscope for this; in fact I originally wrote Periscope for this purpose). The general approach is essentially a binary search: erase an arbitrary half of the drawing, then check whether the problem still exists in the remaining half. Doing this reliably requires some care.

If the drawing has paper space layouts, I start by erasing all of them, leaving only model space. I then save the file under a new name, close AutoCAD, restart, open the new file, and test to see whether the problem still exists. If the problem went away, I go back to the original file and start over, this time erasing everything in model space, then repeat the saveas/close/reopen sequence. Once I pin the problem down to a specific layout (or layouts), I start working on individual drawing entities.
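
Deleting layouts one at a time gets tedious in heavily populated drawings. A short AutoLISP sketch can remove them all in one pass using the stock LAYOUT command (the command name is my own):

  ;; Delete paper space layouts one by one. Note that AutoCAD insists on
  ;; keeping at least one layout, so one empty layout tab may remain.
  (defun c:dellayouts ()
    (foreach name (layoutlist)
      (command "_.LAYOUT" "_Delete" name)
    )
    (princ)
  )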

A typical iteration consists of erasing roughly half the drawing entities, then checking whether the problem persists. If the problem goes away, I back up and erase the other half; otherwise I continue halving what remains. Entities can be erased either by direct selection or by using QSELECT. If the drawing consists of entities nested inside blocks, I first try erasing the block references; if the problem goes away, I back up and use BEDIT to erase part of the block content.
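
Here’s a minimal AutoLISP sketch of one such iteration that erases the first half of the entities on the current tab (which half gets erased is arbitrary, and the command name is my own invention):

  ;; Erase (roughly) the first half of the entities on the current tab.
  (defun c:erasehalf (/ ss half i)
    ;; Select all entities on the current tab (model space or a layout)
    (setq ss (ssget "_X" (list (cons 410 (getvar "CTAB")))))
    (if ss
      (progn
        (setq half (/ (sslength ss) 2) i 0)
        (repeat half
          (entdel (ssname ss i))
          (setq i (1+ i))
        )
        (princ (strcat "\nErased " (itoa half) " of " (itoa (sslength ss)) " entities."))
      )
    )
    (princ)
  )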

While going through these iterations, I sprinkle in a periodic purge (using SuperPurge, of course) to remove hidden and invisible objects that become unreferenced as visible entities are erased. I continue this process until I have a drawing file cleaned of everything except the bare minimum needed to reproduce the problem. Often, but not always, that means a drawing file with a single visible entity in it.
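
For readers without SuperPurge, the stock -PURGE command can be scripted to do a basic version of this cleanup. Running it more than once catches objects that only become unreferenced after an earlier pass (a sketch using standard command options):

  ;; Purge all unreferenced named objects without prompting for each one.
  ;; Repeat a few passes to catch newly unreferenced items.
  (defun c:purgeall ()
    (repeat 3 (command "_.-PURGE" "_A" "*" "_N"))
    (princ)
  )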

The key to success is being methodical and saving with incremented filenames so that it’s easy to back up a step and start over when the problem disappears. Closing and restarting AutoCAD between iterations helps ensure that the test is not influenced by erased objects remaining in memory. With experience and a bit of skill in guessing where to look, I’ve learned that almost all drawing-specific problems can be narrowed down to the bare minimum in 5 to 10 iterations.
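
To keep the incremented filenames consistent, a small helper can pick the next unused number automatically (the "testcase" base name and numbering scheme are my own illustration, and this sketch assumes the default file format is acceptable):

  ;; Save the current drawing as testcase1.dwg, testcase2.dwg, ... in the
  ;; drawing's folder, picking the next unused number.
  (defun c:saveinc (/ base n name)
    (setq base (strcat (getvar "DWGPREFIX") "testcase") n 1)
    (while (findfile (strcat base (itoa n) ".dwg"))
      (setq n (1+ n))
    )
    (setq name (strcat base (itoa n) ".dwg"))
    (command "_.SAVEAS" "" name)
    (princ (strcat "\nSaved as " name))
    (princ)
  )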

The art of simplifying the problem

One of the most important skills in resolving technical problems is not problem solving, but problem definition. Stripping a problem down to its essence often makes the solution obvious. I think this is generally true, but it is especially so in my experience with software tech support and tracking down software bugs.

The first stage in problem simplification is to document a set of steps that consistently reproduces the problem. The steps must be detailed enough that someone else can use them to reproduce the problem, and they must include a description of exactly what the problem is. Often this is the most difficult step, either because the problem doesn’t happen consistently, or because the problem description lacks detail.

The second stage is to try eliminating unnecessary steps, with the goal of determining the bare minimum needed to reproduce the problem. If the problem is drawing-specific, this stage includes stripping everything out of the drawing except the bare minimum needed. This is often very time-consuming, but it is almost always a worthwhile investment because it can eliminate a lot of potential dead ends in tracking down the ultimate cause. Often, this stage requires some trial and error.

The third stage is determining the exact cause and source of the problem. In my experience, even problems that at first appear very complicated can almost always be boiled down to just a few steps with a minimal amount of data.

Finally, in stage four the problem has to be solved, of course!

In future posts, I’ll share some tips and techniques that I use to simplify software problems.

Visual Studio 2010 Migration Post Mortem

I have more or less completed the task of migrating my internal projects to Visual Studio 2010. Native C++ programmers are pretty low on the totem pole of programming coolness these days, but Microsoft did throw us a few bones. At this point I have to say I’m ambivalent about VS 2010 overall, but I really like some of the architectural changes and I’m excited about the future.

The Good

The new IntelliSense database is a huge improvement over the old .ncb file. I’ve had a few hiccups, but in general nothing serious. It still locks up the IDE occasionally, but not nearly as often or for as long as previous versions. In previous versions I had to delete corrupted .ncb files all the time; those days appear to be gone in VS 2010.

Despite some shortcomings and the extra work required to make it usable, I love the new (to C++ projects, anyway) MSBuild system and the native multi-targeting feature that came with it. The MSBuild system is completely customizable, flexible, consistent, and standardized (unfortunately, all the documentation assumes you’re already familiar with it, so it’s very daunting at first). MSBuild and native multi-targeting were the main reasons I decided to move to VS 2010.

The switch away from the single inherited property sheet to the unlimited stack of property sheets is a very nice change, albeit a migration headache. In previous versions, these had to be maintained in a text editor; now they can be managed completely within the IDE using the familiar property page interface. This ranks as my second favorite improvement behind native multi-targeting.

VS 2010 still gets confused when you use the same source files in multiple projects, but less so than previous versions. In previous versions, source code was always parsed in the context of the first listed project in the solution that uses the file. In VS 2010, it is displayed in the context of the project you opened the file from. IMO it should switch with the active project, but at least you can now close a file and reopen it from the other project to view it in that project’s context. This affects the display of IntelliSense errors and the graying out of code hidden by preprocessor macros.

The Bad

There is no way to permanently turn off the display of IntelliSense errors in the error list. You have to wait until one is displayed, then right-click in the header area to turn them off. I guess in simple projects it might be useful to leave them on, but in all my projects they are numerous and useless.

The build system gets confused by multiple projects in a solution with different sets of configurations that don’t match the solution configurations one-to-one. Previous versions did too to some extent, but you could always fix these problems by setting build dependencies correctly in the solution file. In VS 2010, I have some projects that simply will not build correctly in some scenarios no matter how the dependencies are set. This problem has several manifestations: batch builds that build projects in the wrong order; rebuild-all runs that clean a project a second time after it has already been rebuilt (hence it is not available when a referencing project needs it); and batch builds that skip referenced projects because, although selected in the batch, they are not part of the active solution configuration.

Utility and MakeFile projects cannot have output files set, which means e.g. deployment projects that consume their output will end up empty-handed. In addition, they can delete files indiscriminately when cleaning. I found a partial workaround for these problems by changing the project type to Application and giving each project a unique intermediate directory, but that requires a bit of extra work.

Migrating complex projects is far from automatic. A lot has changed, and while VS 2010 does a decent job of converting projects from earlier versions, every single one of my migrated projects required moderate to massive manual work afterwards. For example, the migration left odd, incomplete project configurations named “Template” in the converted project files that had to be removed manually in a text editor. Build rules required some manual work in some cases. I also had to convert all my Utility and MakeFile projects to Application projects, which required some manual rewiring.

Visual Studio has gotten larger and slower. It takes longer to start than VS 2008, it’s slower to start a debuggee, and some common actions like going to a symbol definition occasionally and unpredictably result in interminable minutes-long delays during which the UI is completely unresponsive.

Visual Studio caches some open files. In some cases, this means that files cannot be deleted until Visual Studio is closed. I’ve also seen cases where the file was not held open, but changes to it were ignored until Visual Studio was closed and restarted. That one resulted in some hair pulling until I figured out that I have to close and restart Visual Studio whenever I modify certain MSBuild-related files (most notably, *.props files).

The Ugly

Lockups. I’ve had several cases where VS 2010 locked up so tight that even Task Manager couldn’t get a word in edgewise, and I ultimately had to hard reset the computer. Previous versions lock up and crash as well, but I can almost always clear the cobwebs by killing the process. In VS 2010, there are several background worker processes involved as well, and things can get a bit uglier when those background processes get in on the action.

I have never been able to single-step a 32-bit debuggee on my 64-bit Windows 7 machine. I was hoping Visual Studio 2010 would fix that, but alas, it’s still broken.

Finally, there is one bug that drives me absolutely nuts. It has dogged Visual Studio since forever, and I’m sorry to say that Visual Studio 2010 still has it. The bug is that certain actions cause every merge module and deployment project to be expanded in Solution Explorer. I meticulously build a Solution Explorer sand castle by setting the state of every node in my complex and sometimes deeply nested projects. Then suddenly and without warning, something as simple as going to a symbol declaration causes Visual Studio to crawl from one end of the solution to the other, exploding every carefully collapsed deployment project node. It is such a helpless feeling, like watching the waves slowly and tauntingly wash away your sand castle, all the while knowing that you have no choice but to wait patiently and then rebuild it when the storm passes. Why, oh why, didn’t Microsoft finally fix this?