Labview

Labview code

I've been programming primarily in Labview and Matlab for seven years now. It seems that Labview is getting some recognition at CMU as a suitable environment for novices and non-programmers (Lego robotics, etc.), but it is by no means limited to such uses. I believe Labview was a good choice for implementing Micron, and would not change my mind if I had to choose again.

Brian Becker was a PhD student who worked on Micron, and I got into a discussion with him on his blog about the merits of Labview. Below is my reply to his rant, followed by his very charitable response:


Rob MacLachlan:

As the person who caused Brian to have to deal with Labview, I can say that he is indeed an excellent programmer in C++, and did a credible job in Labview. I myself have an unusual background which surely contributes to my peculiar opinions about programming languages and environments. I worked for 10 years at CMU in the computer science department developing compilers for Common Lisp and Dylan and working on those language standards. For various reasons I had become somewhat burned out on the whole programming language and environment thing, and worked to retread myself as an electrical engineer with a particular interest in analog and power electronics. I switched to working in robotics because it got me closer to designing hardware. At first I just used C++, then added Matlab to the mix. Of course I had the usual problems with cryptic errors on dereferencing NULL or deallocated pointers, which was quite annoying given my background with garbage-collected languages. Even so, at the same time I was also programming PIC microcontrollers, which made gcc and gdb seem quite luxurious.

When I started working on the Micron project there wasn’t a lot of code, what there was needed to be rewritten, and my boss didn’t care how I got it done.

One of the drivers was that we had some need for GUIs, and one of the things I had learned over the years was that I hated writing GUIs, at least with the tools I had been using. Also, the lab had been pretty much a Windows shop, while I had only ever programmed under Unix. I did make a first pass modifying existing code using the win32 APIs and GUI builder, and I did hate it. The win32 APIs were every bit as low-level and bit-twiddling as the Unix ones (and just as error-prone), but also just plain different. I didn't want to have to learn to program that way all over again.

I had heard about Labview being something that people used in research data acquisition applications, and decided to check it out. It did meet the requirements of easy GUI creation and also (importantly) hiding the Windows API so that I didn't have to learn it. At first it was just a signal processing app that ran at 100 samples/sec, with data blocking that generated real-time visualization at a lower rate. This was indeed a fairly classic Labview application, well within its sweet spot. Then we decided to use this sensor system inside our feedback loop, requiring 1000 samples/sec. This required using the Labview real-time module with deployment to a separate RT target. That was a somewhat sharper-edged environment with worse debugging support, but it did get the job done (and RT has continued to improve in usability).

As a language designer and implementer, and someone who also worked on a non-text-based environment for a text-based language (the Gwydion project), my observation is that Labview happens to differ from traditional sequential imperative languages and text-based environments in ways that have important benefits:

One: it is a largely functional language (discouraging use of side-effects). It is quite possible to have functional text-based languages, but explicitly representing the data dependencies as wires does reveal an important aspect of parallelism. My guess is that Labview's functional nature is a historical accident driven by the choice of the graphical representation, but it's certainly coming in handy in the world of multi-core processing. Functional code is also safer than code relying on implicit side-effects and sequencing, but it requires trusting the compiler to avoid unnecessary copies, which can be a problem when the compiler lets you down.
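To put the idea in textual form, here is a toy C++ sketch (my own illustration, not anything from Micron) of the same property: two pure computations with no data dependency between them, the equivalent of two nodes with no wire between them, can safely be evaluated in parallel without any further analysis.

    #include <algorithm>
    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Pure functions: the output depends only on the inputs, no side effects.
    static double mean(const std::vector<double>& x) {
        return std::accumulate(x.begin(), x.end(), 0.0) / x.size();
    }

    static double peak(const std::vector<double>& x) {
        return *std::max_element(x.begin(), x.end());
    }

    int main() {
        std::vector<double> samples{1.0, 4.0, 2.0, 8.0, 5.0};

        // No data dependency between mean() and peak() (no "wire" between
        // them), so a scheduler is free to evaluate them concurrently.
        auto m = std::async(std::launch::async, mean, std::cref(samples));
        auto p = std::async(std::launch::async, peak, std::cref(samples));

        std::cout << "mean=" << m.get() << " peak=" << p.get() << "\n";
    }

In a sequential imperative program you would have to spot that independence yourself; with explicit dataflow it is visible in the structure of the code.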

Two: code is rich. It has semantic associations that you can't see right away. This is what enables the fluid evolution of Labview GUIs. The GUI is not some annotation "on the side" of the code that can become inconsistent. Labview knows which terminal is associated with which GUI widget. Because the code is rich, the association between the GUI (front panel) and the code (block diagram) can be enforced to be semantically consistent. You could make code that looked like text but was rich (that was what we were working on in Gwydion), but once you take the hit of abandoning the idea that the text files are definitive (which does make big problems for source control), why limit yourself to stuff that looks like text?

In summary, being functional is (mostly) good, and is synergistic with the graphical data-flow representation. Rich code is (mostly) good, and enables graphical programming. Neither of these deep semantic distinctions of Labview is equivalent to the observation that the language is graphical.

As for Brian's complaints about wires, spatial dependencies, and real estate: yes, I do spend a lot of time deciding how to lay things out with a logical flow, rearranging, getting structures that are similar to look similar, and so on. It's very much like an electronic schematic in that way. That is a hit which is much less severe in a text-based language: in Labview there are so many semantically insignificant degrees of freedom that you can spend a lot of time optimizing them. It's hard to say whether that effort is worthwhile overall in comparison to a text-based language, but if you are programming in Labview, then I think it's worth being patient with that sort of thing. It pays off down the line in code that is easier to understand, IMO often easier to understand than a text program.

It's definitely a weakness of Labview that numerical expressions look so different from standard linear infix notation. Whether this is better or worse in an absolute sense is hard to say, but FORTRAN did try to translate formulas as naturally as it could, given character-set limitations. You can use infix notation in Labview with formula nodes and expression nodes. I use expression nodes when I have a unary expression, but mostly avoid formula nodes. For one thing, the formula language isn't vectorized.

So far as arrays go, even given the notational awkwardness of the graphical representation, Labview is way better than C++ because it is vectorized, rather like MATLAB. You can get some of those effects in C++ by overloading, but it's already there in Labview, and you don't have to worry about the buffer management and in-place optimizations. In MATLAB you can write some beautifully concise and cryptic array expressions. Translating MATLAB code into Labview is a nontrivial operation, which I have done manually a few times. Labview has a "MathScript" feature which allows embedding a MATLAB-like subset into Labview. I haven't tried this recently because there is an extra fee to use it on real-time targets. I'm not entirely optimistic that the results would be as efficient as hand-written Labview code.
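As a rough sketch of what I mean by getting "some of those effects" via overloading (this is illustrative C++ of my own, not code from our system), the overloads below buy you MATLAB-ish elementwise expressions, but you still own the temporary buffers that Labview manages for you:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    using Vec = std::vector<double>;

    // Elementwise add; every operation allocates a fresh buffer, which is
    // exactly the copy/in-place management Labview's compiler handles for you.
    Vec operator+(const Vec& a, const Vec& b) {
        Vec out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i) out[i] = a[i] + b[i];
        return out;
    }

    // Scalar-times-vector, also elementwise.
    Vec operator*(double k, const Vec& a) {
        Vec out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i) out[i] = k * a[i];
        return out;
    }

    int main() {
        Vec x{1.0, 2.0, 3.0}, y{0.5, 0.5, 0.5};
        Vec z = 2.0 * x + y;                       // reads like MATLAB: 2*x + y
        for (double v : z) std::cout << v << " ";  // prints: 2.5 4.5 6.5
        std::cout << "\n";
    }

In a real codebase you would wrap the vector in your own array type and worry about temporaries; in Labview the elementwise semantics and the buffer reuse come for free.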

Some of the cut-and-paste difficulties are pretty intrinsic to the graphical code model, I think. But there are advantages of rich code besides the GUI integration. I like commenting in Labview, and I consider it a real strength that I can embed images in the block diagram. I often put in scans of pencil-and-paper diagrams that I draw to understand what I am doing. In text-based languages I ended up losing these or throwing them away.

Overall, I think that robotics research is an application that does fall into Labview's sweet spot. The biggest advantage of Labview for robotics is that in many cases (where the rate of significant change is less than a few Hz) you can directly visualize the relevant time-variation, and where the change is faster, you can still visualize it in real time using graphs that update every second or so. In my experience doing robotics under Unix, what I ended up doing was debugging in batch mode, writing out trace files and then reading them in emacs or visualizing them in MATLAB. But batch visualization isn't merely slower than seeing the change in real time, it is qualitatively different, and IMO often inferior. When the system is running in Labview, you can mentally correlate in real time the displayed state of the software with what you can see with your eyes and hear with your ears. The ease of making animated graphical GUIs is crucial here.
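For contrast, here is a toy C++ sketch (hypothetical, not Micron code) of the batch pattern I used to fall back on under Unix: log state to a trace file during the run, then go plot it afterwards in MATLAB or whatever.

    #include <cstdio>

    int main() {
        // Hypothetical trace file; in practice one file per experiment run.
        std::FILE* trace = std::fopen("run_trace.csv", "w");
        if (!trace) return 1;
        std::fprintf(trace, "t,position,command\n");

        double position = 0.0;
        for (int i = 0; i < 1000; ++i) {
            double t = i * 0.001;               // stand-in for a 1 kHz control loop
            double command = 0.5 - position;    // toy proportional controller
            position += 0.01 * command;         // toy plant response
            std::fprintf(trace, "%g,%g,%g\n", t, position, command);
        }
        std::fclose(trace);
        // Only after the run ends can the data be plotted offline,
        // which is the qualitative difference from watching it live.
    }

The run has to finish before you can look at anything, which is exactly the two-step loop that live Labview graphs eliminate.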


Brian C. Becker says:
December 11, 2012 at 10:45 PM

Thanks Rob, for chiming in with the insightful post! The functional aspect of LabVIEW and the intertwined nature of data flow with GUI are definite strengths of LabVIEW, and the number of really cool real-time visualizations you can do very simply is quite amazing when you come right down to it. Replicating such sophisticated plotting tools yourself in real time (as opposed to dumping data out to analyze offline in Matlab, which is certainly a more tedious two-step process, although I'd argue what you can do with the data once in Matlab is more powerful, at the cost of often significantly more initial script-writing time) is certainly a huge kudos to the LabVIEW platform. Plus their data acquisition with NI hardware is, of course, quite nice compared to writing your own drivers ;)

And gosh, yes, the raw Windows API is terrible for developing anything more complicated than, well, hello world. Perhaps the best thing I took away from LabVIEW was how bullet-proof the final system is. Sure, there are random connection issues sometimes and you need to know to close a particular subVI before running the system or it will stutter, but I do have to admit that as far as depending on LabVIEW software goes, I never needed to worry about hunting down double frees or random segmentation faults. When done right, LabVIEW is incredibly professional – which made it an easy-to-use, robust platform that was fairly painless to integrate with – as long as I didn't need to modify/add to any of the underlying LabVIEW code itself :-D