A quick note on Piggy patterns and comparison with XPath

XPath (1, 2, 3) is a language for finding nodes in an XML tree, and it has a long history in AST search. Maletic et al. (4) is probably the first paper on applying XPath to ASTs, using Antlr. The work was further researched and is now part of the OSS world (5). In 2014, Parr added an XPath API to Antlr releases to search Antlr-generated ASTs (6). Src-d uses XPath and an engine for "universal" ASTs (7).

Piggy patterns are similar to XPath expressions, and there is a simple grep function in the Piggy Tool. Beyond the superficial difference in syntax, Piggy patterns differ from XPath patterns in two ways.

First, the Piggy and XPath search engines "select" different things. XPath patterns select a list of nodes or attributes in the frontier of a tree. Piggy patterns find a partial subtree of nodes, selecting all nodes found in the pattern. Second, Piggy extends the notion of an expression into a pass, which is an ordered list of expressions to match in the AST. After matching and selecting nodes, further nodes in the tree are considered. However, unlike XPath, the pattern matching engine eliminates further matches from the root of the matching sub-tree. In this regard, Piggy patterns are more like "visitor patterns", but they should be extended to "listener patterns" (8) so they can be used in symbol table construction.

XPath and Piggy pattern syntax comparison


XPath: bookstore
Piggy: ( bookstore )
Description: Selects all nodes with the name "bookstore".

XPath: /bookstore
Piggy: no equivalent; you must use an explicit top-level node name with a Kleene star
Description: Selects the root element bookstore.

XPath: //book
Piggy: (* book *)
Description: Selects all book elements no matter where they are in the document.

XPath: bookstore//book
Piggy: ( bookstore (* book *) )
Description: Selects all book elements that are descendants of the bookstore element, no matter where they are under the bookstore element.

XPath: //@lang
Piggy: (* lang=* *)
Description: XPath selects all attributes that are named lang; Piggy selects the NODES of the AST that have a lang attribute with any value.

Note: Piggy cannot select attributes of an AST, only the nodes themselves. However, it is possible to find nodes with specific attribute values (see below), or nodes missing a particular attribute.

XPath: //title[@lang]
Piggy: (* title lang=* *)
Description: Selects all the title elements that have an attribute named lang.

XPath: //title[@lang='en']
Piggy: (* title lang="en" *)
Description: Selects all the title elements that have a lang attribute with a value of "en".

XPath: /bookstore/book[price>35.00]/title
Piggy: no equivalent; there is no numeric comparison (>) in Piggy, everything is a string, and expressions are regex patterns
Description: Selects all the title elements of the book elements of the bookstore element that have a price element with a value greater than 35.00.

XPath: //book/title | //book/price
Piggy: (% (* book (title) *) | (* book (price) *) %)
Description: Selects all the title AND price elements of all book elements.

A few other notes. I've found the language used to describe "nodes" confusing in XPath. According to the XPath spec (9), tutorials, and the Wikipedia page on XPath, XPath is a notation for selecting "nodes", including "attribute nodes". But be clear: the XML spec (10) never uses the word "node" for elements, let alone attributes. An attribute, in the XPath sense of the term, is an XML element that contains character content; a node, in the XPath world, is an XML element that contains other XML elements in its content section. In Piggy, attributes are not "nodes". A "node" is an aggregate that can contain "attributes", which is more akin to XML attributes.

XPath has a notation for directly addressing parent and sibling "axes". Piggy does not. The reason is that Piggy ties the output and code content to the AST structure, in order, as a tree. Introducing parent accessor functions would complicate what it would mean to insert code or text during the traversal of the AST for code generation.


  1. XPath, 705-747-6610, accessed Jan 12, 2019.
  2. XPath Syntax, parer, accessed Jan 12, 2019.
  3. https://www.data2type.de/en/xml-xslt-xslfo/xpath/, accessed Jan 12, 2019.
  4. Maletic, Jonathan I., Michael L. Collard, and Andrian Marcus. "Source code files as structured documents." Program Comprehension, 2002. Proceedings of the 10th International Workshop on. IEEE, 2002.
  5. https://www.srcml.org/, accessed Jan 12, 2019.
  6. Parse Tree Matching and XPath, jailhouse, accessed Jan 12, 2019.
  7. Src-d, https://github.com/src-d/engine/blob/master/README.md, accessed Jan 12, 2019.
  8. Antlr4 – Visitor vs Listener Pattern, https://saumitra.me/blog/antlr4-visitor-vs-listener-pattern/, accessed Jan 12, 2019.
  9. https://www.w3.org/TR/1999/REC-xpath-19991116/, accessed Jan 12, 2019.
  10. sinecural, accessed Jan 12, 2019.




Piggy as a build tool

One further refinement to Piggy is required before I make a release of the tool: a wrapper to run the tool under MSBuild. Like the Antlr4BuildTasks wrapper I forked from Antlr4cs, I want Piggy to work seamlessly during the build of a C# project that uses a native library. My plan is for C# projects to contain the Piggy templates required to generate the C# declarations of the interface needed by the project. The user would supply a template for Piggy and a C++ file for the Clang compiler. During a build, the Piggy tool would run and produce C# output in the build directory, which is then compiled and linked with the project. So, instead of writing the DllImport decls to work with a native library, users just indicate what they want and let Piggy do the rest. The build tool would be released to NuGet, and would contain the Clang serializer, the Piggy tool, the assembly wrapper for the Clang serializer and Piggy, and all the build rules.

Posted in Tip

Calling Roslyn from Net Framework and Net Core

It never ceases to amaze me how people can write a huge API and never bother to document how to use it. But it's been that way for as long as I can remember, going back 35 years. In my latest adventures, I've been trying to compile, link, and run C# code dynamically using Roslyn for Piggy. If you've ever used Roslyn in C#, you've probably discovered that it can be such a pain in the arse to use: Microsoft provides doc for the API and does give some tutorials, but I can't find a simple example for compiling, linking, and running C#. I don't need to know all the details yet, just a starting-point framework. Unfortunately, the solution is quite sensitive to whether you use Net Core or Net Framework.
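For what it's worth, here is a minimal sketch of the kind of starting point I had in mind, using the Microsoft.CodeAnalysis.CSharp NuGet package to compile and run a snippet entirely in memory. The class and method names in the snippet are just illustrative, and the set of MetadataReferences shown is the bare minimum; the exact references needed are precisely where Net Framework and Net Core differ.

```csharp
using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class RoslynDemo
{
    static void Main()
    {
        var source = @"
            public static class Greeter
            {
                public static string Greet() { return ""hello from Roslyn""; }
            }";

        // Parse the source into a syntax tree.
        var tree = CSharpSyntaxTree.ParseText(source);

        // On Net Framework, mscorlib is enough for this snippet; on Net Core
        // you generally need additional references such as System.Runtime.
        var refs = new[]
        {
            MetadataReference.CreateFromFile(typeof(object).Assembly.Location)
        };

        var compilation = CSharpCompilation.Create(
            "InMemoryAssembly",
            new[] { tree },
            refs,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using (var ms = new MemoryStream())
        {
            // Emit the IL to a memory stream instead of a file on disk.
            var result = compilation.Emit(ms);
            if (!result.Success)
            {
                foreach (var d in result.Diagnostics)
                    Console.Error.WriteLine(d);
                return;
            }

            // Load the assembly from bytes and invoke the compiled method.
            var assembly = Assembly.Load(ms.ToArray());
            var method = assembly.GetType("Greeter").GetMethod("Greet");
            Console.WriteLine(method.Invoke(null, null));
        }
    }
}
```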


MSBuild rules for Antlr4 grammars using Antlr4BuildTasks

In order to better support Piggy, which uses Antlr, I've added a NuGet package called Antlr4BuildTasks. This package is a pared-down derivative of the excellent Antlr4cs code generator package, and includes just the rules and code needed to do builds in MSBuild, Dotnet, or the Visual Studio 2017 IDE, but no Antlr4 tool itself. This package decouples the build rules from the Antlr4 tool and runtime, so you can build Antlr programs using the latest Java-based Antlr tool and runtime release. To use this package, reference it as you would any NuGet package; make sure to also reference the Antlr runtime package, install Java and the Java-based Antlr tool, and set JAVA_HOME and Antlr4ToolPath. The tool works with Net Core, Net Framework, or Net Standard code, on Windows or Linux.
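As a sketch, a project file that uses the package might look like the following. The package versions, the tool path, and the grammar file name are placeholders here, not exact values:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <!-- Points at the Java-based Antlr tool jar; JAVA_HOME must also be set. -->
    <Antlr4ToolPath>C:\antlr\antlr-4.7.2-complete.jar</Antlr4ToolPath>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Antlr4BuildTasks" Version="..." />
    <PackageReference Include="Antlr4.Runtime.Standard" Version="..." />
  </ItemGroup>
  <ItemGroup>
    <!-- Hypothetical grammar; the parser is generated at build time. -->
    <Antlr4 Include="MyGrammar.g4" />
  </ItemGroup>
</Project>
```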


Refinements to Piggy

While I now have Piggy producing a p/invoke header for a Clang-C header file, there are several improvements that I’ve made or will make soon.

In order to have code blocks and text blocks recognized as a single token by the Antlr-generated lexer and parser, I needed to make new delimiters for code and text blocks. C# code is now contained in {{ … }}; text is now contained in [[ … ]].

Antlr implements a means to allow user code to be inserted into parsers via a header option. Following that example, I'm going to generate a class to contain all code blocks that will be JIT compiled by the tool. The vars[] dictionary will be removed because the user will be able to add code with a header option in the spec file. Note, separate compilation and referenced assemblies do not work yet; I am getting assembly load errors.

Since a symbol table is a basic requirement for code generation, I’m going to be adding a symbol table to Piggy. In order to not re-invent the wheel, I’ve ported Parr’s Symbol Table into C#. However, it seems that it may need changes for enums.

The syntax for passes and templates has now changed. Templates are just parenthesized expressions; the keyword template is no longer used. Passes use curly braces to enclose all the templates for the pass: pass ID { template* }.
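As a sketch of the new syntax (the pass and node names here are hypothetical, and the details may still change), a spec fragment might look like:

```
pass Enums {
    ( EnumDecl
        {{ // C# code block, JIT compiled by the tool }}
    )
    ( EnumConstantDecl
        [[ text block, copied to the generated output ]]
    )
}
```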

SWIG allows a user to include other SWIG input files via a %include directive. Following that example, I'm going to add an include mechanism for Piggy specs. The reason is that it's a little much to always supply the full pass/template patterns for every conversion. Instead, you should be able to load the base conversion rules, then specialize them. I haven't worked all this out, but I will very soon.

A useful feature in C# is string interpolation. In Piggy, C# interpolation is possible for attribute values, e.g., in the pattern ( SrcRange=$"{Templates.limit}" … ), which gets the string value limit that is part of the Templates class (where all code blocks go), currently a regex pattern to match on clang-c. The result is an extremely powerful method of changing the pattern matcher based upon user code block values!
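A rough sketch of how this fits together (the names limit and Decls are hypothetical, as is the exact placement of the code block in the spec):

```
{{
    // Part of the Templates class, where all code blocks go.
    public static string limit = ".*clang-c.*";
}}

pass Decls {
    // The attribute value is interpolated from Templates.limit
    // before matching, so user code can steer the matcher.
    ( FunctionDecl SrcRange=$"{Templates.limit}"
        [[ generated text for each matching node ]]
    )
}
```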


Posted in Tip

This little piggy is not at the market yet

With a bit of hacking over the last month or two, I can finally see that I am making progress on Piggy, a new kind of p/invoke generator. Some might say "Why in the world are you wasting time writing a p/invoke generator? Aren't there tools already that do this?" Well, yeah, there are other generators, but they all…how should I say…suck! I need a p/invoke generator for Campy, a compiler and runtime for C# for GPUs, which I am still working on but had to place on the back burner to work on this. Campy uses LLVM and CUDA. Because these libraries are large and constantly changing, I have to have an automated way of handling new releases.

Continue reading

Posted in Tip

Re-inventing the p/invoke generator

If you've been programming in C# for a while, at some point you found yourself needing to call C libraries. It isn't often, but when you have to do it, it's like pulling teeth. One option is to set up a C++/CLI interface; the other is a p/invoke interface to a DLL containing the C code. It's relatively easy to set up a DllImport declaration in your C# code for the C code, which you export from a DLL, if you only need to call a few C functions. But if the API is large, you stare at the code for a while, deciding whether it is really worth writing out all the declarations you need to make the calls. Many people throw caution to the wind and write packages for large, popular C APIs so you don't have to, which you can find on the NuGet website. One example is a popular wrapper package for CUDA programming from C#. Unfortunately, people get tired of trying to keep these packages up to date, and so these packages become obsolete. Another approach is automatic: a tool reads the C header files (or the DLL) and outputs C# code with the p/invoke declarations that you can include in your code. These tools sometimes work, but often they don't.
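For concreteness, a hand-written p/invoke declaration looks something like the following sketch. The library and function are just illustrative (sqrt from the C math library on Linux); on Windows the same function lives in a different DLL, which is part of why writing these by hand is a chore.

```csharp
using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Hand-written DllImport declaration for a single C function.
    // "libm" is the C math library on Linux; the DLL name would
    // differ on Windows (e.g., a C runtime DLL).
    [DllImport("libm", EntryPoint = "sqrt",
               CallingConvention = CallingConvention.Cdecl)]
    public static extern double Sqrt(double x);
}

class Program
{
    static void Main()
    {
        // Calls straight into the native library through the marshaller.
        Console.WriteLine(NativeMethods.Sqrt(2.0));
    }
}
```

Now imagine writing declarations like this, by hand, for every function, struct, and enum in a header the size of LLVM's, and you see why a generator is needed.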

This blog entry is a "heads up" note about my thoughts for a new type of p/invoke generator.


For these last few weeks, I've been trying to grapple with the problem of p/invoke, the nasty but must-use feature in C#. While one could write these declarations out by hand, some libraries are too large and change too often, so people use p/invoke generators, like SWIG. However, there is no generator that is easy to use or generates 100% correct C# declarations. So, as every software developer does, I go to re-invent the wheel.


Posted in Tip

Algorithms on the internet

I've been lax the last six months on my blog, working instead on Campy (a C#/GPU programming language extension). Now that that is slightly under control, it's time to get back to the blog. And, since the whole reason for Campy is to implement popular algorithms to run on a GPU, I thought I'd take some time to review what information is available on the internet on algorithms. The following is a list I've been working on for a few months. It is by no means a complete list, but I hope it covers some of the more popular sites. The entries are not in any particular order. Note, this list does not include parallel algorithms, which will be a post unto itself, nor the seemingly required AI algorithms you must have nowadays.

Continue reading