
‘Tis the season

… to give an end-of-year update on what is happening with the PM language. It has been a long time now since I last wrote a PM blog post. Behind the scenes, however, there has been a lot of work on the guts of the compiler, with new inference and optimisation passes added and others taken apart, polished, and put back together again. New language features have also been implemented, including methods, improved object lifetime management and, most importantly, a radically simplified approach to coding stencils.

The handling of stencils was one of the aspects of PM-0.4 with which I was least happy. Since stencils are a key requirement for nearly all numerical modelling, I have long been looking for a way to make their coding as straightforward as possible. However, this has been one of the greatest implementation challenges: the amount of code restructuring needed to make a stencil operate efficiently (separating out halo computation, overlapping computation and halo exchange, tiling, the interaction of stencils with sequential loops, merging stencils, and so on) significantly exceeds that of any other language feature, and was one of the primary motivators for adding compiler passes. The idea is to enable a simple @ operator to access neighbouring values in the model grid, allowing you to write something like:

 new_cell = (cell@[-1,0]+cell@[1,0]+cell@[0,-1]+cell@[0,1])/4.0  

and then to let the compiler do all of the additional work needed to create optimised distributed stencil code. For this simple example, that is not too difficult. For realistic modelling code, however, it becomes much more complex: stencils will typically be abstracted into their own procedures and then applied to multiple variables at whatever points in the code make the most mathematical, as opposed to computational, sense. Optimal coding in this context involves inter-procedural inference of halo extents and access patterns and extensive high-level code restructuring, including inlining, outlining and a lot of code motion. Getting all of this to work satisfactorily has taken the best part of a year, but it has been a necessary step in making the language as effective as possible in its chosen problem domain.
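
To give a feel for the work the compiler is taking off the programmer's hands, here is a rough hand-written counterpart of a distributed version of that one line, sketched in Python with numpy and mpi4py rather than in PM. The one-dimensional decomposition, the variable names and the blocking halo exchange are purely illustrative assumptions on my part; the code PM actually generates also overlaps communication with computation, handles tiling and copes with stencils spread across procedures, none of which this sketch attempts.

 # Illustrative hand-coded counterpart (Python + numpy + mpi4py, not PM) of the
 # distributed stencil update: 1-D domain decomposition with explicit halo
 # exchange followed by the interior four-point average.
 import numpy as np
 from mpi4py import MPI

 comm = MPI.COMM_WORLD
 rank, size = comm.Get_rank(), comm.Get_size()

 nx, ny = 64, 256                       # local block size (illustrative)
 cell = np.zeros((nx + 2, ny))          # rows 0 and nx+1 are halo rows
 cell[1:-1, :] = rank                   # some placeholder data

 up   = rank - 1 if rank > 0        else MPI.PROC_NULL
 down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

 # 1. Halo exchange: send edge rows to neighbours, receive into halo rows.
 comm.Sendrecv(cell[1, :],  dest=up,   recvbuf=cell[-1, :], source=down)
 comm.Sendrecv(cell[-2, :], dest=down, recvbuf=cell[0, :],  source=up)

 # 2. Interior update: the same four-point average, written out with slices.
 new_cell = 0.25 * (cell[:-2, 1:-1] + cell[2:, 1:-1]
                    + cell[1:-1, :-2] + cell[1:-1, 2:])

Even this stripped-down version needs explicit neighbour bookkeeping and separate halo and interior phases; that is exactly the boilerplate the @ notation is meant to eliminate.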

At the time of writing, I am planning to release PM-0.5 in the first quarter of 2026, starting with an MPI-only version and then bringing in support for generating hybrid MPI/OpenMP and MPI/OpenACC target code once these have been tested. I will keep the roadmap on GitHub up to date.

So season's greetings, and I hope to have something very interesting for you in the new year!
