
Showing posts from 2015

Structured parallel communication

One of the PM design goals was to create a combined model for parallelisation and vectorisation. Basic PM parallel structures have the same form whether they are distributing code over a cluster or running vector operations on a single core. This is not to say that the underlying hardware structure is invisible to the programmer; the mapping from parallel programming structures to hardware is both explicit and configurable, as will be described in a future post. A previous post introduced the PM for statement and communicating operators. The for statement executes its body of statements concurrently for each member of an array or domain. Communicating operators provide both communication and synchronisation by allowing a local variable to be viewed as an array over the complete ‘iteration’ domain. In the absence of other control structures, communicating operators act as straightforward synchronisation points between invocations: for .. do statements_...
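To make the shape of such a loop concrete, here is a minimal sketch pieced together from the syntax quoted in the posts below (the for/where/grid form and the process_element call); the array names, the grid bounds and the way the two arrays are paired are illustrative assumptions rather than anything stated in the post:

  for u in old_field[r], v in new_field[r] where r = grid(1..n, 1..n) do
     v = process_element(u)
  endfor

Each invocation of the loop body handles one point of the two-dimensional grid domain, and all invocations conceptually run concurrently; any communicating operator placed inside the body would act as a synchronisation point across those invocations.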

Subexpressions and Assertions

One of the features designed to make PM convenient for numerical computation is its notation for subexpressions. Any PM expression may be followed by a list of subexpressions introduced by the where keyword:

  Hdist = 1 - sqrt(2*s1*s2/ss) * exp(-(m1-m2)**2/ss/4)
    where ss = s1**2 + s2**2

Subexpressions may not refer to each other in the same list, but you can add any number of lists after a given expression:

  Hdist = 1 - sqrt(2*s1*s2/ss) * exp(-(m1-m2)**2/ss/4)
    where ss = s1**2 + s2**2
    where m1=mean(x), m2=mean(y), s1=stddev(x), s2=stddev(y)

In some contexts where clauses apply to a list of expressions, for example:

  for i in a[r], j in b[r], k in c[r] where r=grid(xlo..xhi,ylo..yhi,zlo..zhi) do … endfor

A PM expression may also be combined with an assertion by placing a check clause between the expression and any following subexpressions:

  x=solve(f,y) check f(x)-y < tolerance

The check clau...
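Putting the two features together, a check clause can sit between the main expression and its where lists. A minimal sketch reusing names from the excerpt (solve, f, y, tolerance are taken from it, mean and samples are modelled on it); their combination here, and the assumption that names bound in the where list are visible inside the check clause, are illustrative guesses since the excerpt is cut off before the full explanation:

  x = solve(f, y) check f(x) - y < tolerance
    where y = mean(samples)

Here the check clause documents a property of the result while the where list supplies the value it depends on.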

At the end of the line

PM adopts a clean approach to statement separation. PM statements are formally separated by semicolons. However, these may be omitted if the next statement starts on a new line. This is the only syntactic role for line breaks: statements and expressions may sprawl across as many lines as you wish. For die-hard C/C++/Java programmers, it is also possible to add a semicolon at the end of a statement list.

  a=1; b=2; c=1
  x = (-b + sqrt(b**2 - 4*a*c)) / (2*a)
  print("x="//x)

Semicolons are also used in constructors for two-dimensional data structures: arrays and matrices. Line breaks can be substituted for semicolons here too, enabling you to write something like:

  id_matrix = ( 1,0,0
                0,1,0
                0,0,1 )
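For completeness, the same fragment with the semicolons dropped, relying only on the rule described above that a statement may end at a line break (this is just the excerpt's own example restated, not new syntax):

  a = 1
  b = 2
  c = 1
  x = (-b + sqrt(b**2 - 4*a*c)) / (2*a)
  print("x="//x)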

Communicating Operators - The Heart of PM

Communicating operators lie at the heart of the PM parallelisation model. They are designed to provide a compromise between the direct access to global data structures (particularly arrays) offered by approaches such as Partitioned Global Address Space and the straightforward synchronisation provided by Communicating Sequential Processes. In common with most data-parallel languages, PM contains a parallel version of the for statement which runs all invocations of its enclosed statement list concurrently:

  for element1 in array1, element2 in array2 do
     element2=process_element(element1)
  endfor

Most real models will require some interaction between adjacent array elements. In PM this is achieved by using either a local or a global communicating operator. The global operator @v returns an array whose elements comprise the values of loop-local variable v in each invocation of the enclosing for statement. The neighbourhood operator v@{nbd ...
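Since the post's own example of the operators is cut off above, here is a minimal sketch of the global operator in use; process_element comes from the excerpt, while the array name data, the subtraction of a mean, and the assumption that mean (borrowed from the subexpressions post) can be applied to the array produced by @v are all illustrative guesses:

  for v in data do
     v = process_element(v)
     v = v - mean(@v)
  endfor

The @v acts as the synchronisation point described in the excerpts: every invocation contributes its current value of v before any invocation proceeds, so the second assignment sees a consistent view of the whole domain.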

Intro

This blog will cover the development of the PM Programming Language, a new language designed to simplify the coding of numerical models on parallel systems of all kinds. Project details may be found at www.pm-lang.org. Source code and documentation are available at https://github.com/TimBellerby/PM-Programming-Language