IFL 2015 - Keynote

Neal Glew, Google: FPLs and Modern Multicore Processors - Experience with iflc

Tuesday, 15 September, 9.00am

Abstract

From 2006 until 2013 my team at Intel Labs worked on a functional-language compiler called iflc. We started the project as the optimising compiler for a new functional programming language aimed at writing game engines and games, targeting Intel's then LRB architecture. The goals were to be within a factor of two of C performance, to generate code that could make use of multiple cores (targeting 32-core processors), and to generate code that could make use of wide SIMD vector units (16-wide single-precision). The project also included a low-level code generator called Pillar that was inspired by C--. Over the years we obtained some nice performance, parallelisation, and SIMD-vectorisation results on a few simple benchmarks in the new language. However, the language itself never came to fruition, so we decided to repurpose the compiler to compile Haskell. On heavy numeric benchmarks we obtained performance twice as good as GHC, although on more traditional lazy code our performance was only half as good. In this talk, I will describe our experiences building iflc, some of the results we achieved, lessons learnt, and where I think research should concentrate in the future. In particular, I will have things to say about the importance of sequential performance when doing parallelisation and SIMD vectorisation, the importance of memory-hierarchy optimisation, and a comparison of frameworks such as C--, Pillar, and LLVM.

Bio of speaker

Neal received a PhD from Cornell University in January 2000 for work on Typed Assembly Language. He then worked at InterTrust Technologies for a year and a half on various things before joining Intel Labs in 2002. At Intel, Neal worked on Java virtual machines, functional-language compilers, parallelisation, SIMD vectorisation, and programming models for utilising modern multi-core processors. In 2014, he left Intel and joined Google, where he works on Flume, a massive distributed data-parallel processing system used internally within Google to process data for various products.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License