Interview with Brian O’Halloran

What kind of education did you get to prepare you for your current role?

I graduated from NUI Maynooth with a BSc in experimental physics and mathematics in 1998, followed by a PhD in experimental physics from UCD in 2003.

Obviously, I picked up the hard skills for analysing, breaking down and solving problems during this time. What was invaluable, though, in terms of my current role, were the soft skills you pick up by accident and through stealth: I spent quite a lot of time teaching physics and astronomy courses, and learned a great deal about communicating ideas and about people and stakeholder management.

Before joining IMR, where did you work?

Prior to joining IMR, I was lead data scientist at Liberty Information Technology in Dublin, helping them build up a data science function from scratch. Before that, I was lead data scientist at the Daily Telegraph in London, working on things like recommendation systems for users and building election models for Westminster elections. And prior to that, I was in a similar role at eFinancialCareers – again in London – which I joined after leaving academia. I used to be an astrophysicist, working on projects like the European Space Agency’s Herschel Space Telescope, as part of the SPIRE instrument team.

What kind of projects do you work with in your role?

Since I joined IMR in mid-2020, I’ve been working on a number of projects, providing analytics support. Most notably, I’ve been working on the iBECOME project, a Horizon 2020 project to create a virtual Building Management System (vBMS) for optimising buildings’ energy performance and comfort conditions while reducing operational costs by leveraging demand response. My main focus right now is to define and test algorithms that provide fault detection and predictive maintenance capabilities for the vBMS, as part of Work Package 3 of iBECOME.

One of the key issues for a BMS, virtual or otherwise, is the ability to detect deviations in the performance of the system, whether through human actions such as leaving lights on over a weekend, or through a component on a degradation cycle that will eventually lead to its failure. Previous BMSs tended to use rule-based systems, where alerts would be triggered if readings exceeded a hard-coded threshold. This sort of approach is extremely inflexible, needs human interaction and oversight, and does not scale well to differing building environments and scenarios. Instead, we use machine learning techniques to help us find such behaviour: the flexibility of this approach is that we do not have to be prescriptive about what constitutes anomalous behaviour, but can instead use the intrinsic behaviour of the data to determine what constitutes an anomaly, with little human intervention.

With regard to iBECOME, we use building data from a number of facilities to test our algorithms: along with at least a year of baseline data for buildings and rooms (covering things like temperature, CO2 levels, humidity, energy usage and lighting levels), we also have artificially generated fault data for a range of scenarios (e.g. setpoint changes, abnormal energy usage, high CO2 levels). We have looked at a range of algorithms, from standard machine learning to deep learning, examining how well they can catch anomalies across such a range of scenarios. So far, the results look extremely promising, and we have narrowed down the ensemble of algorithms we will use going forward in the vBMS for fault detection. Likewise for predictive maintenance, we use artificially generated data to explore how we can determine how far a component is along its degradation cycle. Deep learning approaches, such as convolutional neural networks and autoencoders, have shown really positive results thus far.
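
The autoencoder idea can be sketched in Keras (the sensor channels, network sizes and threshold below are illustrative assumptions, not the project’s actual configuration): the network is trained to reconstruct healthy behaviour only, so a drifting or degrading signal shows up as rising reconstruction error.

```python
# Hedged sketch of reconstruction-error fault detection with a dense
# autoencoder. All data here is synthetic and the architecture is a toy.
import numpy as np
from tensorflow import keras

keras.utils.set_random_seed(0)
rng = np.random.default_rng(0)

# Hypothetical healthy data: 8 sensor channels driven by 2 latent factors.
latent = rng.normal(size=(1000, 2))
mixing = rng.normal(size=(2, 8))
healthy = latent @ mixing + rng.normal(scale=0.05, size=(1000, 8))

# Train the autoencoder on healthy behaviour only.
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(2),  # bottleneck: compress to the intrinsic structure
    keras.layers.Dense(8),
])
model.compile(optimizer="adam", loss="mse")
model.fit(healthy, healthy, epochs=30, batch_size=32, verbose=0)

# Threshold: e.g. the 99th percentile of reconstruction error on healthy data.
err = np.mean((model.predict(healthy, verbose=0) - healthy) ** 2, axis=1)
threshold = np.quantile(err, 0.99)

# A degrading component drifts off the healthy manifold, so its
# reconstruction error climbs past the threshold.
faulty = healthy[:5].copy()
faulty[:, 0] += 6.0  # e.g. a drifting temperature sensor
fault_err = np.mean((model.predict(faulty, verbose=0) - faulty) ** 2, axis=1)
print((fault_err > threshold).all())
```

In a real setting, tracking how far the error sits above the healthy baseline over time is one way to estimate progress along a degradation cycle, rather than just raising a binary alarm.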

In a similar vein, I’ve been working on DIGITBrain, another EU Horizon 2020 innovation programme, which gives SMEs easy access to digital twins. The focus of our experiment was to provide a digital twin for additive manufacturing, providing fault detection for the additive process using machine learning, via the approaches outlined above.

I’ve also helped build up IMR’s training offerings to the Irish manufacturing community, by devising and delivering introductory courses on data science and machine learning, with an emphasis on use cases for manufacturing.

What kind of data science technologies do you work with in your role?

Well, that depends on the problem, of course. Most of my actual development time is spent knee-deep in Python, working with analysis, machine learning and visualisation tools such as pandas, seaborn, TensorFlow and Keras. I write a lot of code in Jupyter notebooks, which are great for quick visualisation of results and for disseminating them to collaborators.
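
For a flavour of what that notebook work looks like in practice (the column name, fault injection and threshold here are hypothetical, not taken from the projects), a few lines of pandas can resample raw sensor readings to hourly means and surface suspect hours:

```python
# Illustrative notebook-style wrangling: resample minute-level sensor data
# to hourly means, then flag hours whose rolling z-score is extreme.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2021-01-01", periods=7 * 24 * 60, freq="min")
temps = pd.Series(21 + 0.5 * rng.standard_normal(len(idx)),
                  index=idx, name="temp_c")
temps.iloc[5000:5060] += 6.0  # inject an hour-long temperature fault

hourly = temps.resample("1h").mean()

# Rolling z-score over a 24-hour window; |z| > 3 marks a suspect hour.
roll = hourly.rolling(24, min_periods=12)
z = (hourly - roll.mean()) / roll.std()
print(hourly[z.abs() > 3])
```

A plot of `hourly` with the flagged points highlighted (via seaborn or matplotlib) is the kind of quick visual that makes notebooks so useful for sharing results with collaborators.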

At the end of the day, any machine learning models we build need to be taken out of the R&D sphere and integrated into a production environment, where we can extract as much downstream value as possible. To this end, I use software development tools such as GitHub and Docker to get models into production.