Author: Patrick Mann, CMC Information Technology Director
Carbon Commons development has been shifting its focus from eCollaboration to applications. The Commons collaboration framework is essentially complete, with user authentication, grouping, wikis, discussion groups and so on all functional and reliable, so we can begin to concentrate on more specific requirements from our users.
Broadly, we are splitting these requirements into two areas: data and computation. Everyone wants access to data, and as IT professionals we are familiar with the technologies needed to obtain and federate data, visualize it and, most importantly, search it. Similarly, we have a small cluster, currently used for simulations by grad students, so we can also provide computational power to Commons applications.
Prototype applications developed
Over the summer we’ve developed a couple of prototype data applications. The main challenge has usually been finding the data and then figuring out how to access it; after that it’s fairly straightforward to, for instance, plot it on a Google Map. As one example, we have had some interest in mapping greenhouse gas emitters as sources for CCS efforts. If I’ve got it right, a significant cost for smaller sequestration sites is actually piping or shipping the gas to the storage site, so it pays to find sources that are close to the sinks. It turns out that spreadsheets of point-source data are freely available from the Environment Canada Greenhouse Gas Emissions Reporting Program, so we’ve built a prototype that downloads the data from those spreadsheets and plots it on a Google Map.
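The core of that pipeline is simple enough to sketch. The snippet below is a minimal illustration, not our actual prototype: the column names and sample rows are hypothetical stand-ins for the Environment Canada spreadsheet layout, and the output is the kind of marker dictionary a Google Maps page would consume.

```python
import csv
import io

# Hypothetical sample shaped like the point-source spreadsheet.
# The real file's column names may differ.
SAMPLE_CSV = """facility,province,latitude,longitude,co2e_tonnes
Plant A,AB,53.5,-113.5,1200000
Plant B,SK,52.1,-106.6,450000
"""

def rows_to_markers(csv_text):
    """Parse point-source rows into lat/lng marker dicts, one per
    facility, ready to hand to a Google Maps front end."""
    markers = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        markers.append({
            "position": {"lat": float(row["latitude"]),
                         "lng": float(row["longitude"])},
            "title": row["facility"],
            # Text for the popup shown when a marker is clicked.
            "info": f"{row['facility']} ({row['province']}): "
                    f"{int(row['co2e_tonnes']):,} t CO2e",
        })
    return markers

markers = rows_to_markers(SAMPLE_CSV)
```

In the real application the CSV text would come from downloading the published spreadsheet rather than an inline string.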
That’s now in the Commons application under Tools/Data Visualization, where we have a standard Google map with the point sources – just click on a source to see a popup with its details. As usual you can zoom and pan, and we’ve included summary graphs. For instance, on the intensity map Alberta is, no surprise, at the top of the emitters list.
You’ll see on the same page, in the left-side menu, a link for Alberta Horizontal Wells. There is of course always interest in the wells themselves, and in Alberta the ERCB publishes a plethora of PDFs, spreadsheets and text files. In this particular case the data is available as one large text table, which can nonetheless be read and, again, dumped into Google Maps. We used the heat-map functionality to give an interesting view of the density of wells, and you can also draw a circle around a particular area to show the individual wells within it; as usual, you can zoom in and click on a well to get its details.
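The circle selection boils down to a distance filter over the parsed well locations. Here is a minimal sketch under assumed inputs – the table layout and well identifiers are hypothetical, not the actual ERCB format – using the haversine great-circle distance to pick out wells inside a given radius.

```python
import math

# Hypothetical rows shaped like a whitespace-delimited well table.
SAMPLE_TABLE = """\
UWI        LAT      LON
W-001      54.00   -110.50
W-002      54.02   -110.48
W-003      51.05   -114.07
"""

def parse_wells(text):
    """Split the table into (well_id, lat, lon) tuples, skipping the header."""
    wells = []
    for line in text.strip().splitlines()[1:]:
        well_id, lat, lon = line.split()
        wells.append((well_id, float(lat), float(lon)))
    return wells

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def wells_in_circle(wells, centre_lat, centre_lon, radius_km):
    """Keep only the wells within radius_km of the circle's centre."""
    return [w for w in wells
            if haversine_km(centre_lat, centre_lon, w[1], w[2]) <= radius_km]

wells = parse_wells(SAMPLE_TABLE)
nearby = wells_in_circle(wells, 54.0, -110.5, 10.0)
```

The heat map itself is handled by the mapping layer; the application only needs to supply the same parsed coordinate list.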
Applications kept simple
From these examples you can see that our philosophy is to keep things simple. These are high-level, straightforward applications that give a feel for the data; for more in-depth or technical analyses there are big, powerful stand-alone GIS and geomodelling applications out there.
We are now wondering about two things: more data sources, and provision of machine-readable interfaces. So what do you think? Are there data sources that you are particularly interested in obtaining? Are there particular presentations or analysis techniques that need to read that data? Please give me a call if you’re interested or have comments.
Capacity available on cluster
We’re still thinking about the computational end of things. As you may know, the Commons runs on a virtual infrastructure, and we have some capability to send jobs either to our cluster or to public clouds like Amazon Web Services.
I would like to note that we currently have some spare capacity on the cluster, so if you want compute cycles with packages like MFIX, Fluent and CMG, please give me a call.