Cosmogrid: Science

I. Scientific Objectives
A. Astrophysics
  1. Strong Collision-less Shocks in Astrophysics
      [UCD - DIAS - DCU]
  2. Simulations of Astrophysical Jets and Associated Phenomena
      [DIAS - DCU]
  3. Modeling Radiative Processes in Neutron Star Astrophysics
      [NUIG]
  4. Modeling Support for Adaptive Optics Programs
      [NUIG]
  5. Modeling the Solar Transition Region
      [Armagh Observatory]
  6. Gravitational Wave Astronomy
      [UCD]
B. Geophysics
  1. Building Virtual Rocks
      [UCD]
  2. Rheological and Thermal Modeling of Earth Structures
      [DIAS - NUIG]
  3. Seismic Waves in Heterogeneous Structures
      [UCD-DIAS]
  4. Atmospheric Gravity Waves
      [UCD]
C. Atmospheric Physics
  1. Climate Change
      [Met Eireann]
  2. Optical Scattering Properties of Atmospheric Dust
      [NUIG]
II. Sample of Scientific Results
A. Astrophysics
  1. Determining abundances on surfaces of evolved stars
By Simon Jeffery, Armagh Observatory
Our overall objective is the analysis of stellar spectra to measure the abundances of elements on the surfaces of evolved stars and to understand what these tell us about various mixing processes during the previous life of the star. Spectral analysis is a complicated task involving the large-scale computation of stellar spectra and the comparison of these with observational data. However, since the process of measuring stellar parameters from observations is essentially deterministic, our goal is to provide a framework in which both phenomenological classification and detailed parameterization can be highly automated. This will have major benefits for current and future large-scale spectroscopic surveys and for data-mining in existing archives.
The method involves, first, the computation of a grid of model atmospheres appropriate for evolved stars (program STERNE). These models provide a description of the temperature and pressure structure with geometric depth. We are currently introducing new physics (continuum and line opacities, with an opacity sampling procedure) to provide vastly improved models; these come at roughly a tenfold increase in computational cost. Second, the method involves construction of a grid of high-resolution model spectra that would be produced by these model atmospheres (SPECTRUM). The computational cost scales with the number of wavelengths and the number of models. Third, we are developing a toolkit that aims to find the optimum fit between one or more observations of a stellar spectrum and our model spectra (SFIT). The parameter space is generally large (5 < n < 50), the model grids are not rectangular, and the problem is partially nonlinear and generally noise dominated. Consequently we use a range of methods, including an artificial neural network for spectral classification and various generalized residual-minimization tools for parameterization (Levenberg-Marquardt, downhill simplex, genetic algorithm, …).
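As an illustration of the fitting step, the sketch below compares a synthetic 'observed' spectrum against a toy two-parameter grid of model spectra, interpolating between grid points and minimising the residuals with a bounded least-squares solver. This is only a schematic stand-in for SFIT: the spectra, parameter ranges and function names are invented for the example.

```python
# Illustrative sketch (not the SFIT code): fit an "observed" spectrum against a
# grid of model spectra by residual minimisation over (Teff, log g), using
# linear interpolation in the grid and a bounded least-squares solver.
# All spectra here are synthetic Gaussian absorption lines.
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import least_squares

wave = np.linspace(4000.0, 5000.0, 500)            # wavelength axis (Angstrom)
teff_grid = np.linspace(20000.0, 40000.0, 21)      # effective temperature grid (K)
logg_grid = np.linspace(4.0, 6.0, 11)              # surface gravity grid (cgs)

def toy_model(teff, logg):
    """Stand-in for a model spectrum: line depth and width vary with Teff, log g."""
    depth = 0.2 + 0.6 * (teff - 20000.0) / 20000.0
    width = 2.0 + 3.0 * (logg - 4.0)
    return 1.0 - depth * np.exp(-0.5 * ((wave - 4500.0) / width) ** 2)

# Precompute the model grid, then interpolate between grid points.
grid = np.array([[toy_model(t, g) for g in logg_grid] for t in teff_grid])
interp = RegularGridInterpolator((teff_grid, logg_grid), grid)

# Fake "observation": a grid model plus noise.
rng = np.random.default_rng(1)
obs = toy_model(31500.0, 5.3) + rng.normal(0.0, 0.01, wave.size)

def residuals(p):
    return interp(np.atleast_2d(p))[0] - obs

fit = least_squares(residuals, x0=[25000.0, 4.5],
                    bounds=([20000.0, 4.0], [40000.0, 6.0]), method="trf")
print("best-fit Teff, log g:", fit.x)
```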
We have successfully demonstrated an artificial neural network to classify hot subdwarfs using a new scheme based on the Morgan-Keenan classification system. This ANN has been used to classify hot subdwarfs found in the Sloan Digital Sky Survey and elsewhere.
An extensive grid of stellar atmospheres and high-resolution spectra was computed to cover a wide range of effective temperatures, surface gravities, and chemical compositions. The latter include H:He ratios 0:100 to 100:0, metallicities from [Fe/H]=-2 to 0, and several grids with enhanced carbon and nitrogen. The latest version of the grid, containing over 7000 model atmospheres, is available online at http://www.arm.ac.uk/~csj/models/. Data products include model atmosphere structures, emergent flux distributions and high-resolution spectra. The latter can be inspected online with an interactive plotting tool, or downloaded in their entirety. These models have been used extensively to model several chemically peculiar stars. A spinoff from this work is an enhanced ability to model detailed spectroscopic and colour variations produced by non-radial pulsations in subluminous B stars and other intrinsic variables.
The introduction of new continuous and line opacities has demonstrated that existing model atmospheres (both our own and those in general use by the community) are deficient in some respects, particularly where the chemical composition is significantly different from a standard solar-type mixture. With the development of STERNE3, the model atmosphere grid will be extensively revised over the next 12 months, while the models will be used to analyse a variety of chemically peculiar stars.
As by-products, we have developed a number of smaller software items, including a standardized structure for the LTE codes to facilitate distribution and maintenance (lte_codes), a tool for generating html documentation from properly documented f90 source code (autodoc), and various tools for graphical monitoring of SFIT output (ISFIT and gnuspec).
  2. Data Analysis and Workflow
By Stephen Bourke, NUIG
 
The observational foundation of this project comprises two strands: a study of the B1951+32 Pulsar Wind Nebula, and a high-resolution methanol maser survey.
In January 2002, a 16 hour L-band MERLIN observation was performed of the central `flat' region with a view to discerning high-resolution structure within the hot-spot. A recent first analysis of the data yielded a clear detection of the pulsar, but also a significant (~4.5σ) detection of structure within the hot-spot, of approximate dimensions 2.5'' by 0.75'', with the pulsar's motion directly bisecting the observed emission. However, phase errors and the relatively low signal-to-noise ratio of the shock structure limit definitive characterization of the overall extended emission, such as whether it is a continuous shock, broken up, or many shocks in parallel. Follow-up observations were completed in early 2003 to try to answer these questions. A simultaneous observation was performed with the VLA to improve the detection of large-scale structures and to improve the overall dynamic range.
These follow-up observations consisted of a large number of separate observations which, using the traditional radio data-reduction system, would have required an unreasonable number of man-hours to process. A prototype pipeline in use at Jodrell Bank Observatory was evaluated and used to greatly reduce the time and effort required to process this data set.
It is clear that the application of HPC techniques to the domain of radio astronomy is overdue, and much effort is now being made in this area to catch up. In collaboration with colleagues from the Joint Institute for VLBI in Europe we are working to advance this area by applying distributed object-oriented frameworks to the radio data reduction process. The test-bed data set for this work is a methanol maser survey conducted with the EVN telescopes.
We have fulfilled the objectives of our observations of the B1951+32 Pulsar Wind Nebula in that we successfully completed a high resolution observation and analysis of the object, and we have used our MERLIN data to register the astrometrically corrected archival HST observations of the field. Combined, these data indicate that the previously identified optical 'knot' of synchrotron emission extends behind the pulsar, along a line that bisects the shock front emission. The dimensions of the optical knot and the VLA-determined proper motion argue for a synchrotron cooling time that is inconsistent with particle replenishment from the pulsar wind. The formation of the knot can be attributed to the mechanisms outlined in Lou (1998) involving the interaction of MHD wind streams, whilst the knot's luminosity can be maintained by particle injection from the pulsar wind. As indicated by Lou (1998) and Tanvir (1997), the luminosity of the optical knot should change over time as successive quasi-periodic disturbances emanate from the pulsar. Effects such as scattering of synchrotron beams from relativistic particles due to Alfvenic fluctuations and MHD shocks in the pulsar wind would cause such luminosity changes. The morphology of the optical knot would be expected to change due to the inhomogeneity of the wind streams emanating from the pulsar. However, the knot's confinement behind the pulsar by a supersonic flow from the bow shock may limit such changes in its morphology.
  3. Simulations of Jets from Young Stellar Objects
By Gareth Murphy, DIAS
 
Few binary jets have been observed, although binary systems account for a large majority of stars in the Universe. We model a binary jet and compare its signature against that of a single jet to determine its distinguishing features. We also model a jet closer to its protostar entering an evacuated cavity. The equations used to model these systems are nonlinear and in general have no analytical solutions, so computationally intensive numerical methods are required to solve them. Since only a finite amount of computing power is available, we harness Grid technology to make efficient use of the limited resources, and we introduce new techniques into current MHD codes to increase our understanding of astrophysical problems. Multiple jets are currently the subject of experimental work. We model twin sources mutually orbiting a common axis.
When a star forms it goes through several stages. It begins as a cloud which, over a long period of time, slowly begins to collapse in on itself. The collapse is caused by the gravitational force, and as the star gains mass it draws in more material, not directly towards it but swirling in orbit around it in ever-diminishing circles, eventually becoming part of a rotating disk of matter around the young star. The process by which the new star gains its matter is known as accretion, and the disk which surrounds the star and delivers the bulk of its matter onto it is known as an accretion disk. The matter in the accretion disk eventually finds its way into the new star. Most of the angular momentum which the rotating disk possesses must be lost before matter can accrete onto the new star. The process by which it is lost is not well understood, but the angular momentum is thought to be transported away from the disk by an outflow. These outflows can take the form of jets - narrow beams of matter travelling at high speeds which plough through the interstellar medium. They can also take the form of denser, less well-collimated flows with wider opening angles, more like a wind than a jet - these are known as molecular outflows.
In particular, these outflows cause enormous differences in velocity, pressure and density over extremely short distances. For matter to move smoothly through a medium, information about the motion must reach the part of the medium which lies in its path. This information has a fixed speed of propagation - the local sound speed. When matter exceeds this speed, excess energy quickly builds up and a shock forms. The very existence of a shock can cause matter to break down and emit radiation which can be detected on Earth. Shocks create hot, dense regions which behave differently from unshocked regions and, in particular, are visible due to the excitation of atoms, which causes photons to be emitted by electrons in the transition to lower energy levels. In order to explain the observed universe, it is vital to understand the role of shocks. One way to do this is to use a mathematical model or simulation and, as it evolves over time, compare it to observations.
We have produced three-dimensional, purely hydrodynamical (HD) simulations of interacting binary jets. We have studied the parameter space of the jet properties in order to see their influence on the interaction. Pure HD can reproduce binary jet characteristics. We show here solutions that reproduce the twisted morphology of observed jets by allowing the bow shock of the fastest jet to strike the beam of the other jet midway through its propagation. We begin by making the simplifying assumptions that the magnetic field can be neglected and that the jets are parallel. We assumed a distance of 140 parsecs to the pulsed jets. We used observed values for the density, temperature and velocities. We assumed a sinusoidal variation of 30% in the velocity with a period of 8 years. The ambient medium is modeled with a uniform density (n = 5,000 cm-3) and temperature (100 K). The jets are modeled with a density of n = 500 cm-3, a temperature of 10,000 K, and velocities of 65 and 300 km/s respectively (Liseau et al. 2005). We used a compromise figure of 300 km/s between the estimates of Liseau et al. (2005) and Hartigan et al. (2000). The Mach numbers of the jets are M=33 and M=110 respectively.
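For reference, the pulsed inlet condition described above (a 30% sinusoidal velocity variation with an 8-year period, applied to mean velocities of 65 and 300 km/s) can be written as a small function; the names and sampling below are illustrative only and are not taken from the actual simulation code.

```python
# Minimal sketch of the pulsed-jet inlet condition described above: a mean jet
# velocity modulated by a 30% sinusoidal variation with an 8-year period.
# Function and variable names are illustrative, not those of the real code.
import numpy as np

YEAR = 3.156e7                      # seconds per year
PERIOD = 8.0 * YEAR                 # pulsation period
AMPLITUDE = 0.30                    # 30% velocity variation

def inlet_velocity(t, v_mean_kms):
    """Jet inlet velocity (km/s) at time t (s) for a given mean velocity."""
    return v_mean_kms * (1.0 + AMPLITUDE * np.sin(2.0 * np.pi * t / PERIOD))

t = np.linspace(0.0, 24.0 * YEAR, 7)          # sample a few epochs over 24 yr
for v_mean in (65.0, 300.0):                  # the two jets of the binary
    print(v_mean, np.round(inlet_velocity(t, v_mean), 1))
```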
Optical polarimetry observations (Scarrott 1988) show what appears to be a toroidal field surrounding the two jets from L1551 IRS 5.  The toroidal field may have been created by the rotation of the large circumbinary disk. Such a field could have a significant effect on the dynamics and may cause the initially angularly separated outflows to change their direction. We used a toroidal magnetic field strength of 0.01 mG. The figure below shows the time evolution of the jets moving into an environment with a toroidal magnetic field. The Lorentz force causes the jets which have an initial angular separation of 10 degrees to bend towards the rotation axis – more closely matching the morphology shown in the [FeII]  observation.
Relative Extinction Mapping
We have also extracted all sources from the 2MASS catalogue with an S/N ratio larger than 5 and conducted star counts. Stars are counted in the range from 10 to 17 mag in steps of 0.1 mag. The resulting accumulated numbers are used to determine local extinction enhancements compared to the surrounding field. Using all three 2MASS filters, we are able to probe different extinction regimes. Parallel techniques were used to speed up our data processing. An MPI-parallel C++ code was written to count the stars in the 2MASS catalogue. The data were broken up into units and spread around the cluster, processed and recovered. Queuing software was used to balance the load between separate machines. The entire problem was completed in 120 hours, a vast improvement over a single desktop machine, which would typically have required a few months of computing time.
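The star-count step can be sketched as follows: accumulate counts in 0.1 mag bins from 10 to 17 mag for a sky cell and compare against a reference field, a depressed ratio at the faint end indicating extinction. This serial Python sketch only mimics the logic of the MPI-parallel C++ code; the input magnitudes are random placeholders, not 2MASS data.

```python
# Illustrative (serial) sketch of the star-count step: bin catalogue magnitudes
# from 10 to 17 mag in 0.1 mag steps for each sky cell and compare the cumulative
# counts against a reference field. The data below are random placeholders.
import numpy as np

bins = np.arange(10.0, 17.0 + 0.1, 0.1)          # magnitude bin edges

def cumulative_counts(mags):
    """Accumulated number of stars brighter than each bin edge."""
    hist, _ = np.histogram(mags, bins=bins)
    return np.cumsum(hist)

rng = np.random.default_rng(0)
field_mags = rng.uniform(10.0, 17.0, 5000)       # stand-in for a reference field
cell_mags = rng.uniform(10.0, 17.0, 3000)        # stand-in for an extincted cell

ratio = cumulative_counts(cell_mags) / np.maximum(cumulative_counts(field_mags), 1)
print("count ratio at the faint end:", ratio[-1])  # depressed ratio -> extinction
```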
 
 
  4. Gravitational wave astronomy
By David Brodigan, UCD
 
This research in gravitational wave astronomy is currently focusing on modeling the stochastic background of gravitational radiation. Upper limits have been set on the required radiative strength of small angular scale anisotropic distributions of matter to enable their detection in the current and next generation of detectors comprising the LIGO network. Future work will involve analyzing the data from LIGO and reconstructing a model of any observed anisotropic stochastic background. By correlating the signals from a pair of gravitational-wave detectors, sensitive searches for a stochastic background of gravitational radiation can be made. An anisotropic signal will vary harmonically with the Earth's rotation, and if we choose the cosmic z-axis to be aligned with the Earth's rotation axis then the signal harmonics are simply related to the multipole moments which characterize the anisotropy.
Initial work was to extend the paper of B. Allen and A. Ottewill, Phys. Rev. D 56, 545 (1997). Their paper considered two sources of an anisotropic signal: a dipolar anisotropy generated by the Earth's motion, and a multipolar one generated by stochastic sources distributed in our galaxy in the same way as the luminous matter.
An objective of the current work was to obtain detection-threshold upper limits on the strength of the signal from arbitrary point sources on the sky. Code was written to generate the multipole moments from such a distribution and to calculate the expected signal-to-noise ratio using typical LIGO noise data. 90% detection confidence limits can then be established on the energy density of the source (shown below).
A future objective of our work is to analyze actual data from the next Science Run, scheduled to start late this year, and determine whether any anisotropic stochastic signal is present. It is hoped to develop a reliable computational method for solving the ‘inverse problem’, which is concerned with specifying the directionality and nature of any anisotropic signal present. This will involve running many parallel correlations of data from multiple detectors.
The signal output of the detector is periodic with a period equal to the Earth's rotation period; it can thus be decomposed into Fourier harmonics. The following graph depicts the energy density (as a fraction of the energy density required to close the universe) that a stochastic signal harmonic from a point-like source would need in order to be detectable with 90% confidence in the initial LIGO detector.
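A toy sketch of that decomposition is given below: a simulated cross-correlation signal that varies over one rotation of the Earth is broken into Fourier harmonics, whose amplitudes would then be related to the multipole moments of the anisotropy. The signal model and amplitudes are invented for illustration and are not LIGO data.

```python
# Toy sketch of the harmonic decomposition: a simulated detector cross-correlation
# signal varying with the Earth's rotation is decomposed into Fourier harmonics of
# the rotation period. The signal below is invented, not real detector output.
import numpy as np

SIDEREAL_DAY = 86164.1          # Earth's rotation period in seconds
n_samples = 1024
t = np.linspace(0.0, SIDEREAL_DAY, n_samples, endpoint=False)

# Invented anisotropic signal: a few low-order harmonics plus noise.
rng = np.random.default_rng(42)
signal = (1.0
          + 0.5 * np.cos(2.0 * np.pi * t / SIDEREAL_DAY)
          + 0.2 * np.cos(4.0 * np.pi * t / SIDEREAL_DAY + 0.3)
          + rng.normal(0.0, 0.05, n_samples))

# Fourier coefficients of the periodic signal; harmonic m corresponds to
# variation at m cycles per sidereal day.
coeffs = np.fft.rfft(signal) / n_samples
for m in range(4):
    amp = abs(coeffs[m]) if m == 0 else 2.0 * abs(coeffs[m])
    print(f"harmonic m={m}: amplitude {amp:.3f}")
```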
  5. Modeling optical emission from pulsars
By Padraig O' Connor, NUIG
 
Our work focuses primarily on computationally modeling optical (IR-UV) emission from pulsars. Pulsars were first discovered in 1967 and are rapidly rotating neutron stars surrounded by an active magnetosphere. They emit regular, repeating, periodic pulses of radiation, each object having its own unique light curve morphology. A main theme in pulsar research has been to understand the emission mechanism responsible for the non-thermal radiation; however, after 37 years this task has yet to be accomplished.
Pulsed emission arises from high-energy non-thermal radiation processes occurring within the fixed-volume magnetosphere, but the exact emission location is unknown; in fact the primary high-energy models propose different locations – for example, the `outer gap’ and `polar cap’ scenarios. We have developed a model designed specifically to restrict the emission location (R) responsible for optical emission from pulsars and also to place restrictions on other important pulsar variables such as the pulsar inclination, the observer viewing angle and the underlying particle pitch angle distribution (PAD).
We consider the unknown pulsar variables (emission location R, inclination, viewing angle and PAD) as defining a parameter space which is to be restricted. A model magnetosphere (based on the Deutsch field) generates phase-resolved polarimetric light curves for any specific combination of input parameters. We thereby simulate a landscape of pulsar light curves as a function of our parameter space. This process is computationally intensive, but the large amount of information in phase-resolved polarised light curves allows a detailed comparison with observations. Initial results for the Crab pulsar (PSR B0531+21) indicate that emission arises within the lowest 10%-30% of the pulsar magnetosphere above both magnetic poles.
Background: Pulsars are neutron stars surrounded by a corotating, plasma-filled magnetosphere, usually assumed to be dipolar. Important observational parameters are the pulsar's period and period derivative, the inclination between the rotational and magnetic axes, and the direction of the observer's line of sight relative to the rotation axis. The pulsar inclination angle is important as the magnetic field structure and orientation depend heavily on it.
Motivation: Pulsar astrophysics is an active field because it presents a range of unsolved problems. Emission originates somewhere within the magnetosphere, but 30 years of observation and theory have not produced a consensus on where in particular the emission site is located. One of the main hurdles to our understanding is the large number of fundamental parameters which are not constrained either observationally or theoretically, some of which are:
    (a) The magnetic inclination of the pulsar
    (b) The angle of the observer's line of sight to the pulsar
    (c) The form of the underlying particle pitch angle distribution (PAD)
    (d) The cutoff of the underlying particle pitch angle distribution (PADco)
Objectives/Methods:
We have designed a generic physical model of the pulsar magnetosphere, which we represent as a set of discrete points in 3D space. In our simulations we generate the complete radiation field from our model pulsar, output in the form of phase-resolved polarimetric light curves which we can compare to observations. We approach the unsolved `pulsar problem' by treating the unconstrained physical variables as defining a parameter space which is to be restricted. The aim is that a restriction of parameter space should allow for more focused theoretical developments.
To this end we have designed, conceptually and computationally, what is effectively a `search algorithm', or what we call an `inverse mapping' approach, in an attempt to restrict the unconstrained parameters. Our approach is computationally intensive and involves a number of steps which are outlined in figure 1 under the headings of physical, computational, statistical and inverse mapping components. The computational model of the magnetosphere and associated emission processes is at the hub of our approach, and running these simulations is the most time-consuming step. To summarize: as we wish to determine the emission location, we remove all restrictive assumptions about location and assume a general, global emission process within the pulsar magnetosphere (currently, synchrotron emission from a truncated power-law spectrum of particles). Covering the parameter space requires carrying out simulation runs for each possible combination of the variables (a) to (d). As the parameter space is processed (simulated) we generate a large pool of model light curves, where each light curve corresponds to a unique combination of physical parameters. The best-fitting light curves map directly to that unique combination of parameters ((a) to (d)) which most closely reproduces the observed pulsar characteristics. In this way we can restrict the pulsar inclination [(a)], viewing angle [(b)] and emission characteristics [(c) & (d)], as well as isolating emission regions.
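The sketch below illustrates the inverse-mapping idea in miniature: light curves are generated over a coarse grid of parameter combinations and the combination with the smallest chi-square against an 'observed' light curve is retained. The light-curve generator, parameter ranges and noise level are placeholder assumptions, not the actual magnetospheric model.

```python
# Schematic sketch of the 'inverse mapping' idea: simulate light curves over a
# grid of parameter combinations and keep the combination whose light curve best
# matches the observed one (minimum chi-square). The generator is a toy stand-in.
import itertools
import numpy as np

phase = np.linspace(0.0, 1.0, 100)

def toy_light_curve(inclination, viewing_angle, emission_height):
    """Placeholder for the real simulation: two Gaussian peaks whose separation
    and widths depend on the parameters."""
    sep = 0.2 + 0.3 * np.sin(np.radians(abs(inclination - viewing_angle)))
    width = 0.02 + 0.05 * emission_height
    return (np.exp(-0.5 * ((phase - 0.3) / width) ** 2)
            + np.exp(-0.5 * ((phase - 0.3 - sep) / width) ** 2))

# Coarse parameter grid for illustration; the real grid also covers the PAD.
grid = itertools.product(range(30, 90, 10),          # inclination (deg)
                         range(30, 90, 10),          # viewing angle (deg)
                         np.linspace(0.1, 0.5, 5))   # emission height (fraction of R_LC)

rng = np.random.default_rng(3)
observed = toy_light_curve(60, 50, 0.2) + rng.normal(0.0, 0.02, phase.size)
sigma = 0.02

best = min(grid, key=lambda p: np.sum(((toy_light_curve(*p) - observed) / sigma) ** 2))
print("best-fitting (inclination, viewing angle, emission height):", best)
```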
Rich Variability:
Our numerical simulations of pulsar magnetospheres show that model light curves exhibit a smooth and monotonic variation as a function of the parameters (inclination, viewing angle, emission location R and PAD). This is a basic but important result: the existence of such variability allows the search-algorithm approach to restrict parameter space, and it provides a `proof of concept’ for the overall approach. Simulations reveal smooth and detailed variability across the full parameter space. Figure 2 illustrates how the observed polarized light curves can vary for different observers viewing the same pulsar object (a model pulsar with an inclination of 50 degrees).
Restricting pulsar parameter space II: restricting emission location
Deconstructing the photon components of the best fitting model light curves reconstructs the originating location. We find for the Crab pulsar that emission is tightly confined to the lower magnetosphere (within 10%-30% of the light cylinder distance from the poles) with each peak of the light curve originating over a different magnetic pole. The result of the inverse mapping is shown in figure 2.
  6. Modeling Solar observations
By J.G. Doyle, Armagh Observatory
 
With the pending launch of Solar-B, coupled with the SoHO and TRACE missions, the Sun is accessible to highly detailed remote-sensing observations which should lead to a critical understanding of basic physical processes in the solar atmosphere. The suite of projects currently under investigation is extensive and, if we are to address all of them, requires people with experience in the reduction of multi-instrument data (e.g. high spectral resolution data and imaging) as well as atomic physics knowledge. Over the duration of the grant, we have so far published 16 papers in peer-reviewed journals. These publications have resulted either from postdocs employed through COSMOGRID funding or from the use of computers funded by COSMOGRID.
Over the last decade there has been significant development in the field of solar physics which has led to a better understanding of many solar activity phenomena. Based on observational facts about their physical origin, we are able to build more precise models aimed at describing these phenomena. This was mainly achieved thanks to the great success of various space missions, in particular SoHO and TRACE, which provide the first opportunity for simultaneous high temporal, spectral and spatial resolution observations of the solar atmosphere from the photosphere to the solar corona. When observed with high spatial and temporal resolution, the `quiet' Sun is anything but quiet, showing its dynamic nature through quasi-periodic fluctuations in the intensity and Doppler shift of spectral lines emitted in the solar chromosphere, transition region and corona.
The dynamics of the solar atmosphere is displayed through transient phenomena such as spicules and macrospicules (jet-like structures) seen in chromospheric and transition region lines; UV (ultraviolet) brightenings; transition region blinkers, a term introduced to describe an enhancement in the intensity flux in transition region lines registered with the Coronal Diagnostic Spectrometer (CDS); UV bi-directional jets (known as explosive events); and coronal UV and X-ray bright points. It has been noted that the possibility of identifying `strong enough bi-directional jets with the commencement of a large spicule' suggests a close association between both phenomena, but no consistent correlation has yet been established. Blinker properties which support the idea of a possible association with the ejection of chromospheric material have been published, but no connection has been found with bi-directional jets, although the flow patterns of some of the explosive events are very close to those of UV spicules. Recently, a close relationship between blinkers and macrospicules has been suggested. These blinkers were found to occur above regions of dynamic activity, producing evacuation events and quasi-periodic oscillations. However, despite the numerous works mentioned above, the nature and the driving mechanism of all these phenomena are still poorly known. It is vital that further studies are done on these phenomena because it is commonly believed that magnetic reconnection is one of the drivers of solar activity on all scales, from small brightenings in the `quiet' Sun through erupting quiescent solar prominences to massive flares in active regions, although waves are thought to play a very important role as well. The small-scale brightenings showing an increase in emission up to the level of microflares represent a scaled-down version of an active region. Therefore, determining in detail the observational signature of magnetic reconnection and/or propagating waves, and finding out which of the many proposed regimes of reconnection or which type of waves are operating, is of fundamental significance in order to understand the chain of heating processes that generates and sustains the million-degree solar corona.
In order to investigate these events we use a combination of high quality spectral data from spectrometers onboard SoHO, imagers onboard SoHO and TRACE, ground and space-based magnetogram data, plus MHD and HD modeling, in combination with high quality atomic physics input to investigate the line formation process and to produce model line profiles for comparison with observations.
 
The Effect of Metastable Level Populations on the Ionization Fraction of Li-like ions
Lines from Li-like ions are known to have theoretical intensities that are under-estimated compared with lines of similar formation temperature. Here we investigate this anomalous behaviour, calculating the ionization fractions with the ADAS code and taking into account the electron density dependence of dielectronic recombination coupled with collisional ionization from metastable levels. For the lines investigated, the line contribution functions show a clear dependence on electron density. For example, C IV 1548A shows more than a factor of three enhancement for Ne = 10^12 cm-3. The increase in the higher-temperature lines is lower, but is still in the range of 30 to 60%. Furthermore, all the lines have their peak contribution shifted to lower temperature. Calculating the total radiative power output at an electron density of 10^11 cm-3, we find that the difference in the transition region is 10-15%, while above 10^6 K the difference is around 30% compared to the low-density value.
Electron density along a coronal loop observed with CDS/SOHO.
The analysis of a coronal loop observed by CDS and EIT on board SOHO is presented. The loop was situated above the north-east limb at a latitude of ~48°, being clearly visible in the hottest line of the dataset, Fe XVI 360.76A, i.e. at temperatures greater than 2,000,000 K. The cooler lines in the sample (i.e. O V 629.73A and He I 584.35A) showed only a brightening at the footpoint locations. Based on the Fe XIV 353.84/334.17 line ratio, the electron density along the loop was determined following three different approaches to the background subtraction. No differences, within the error bars, are found between the three methods. At the apex, the density is 0.9 x 10^9 cm-3, while at the footpoint it is 50% greater, i.e. 1.4 x 10^9 cm-3. The inferred filling factor values along the loop are in the range 0.2-0.9.
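The line-ratio step can be illustrated schematically: a measured, density-sensitive ratio is interpolated onto a theoretical ratio-versus-density curve to read off the electron density. The curve and example ratios below are invented placeholders, not real Fe XIV atomic data.

```python
# Illustrative sketch of inferring electron density from a density-sensitive line
# ratio: interpolate the measured ratio onto a theoretical ratio-versus-density
# curve. The curve here is an invented monotonic placeholder.
import numpy as np

log_ne = np.linspace(8.0, 11.0, 31)                               # log10 Ne (cm^-3)
theoretical_ratio = 0.5 + 1.5 / (1.0 + np.exp(-(log_ne - 9.5)))   # toy curve

def density_from_ratio(measured_ratio):
    """Invert the (monotonic) theoretical curve by interpolation."""
    return 10 ** np.interp(measured_ratio, theoretical_ratio, log_ne)

for ratio in (1.1, 1.4):                                           # example measured ratios
    print(f"ratio {ratio}: Ne ~ {density_from_ratio(ratio):.2e} cm^-3")
```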
We performed one-dimensional hydrodynamic modelling of the loop along a given field line; gravity was neglected. A minimum chi-squared analysis results in a best-fit case where the total energy input is directed preferentially to the loop footpoint (the heating rate is three times larger at the base than at the apex). An isochoric solution cannot be ruled out completely. The exercise illustrates the necessity of accurate spectral diagnostics in order to derive definite conclusions from theoretical models, and suggests the need for simultaneous density and temperature diagnostics.
Repetitive occurrence of explosive events at a coronal hole boundary
SUMER/SoHO data taken at a coronal hole boundary show explosive events recurring with a period of around 3 min, increasing to over 5 min towards the end of the activity. We suggest that the neighbouring oppositely directed closed and open field lines at the coronal hole boundary undergo repetitive reconnection seen as a sequence of explosive events. The repetitive reconnection may be triggered by transverse oscillations of the flux tubes in the closed field line region. These oscillations periodically separate and bring together the closed and open field lines on the two sides of the coronal hole boundary. An important indicator favouring the interpretation in terms of a kink mode is the observed increase in the oscillation period.
B. Geophysics
 
  1. Modeling geophysical variations
By Francesca Martini, UCD
 
Coda Wave Interferometry (CWI) uses multiply scattered waves to detect temporal changes in a medium. The time-shifted cross-correlation of the coda waves before and after a perturbation helps to identify small changes over time, such as a change in the velocity of the medium, a perturbation of the source location, or a variation in the scatterers' locations. In this work the technique has been extensively tested through application to a very wide range of synthetic datasets generated with a finite-difference simulator in highly heterogeneous models, in order to define successful applications of the technique, the best parameters to use, and its limitations. The technique has also been applied to real data in two different scenarios: a volcanic environment, to test the possible application of CWI as a volcano-monitoring tool, and shallow geophysics, for possible environmental studies such as contaminant monitoring.
Coda wave interferometry (Snieder et al., Science, 295, 2002) uses multiply scattered waves to detect temporal changes in a medium by using the medium as an interferometer. The perturbation in the medium can be retrieved from the time-shifted cross-correlation of the coda waves before and after the perturbation. Small changes in the medium, which have no detectable influence on the main ballistic arrivals, are amplified by multiple scattering and therefore may be seen in the coda. We take similar waveforms recorded at the same geophone and originating from a repeating seismic source at the same location: while the ballistic arrivals closely match, the coda loses coherence because the multiply scattered waves have repeatedly sampled the small changes in the medium. Three cases can be distinguished (a minimal sketch of the first case follows the list below):
-Velocity change: the mean travel-time perturbation is non-zero and proportional to travel time, and the variance of the travel-time perturbation is zero. The slope of the time lag at maximum cross-correlation, plotted per window against travel time, gives the percentage velocity variation;
-Source location perturbation: the maximum value of the cross-correlation is independent of time, the mean travel-time perturbation vanishes, and the variance is non-zero and constant with time. From the variance, and knowing the velocity of the medium, the source displacement can be calculated;
-Scatterer random movement: the maximum value of the cross-correlation decreases linearly with time, the mean travel-time perturbation vanishes, and the variance is non-zero and grows with time.
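The following minimal sketch illustrates the first case: two synthetic traces differing by a small velocity change are cross-correlated in sliding windows, and the slope of the lag versus travel time recovers the fractional velocity change. All waveforms and parameter values are synthetic stand-ins, not the project's actual processing code.

```python
# Minimal sketch of the CWI velocity estimate: slide a window along two traces
# (before/after a small velocity change), find the lag of maximum cross-correlation
# in each window, and fit lag versus travel time -- the slope gives dv/v.
import numpy as np

dt = 0.001                                   # sample interval (s)
t = np.arange(0.0, 10.0, dt)
rng = np.random.default_rng(7)
base = np.convolve(rng.normal(size=t.size), np.hanning(50), mode="same")

dv_over_v = -0.005                           # imposed 0.5% velocity decrease
stretched = np.interp(t * (1.0 + dv_over_v), t, base)   # slower medium: later arrivals

win = int(1.0 / dt)                          # 1 s windows
centers, lags = [], []
for start in range(0, t.size - win, win):
    a = base[start:start + win]
    b = stretched[start:start + win]
    xc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
    lag = (np.argmax(xc) - (win - 1)) * dt
    centers.append(t[start] + 0.5 * win * dt)
    lags.append(lag)

slope = np.polyfit(centers, lags, 1)[0]       # d(lag)/d(traveltime)
print(f"recovered dv/v ~ {-slope:.4f} (imposed {dv_over_v})")
```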
The first part of this work was dedicated entirely to testing the correct usage and the limitations of this technique on synthetic data. This involved a wide range of models; different perturbations were simulated (velocity changes, source signal, source location and scattering potential, some applied over the entire model and some as localized perturbations), recording synthetic traces in a wide range of acquisition geometries and changing, for example, the position of the source and receivers with respect to a localized perturbed area.
Having defined the applicability of the technique and the signature that each of these perturbations leaves on the parameters involved, we moved on to applying it in two very different real-world scenarios:
-Shallow geophysics- for application in environmental geophysics;
-Volcano environment- for using the CWI technique as a possible volcano-monitoring tool.
Before the analysis of real data, extensive work was done on synthetic data to understand the cases in which the technique could be successfully applied: what response the different parameters involved would give (cross-correlation maximum, time lag, and variance), and whether recovery of the perturbation type and value was possible in each of these scenarios.
We tested (in different models and with different acquisition geometries)
Velocity variation
Source displacement
Scatterer random movement
Scattering potential variation (standard deviation of the velocity fluctuations)
both as variations over the entire medium and as ‘localized’ variations (with different combinations of source and receiver positions with respect to the area of perturbation),
in order to have an extensive compilation of case scenarios.
Some basic examples are shown in this section, as a sample of the work performed within the project.
Velocity variation over the entire model
Synthetic seismograms were recorded in a model before and after its velocity was changed by a given percentage. The time lag at which the cross-correlation between corresponding windows of the two traces is maximal, plotted against travel time, returns the velocity variation. Figure 1 shows examples of successful application of the technique in cases where the velocity of the original model was decreased by 0.5% and by 2%.
 
Source displacement
In the same model shown in figure 1, the source was displaced by 2 grid points (10 m). When traces from the original model and the model with the displaced source are correlated, results show that the technique is successful at retrieving the source displacement.
We are currently working on the application of the technique to 3-D data. The synthetic data were simulated in a model created with the real topography of Arenal volcano, Costa Rica (Figure 2), with different source locations. We will apply the CWI method to see if we can track source movement in this model.
 
Conclusions
Synthetic seismic simulations have been extensively used to generate datasets to test the Coda Wave Interferometry technique in a wide range of scenarios. This extensive application to synthetic data has given us the confidence to move to using the method on real data. We have used it in the case of a shallow seismic profile, to see if the technique can be used in environmental geophysics as a monitoring tool for detecting soil water content variations and underground liquid spillage, and in a volcanic environment as an alternative method to monitor changes in the activity of the volcano (such as seismicity source migration or fluid movement).
 
 
  2. Seismology at Mt Vesuvius
By Daniela Pandolfi, UCD
 
Monitoring temporal changes has been the purpose of various studies in seismology over the last few decades. Variations in seismic wave propagation properties are related not just to changes in the physical properties of the medium but also to temporal variations in stress. Following a large earthquake, the variation in stress will cause a variation in macroscopic observable parameters, such as the velocity in the medium and/or the seismic coda. In order to detect such changes we study the coda, which is caused by multiple scattering in the medium. The coda is much more sensitive to subtle differences in the medium than the P and/or S phase onsets. We analyse doublets (i.e. highly coherent events with similar source positions recorded at the same receiver) in order to minimize secondary sources of error due to hypocentral errors and differences in travel path. We use the Coda Wave Interferometry (CWI) technique in the Mt. Vesuvius volcano region to detect velocity variations (smaller than one percent) over a temporal period spanning 1996-1999. The variations in velocity have been compared with the number of earthquakes per day, showing good agreement with the hypothesis that velocity changes are related to stress variation.
 
Looking for subtle temporal changes in seismic velocity at Mt. Vesuvius
The search for changes in the medium following stress redistribution has been discussed by many authors, so far with non-unique answers (Antolik 1996, Baisch and Bokelmann 2001). Problems arise in those studies principally from the difficulty of detecting such small variations, sometimes smaller than the statistical error. Changes associated with stress redistribution, fluid movement or temperature changes can be subtle, and therefore the use of later coda arrivals, dominated by multiply scattered waves, enhances our ability to detect these changes. The use of doublets, i.e. two events recorded at the same station at different times with highly coherent waveforms, can further reduce the error, especially that due to hypocentral parameters or travel paths. We used the Coda Wave Interferometry (CWI) technique to detect and discriminate between different changes, such as velocity variations in the medium, source location displacements and 'average scatterer displacement', using doublets recorded by a single receiver. We studied a region around the Mt Vesuvius volcano over a temporal period of 4 years (1995-1999). The velocity variations computed for each pair are re-organized to allow us to plot a total daily variation in the region, under the hypothesis of a daily linear increase, for events recorded at the crater station BKE. We also observe a correlation between the seismicity rate and the daily velocity. The changes appear to be related to the M=3.6 1999 earthquake, the biggest event in the region since 1972.
 
Detecting micromechanical damage due to stress variations using coda Q: Example from California
As we expect coda Q to be closely related to a material's intrinsic attenuation properties, we might also expect to see coda Q variations following large events. We monitor Q variation in a much wider (300 x 450 km) region, using a spatial stacking method to overcome some of the noise problems associated with coda measurements. Using many source-station pairs we stack Q values from thousands of seismograms onto a grid, assuming a single-scattering origin for coda generation. We produce images in sliding time windows starting one year before the earthquake and then compare them with a t-test per grid point. This technique has been applied to a region around the 1992 Landers (California) earthquake, over a temporal period of more than two years. The images show a strong variation immediately following Landers, over thousands of square kilometers, which seems to recover to pre-Landers values after about 4 months. Although the data are not ideal (as source-station distances are quite large), we suggest that this might indicate large-scale micro-mechanical damage resulting from large-scale stress redistribution. Furthermore, there is a tentative suggestion of a precursory coda Q change in the region of the 1994 Northridge earthquake. Finally, we have set up 2D and 3D finite-difference simulations to better constrain the nature of the observed variations.
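Schematically, the per-cell comparison can be sketched as below: Q estimates from many source-station pairs are binned onto a grid for a pre-event and a post-event window, and each cell is tested with a two-sample t-test. The grid size loosely follows the 300 x 450 km region quoted above, but the Q values are random placeholders rather than measured data.

```python
# Schematic sketch of the spatial-stacking comparison: coda-Q estimates from many
# source-station pairs are accumulated onto a grid for pre- and post-event windows,
# and each grid cell is compared with a two-sample t-test. Values are synthetic.
import numpy as np
from scipy import stats

nx, ny = 30, 45                     # grid over the ~300 x 450 km study region
rng = np.random.default_rng(11)

def stack_q(n_obs, q_mean):
    """Collect per-cell lists of Q estimates (here: synthetic placeholders)."""
    cells = [[[] for _ in range(ny)] for _ in range(nx)]
    for _ in range(n_obs):
        i, j = rng.integers(nx), rng.integers(ny)
        cells[i][j].append(rng.normal(q_mean, 20.0))
    return cells

before = stack_q(20000, 200.0)      # window before the main event
after = stack_q(20000, 180.0)       # window after (lower Q -> stronger attenuation)

tmap = np.full((nx, ny), np.nan)
for i in range(nx):
    for j in range(ny):
        if len(before[i][j]) > 5 and len(after[i][j]) > 5:
            tmap[i, j] = stats.ttest_ind(before[i][j], after[i][j]).statistic

print("cells with |t| > 2 (significant at ~95%):", np.sum(np.abs(tmap) > 2))
```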
 
Results for Mt. Vesuvius
The CWI technique is implemented in Matlab; Fig. 1 shows an example of the doublets used and how velocity changes are computed. Doublets at Mt Vesuvius are detected at the BKE station, with distances smaller than 3 km. We observed the velocity variation for the year 1999. We plot the daily variation in Fig. 2, where we also plot the daily rate of seismicity and indicate the time of the 9 October 1999 M=3.6 event.
We explain the observed variation as a stress-variation phenomenon that can be summarized as a first phase of stress loading (possibly continuing over a few years); a second, much faster phase of stress release, detected in a time window spanning a few days and coincident with the opening or enlargement of cracks in the medium; and a final phase of stress readjustment following the earthquake.
We also compute synthetic seismograms with a 2D visco-elastic finite-difference scheme in a heterogeneous fractal 2D medium, both to test the CWI method before using it on real data and to analyse the imprint left on the coda by variations in the source function.
 
Results for Landers in California
In this much wider region the spatial variations are detected by a t-test for each point of the grid: a t-test value of 2 corresponds to a confidence level of approximately 95%. The main variations are observed kilometres away from the main event.
 
  3. 3D Magneto-telluric Inversion problems
By Dmitry Avdeev, DIAS
 
Together with CosmoGrid student Anna Avdeeva, I have been working on the development of efficient methods for the numerical solution of the three-dimensional (3-D) electromagnetic (EM) inverse problem. The solution of this problem has a number of inherent bottlenecks. First of all, the problem is highly non-linear and ill-posed (as are many inverse problems). In addition, the problem is large-scale, usually with tens of thousands of unknowns to be recovered. Practically, this means that the numerical solution of such a problem requires days or even months on regular PCs or workstations, and a proper numerical implementation naturally requires a multi-processor framework. This is why we need powerful computing facilities.
An additional but important difficulty arises from the theoretical grounds of 3-D EM inversion: most of the traditional methods previously developed for numerical optimisation are simply not applicable to large-scale problems, being prohibitive in terms of the computational time and memory required.
Thus, firstly, we have studied the applicability of several up-to-date numerical optimization methods to the practical solution of the inverse problem. As a first step towards the solution of the full problem, we have applied and compared these approaches on a 1-D magnetotelluric (MT) problem. The comparisons performed show that a limited-memory quasi-Newton optimization method with simple bounds is a reasonable choice. The results of this study are summarized in a paper that we have submitted to Geophysics.
Secondly, a limited-memory quasi-Newton (QN) method with simple bounds has been applied to develop a novel, fully 3-D MT inversion technique. This nonlinear inversion is based on iterative minimization of a classical Tikhonov-type regularized penalty functional. But instead of the usual model space of log resistivities, the approach iterates in a model space with simple bounds imposed on the conductivities of the 3-D target. The limited-memory QN method requires storage proportional to ncp x N, where N is the number of conductivities to be recovered and ncp is the number of correction pairs (in practice, only a few). This is much less than the requirements imposed by other Newton-type methods (which usually require storage proportional to N x M, or N x N, where M is the number of data to be inverted). The convergence of the inversion to the solution is drastically accelerated by an efficient calculation of the gradients of the misfit. The gradients are calculated using an adjoint method based on EM field reciprocity. The powerful integral equation forward modelling code ‘x3d’ by Avdeev et al. (1997, 2002) is employed as the engine for this inversion. Convergence, performance and accuracy of the inversion are studied on a synthetic, but realistic, 3-D MT example.
Methods applied.
(1) The solution is to be based on one of the modern non-linear optimization methods. Limited memory quasi-Newton (QN) methods are becoming a popular tool for the numerical solution of 3-D EM large-scale inverse problems. The reason is that the methods require calculation of gradients of the misfit only, while at the same time avoiding calculations of second-derivative terms. They also require storing merely several pairs of so-called correction vectors that dramatically diminish the storage requirements.
(2) An effective way to accelerate the numerical solution is to calculate the gradients using an adjoint method. It is known that such a calculation of gradients is equivalent to only two forward modellings and does not depend on the number of conductivities to be recovered. We have been developing the theory and basic equations for the calculation of the gradients, which is based on the adjoint method.
(3) As the engine for our inversion a 3-D forward modelling solver is chosen. It combines the advantages of volume integral equation methods with those of Krylov subspace iterations.
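The overall strategy can be sketched in a few lines using a generic bounded limited-memory quasi-Newton routine (here scipy's L-BFGS-B, which keeps only a few correction pairs) applied to a regularised misfit with bound constraints on the model. The toy linear forward operator and the simple ridge-type stabiliser below stand in for the real x3d forward modelling and regularisation; they are assumptions made purely for illustration.

```python
# Conceptual sketch (not the actual x3d-based code) of the inversion strategy:
# minimise a Tikhonov-regularised data misfit over bounded conductivities with a
# limited-memory quasi-Newton method. The "forward model" is a toy linear operator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n_cells, n_data = 200, 50
G = rng.normal(size=(n_data, n_cells))          # toy forward operator
true_sigma = np.clip(rng.normal(0.1, 0.05, n_cells), 1e-3, 1.0)
data = G @ true_sigma + rng.normal(0.0, 0.01, n_data)
lam = 0.1                                        # regularisation parameter

def penalty_and_gradient(sigma):
    r = G @ sigma - data
    misfit = 0.5 * r @ r + 0.5 * lam * sigma @ sigma
    grad = G.T @ r + lam * sigma                 # adjoint-style gradient
    return misfit, grad

result = minimize(penalty_and_gradient, x0=np.full(n_cells, 0.1), jac=True,
                  method="L-BFGS-B", bounds=[(1e-3, 1.0)] * n_cells,
                  options={"maxcor": 5})         # keep only 5 correction pairs
print("converged:", result.success, " final penalty:", round(float(result.fun), 4))
```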
To date we have developed a novel approach to 3-D MT inversion. The most essential part of our derivation is that we developed and implemented the adjoint method to derive explicit expressions for the calculation of the gradients of the misfit. Our development is quite general and is not limited to magnetotellurics alone; it can be applied to a variety of EM problems, such as marine controlled-source EM. With a synthetic MT example, we have obtained the first promising results (see Figure 1) on the convergence of our solution. The method still needs further development to become an end-user product of universal value to the EM community.
 
  4. Numerical modeling of fault rupturing
By Igor Podgorski, UCD
 
The way in which earthquake ruptures propagate has a significant influence on stress redistribution in the crust. In particular, some recent observations suggest that ruptures can propagate faster than the shear wave velocity in rock. If this is the case then the ensuing shock waves (figure 2) could have important implications for seismic hazard estimation. Geological faults are very complex: they are rough, have variable stresses and strengths along their lengths, and can be over-pressured by trapped fluids. Here we use numerical schemes based on discrete molecular-dynamics models to investigate the controls on rupture velocities, paying particular attention to super-shear rupture propagation.
We use a Discrete Particle Scheme (DPS), which consists of a hexagonal lattice with a large number of 2D disk particles connected by spring bonds. The figures show strike-slip rupture on a planar fault for a mode II crack; the rupture propagates above the shear wave speed, close to the speed of the compressional waves. The scheme, which already works well for wave propagation, is being further developed for modelling complex rock deformation processes such as rupture propagation and co-seismic and post-seismic dynamic and static stress changes. A number of numerical features were added to the scheme, such as a mechanism for mapping any number of faults of arbitrary geometry, stiffness and strength; absorbing boundaries; a routine for setting an arbitrary stress field in the rock model; and a stabilisation routine for bringing the model to a state of equilibrium. In addition to the new features in the DPS, a number of auxiliary programs were developed, for example for storing the simulation data and for visualising the results. Because of an initial focus on short run times and small models the code has not yet been parallelized and only a serial version exists. However, we expect to parallelise the code, as that would greatly expand the application area of the model.
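To make the idea of the scheme concrete, the toy sketch below builds a small triangular lattice of particles joined by harmonic springs, integrates it with a velocity-Verlet step, and removes any bond whose strain exceeds a threshold. All parameter values, and the loading applied to one edge, are arbitrary illustrations rather than the DPS implementation.

```python
# Toy sketch of the discrete-particle idea: disk particles on a triangular
# (hexagonal-packing) lattice connected by harmonic springs, integrated with a
# velocity-Verlet step; a bond breaks when its strain exceeds a threshold.
import numpy as np

NX, NY = 10, 8
A = 1.0                                   # lattice spacing
K = 50.0                                  # spring stiffness
MASS = 1.0
DT = 0.01
BREAK_STRAIN = 0.05                       # bond fails beyond 5% extension

# Build a triangular lattice and connect nearest neighbours.
pos = np.array([[ix * A + 0.5 * A * (iy % 2), iy * A * np.sqrt(3) / 2.0]
                for iy in range(NY) for ix in range(NX)])
vel = np.zeros_like(pos)
bonds = [(i, j) for i in range(len(pos)) for j in range(i + 1, len(pos))
         if np.linalg.norm(pos[i] - pos[j]) < 1.1 * A]
rest = {b: np.linalg.norm(pos[b[0]] - pos[b[1]]) for b in bonds}

def forces(p):
    f = np.zeros_like(p)
    for (i, j) in bonds:
        d = p[j] - p[i]
        length = np.linalg.norm(d)
        f_ij = K * (length - rest[(i, j)]) * d / length   # Hooke's law along the bond
        f[i] += f_ij
        f[j] -= f_ij
    return f

# Stretch the right-hand edge slightly, then integrate and let over-strained bonds break.
pos[pos[:, 0] > (NX - 1.5) * A, 0] += 0.1 * A
f = forces(pos)
for step in range(200):
    vel += 0.5 * DT * f / MASS
    pos += DT * vel
    f = forces(pos)
    vel += 0.5 * DT * f / MASS
    bonds = [b for b in bonds
             if np.linalg.norm(pos[b[1]] - pos[b[0]]) < (1.0 + BREAK_STRAIN) * rest[b]]

print("bonds remaining after loading:", len(bonds))
```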
We use the DPS to model the effects of surrounding faults on the static stress changes caused by an earthquake fault. When an earthquake occurs on a master fault the surrounding slave faults will react to the associated stress changes by slipping. The degree of slip is controlled by the compliance of the neighbouring fault. In turn these slave faults could significantly affect the local static stress-change field produced by the master fault and thereby either suppress or promote aftershocks in the area.
Static stress changes for the “master” fault are usually calculated as a dislocation in a homogeneous half space. That is, the “slave” faults are absent from the calculation. Here we use the DPS to calculate stress changes due to the Landers earthquake, investigating the effect of the presence of “slave” faults. We impose slip on the Landers fault based on real data inversions.
Fault surfaces during an earthquake do not slide instantaneously; instead, rupture propagates at a finite velocity along the fault plane. Recently there have been reports of observations of rupture speeds faster than the shear wave speed. We use the DPS to model rupture propagation at sub-Rayleigh speeds along mode I cracks and at supershear speeds along mode II shear cracks. We vary the lateral properties of the modelled faults to observe variations in the rupture speed and the resulting kinematic effects.
In addition, we are interested in rupture propagation beyond the existing fault boundaries into intact rock. We observe highly complex phenomena of wing-crack creation, fracture surface branching and rupture branching, produced by rupture instability as the rupture nears its maximum speed for the given stress conditions.
Modelling static stress changes on a large scale (hundreds of kilometers)
Slip on a fault causes both static and dynamic (seismic wave) stress perturbations. The stress change is either positive or negative, hence increasing or decreasing, respectively, the probability of occurrence of aftershocks. We model such static stress changes and the influence of faults surrounding an earthquake fault on the stress distribution.
 
We model static stress changes on a set of faults from an area around the Landers fault in California. A known slip distribution from the 1992 earthquake is imposed along the “master” fault, which ruptured during this event. The surrounding rock is allowed to deform to accommodate the slip. Faults that are in proximity to the “master” fault influence the stress redistribution (figure 9), which could alter the probability of the occurrence of aftershocks. Results from static stress changes could be used for seismic hazard estimates in highly seismically active areas after large earthquakes.
Future work involves the use of measured intrinsic friction values to calculate the Coulomb stress. The results will be compared with the real 1992 Landers aftershock data.
 
Besides changing the properties of the fracture, such as strength or stiffness, one can influence the rupture process by varying the fracture geometry. Faults are almost never smooth and straight, but are of very irregular shape, containing many jogs, kinks, bends and ramps. Such changes in geometry locally change the state of stress and the rupture speed. A jog of varying angle is placed in the middle of a smooth planar fault of uniform strength and stiffness. As the jog angle increases the rupture speed decreases, up to an angle of approximately 40 degrees. At angles greater than 40 degrees the jog stays intact, and the remaining part of the fault ruptures, triggered by a detached P wave front. However, since the DPS uses a regular lattice scheme, which shows very strong fracture anisotropy, results from geometric variations should only be used for studying kinematic effects.
Rupture propagation
Spontaneous dynamic rupture propagation is simulated along weak paths, such as faults, and through the intact rock. Initially, only the stress distribution and a micro-mechanical brittle fracture criterion are specified. The friction value is obtained from measurements of the intrinsic friction.
When the rupture reaches the end of a fault one of two possible scenarios can occur: the rupture may stop, as the intact rock usually has higher strength than the fault, or it may continue rupturing through the intact rock. Rupturing through intact rock is of a different nature from rupturing along a weak path. Depending on the crack mode, the rupturing exhibits complex and rich behavior through phenomena such as kinking, bending, surface roughening, side branching, rupture branching and the creation of wing cracks.
Rupture propagates through the intact rock under high tensional stress (as for a mode I crack) and approaches the maximum speed of a mode I crack, i.e. the Rayleigh wave speed. However, it is not able to propagate stably at that speed. Instead, as the rupture speed increases the fracture surface becomes rough and side branches appear. Eventually, as the speed increases towards the maximum value, the rupture becomes too unstable and forks into two branches. This process and the resulting fracture path are shown in figure 14; the inset shows surface roughening and side branching.
 

Different stress distributions produce different behavior of the rupture process through the intact rock, in a similar way as for rupturing along the weak path. When the stress distribution applied throughout the model is the same as for a mode II fracture, the rupture propagates at supershear speeds. Figure 15 shows rupture propagating through such a medium when rupturing is constrained to only one side of the fault. Initially the rupture starts along the weak path, and when it reaches the intact rock it continues to rupture, but at a slower and highly variable speed and direction. A wing crack, characteristic of mode II cracks, is created as the rupture continues through the intact rock.

 

C. Atmospheric Physics
  1. Climate modeling and C4I
By Ray Mc Grath, Met Eireann/C4I
 
 
C4I has developed a regional climate modelling capability and simulated the future Irish climate for the period 2021-2060 by dynamically downscaling the outputs from Global Circulation Models (GCMs). The regional information has been used to assess the risk of future flooding in several catchment areas. The work is currently being extended by running an ocean model, driven by the climate model, to develop a climatology of storm surges in Irish coastal areas and to assess the impact of climate change on coastal flooding. An ensemble modelling approach, using the outputs from many GCMs, is also underway to quantify the uncertainty in climate modelling. The regional climate model is Grid-capable.
The Community Climate Change Consortium for Ireland (C4I) Project was established to promote climate research in Ireland and to provide a capability for addressing climate issues of national concern in an effective, coordinated way. It is centered on a Regional Climate Analysis Modelling and Prediction Centre (RCAMPC) based at the headquarters of Met Éireann at Glasnevin, Dublin. While the research work is focused on regional climate modelling, the Project also seeks to provide access to its modelling facilities, either locally or in Grid computing environments, for Irish researchers. It also seeks to promote interest in environmental science and offers support for related projects seeking funding in European programmes. In 2005 C4I became a partner in the EU-funded ENSEMBLES project.
C4I principally uses a Regional Climate Model (RCM) developed from the HIRLAM model by the Rossby Centre. The RCM is used to dynamically downscale the output from global climate models. Two 40-year simulations over a limited area with a horizontal resolution of approximately 13 km were completed in 2004/5: a validation simulation (1961-2000) using ERA-40 re-analysis fields as boundary driving data and a future simulation (2021-2060) using ECHAM4/OPYC3 driving data (B2 SRES scenario).  In 2005 work has focused on quantifying the uncertainty in climate change projections by downscaling the outputs from other GCMs (ECHAM5 and HadCM3 initially); this work is also being carried out in partnership with the ENSEMBLES project.
Work has also focused on extreme weather events by exploring the benefits of downscaling to very high resolution (~5 km).  The outputs have been used to assess the risk of future flooding in catchment areas. C4I has also begun to evaluate the impact of climate change on storm surge frequency in coastal areas by running an ocean model driven by the RCM outputs.

The grid resolution of GCMs used for climate modelling is typically about 2.5°, representing an ability to resolve features no smaller than about 500 km (twice the grid interval). This is adequate for a general representation of the global climate, but quite inadequate for simulating the detail and pattern of change in a region the size of Ireland. A proven method of generating more precise, accurate and detailed information on a national scale is the use of a regional climate model (RCM). Such a model has a typical resolution of 0.5°, allowing representation of features down to scales of approximately 100 km. By the technique of double nesting, even greater resolution over Ireland is feasible. Conditions on the domain boundaries are provided from a GCM run on a coarser grid. The Regional Climate Model is capable of producing guidance at a resolution far better than anything currently available from international centres. At the same time, the configuration of the RCM (embedded in a global model) ensures that all the information present in the GCM is transferred accurately and efficiently to the higher-resolution model.
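As a quick check on the scale arguments above, the short Python sketch below converts a model grid spacing in degrees into the smallest resolvable feature size, taking a resolvable feature to span two grid intervals and approximating one degree as 111 km; the figures are indicative only.

    KM_PER_DEGREE = 111.0   # approximate length of one degree of latitude

    def smallest_resolvable_km(grid_spacing_deg):
        """Smallest feature a model can resolve, taken as two grid intervals."""
        return 2.0 * grid_spacing_deg * KM_PER_DEGREE

    print(smallest_resolvable_km(2.5))   # global model: roughly 555 km
    print(smallest_resolvable_km(0.5))   # regional model: roughly 111 km
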

 

D. Computer Science
 
  1. Management of the Irish Grid
By John Walsh, TCD
 
We have set up, developed and currently support a national Grid infrastructure providing Grid access to several national virtual organizations (VOs), including CosmoGrid, WebComG, GeneGrid and MarineGrid. Other interested parties are also supported by a default VO called SoloVO. Grid-Ireland now has a point of presence (PoP) in 18 institutions within Ireland, including two PoPs in Northern Ireland (QUB and Armagh Observatory). In addition, we have firmly established the Irish Grid infrastructure within the European and international Grid via the EU-supported EGEE project. Under EGEE, the UK and Ireland (UK/I) form one of nine federations among 27 countries. UK/I functions as a distributed Resource Operations Centre (ROC). All 18 sites are registered as EGEE resource centres. Phase II of this project is currently under review in the European Commission. We have developed a fully functional replica of a subset of the national infrastructure, which has allowed us to do non-intrusive deployment testing without unduly impacting the production service. Several basic training courses were offered; to date, we have trained about 60 participants.
As technical manager for Grid-Ireland, I have helped establish the national Grid infrastructure across 18 sites (including 2 in Northern Ireland). The Grid Operations Centre (GOC), which operates the core Grid-Ireland infrastructure, is based in Trinity College, Dublin. The Grid Operations Centre team meets weekly to discuss ongoing issues and to plan work schedules. The operations team monitors the infrastructure centrally, detecting infrastructure problems in reasonable time. This is done using a combination of Nagios plugins and a series of tests called Site Functional Tests, which probe the stability of the grid middleware and infrastructure. I coordinate regular Site Manager meetings, involving the site managers donating computational resources to CosmoGrid and other Grid-Ireland Virtual Organisations.
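The probes themselves are not reproduced here; as an illustration of what a lightweight check of this kind looks like, the sketch below follows the standard Nagios plugin convention (exit code 0 for OK, 1 for WARNING, 2 for CRITICAL) and simply tests whether a site's gatekeeper port answers. The host name and timeout are invented for the example; port 2119 is the classic Globus gatekeeper port.

    import socket
    import sys

    HOST = "gridgate.example.ie"   # hypothetical site head node
    PORT = 2119                    # classic Globus gatekeeper port
    TIMEOUT = 10                   # seconds

    try:
        with socket.create_connection((HOST, PORT), timeout=TIMEOUT):
            print(f"OK - {HOST}:{PORT} is reachable")
            sys.exit(0)
    except socket.timeout:
        print(f"WARNING - {HOST}:{PORT} timed out after {TIMEOUT}s")
        sys.exit(1)
    except OSError as err:
        print(f"CRITICAL - {HOST}:{PORT} unreachable ({err})")
        sys.exit(2)
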
In addition, we have set up a testbed infrastructure, some of which replicates the national infrastructure. This testbed permits a high degree of testing, allowing us to catch and debug deployment problems inherent in some middleware releases in the testbed environment rather than in the production services. The majority of the testbed environment is based on virtual machines. This has proved to be a very fruitful area of work; our ideas and techniques have been used by other members of the EGEE project. We undertook several major upgrades of the middleware and infrastructure using the testbed environment. A major upgrade and migration to Scientific Linux/Quattor is scheduled to be complete in mid-December.
I was involved in the organization of the joint CosmoGrid/Webcom-G/MarineGrid/EGEE kickoff conference in University College Cork (April 2004). This involved providing support for wireless network access for conference delegates.
I currently act as the deputy supervisor for Trinity College’s participation in the EGEE project. The main aim of the project is the establishment and deployment of a pan-European grid infrastructure. This role is expected to continue into the next phase of the project, starting April 2006.
A helpdesk has been set up to track user requests for support and to handle infrastructure-related trouble tickets. It has now been integrated with the EGEE Global Grid User Support (GGUS) helpdesk. We also use the HEAnet network trouble ticket system to identify possible grid outages due to network problems.
We have run three introductory Grid courses, giving basic Grid training to approximately 60 people:
•    Site Administrator introduction to Grid and Grid Deployment (2.5 days)
•    Introduction to Grid (2 days, including project work)
•    Introduction to Grid (1 day basic course)
Full details of these courses can be found in the Grid-Ireland 6-monthly reports to the CosmoGrid executive. A further introductory course is planned for December 2005 for site administrators. A generic users' course, in conjunction with the National e-Science Centre, Edinburgh (NeSC), is planned for early 2006.
Grid-Ireland also performs outreach, identifying new communities of users and exposing them to Grid technology. We have identified several new applications, including:
•    Bioinformatics
•    Astronomical data reduction using Artificial Intelligence techniques
•    “Gridifying” a widely used Radiotherapy Application
  2. Grid Applications in Webcom-G
By Sunil John, UCC
 
The area of Grid Computing has been the center of much research recently. Grids can provide access to vast computing resources, from hardware to software. Despite all the attention Grid Computing has received, the development of applications for execution on grids still requires specialist knowledge of the underlying grid architecture.
This research provides enhanced application development support for the WebCom-G middleware. The idea is to provide a framework allowing high-level languages, as well as specification languages, to interact with WebCom-G. This is achieved by using specialised Translators to translate existing source code and to interpret script files. The output from these Translators is a Condensed Graph representation of the application, capable of being executed by WebCom-G. Expressing applications as Condensed Graphs exposes any parallelism present and allows them to avail of WebCom-G's fault tolerance, load balancing, scheduling and security mechanisms.
Applications executing on the WebCom-G platform are expressed as Condensed Graphs. A condensed graph represents a program as a directed, acyclic graph. The semantics of condensed graphs permit programs to express data-driven, demand-driven and control-driven computations using a single, uniform formalism. In this research, a programming environment for WebCom-G has been developed and is used to support the execution of legacy applications via the Translators described above.
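The WebCom-G Condensed Graph format itself is not reproduced here, but the key point, that a directed acyclic graph exposes parallelism because any node whose inputs are available may fire, can be illustrated with a small generic Python sketch; the graph, task names and thread-based executor below are purely illustrative and are not part of WebCom-G.

    from concurrent.futures import ThreadPoolExecutor

    # Illustrative DAG: each node holds a function and the nodes it depends on.
    tasks = {
        "load_a":  (lambda: 2, []),
        "load_b":  (lambda: 3, []),
        "combine": (lambda a, b: a + b, ["load_a", "load_b"]),
        "report":  (lambda c: print("result:", c), ["combine"]),
    }

    def run_dag(tasks):
        done = {}
        with ThreadPoolExecutor() as pool:
            while len(done) < len(tasks):
                # All nodes whose dependencies are satisfied may fire concurrently.
                ready = [n for n, (_, deps) in tasks.items()
                         if n not in done and all(d in done for d in deps)]
                futures = {n: pool.submit(tasks[n][0], *[done[d] for d in tasks[n][1]])
                           for n in ready}
                for n, f in futures.items():
                    done[n] = f.result()
        return done

    run_dag(tasks)   # "load_a" and "load_b" run in parallel before "combine"
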
 
  3. Grid Administration through Webcom-G applications
By Brian Clayton, UCC
 
The WebCom-G Operating System is a modular system designed to combine the benefits of Grid Computing with those of the WebCom system. This work involves creating new modules for WebCom-G to facilitate computation on Grid and cluster systems. These modules address information gathering and presentation, the automation of systems administration, and integration with MPI for High Performance Computing. The WebCom-G Operating System is proposed as a Grid Operating System: constructed around a WebCom kernel, it offers a rich suite of features to enable the Grid. It utilises the tested benefits of the WebCom metacomputer and leverages existing grid technologies such as Globus and MPI. The aim of the WebCom-G OS is to hide the low-level details from the programmer while providing the benefits of distributed computing.
Resources in a heterogeneous environment like the Grid join and leave at times of their own choosing. Gathering information about and analysing resources is important for resource management, scheduling, load balancing and fault tolerance. To build strict models (process management models, programming models, service-oriented models) for quality of service, a proper Grid Operating System is needed. As with most operating systems, various components provide the required functionality.
This work involves creating several new components: GridAdmin, the WebCom-G Information System and WebComMPI. GridAdmin is a module that allows automated administrative tasks within WebCom-G. The WebCom-G Information System is a module that provides access to information, whether system information, cluster information or WebCom information. WebComMPI is an MPI job submission portal based on WebCom-G where users need only specify the name of an MPI binary and the number of processes required for its execution. It creates an MPI parallel machine chosen from an appropriate subset of the currently available WebCom clients, creates a machines file and distributes the MPI binary appropriately. These components will be deployed and tested on the clusters in UCC, and in the context of Grid-Ireland.
 
GridAdmin
Administration of Grid resources is a time-consuming and often tedious job. Most administrative requests are predictable and, in general, require knowledge of the local resources and the requester. Here we describe a system to automate administrative requests, such as resource reservation and user account management. We propose using trust metrics to decide the merits and suitability of each request. These metrics are implemented, using trust management techniques, in a practical system we call GridAdmin.
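The actual trust-management machinery of GridAdmin is not described in detail here; the fragment below is only a schematic sketch of the idea of scoring a request against a threshold before granting it automatically, with the attributes, weights and cut-off all invented for the example.

    # Illustrative only: weighted trust attributes for an administrative request.
    WEIGHTS = {"known_vo_member": 0.5, "site_admin_endorsed": 0.3, "past_good_usage": 0.2}
    THRESHOLD = 0.7   # invented cut-off for automatic approval

    def approve(request_attrs):
        """Approve automatically if the weighted trust score reaches the threshold."""
        score = sum(w for attr, w in WEIGHTS.items() if request_attrs.get(attr))
        return score >= THRESHOLD

    print(approve({"known_vo_member": True, "site_admin_endorsed": True}))   # True  (0.8)
    print(approve({"past_good_usage": True}))                                # False (0.2)
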
WebCom-G Information System
The WebCom-G Information System consists of three components: the Info Daemon, the Information Module and the Information Manager. Every Information System has an Information Module. The Info Daemon runs in a separate thread and reports to the Information Module. The Information Module can choose to communicate with either the Info Daemon directly (if it's only talking to a single machine) or it can talk to a remote Information Manager (which is a promoted Information Module and so would have information about more than one machine) or to a third party information service provider such as a Globus GIIS. Information is then available to WebCom-G modules and to WebCom-G applications.
WebComMPI
WebCom-G can readily be used to generate a parallel architecture from component machines, to dynamically create a machines file and startup script and, if necessary, to stage the binary and input files. Change management is also addressed by employing WebCom-G to manage user accounts, shared file space, data storage and on-going run-time costs. Users interact with the system by means of a Web Portal. This allows users to submit MPI jobs without needing knowledge of the underlying system.
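The portal's internals are not given here, but the mechanical step it automates, writing a machines file from the chosen hosts and building the corresponding launch command, can be sketched as follows; the host names are invented and the command uses generic MPICH-style mpirun options, not necessarily those used by WebComMPI.

    def build_mpi_job(binary, n_procs, hosts, machinefile="machines"):
        """Write a machines file (one host per process) and return an mpirun command."""
        with open(machinefile, "w") as f:
            for host in hosts[:n_procs]:
                f.write(host + "\n")
        return f"mpirun -np {n_procs} -machinefile {machinefile} ./{binary}"

    # Illustrative host names only.
    cmd = build_mpi_job("my_solver", 4, ["node01", "node02", "node03", "node04"])
    print(cmd)   # mpirun -np 4 -machinefile machines ./my_solver
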
 
  4. Discrete-Dipole Approximation
By James Mc Donald, NUIG
 
The Discrete-Dipole Approximation (DDA) is a numerical scheme for calculating extinction by non-spherical particles where the wavelength of the incident light is comparable to the particle size. Unfortunately, its potential is crippled by the huge computational overheads that the scheme requires. This project addresses some of the possible solutions so that the true power of the scheme can be realised. A new kernel has been designed and constructed which exploits Toeplitz symmetries and achieves a reduction of approximately 60% in the computational and memory requirements of the code.

The DDA is an extremely intuitive and flexible numerical scheme, based on the so-called Volume Integral Equation, for the simulation of light scattering, absorption and extinction by arbitrarily shaped particles. The DDA replaces a continuum scatterer by an ensemble of Nd dipoles situated on a cubic lattice. Each of the Nd dipoles is driven by the incident electric field and by contributions from the Nd-1 other dipoles. The heart, and computational “Achilles heel”, of this scheme is a very large and fully populated set of 3Nd complex linear equations. The computational requirements for systems based on such lattices are extremely sensitive to the size of the lattice used. For 3D lattices, even a small increase in the dimensions of the lattice significantly increases the computational overheads; thus the true potential of the scheme is encumbered by the severe and often prohibitive burdens imposed by the computational scale required to produce meaningful results. It is crucial that every possible algorithmic or computational enhancement be exploited in order to augment its applicability. The discrete nature of the DDA permits the construction of a highly parallel code, and the aim of this work is to construct such a parallel code, implementing numerous novel computational improvements.

Until now, we have been concentrating on various computational advancements that can be made to the kernel of the scheme. The extremely large linear system is solved using conjugate-gradient methods, whose cardinal computational hurdle manifests as matrix-vector products. Exploiting Toeplitz symmetries allows the otherwise impracticable system to be treated expeditiously via discrete convolution theory: such systems can employ Discrete Fourier Transforms (DFTs) to accelerate the matrix-vector products required within the iterative solution schemes. However, owing to the manipulations required to facilitate the convolutions, the DFTs are either highly degenerate or incorporate an inordinate fraction of superfluous calculations. Since the 3D DFT can be dismantled into its constituent 1D DFTs, it is possible to remove this redundancy and acquire the same result with approximately 60% fewer computations and 60% less memory than standard DFT techniques. With this complete, we are now turning our attention to the full parallel code.
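The full 3D block-Toeplitz kernel is beyond a short example, but the underlying trick, turning a Toeplitz matrix-vector product into a circular convolution that FFTs evaluate in O(n log n) rather than O(n^2), can be shown in one dimension with NumPy. This is a generic sketch of the technique under assumed inputs, not the project's actual kernel.

    import numpy as np

    def toeplitz_matvec_fft(c, r, x):
        """Multiply the Toeplitz matrix with first column c and first row r by x,
        via circulant embedding and FFTs (O(n log n) instead of O(n^2))."""
        n = len(x)
        col = np.concatenate([c, r[:0:-1]])         # first column of the embedding circulant
        xp = np.concatenate([x, np.zeros(n - 1)])   # zero-pad x to the circulant size
        y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
        return y[:n]

    # Quick check against the dense product for a random complex system.
    n = 64
    c = np.random.rand(n) + 1j * np.random.rand(n)
    r = np.concatenate([[c[0]], np.random.rand(n - 1) + 1j * np.random.rand(n - 1)])
    x = np.random.rand(n) + 1j * np.random.rand(n)
    T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
    print(np.allclose(T @ x, toeplitz_matvec_fft(c, r, x)))   # True
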
III. Institutional Testimonials
 
By Andy Shearer
 
CosmoGrid funding has enabled NUI Galway to significantly expand its computational physics programme. We hired a talented group of computational scientists covering atmospheric physics, adaptive optics and computational astrophysics. From this core group we also expanded our interests into other areas, such as the complete end-to-end modelling of the Euro50 telescope. Here, techniques developed under our main programme [in this case parallel ODE solvers] could be used to speed up this modelling by a factor of at least 100. Beyond the direct scientific output, as measured by PhD theses completed and journal papers published, CosmoGrid funding allowed us to develop a community of computational scientists. This team of people has been vital for the success of the NUI Galway programme. We would like, as far as is practicable, to keep this group intact. Clearly the gap in PRTLI funding means that this is not possible for all the researchers.
 
Astrophysics
 
A major part of our activity has been the consolidation of the particle-in-cell code for simulating pulsar and brown dwarf magnetospheres. This builds upon previous work on reverse engineering optical emission zones on purely geometric grounds. Our modelling work cannot be seen in isolation from our separately funded observational programme, which involves observations of brown dwarfs in the near-IR and radio and of pulsars in the radio, optical and gamma-ray regions. CosmoGrid funding has enabled us to build up a group with expertise in both observations and theory, and we have developed good international collaborations with groups in Italy, France, Germany, the Netherlands and the US. Our adaptive optics programme funded under CosmoGrid has also led to direct astronomical spin-offs: we are preparing a paper on supernova remnant searches using extremely large telescopes, which combines numerical modelling and observational constraints with an understanding of the capabilities of the Euro50 telescope. We are also preparing two further papers, one describing our original reverse engineering model to determine emission height in pulsar magnetospheres and one describing the pulsar workbench. The FS Aur paper shows the need to model experimental results in order to determine the significance of particular observations.
 
Atmospheric Physics
 
CosmoGrid funding has allowed us to significantly reduce the computational time and memory needed to estimate accurate light-scattering parameters over the full range of instrument size bins. These physical parameters are commonly estimated using only the mean diameter of each instrument size bin, which introduces substantial errors. Proper estimation is now achievable in reasonable time thanks to the computational resources provided by CosmoGrid. The improved estimates also extend to larger particle size parameters, thanks to the large-scale memory facilities available through the CosmoGrid project.
 
Adaptive Optics
 
Frame selection is an astronomical image processing technique. It assumes that imaging an astronomical object through Earth’s turbulent atmosphere as a series of short “snapshots” is a randomly distorting process. If the telescope aperture is matched to the properties of the atmosphere, a large fraction of these images may, in fact, show little distortion. From very many “snapshots” of the star, the best ones are selected and co-added to form the final image. The method, also known as “lucky imaging” or “lucky exposures”, is now often used for high resolution astronomy. Work has concentrated on a hybrid technique involving the use of post-processing by frame selection on the images already corrected in real time by an adaptive optics system. Initially, modelling concentrated upon a parallel implementation of an atmospheric propagation model. The IDL code has been parallelized using MPI and it is routinely executed on a 32-node cluster at NUI-Galway. The results of the simulation – first order statistics describing the process of short-exposure imaging – have been checked against real data obtained through collaboration with the Center for Adaptive Optics, University of California, Santa Cruz. Differences in the statistics were identified and quantified. It was found that the quality of AO-corrected snapshots is best described by different statistical criteria than in the case of images obtained directly, without AO correction. The resulting mathematical description of the process will be useful in developing new observing strategies and post-processing algorithms. This research can be extended to the case of coronagraphic images of extremely faint sources.
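A minimal NumPy sketch of the selection-and-co-addition step described above: frames are ranked by a simple sharpness proxy (the brightest pixel), the best fraction is kept, each kept frame is re-centred on its peak, and the survivors are averaged. The proxy and the 10% selection fraction are illustrative choices, not the group's actual criteria.

    import numpy as np

    def lucky_image(frames, keep_fraction=0.1):
        """Select the sharpest short exposures and co-add them.

        frames : array of shape (n_frames, ny, nx)
        """
        # Rank frames by peak brightness, a crude proxy for low distortion.
        scores = frames.max(axis=(1, 2))
        n_keep = max(1, int(keep_fraction * len(frames)))
        best = np.argsort(scores)[-n_keep:]

        ny, nx = frames.shape[1:]
        stacked = np.zeros((ny, nx))
        for i in best:
            # Re-centre each selected frame on its brightest pixel before co-adding.
            py, px = np.unravel_index(np.argmax(frames[i]), (ny, nx))
            stacked += np.roll(np.roll(frames[i], ny // 2 - py, axis=0), nx // 2 - px, axis=1)
        return stacked / n_keep
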
 
 
 
By Chris Bean
 
Over the past 5 years the Geophysics Group at UCD has focused on developing a range of new tools to model seismic wave propagation in fractured media and media with fluid-filled conduits. These tools are applied to seismic imaging of the Earth’s crust, estimation of the strength of, and stress state on, geological faults, and fluid-driven seismicity (earthquakes) at volcanoes. The physical properties of the Earth’s crust are highly variable at all spatial scales, which means that it is not possible to find analytical solutions to the differential equations describing the phenomena outlined above. We then have two choices: (i) we can ‘smooth’ the models to render them more mathematically tractable, or (ii) we can use numerical schemes to obtain approximate solutions to the equations describing our wave propagation or fluid flow. Geological variability is an essential ingredient in the problems in which we are interested at UCD: it controls spatial variations in the stress and strength of rocks, as well as our ability to image deep into the Earth’s crust. Hence smoothing the models is not an option, and we must resort to numerical solutions of the equations describing the systems of interest. The computational expense of these numerical problems lies partly in the resolution required. As geological variability occurs across a very broad range of scales, it is necessary to ‘sample’ our models at fine resolution. This means that if we want to simulate wave propagation or fluid flow in models of realistic physical dimensions, for studying ‘real world’ problems, we require very large grids.
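The group's production codes are 3D elastic and fluid solvers and are not reproduced here; the fragment below is only a generic one-dimensional illustration of the explicit finite-difference update such codes are built from, and of why resolution drives cost (halving the grid spacing of a 3D explicit scheme multiplies the work by roughly sixteen: eight times more cells and twice as many time steps).

    import numpy as np

    def step_wave_1d(u_prev, u_curr, c, dx, dt):
        """One explicit time step of the 1-D wave equation u_tt = c^2 u_xx,
        using second-order centred differences with rigid (zero) boundaries."""
        r2 = (c * dt / dx) ** 2          # stability requires c * dt / dx <= 1
        u_next = np.zeros_like(u_curr)
        u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                        + r2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
        return u_next
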
Prior to CosmoGrid, the Geophysics Group at UCD only had access to its own small Beowulf cluster in its own laboratory. This machine allowed us to develop the code and computational framework needed to work on the types of problems described above, but we were restricted to small-scale 2D simulations. In short, the application of the methods to 3D real-world problems was completely out of reach. CosmoGrid changed that. When Rowan was commissioned we had access to a machine capable of tackling problems on a scale large enough to allow us to address specific volcanoes and specific geological faults, moving beyond the small generic models on which we had been working. That work, and our ability to demonstrate the effectiveness of the methods we had developed, led to a successful 12-partner EU STREP application on subsurface mass movement in volcanic environments (VOLUME, coordinated by the UCD Geophysics Group). Hence CosmoGrid has already had a major impact on our international research profile. The contribution which CosmoGrid has made to the new ICHEC has, for the first time, given us a computational facility in Ireland which makes us competitive with researchers across the world. Many of the problems on which we work require ensemble averaging, which was completely impossible in the computationally starved environment prior to CosmoGrid and ICHEC.
An added benefit of CosmoGrid has been training. Both early- and late-stage researchers in our group have benefited significantly from the CosmoGrid training courses.
 
 
By J.G. Doyle
 
The use of CosmoGrid funds at Armagh is directed at the inter-comparison of high spectral resolution solar and stellar data with model output. On the stellar side, we are studying means by which to automate the analysis of spectral data and, in due course, to integrate suitable analysis software with astronomical observatory archives using grid-based technologies.
 
Solar Astrophysics
 
A great deal of observational effort has been directed towards the coronal heating mechanism(s) based on large-scale magnetic loops; these, however, occupy only 10% of the solar surface. Less attention has been paid to the remaining 90%, where the magnetic fields are concentrated into small-scale magnetic flux tubes. From recent satellite observations, we know that the ‘quiet Sun’ contains a variety of transient features: spicules, mottles, explosive events, jets, bright points, blinkers, etc. How close this relationship holds, and how these small-scale phenomena connect to the heating of the upper atmosphere, is the main thrust of our work.
 
Results based on our work may be downloaded from http://star.arm.ac.uk/preprints/, numbers 416, 417, 419, 434, 435, 436, 440, 441, 442, 443, 444, 449, 452, 453, 460 and 461. Over the past 7-8 years, most authors (including myself) have suggested that many of these small features are not related. However, new simultaneous spectral and imaging data, and their interpretation, suggest that it is time to look again. Misinterpretation can and has resulted from single-frequency observations. The picture emerging from our work proposes a sequence of events starting from the cancellation of photospheric magnetic fields, which passes through shock formation, resulting in transition region jets or micro-flares. Hence, starting from the collision of flux tubes: if the resulting shocks meet head-on then no flows are produced (i.e. only a brightening, termed a blinker); if the shocks have the correct angle then flows are produced (i.e. explosive events or jets); and if the interacting flux tubes collide in the vicinity of a vertical magnetic field, then perhaps this is the starting point of the fast solar wind.
 
We need to increase the database of transient features observable simultaneously in several spectral lines, coupled with data from an imager. However, interpreting such data requires MHD modelling coupled with accurate atomic data for proper modelling of the line profile(s).
 
To facilitate such computational work, a high-performance computer was installed. This comprises a cluster with fifteen Xeon 3 GHz dual-processor worker nodes, a Xeon 3 GHz dual-processor master node and 30 GB of memory. An additional Xeon 3 GHz dual-processor gateway server provides access to/from the grid. All components were supplied by Workstations UK, and this now comprises the Armagh node of the CosmoGrid network. It is known as the Beehive, or M44 for short, after the Praesepe open star cluster. The cluster was funded mostly by CosmoGrid, with some additional input from PPARC grants to Jeffery and Doyle. The machine room, infrastructure and gateway server were purchased from Observatory funds. Without CosmoGrid funding we could not have purchased such a machine.
 
Stellar Astrophysics
 
We have now nearly completed the automatic analysis of stellar spectra, which includes the use of an artificial neural network for spectral classification and a system for filtering spectra of interest from large surveys using the method of principal components analysis. It also implements an adaptive mesh for selecting theoretical models from a non-uniform grid amongst which to interpolate when applying a residual-minimization parameter optimisation scheme.
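The principal-components filtering stage can be sketched generically: a mean-subtracted matrix of survey spectra is decomposed with an SVD, each spectrum is reconstructed from the leading components, and spectra with unusually large reconstruction residuals are flagged as interesting. This is an illustration of the general method under assumed inputs, not the group's actual pipeline.

    import numpy as np

    def flag_unusual_spectra(spectra, n_components=10, sigma=3.0):
        """Flag spectra poorly described by the survey's leading principal components.

        spectra : array of shape (n_spectra, n_wavelengths)
        Returns a boolean mask marking the poorly reconstructed (interesting) spectra.
        """
        mean = spectra.mean(axis=0)
        centred = spectra - mean
        # Leading principal components of the survey.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        basis = vt[:n_components]
        # Reconstruct each spectrum from those components and measure the residual.
        recon = centred @ basis.T @ basis
        resid = np.linalg.norm(centred - recon, axis=1)
        return resid > resid.mean() + sigma * resid.std()
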
 
Another significant project is the implementation of new opacities in the Armagh model atmosphere code STERNE. This has been successfully completed and the first results accepted for publication. An extensive grid of models incorporating these new opacities has been computed using the Cosmogrid Beehive cluster. The latter was running at full capacity almost continuously during the last semester, primarily carrying out these model atmosphere calculations. We have also implemented a scheme to treat chemical equilibrium due to radiative diffusion in the model atmosphere code. This will be tested in the next 6 months and used for the analysis of chemically peculiar A and B stars.

By Turlough Downes
 
At the start of the CosmoGrid project, Dr. T. Downes was the only member of academic staff in DCU working in astronomy or astrophysics. There was also one PhD student working in computational astrophysics under the supervision of Dr. Downes. There were strong links between Dr. Downes and the Dublin Institute for Advanced Studies (DIAS) through both Prof. L. Drury and Prof. T. Ray. In addition, there were several active international collaborations, although none with more than modest funding.
 
Impact on research
The most immediate impact of the CosmoGrid project on research was the hiring of one post-doctoral researcher (PDRA) and one post-graduate student during 2003, along with part-funding of another post-graduate student for a short time.  This effectively doubled the size of the astrophysics group, leading to a large increase in the number of publications, with a total of 18 being at least partly supported by CosmoGrid. At the recent Protostars & Planets V conference, 3 presentations involving the DCU astrophysics group were made, indicating the level of activity now ongoing in the group.
 
A collaboration between Dr. T. Downes and Dr. S. O'Sullivan (CosmoGrid PDRA in UCD) has produced a novel numerical method for dealing with multifluid magnetohydrodynamics in the limit of low ionisation levels (O'Sullivan & Downes 2006). This method now forms the basis of an application for further funding under the Science Foundation Ireland Research Frontiers Programme. The method itself allows efficient large-scale simulations of very important processes in jet formation such as mass-loading. It also has applications in molecular cloud dynamics and accretion disk processes and as such is a significant step forward in this area. A PhD student, partly supported by CosmoGrid, produced the longest simulations yet run of the propagation of jets from young stellar objects. This enabled us to study how transients affect simulation results, and at what time the transients can be said to have disappeared. It also allowed the study and, to a certain degree, the verification of a model previously proposed by Downes & Cabrit (2003) on jet-driven molecular outflows.
 
Apart from the direct impact described above, there has been significant indirect impact.  Having access to the CosmoGrid computational resources has made several collaborations possible.  One example is the collaboration between Dr. Downes and Dr. A. Marcowith of CESR Toulouse which involves the supervision of a PhD student in Toulouse.  This collaboration has now received funding on two occasions under the Enterprise Ireland/Royal Irish Academy Ulysses programme.
 
Impact within DCU
Some sense of the significance of CosmoGrid can be gained from the fact that the success of the project has been listed as one of the most significant highlights of the performance of the School of Mathematical Sciences over the last 5 years. The existence of CosmoGrid has been useful in putting astrophysics “on the map” to a greater degree within DCU. This, admittedly in combination with other factors, has led to the formation of an astrophysics group in the National Centre for Plasma Science and Technology and an associated seminar series.

News

•    Participation in the new initiative e-INIS (see the e-INIS website)
•    Participation in ICHEC (see the ICHEC website)

