WorkerNodeEnvironmentOGF20
Laurence Field is collecting information about standardized execution environments of the various Grid projects. This work is based on a standardization effort within EGEE which is documented in http://edms.cern.ch/document/630962.

This effort could be the starting point of an initiative to define a common set of environment variables that jobs submitted across grid boundaries can rely on.

If you are interested in this topic, join the discussion on the gin-jobs mailing list.


Input from OSG

We've started an OSG twiki page on this at: https://twiki.grid.iu.edu/twiki/bin/view/Interoperability/GinJobs

Input from Teragrid

The TeraGrid has standardized on the variables below. They may not necessarily be useful to other grids, but I thought I'd share them for the sake of discussion.

Several are optional. Several can point to the same location. For example, TG_CLUSTER_PFS and TG_CLUSTER_GPFS may point to the same location if the only cluster parallel file system is GPFS. Sometimes the variables point to a user-owned directory, other times they don't (we aren't consistent). A small usage sketch follows the lists below.

Software related

  • TG_APPS_PREFIX => Where we install coordinated TeraGrid software
  • TG_COMMUNITY => Where communities get shared software space, a.k.a. Community Software Areas (CSAs)
  • TG_EXAMPLES => User examples

Scratch space

  • TG_NODE_SCRATCH => Local to compute node
  • TG_CLUSTER_SCRATCH => Shared across a cluster/resource
  • TG_GLOBAL_SCRATCH => Shared across the TeraGrid

User home

  • TG_CLUSTER_HOME => User's home on a cluster/resource

High-performance cluster parallel file systems

  • TG_CLUSTER_PFS => The primary parallel file system for a cluster/resource
  • TG_CLUSTER_GPFS => GPFS parallel file system, if available on a cluster/resource
  • TG_CLUSTER_PVFS => PVFS parallel file system, if available on a cluster/resource

High-performance global parallel file systems

  • TG_GLOBAL_PFS => The primary parallel file system for the TeraGrid
  • TG_GLOBAL_GPFS => GPFS global parallel file system, if available
  • TG_GLOBAL_GFS => GFS global parallel file system, if available
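
For concreteness, here is a minimal, purely illustrative Python sketch of how a job script might consume a few of these variables. The fallback order and the use of a generic temporary directory are assumptions made for the sketch, not TeraGrid policy.

    import os
    import tempfile

    # Illustrative only: choose a scratch directory from the TeraGrid variables,
    # falling back to a generic temporary directory if none of them is defined.
    scratch = (os.environ.get("TG_CLUSTER_SCRATCH")
               or os.environ.get("TG_NODE_SCRATCH")
               or os.environ.get("TG_GLOBAL_SCRATCH")
               or tempfile.gettempdir())

    # Coordinated software and user examples, if the resource defines them.
    apps_prefix = os.environ.get("TG_APPS_PREFIX")
    examples = os.environ.get("TG_EXAMPLES")

    print("scratch directory:", scratch)
    print("apps prefix:", apps_prefix or "not defined on this resource")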

My previous e-mail listed the environment variables the TeraGrid defines in our default interactive and Grid job environments. I didn't explain the method and tool we're using to manage those environments.

The TeraGrid has standardized on an environment management tool called SoftEnv, developed in the Mathematics & Computer Science division of Argonne National Lab 15 years ago. It was developed to give users a consistent way to manage their environment across over a dozen different Unix variants.

Functionally, SoftEnv is very similar to the Modules system used by DEISA. In fact, some TeraGrid sites use SoftEnv to manage TeraGrid user environments and Modules for their local non-TeraGrid users.

Some links explaining the TeraGrid's implementation:

  • http://software.teragrid.org/docs/ctss3/softenv/README.overview
  • http://software.teragrid.org/docs/ctss3/softenv/README.teragrid

This document has a very detailed rundown of our SoftEnv key standards:

  • http://software.teragrid.org/docs/ctss3/softenv/README.ctssV3

The TeraGrid has been working to integrate SoftEnv with Grid tools so that Grid users can manage their environments without having to log in to discover or alter (through shell initialization) runtime environments. Our two main integration activities have been to:

  • Work with the Globus team to extend the GRAM interfaces (pre-WS and WS) so that Grid users can specify their runtime environment using abstract SoftEnv keys. I think GRAM should support both SoftEnv and Modules.
  • Work with the Condor team so that SoftEnv keys are published in ClassAd form, and so that jobs that specify SoftEnv prerequisites have their runtime environments set up automatically to include those prerequisites.

The beauty of SoftEnv and Modules as environment management tools is that they provide a standard interface for manipulating environments across shell flavors, and an abstract namespace to represent target environment configurations. Grids are all about standard abstract interfaces.

If Grid job interfaces could support environment management tools, we would be able to define a standard GIN-Jobs environment that could be requested by GIN Jobs, without GIN participants having to alter or give up their own local or regional standard Grid environments.
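
As a purely hypothetical illustration of that idea, a thin mapping layer on the worker node could translate an abstract GIN-Jobs name onto whatever the local grid already defines. The GIN_SCRATCH and GIN_HOME names below are invented for this sketch; the TG_* and DEISA_* variables are the ones described on this page.

    import os

    # Hypothetical abstract GIN names mapped to candidate local variables
    # (TeraGrid and DEISA examples taken from this page; the order is arbitrary).
    GIN_MAP = {
        "GIN_SCRATCH": ["TG_CLUSTER_SCRATCH", "DEISA_SCRATCH"],
        "GIN_HOME":    ["TG_CLUSTER_HOME", "DEISA_HOME", "HOME"],
    }

    def resolve(abstract_name):
        """Return the first locally defined value for an abstract name, or None."""
        for local_name in GIN_MAP.get(abstract_name, []):
            value = os.environ.get(local_name)
            if value:
                return value
        return None

    print("GIN_SCRATCH ->", resolve("GIN_SCRATCH"))
    print("GIN_HOME    ->", resolve("GIN_HOME"))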


Input from Nordugrid

The NorduGrid approach is a bit different: in short, nothing is predefined. Instead, a minimal set of information about the operating system and architecture is propagated through the information system as properties of the computing element. For everything else we use an approach called Runtime Environments (RTEs). An RTE is a predefined set of anything a user program should expect on the worker node if it is requested in the job description. A list of currently defined RTEs, with their descriptions, is available in the RTE Registry at http://gridrer.csc.fi/ . It would probably be possible to define an EGEE RTE as well.
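
A rough Python sketch of the mechanism, with an invented one-entry registry standing in for the real RTE Registry: the job description names an RTE, and the worker node applies the corresponding environment before the job starts.

    import os

    # Invented stand-in for the RTEs installed on a site; real RTE names and
    # contents are defined by the sites and the RTE Registry, not here.
    INSTALLED_RTES = {
        "APPS/EXAMPLE-1.0": {"EXAMPLE_HOME": "/opt/example-1.0"},
    }

    def apply_rte(name):
        """Add an installed RTE's variables to the job environment."""
        rte = INSTALLED_RTES.get(name)
        if rte is None:
            raise RuntimeError("requested RTE %s is not installed on this node" % name)
        os.environ.update(rte)

    apply_rte("APPS/EXAMPLE-1.0")  # as if requested in the job description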


Input from DEISA

In DEISA the following set of environment variables has to be present in the standard job environment:

  • Home directory ($HOME)
  • Other home directory, accessible on each homogeneous site ($DEISA_HOME)
  • Working directory, accessible on each homogeneous site ($DEISA_DATA)
  • Temporary directory during the life of a job ($DEISA_SCRATCH)

Everything else is subject to the use of the module environment.
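
For illustration only, a job in that environment might keep temporary files in $DEISA_SCRATCH (which only lives as long as the job) and write anything that must outlive the job to $DEISA_DATA; the file names below are invented.

    import os

    scratch = os.environ["DEISA_SCRATCH"]  # temporary, exists only for the life of the job
    data = os.environ["DEISA_DATA"]        # working directory, accessible on each homogeneous site

    # Invented file names, for illustration only.
    tmp_path = os.path.join(scratch, "intermediate.dat")
    result_path = os.path.join(data, "final-results.dat")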

More detailed information can be found in the DEISA Primer at http://www.deisa.org/userscorner/primer/primer.php

 


