This is a static archive of the previous Open Grid Forum GridForge content management system, saved from host forge.ogf.org (file /sf/wiki/do/viewPage/projects.et-cg/wiki/TInfrastructureExperiences) at Thu, 03 Nov 2022 00:15:28 GMT.


Experiences and Issues with t-Infrastructure

Status of This Document

This document provides information to the Grid community on the experiences of different projects in implementing and providing training infrastructures (t-Infrastructures). It does not define any standards or technical recommendations. Distribution is unlimited. This is a draft version of the document; it has not yet been submitted for public comment and is still being developed within the ET-CG group.

Copyright Notice

Copyright © Open Grid Forum 2008. All Rights Reserved.

Abstract

Contents

1. Introduction

2. Definitions

3. Experiences of different projects

3.1 GILDA

3.2 GENIUS

3.3 P-GRADE

3.4 Open Science Grid

3.5 Synopsis

3.6 OMII UK

3.7 Summer Schools

3.8 eLGrid eLearning t-Infrastructure

3.9 Other Training Events

4. Multi-Middleware Co-existence

5. Academic courses

6. Gap Analysis and Recommendations

7. Future work

8. Contributors

9. IPR statement

10. Disclaimer

11. Full copyright notice

12. References

13. Appendices

1. Introduction

Grid [1] technology has evolved rapidly in the last few years, and the people involved have mainly worked on the implementation of new middleware services and the deployment of large infrastructures, with training and dissemination having played a rather small role. Although induction has been a marginal activity, there are nevertheless several important experiences that should be considered in order to define the future of the Grid, because the success of a technology strongly depends on the ability to disseminate and promote knowledge of it.

The importance of training and induction is also witnessed by the creation of specific projects such as the EU co-funded ICEAGE project [2]. The goal of ICEAGE is to support the evolution of Grid technologies by establishing a worldwide initiative inspiring innovative and effective Grid Education. Grid Education implies not only education about the use of the Grid, but also the use of the Grid in education.

This document gives a short description of the dissemination experiences gained so far in several projects, including ICEAGE. Moreover, an analysis is offered and several recommendations are highlighted.

2. Definitions

e-Infrastructure – the term is used to denote the digital equipment, software, services, tools, portals, deployments, operational teams, support services and training that provide data, communication and computational services to researchers. An e-Infrastructure is usually multi-purpose and has to be a sustained dependable facility so that researchers can plan to use it for the duration of their work.

e-Science – the invention and application of computer-enabled methods to achieve new, better, faster or more efficient research in any discipline. It draws on advances in computing science, computation and digital communications.

t-Infrastructure – e-Infrastructure adapted to the needs of education, trainers and students. Shared t-Infrastructure would be usable by students and teachers internationally, providing easy access to educational exercises running on e-Infrastructure.

Definitions of other terms related to e-Infrastructure education can be found in Appendix A.

3. Experiences of different projects

3.1 GILDA

GILDA [3] officially started at the beginning of 2004 as an initiative of INFN in the context of the INFN Grid Project [4] and the European EGEE project [5]. The purpose of GILDA was to create a test-bed entirely dedicated to training and dissemination, based on gLite middleware [6] services and comprising the most useful facilities, such as a dedicated Certification Authority, a Virtual Organization, and monitoring and support systems for users. Besides supporting training events, GILDA has also been used around the clock over the years by beginners, users and sites wishing to start their Grid experience.

During these four years, more than 200 training and dissemination events have been supported by GILDA. The level of support ranged from the simple issuing of certificates (more than 10,000 certificates have been issued since the beginning) to the creation of accounts and demonstrative applications on the resources for full use of the training infrastructure during the events.

While supporting this large number of events, many problems have been faced, and their solutions have become a series of best practices in Grid education that are now widely adopted.

  1. Loose identification procedures for hosts and personal user certificates: the strict identification required by a “real” Certification Authority can be discouraging for new users, and it can be considered not really necessary when approaching the Grid for the first time. Users just have to fill in a web form, and the certificate will be signed and sent to the email address of the requestor. This practice has clearly increased the number of certificates issued, reducing the errors to which a non-experienced user is exposed and making this first, and usually error-prone, step a lot easier. The risks that could derive from this non-strict identification, such as misuse of the certificate, are mitigated by the small scope and short lifetime of these certificates: typically, just the GILDA test-bed itself, and two weeks, by default.
  2. Use of generic certificates and accounts for tutorials: for the first supported tutorials, users were contacted one or two days before the tutorial and asked to request a personal GILDA certificate. As a matter of fact, most of them ignored these emails. Many who did complete the certificate request correctly subsequently forgot to bring their certificates with them as advised, leaving them on their own machines. Requesting the certificate during the tutorial was not an effective solution, because the requests, and the various problems which may arise, made it impossible to support all participants quickly. This problem has been solved by requiring the tutorial organizer to specify the expected number of participants. The GILDA CA manager then issues the requested number of generic certificates, which are also exported in the requested format. System accounts are created on the official GILDA User Interface machines, and the certificates are copied there. If the tutorial organizers plan to use different UIs, they can even request that the certificates be sent to them separately. This practice also carries the risk that certificates can be misused, but this is mitigated by the fact that the certificates are valid just for the duration of the tutorial and also have a limited scope.
  3. Use of wiki pages for online and offline training: the most used training instruments in the beginning of GILDA were transparencies. This practice proved not to be very effective, especially for exercises, for at least a couple of reasons. First, they offer limited space for editing text, which is a real problem when reporting long commands with many options or command outputs. Second, it is hard to continuously maintain and update them in case of errors or changes due to new middleware releases. To address this issue, a wiki site has been set up in GILDA. The choice of a wiki was also motivated by the fact that it enhances collaboration among trainers without requiring physical access to the web server.
  4. Use of virtual machines for training: virtual machines have proven to be a very effective instrument for Grid site administration tutorials, i.e. those training events where attendees learn how to install and operate Grid services. Since this exercise always needs a machine installed from scratch, the use of real machines is a clear limitation, because it requires both a large number of available boxes and a huge amount of time to reinstall each machine from scratch for each Grid element. Virtual machines allow learners always to start from a preloaded image containing just the flat operating system. Students install the Grid service and then, once the exercise is finished, they just have to shut down the virtual machine and reload the flat image, and they can start the next exercise in just a few minutes. Virtual machines are also used effectively for dissemination purposes, since several of them with preinstalled Grid elements have been made available from the GILDA web site, ready to be downloaded by users who wish to play with Grid elements even if they cannot install a real machine or do not have the opportunity to set up a fully featured Grid.
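The short-lived certificate practice described in items 1 and 2 can be sketched with standard OpenSSL commands. This is only a minimal illustration, not the actual GILDA CA tooling: the CA name, subject names, pass phrase and file names below are invented stand-ins; only the two-week validity mirrors the default described above.

```shell
# Create a throwaway "training CA" (a stand-in for a real training CA).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Toy Training CA" \
    -keyout ca.key -out ca.crt -days 30

# Generate a key and a signing request for a generic tutorial user.
openssl req -newkey rsa:2048 -nodes -subj "/CN=Tutorial User 001" \
    -keyout user.key -out user.csr

# Sign it with a short, two-week validity, limiting the window for misuse.
openssl x509 -req -in user.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 14 -out user.crt

# Export in PKCS#12 format, as requested for browser import.
openssl pkcs12 -export -in user.crt -inkey user.key \
    -passout pass:tutorial -out user.p12

# Check that the new certificate validates against the training CA.
openssl verify -CAfile ca.crt user.crt
```

The exported PKCS#12 file is what a tutorial user would import into a web browser, while the PEM key and certificate would be copied into the pre-created account on a User Interface machine.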

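The reset cycle in item 4 amounts to keeping one pristine “flat” image and restoring every student machine from it between exercises. The sketch below uses a plain file copy as a stand-in for the image restore, with invented file names; a real deployment would instead revert a hypervisor snapshot or recreate a copy-on-write overlay on top of the flat image.

```shell
# Stand-in for the preloaded "flat" OS image (in practice, a full disk image).
dd if=/dev/zero of=flat-os.img bs=1024 count=16 2>/dev/null

# Restore a student's VM disk to the pristine state between exercises.
reset_vm() {
    cp flat-os.img "vm-$1.img"
    echo "vm-$1 restored from flat image"
}

# After each exercise, every machine goes back to a clean starting point
# in minutes, instead of a full operating system reinstall.
reset_vm student01
reset_vm student02
```

The same pattern scales to a whole classroom: one pristine image, one cheap restore operation per student per exercise.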
3.2 GENIUS

GENIUS [7] is a web portal jointly developed since 2002 by INFN and NICE srl with the goal of creating a simple, though powerful and customizable, instrument for teaching Grid computing. Many Grid beginners are in fact discouraged by the complexity of the standard command line interface (CLI) offered by the Grid middleware in a UNIX-like environment, which is hostile to a large proportion of potential users who are not skilled computer scientists. GENIUS, which is usually installed on top of a User Interface, offers a graphical, simple and intuitive interface to the Grid services, accessible from a common web browser without any additional requirements. When used during training events, GENIUS proves to be very effective in introducing Grid concepts, because CLI snags are hidden from users, who do not have to check command syntax and can easily abstract their meaning.

GENIUS is available in GILDA in two flavours: the first is a full-featured installation, which requires a personal account and has the same capabilities as a standard user interface. The other installation has been set up for demonstrative purposes. Although it has restrictions, such as reduced capabilities for job submission, it is available to everyone, including those without a personal certificate or account, and so allows broad dissemination of the strong capabilities of Grid computing.

3.3 P-GRADE

A P-GRADE Portal installation was set up for the international GILDA training infrastructure in December 2006. The environment serves as a demonstration, dissemination and learning environment for everybody who is interested in the usage and capabilities of GILDA, the EGEE Grid middleware and the P-GRADE Portal itself. During roughly one year of operation, the GILDA P-GRADE Portal has been used during every major EGEE Induction, EGEE Application Developer and ICEAGE event [8][9]. The GILDA P-GRADE Portal is a P-GRADE Portal 2.5 installation connected to the GILDA training infrastructure. It provides a graphical environment to perform certificate management, job submission, file transfer, information system browsing and application monitoring on GILDA, eliminating the sometimes cumbersome and hard-to-memorize commands from the learning curve. As a result, the learning time required for Grids can be significantly shortened by the tool. Besides providing graphical interfaces for GILDA middleware services, the GILDA P-GRADE Portal also contains high-level tools that extend the capabilities of gLite. Workflows and parameter studies can be defined and managed by the graphical editor and the integrated workflow manager components.

Attendees of P-GRADE courses are provided with pre-defined exercises that introduce the general concepts of parallel Grid application development and use the EGEE middleware to demonstrate them in practice. These exercises stress gLite middleware services and are formulated as data-parallel parametric studies, functionally parallel workflows or some combination of the two. Based on the examples, students can understand and distinguish the generic concepts of Grid applications from implementation details specific to a gLite VO or a Globus VO. (The P-GRADE Portal is also compatible with Globus middleware based Grids; however, this is not used in the GILDA P-GRADE Portal installation.) Experience shows that P-GRADE courses are very beneficial for attendees.

On the other hand, such events are also fruitful for the developers of P-GRADE and gLite. As tutors meet existing and potential new Grid users who are committed to understanding the capabilities and limitations of Grid systems, they can easily collect feedback. Feedback collected during training is very valuable for setting priorities and laying down roadmaps for future developments. Consequently, tutors of P-GRADE Portal sessions take extra care to take notes on the experience of every course, to categorize these notes, and to forward them to the appropriate Grid developer and operator groups.

In order to demonstrate the maturity of the Grid and the benefits of the technology, we use real-life applications during P-GRADE Portal tutorials wherever possible. However, it is necessary to decrease the size of those applications in order to fit them into the time and resource constraints of the events. This is typically achieved by reducing the size of the input data sets – resulting in shorter execution times and smaller resource demands – or by dropping some application components. An urban traffic simulator and analyzer workflow is a good example of this. The application was originally developed by the University of Westminster in P-GRADE to analyze the density of cars on the roads of Manchester. Since 2006 it has been used during P-GRADE Portal events to demonstrate the concept of data-driven workflows and the basic steps of workflow management. As the original workflow ran for about an hour, the input data set had to be reduced; it now finishes in about 10 minutes on GILDA, making it suitable for training purposes.

Although students work more eagerly on exercises that are based on real-life case studies, sometimes setting up exercises from scratch solely for the sake of training is inevitable. This is the situation when capabilities of P-GRADE must be demonstrated that are new, or simply not yet used by any production application. As P-GRADE aims to serve as a general Grid application developer and executor environment, its new features target potential application classes. While other Grid tools are typically designed for one particular experiment, most of the P-GRADE features are not used immediately after they become part of a release; their take-up by Grid communities takes some time, until the new features become known and understood. Some artificial exercises shorten this take-up period by focusing on such new features and putting them into theoretical but plausible use-case scenarios.

The GILDA P-GRADE Portal provides a permanent service that anybody can access at any time. Having the training environment and infrastructure publicly available all the time is very beneficial, as in this way participants of organized events can continue their P-GRADE studies after the course. These people can stay in the same environment and can reuse applications that they designed during the tutorial. However, as the certificates they use during tutorials expire on the same day (or within a few days), these people must obtain new GILDA certificates and must register with the GILDA VO again. Even though GILDA provides lightweight mechanisms through its web site to accomplish these steps, users lose a few days and must invest a few hours of extra work just to regain GILDA access. Tutorial certificates with extended lifetimes or a simplified GILDA registration procedure could address this issue. (For example, in a simplified GILDA registration procedure the user could obtain a GILDA certificate, GILDA VO membership and user accounts on the GILDA UI, GENIUS and the P-GRADE Portal in a single step, filling out only one web form.)

3.4 Open Science Grid

Open Science Grid (OSG) [10] is a consortium of Universities and Research Centres in the United States and other countries worldwide (Germany, United Kingdom, Taiwan, etc.). The main focus of the consortium is to share resources among its members and with other entities willing to use the Grid.

Within OSG, dissemination is an important activity, which aims to teach users how to access and use shared resources. Dissemination consists of the preparation of summer schools and tutorials at its partners' locations, and the sponsorship of other summer schools around the world. The OSG summer schools utilise both local and remote hardware resources for their t-Infrastructures. The connection to the remote resources is often low-bandwidth, as the locations of the schools can be quite remote.

Moreover, OSG offers a basic online course without charge, for students who wish to learn about the Grid but are unable to attend an official school. Training remote self-paced learners requires long-term certificates, whereas for the OSG summer schools short-term certificates are usually issued via the real CA.

Additionally, OSG has a permanent training infrastructure made up of a few machines. Any student with a valid digital certificate recognised by OSG may try the infrastructure and test their exercises on it.

As with GILDA, at the OSG Summer Grid Workshop certificates were initially issued with generic names and details rather than the real student details. Later on, as part of the security course, students were required to request a training certificate in their own name.

One point which was noted is that it is difficult to provide students with a certificate to take home with them for use on their own national Grids. This is because such certificates would need to be authorised by the local RA, which is not feasible where students come from multiple countries. The difference between summer schools, remote learning, masters courses, etc. was noted: students can get a certificate from their institute if they are enrolled on a Masters course, so different mechanisms are needed for different student types.

3.5 Synopsis

Synopsis runs commercial training courses and provides its own t-Infrastructure for these courses.

Synopsis uses a generic Unix login, available for the duration of the course, rather than certificates for authentication to the t-Infrastructure. Synopsis staff do their training on the same t-Infrastructure that the public use for Synopsis commercial training. Thus internal staff training still uses the separate training authentication for this t-Infrastructure rather than the staff member's work ID; staff are treated the same as the public in this respect.

A thin-client approach is preferred, with Citrix connections to the servers. The resources at the training centre can therefore be quite limited in power.

The Synopsis training is partly self-paced, as students can log in after class time and work on labs, etc. This is especially true for the university students who use the t-Infrastructure.

They have found a strong requirement for monitoring the students' activities and usage: so that students can get feedback on their progress, so that trainers can see a user's progress and identify potential problems before it is too late, and so that they can verify that resources are only being used for course activities. Whatever data is available to lecturers should of course also be available to the students, and students should be made aware that their activities can be monitored.

The following general comments were made about the experience of commercial training:

  1. It is vital that the student experience is satisfactory, as students are refunded their money if there are any problems with the course.
  2. Collaborative projects may need data sharing, so there may be a need for individual storage but also a way to share data. However, for Synopsis, and perhaps other industrial training, collaborative team exercises are not common, as the students are business users who are often in competition with each other.
  3. Synopsis needs to make sure that the training resources it provides are only used for training. A business user might use some of the training applications, with free training licences, for real work. To try to ensure that this does not happen, Synopsis only loads up the software that the student is supposed to use, and monitors users. If a user spends hours in an application that they only need to use for 5 minutes in the exercise, then they may be doing something unauthorised. Synopsis can then contact the student and ask them to clarify what they are doing.
  4. For expensive commercial training there may be some specific requirements relating to student monitoring which differ from training in public projects or academia.

3.6 OMII UK

OMII UK work with OGSA-DAI, Taverna, and other middleware and tools. They produce these tools and are also involved in producing documentation and training. They offer a variety of training options:

  • formal tutorials
  • examples and exercises that can be downloaded and worked through self-paced
  • the ability to connect to resources outside of courses

OMII UK consider Ease of Access to be more than simply the issue of certificates; in particular, they want to provide for remote learners and self-paced learners. Accessing and using the t-Infrastructure must be very simple. This is the first time many of the students have seen the software, and they will not use it again if it is not easy to use. To this end, the OMII UK team provide staged exercises with easy, intermediate and hard levels.

A realistic set of services must be deployed on the t-infrastructure, and limits and controls on how long users can run jobs are desirable.

The materials you give students must be comprehensive.

Developing the ability to select technologies is not always important, as students may be tied to a particular middleware based on which production Grid they use. However, this may differ between middleware and applications.

The student should be able to select not just suitable middleware for his or her task, but also portals, services, etc. Some of these may still not be available on certain production Grids.

The t-Infrastructure should support multiple technologies, but the production infrastructure they use may depend more on their background, country, field or other factors.

The requirements also differ between application domains, computer science students, and scientists interested in learning how to use different tools, which should eventually be available on multiple middlewares and in multiple Grid applications.

Application developers need to know about new software and middleware features so that they can use them in the next version of their application; middleware developers need to know about new features so that they can build better Grid middleware. Science users, however, may not need to know about new Grid features unless these are useful to them in their particular field. Similarly, Grid implementors and administrators probably do not need to know too much about forthcoming features, as we do not want them to think that these features are available now. They need to know what is available now so that they can deploy it, though they should also have some knowledge of what is coming down the line.

The t-Infrastructure protects the real infrastructure from students. We also need to protect students from each other; this could be done through sandboxing, for example with VMs. We do not want students to interfere with each other's jobs.

3.7 Summer Schools

Summer schools have been among the most important activities for promoting Grid technology around the world. Students who come to these events are generally having their first experience with the Grid, and in a couple of weeks they learn how it works and/or how to exploit this new technology in their day-to-day work.

From the experience gained with the ISSGCs [13] and other schools around the world [14], we have noticed that the infrastructure on which students can practise plays an important role in the learning process. In fact, the quality of the infrastructure in terms of performance, availability and reliability is a key element in students' decisions about whether they will use the Grid in the future. A frustrating experience at a school can have a very negative impact on Grid adoption.

Generally, the organiser of the school, in cooperation with the teaching staff, implements an “ad hoc” training infrastructure just for the school. This infrastructure has to contain all the components of a production Grid infrastructure but at a smaller scale: typically 3-4 sites plus several necessary additional services such as centralised brokers, information systems, file catalogues and others. In order to achieve the quality envisaged above, the implementation of such a training infrastructure requires a precise analysis of the location, the number of students and the nature of the curriculum of the school. The location has to be evaluated in terms of network connectivity and/or the possibility of bringing in components (e.g., servers, network switches, cables, etc.). The infrastructure is installed locally, so a good local network connection, power supply and, of course, a well air-conditioned room are required. Moreover, some lessons or exercises could require external sites, so the network bandwidth to access them should be guaranteed for the entire duration of the school.

The number of students has a direct impact on the infrastructure because each student produces a workload.

Furthermore, the nature of the curriculum of the school can have an impact on the training infrastructure. If the school is aimed at computer scientists who want to learn how the Grid middleware works, then few elements are required, but students have to be able to access them in order to understand how each component functions. Conversely, if the school is aimed at application developers who want to develop and try their applications on the Grid, then a lot of power is required, because complex applications may be submitted many times to the infrastructure in order to understand how the Grid can improve their execution.

These evaluations of the infrastructure can be complex, and a significant part of the time required to prepare a Grid school is devoted to this “ex-ante” analysis. For example, for ISSGC'08 [15], planned for July 2008, the discussion about the infrastructure started in December 2007, at the same time as the location was selected. A local team was arranged to prepare the infrastructure, and a discussion group with middleware experts was organised to decide how to configure the middlewares and to test the exercises before assigning them to the students.

After the summer Grid schools, the local staff usually write a document describing their activities in detail, including the equipment they brought to the school and its set-up, as well as a short report on the discussion group activities.

3.8 eLGrid eLearning t-Infrastructure

Trinity College Dublin have developed a self-contained t-Infrastructure and integrated eLearning courseware. This system, called eLGrid [21], generates personalised eLearning courses for learners and allows them to run practical exercises from within the eLearning system.

The eLGrid t-Infrastructure makes heavy use of virtualisation to provide a training infrastructure which closely replicates the Grid-Ireland production infrastructure. This gives learners an environment which is as similar as possible to what they will use on the production environment and makes the transition from t-Infrastructure to production infrastructure as painless as possible. The Grid-Ireland sites and services are replicated within a firewalled eLGrid subnet. Access to the Grid UI is provided via normal ssh, and also via web ssh and portals which can be launched directly from within the eLearning system.

3.9 Other training events

The summer schools and academic courses are only a small portion of the Grid training events arranged around the world. Looking at the GILDA tutorial page [14], it is possible to see a very long list of events using the GILDA infrastructure, and the majority of them are short training events. These events are very important for spreading Grid know-how among different user communities, because of their nature as “ad hoc” courses.

4. Multi-Middleware Co-existence

Many Grid projects, aiming to develop middleware services and create new infrastructures, organise tutorials and courses around the world as part of their activities, in order to disseminate and spread knowledge of the middleware they develop. Consequently, the services installed in the infrastructure and its topology are strictly related to that middleware. Nevertheless, there are many courses not related to a single middleware, where the aim is just to show how a Grid infrastructure can work. These independent schools include the university courses and several summer schools, such as the ISSGCs sponsored by ICEAGE. During the past ISSGC schools, middleware experts have asked to have an infrastructure executing only their middleware, with free access to modify everything needed. This approach has implied the use of a large number of machines.

To reduce this overhead for the training infrastructure, which results in a waste of machines and resources (people to administer the machines, electric power, space, etc.), the ICEAGE project has started an activity aiming to study the feasibility of a Multi-Middleware training infrastructure. This activity is carried out by one of the Work Packages of ICEAGE, WP4, which is responsible for providing a permanent training infrastructure for people willing to try the Grid and for supervising the infrastructure for the schools provided by the project.

The Multi-Middleware infrastructure consists of a single hardware infrastructure on which different middleware services can be simultaneously deployed and co-exist. This approach has been named “middleware coexistence” [16]. The main issue in following this approach is the harmonisation of the interaction among the installed services, which should allow users to use the same Grid resources independently of the chosen middleware. To solve this issue, users have to be able to access all the middlewares available on the infrastructure with only one personal certificate. Hence, the middlewares have to share the same Virtual Organization of students and share the access policies in order to grant the same privileges to each user.
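One concrete piece of this harmonisation is authorisation: each middleware stack must map the same certificate subject to equivalent local privileges. As a simplified illustration (the distinguished names and account prefix below are invented), a classic grid-mapfile shared by the stacks could map every student certificate onto the same pool of local accounts:

```
# Hypothetical shared grid-mapfile: one entry per student certificate DN.
# The leading dot requests a pool account (training001, training002, ...),
# so every stack grants the same local privileges to the same certificate.
"/C=XX/O=Training/OU=Students/CN=Student 001" .training
"/C=XX/O=Training/OU=Students/CN=Student 002" .training
```

Role-based systems such as VOMS express the same idea through VO membership rather than a flat file, but the invariant is identical: one certificate, one consistent set of privileges across all middlewares.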

Although X.509 certificates are accepted by the main Grid middlewares, they are used in different ways. As an example, user roles in the gLite middleware [6] are defined in a centralised server, the VOMS server [17], which authorizes all Grid users by adding the role information to a special proxy certificate. By contrast, in the OMII-UK middleware the roles are defined at site level and stored locally.

Other aspects of the harmonisation are resource availability and the consistency of the resource information, which are strictly related. Each middleware has an Information System showing the status of resources and of user jobs, and this information is used to perform operations on the resources. To keep the information consistent among the middlewares and to avoid simultaneous access to the resources from different middlewares, resource access has been limited to a single local scheduler, which is responsible for handling all jobs. Moreover, the information provided to the users relates to the status of the queues that are shared by the middlewares, so even though the middlewares have different approaches to monitoring the state of resources and presenting information, the information has to be consistent, since it refers to the same element: the batch scheduler queues.
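Funnelling every middleware stack through one local scheduler can be as simple as giving them a single shared batch queue. The fragment below is a hypothetical Torque/PBS qmgr configuration (the queue name and limits are invented) of the kind a site might use so that every stack submits jobs to, and reports status from, the same queue:

```
# Hypothetical Torque qmgr commands defining one queue shared by all
# middleware stacks, so their information systems describe the same state.
create queue training
set queue training queue_type = Execution
set queue training resources_max.walltime = 02:00:00
set queue training enabled = True
set queue training started = True
set server default_queue = training
```

Because every stack sees the same queue, their differing monitoring front-ends necessarily agree on resource availability and job status.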

An infrastructure based on Multi-middleware coexistence has been deployed in the context of the ICEAGE project. A permanent Grid training infrastructure based on GILDA [3] has been integrated with OMII-UK [18], Globus toolkit 4 [19], and UNICORE [20].

5. Academic courses

In the last few years Grid computing has been introduced into the curricula of undergraduate and master's courses at many universities. Initially, Grid computing was taught as part of consolidated courses (e.g., distributed systems, advanced computing, complex systems), while more recently it has become the topic of dedicated courses [21]. Although university courses should not be tied to specific projects and middleware, they commonly allow students to practise on a real training infrastructure. Consequently, the courses are influenced by the specific middleware/implementation they use.

The standardisation of training infrastructures is crucial to making courses more independent. It would allow teachers and students to choose the middleware they want to teach or learn without having to invest significant effort in setting up the training infrastructure.

This freedom in Grid courses can be achieved with a standard permanent training infrastructure such as the one described in section 4, especially for courses outside the computer science area, where the Grid is just a tool that students have to learn and use for their own purposes. A permanent training infrastructure allows students to practise on the Grid whenever they wish, constantly improving their skills in the scientific disciplines they are interested in.

GILDA is a good example of such a permanent training infrastructure.

6. Gap Analysis and Recommendations

The proliferation of Grid computing outside its original scope, i.e. scientific research, is still very limited. To overcome this limitation, it is important that the main Grid stakeholders promote new and more efficient training activities. These should involve not only the established Grid communities but also the universities, which are the most important places for the creation of knowledge.

Grid organisations should set up policies and standards for training and induction. The discussion of policies and standards should be carried out by official bodies such as the Educational and Training Task Force (ETTF) of the European e-Infrastructure Reflection Group and the Open Grid Forum.

Students learning about the Grid need practical exercises in order to understand how Grid infrastructures work and how they can be used. Exercises should not be limited to a specific period: students should be able to run tests and exercises whenever they think the Grid can help solve their problems. Therefore, the public authorities financing Grid projects should be encouraged to consider training an important part of a project and to finance the creation of a permanent multi-middleware training infrastructure.

7. Future work

This document has looked at the experiences of various parties in providing training infrastructures and has attempted to identify the common problems faced and best practices for solving them. It is not, however, a policy recommendations document, and it has thus made only some tentative recommendations based on the best practices identified. Further work is required to determine the policy implications of these best practices and to formulate them for dissemination to the relevant decision-makers.

8. Contributors

Roberto Barbera, Dipartimento di Fisica dell’Università di Catania - Italy, Istituto Nazionale di Fisica Nucleare, Sezione di Catania – Italy

Emidio Giorgio, Istituto Nazionale di Fisica Nucleare, Sezione di Catania – Italy

Marco Fargetta, Dipartimento di Fisica dell’Università di Catania - Italy

Gergely Sipos, MTA-SZTAKI - Hungary

9. IPR statement

The OGF takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Copies of claims of rights made available for publication and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the OGF Secretariat.

The OGF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights which may cover technology that may be required to practice this recommendation. Please address the information to the OGF Executive Director.

10. Disclaimer

This document and the information contained herein is provided on an “As Is” basis and the OGF disclaims all warranties, express or implied, including but not limited to any warranty that the use of the information herein will not infringe any rights or any implied warranties of merchantability or fitness for a particular purpose.

11. Full copyright notice

Copyright © Open Grid Forum (2006-2008). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the OGF or other organizations, except as needed for the purpose of developing Grid Recommendations in which case the procedures for copyrights defined in the OGF Document process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the OGF or its successors or assignees.

12. References

  1. Ian Foster and Carl Kesselman, The Grid 2: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 2004
  2. ICEAGE web page: http://www.iceage-eu.org/
  3. GILDA web page: https://gilda.ct.infn.it/
  4. INFN Grid activities web page: http://grid.infn.it/
  5. EGEE web page: http://www.eu-egee.org/
  6. gLite middleware web site: http://glite.web.cern.ch/glite/
  7. Genius portal web page: https://genius.ct.infn.it/
  8. List of P-GRADE Portal training events: http://portal.p-grade.hu/index.php?m=4&s=0
  9. Thierry Delaitre, Ariel Goyeneche, Tamas Kiss, Gabor Z. Terstyanszky, Noam Weingarten, Prince Maselino, Akis Gourgoulis, Stephen C. Winter, "Traffic Simulation in P-GRADE as a Grid Service", DAPSYS 2004 Conference. Budapest, Hungary. 2004
  10. Open Science Grid web site: http://www.opensciencegrid.org/
  11. Commercial Training Market Survey, 2007
  12. Gridwisetech company web site: http://www.gridwisetech.com/
  13. List of schools and tutorials supported by GILDA infrastructure: https://gilda.ct.infn.it/tutorials.html
  14. Official web site for the International Summer School of Grid Computing of 2008: http://www.iceage-eu.org/issgc08/index.cfm
  15. Roberto Barbera, Marco Fargetta and Emidio Giorgio, "Multiple Middleware co-existence: another aspect of Grid Interoperability", eScience'07. Bangalore, India. 2007
  16. R. Alfieri, R. Cecchini, V. Ciaschini, L. dell'Agnello, A. Frohner, K. Lorentey, F. Spataro, "From gridmap-file to VOMS: managing authorization in a Grid environment", 2005
  17. OMII-UK web page: http://omii.ac.uk/
  18. Globus web page: http://www.globus.org/
  19. UNICORE web page: http://www.unicore.eu/
  20. Section of ICEAGE web site listing Grid MSc. courses: http://www.iceage-eu.org/v2/msc%20courses.cfm
  21. eLGrid infrastructure: http://www.grid.ie/elgrid/

13. Appendices

 



