minutes from 2 Sep 2004
(Slightly expanded/amended ...)
Attending:
Dieter Gawlick
Cecile Madsen
Susan Malaika
Chris Kantarjiev
Vijay Dailani
Scenarios - OGSA scenarios are interesting, but Dieter thinks there
is an awful lot of boilerplate to get through before reaching any "meat".
We all need "discovery", but what is discovery?
Susan and Steve are working on the INFOD usage docs to remove/revise
the boilerplate and make them more accessible, while still
using the OGSA headings/examples.
Should we be repeating the OGSA examples? How do we differentiate
our work?
Dieter and Susan discussed the "Medical Screening Use Case". Currently the use case is very batch- and pull-oriented.
The suggestion is to screen images with 'Classification Software' and decide what to do with them. This would be done by
taking the classification of each image and associating it with the doctors whose profiles (property values and filters)
match best. The addition of new doctors, or a change in property values or filters, will normally change the result set.
This approach would allow us to build and leverage a large virtual organization, to act on critical images immediately,
and to find out whether things get done. We can add quality and speed by pushing images to a set of experts with
different areas of focus, concurrently as well as sequentially. And last but not least, we could allow doctors to refine
the classification and to involve other experts. Changes in classification may be fed back to the classification engine
to improve it and even to trigger the review of existing images.
We should also be able to find out whether no doctor with the right profile is available, whether an urgent image has
not been processed in time, and most likely many more things.
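A minimal sketch of what that matching step might look like (Python; all names, fields, and the alerting
behaviour are hypothetical illustrations, not part of any spec):

    # Each doctor registers property values plus a filter; a newly classified
    # image is pushed to every doctor whose filter matches the classification.
    # Adding a doctor or changing a filter changes the result set from then on.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    Classification = Dict[str, str]   # e.g. {"modality": "MRI", "urgency": "high"}

    @dataclass
    class DoctorProfile:
        name: str
        properties: Dict[str, str]                 # property values describing the doctor
        matches: Callable[[Classification], bool]  # the doctor's filter

    registry: List[DoctorProfile] = []             # highly dynamic: doctors come and go

    def disseminate(image_id: str, c: Classification) -> List[str]:
        """Return the doctors whose filters match this image's classification."""
        hits = [d.name for d in registry if d.matches(c)]
        if not hits:
            # the "no doctor with the right profile" case called out above
            print(f"ALERT: no matching profile for image {image_id}")
        return hits

    registry.append(DoctorProfile(
        name="Dr. A",
        properties={"speciality": "neuroradiology"},
        matches=lambda c: c.get("modality") == "MRI" and c.get("urgency") == "high",
    ))
    print(disseminate("img-001", {"modality": "MRI", "urgency": "high"}))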
Here is a reference: one researcher doing 'classification technology' is Ed Chang; he calls the technology PBIR
(Perception-based Image Retrieval). There are other widely used technologies, such as the 'Recommendation Engines'
known from Business Intelligence. See http://www-db.stanford.edu/~echang/
Anyway, the dissemination of automatically classified images based on highly dynamic profiles, the supervision of the
delivery of reports, and the classification and dissemination of these reports would be an interesting INFOD
application.
Dieter asserts that there may be a great deal of ID source
material/scenarios in the on-line banking world.
He will come up with some details for Susan/Steve to use.
We (ID) may have something to offer in the discovery model,
since we are planning to do something similar with our registry.
But what is WSRF going to do about service discovery?
OGSA has workflow ... very process-oriented workflow. ID has something
to offer to workflow in the way that subscription causes publication.
It appears that OGSA workflow can only be driven by the process
steps. ID can look at the data and drive the workflow... either
by explicit events/process steps or by non-events. So we can
drive the workflow via the data, rather than by a pre-set
event stream.
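A rough sketch of that contrast (Python; every name here is invented for illustration): steps are not wired
into a fixed sequence; instead a subscription fires whenever matching data arrives, and a "non-event"
(expected data that never shows up) can drive the workflow too.

    import time
    from typing import Any, Callable, Dict, List, Tuple

    Record = Dict[str, Any]

    class DataDrivenWorkflow:
        def __init__(self) -> None:
            # (predicate over the data, step to run when it matches)
            self.subscriptions: List[Tuple[Callable[[Record], bool], Callable[[Record], None]]] = []
            # (deadline, step to run if nothing arrived by then)
            self.deadlines: List[Tuple[float, Callable[[], None]]] = []

        def subscribe(self, predicate, step) -> None:
            self.subscriptions.append((predicate, step))

        def expect_by(self, deadline: float, on_missing) -> None:
            self.deadlines.append((deadline, on_missing))

        def publish(self, record: Record) -> None:
            # the content of the data decides which step runs next
            for predicate, step in self.subscriptions:
                if predicate(record):
                    step(record)

        def tick(self) -> None:
            # non-events: a deadline passing without data also drives the workflow
            # (a real system would cancel the deadline when matching data arrives;
            # omitted here for brevity)
            now = time.time()
            due = [d for d in self.deadlines if d[0] <= now]
            self.deadlines = [d for d in self.deadlines if d[0] > now]
            for _, on_missing in due:
                on_missing()

    wf = DataDrivenWorkflow()
    wf.subscribe(lambda r: r.get("urgency") == "high",
                 lambda r: print("route", r["id"], "to an on-call expert"))
    wf.expect_by(time.time(), lambda: print("escalate: expected report never arrived"))
    wf.publish({"id": "img-001", "urgency": "high"})
    wf.tick()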
No feedback yet on Cecile/Chris' definitions. Vijay will
provide some by Saturday.
Updated spec: we may want to go out and look at other rule
specification systems and consolidate/comment in the doc.
It's not clear which of the rule systems that are out there
are applicable in a high-reliability, scalable world, rather
than just dealing with a few objects one at a time.
For next time, Dieter will come up with an example of rule systems that
do/don't work in this environment, based on Oracle's discoveries
around AQ (Advanced Queuing).
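As a strawman for that discussion (purely illustrative Python, not modelled on any particular product),
the difference between evaluating every rule against every object one at a time and a set-oriented
approach that indexes simple equality rules so matching a whole batch becomes a lookup:

    from collections import defaultdict
    from typing import Any, Dict, List, Tuple

    Obj = Dict[str, Any]
    Rule = Tuple[str, str, str]   # (rule id, attribute, required value)

    # one at a time: every rule runs against every object
    def match_one_at_a_time(rules: List[Rule], objs: List[Obj]) -> List[Tuple[str, str]]:
        out = []
        for o in objs:
            for rule_id, attr, value in rules:
                if o.get(attr) == value:
                    out.append((rule_id, o["id"]))
        return out

    # set-oriented: index rules by (attribute, value), then match a batch by lookup
    def match_set_oriented(rules: List[Rule], objs: List[Obj]) -> List[Tuple[str, str]]:
        index = defaultdict(list)
        for rule_id, attr, value in rules:
            index[(attr, value)].append(rule_id)
        out = []
        for o in objs:
            for attr, value in o.items():
                for rule_id in index.get((attr, value), []):
                    out.append((rule_id, o["id"]))
        return out

    rules = [("r1", "urgency", "high"), ("r2", "modality", "MRI")]
    objs = [{"id": "img-001", "urgency": "high", "modality": "MRI"}]
    print(match_one_at_a_time(rules, objs))
    print(match_set_oriented(rules, objs))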
We are oriented towards putting things in multiple queues right now.
But the consumer may not want to, or may not be able to, see those queues,
especially cross-VO. So we need a mechanism for replaying
past events without the producer having to understand the ID structure
of the consumer. The consumer just wants to get the information;
it shouldn't have to know what the underlying structure was.
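A sketch of such a replay facade (Python; invented names, and it assumes each underlying queue is
already time-ordered): the consumer asks only for a time window and a filter, never for a queue name.

    import heapq
    from typing import Any, Callable, Dict, Iterator, List

    Event = Dict[str, Any]

    class ReplayFacade:
        def __init__(self, queues: Dict[str, List[Event]]) -> None:
            # producer-side detail, hidden from consumers; each list is sorted by "ts"
            self._queues = queues

        def replay(self, since: float, wanted: Callable[[Event], bool]) -> Iterator[Event]:
            """Yield past events in time order, regardless of which queue held them."""
            streams = [(e for e in q if e["ts"] >= since) for q in self._queues.values()]
            merged = heapq.merge(*streams, key=lambda e: e["ts"])
            return (e for e in merged if wanted(e))

    facade = ReplayFacade({
        "queue-a": [{"ts": 1.0, "topic": "images", "id": "img-001"}],
        "queue-b": [{"ts": 2.0, "topic": "reports", "id": "rep-001"}],
    })
    for event in facade.replay(since=0.0, wanted=lambda e: e["topic"] == "images"):
        print(event)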
Grimshaw document: not much here about caching, but a lot...