[Attempto] generating OWL ontology

Kaarel Kaljurand kaljurand at gmail.com
Wed Apr 1 16:49:32 CEST 2009


Hi Alireza,

On Wed, Apr 1, 2009 at 3:42 PM, Alireza <alireza.khoshkbari at gmail.com> wrote:
> You mean that the problem is due to anonymous Individuals?

Yes, the problem in this case was with anonymous individuals.
I'll try to explain below what goes on here.

> When I tried just one sentence : "Process_1 is a process."
> the output owl was correct, as follows:
>   <owl:Thing rdf:about="#Process_1">
>     <rdf:type>
>       <owl:Class rdf:about="#process"/>
>     </rdf:type>
>   </owl:Thing>
> As it denotes the process_1 is a type of process class.

Yes, if you submit only a single sentence then the ACE->OWL
service returns a single class assertion, and no anonymous individuals.


I incorrectly assumed that you were using ACE View, but actually
you are converting ACE directly into OWL and then loading the OWL
into Protege 3. Right?

So, when the ACE->OWL translator sees something like:

(1) Process_1 is a process.
(2) Software_1 is a technology.

then it converts them (using anonymous individuals) to:

   ClassAssertion(
      Class(:process)
      AnonymousIndividual(560570469383087235)
   )
   SameIndividual(
      Individual(:Process_1)
      AnonymousIndividual(560570469383087235)
   )
   ClassAssertion(
      Class(:technology)
      AnonymousIndividual(408004904989529692)
   )
   SameIndividual(
      Individual(:Software_1)
      AnonymousIndividual(408004904989529692)
   )

This is semantically OK: since each named individual is asserted to be the
same as its anonymous individual, a trivial reasoning step turns this into
the equivalent form:

   ClassAssertion(
      Class(:process)
      Individual(:Process_1)
   )
   ClassAssertion(
      Class(:technology)
      Individual(:Software_1)
   )

Your visualization tool does not perform this reasoning, though.
(You could try running Protege's built-in reasoner and then visualizing
the classified ontology.)
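
Just to illustrate what the reasoning step buys you: outside of Protege you
could also check the inferred types programmatically, e.g. with the OWL API
and the HermiT reasoner. This is only a sketch under my own assumptions (the
file name and the full IRIs are placeholders, not what ACE->OWL actually
outputs):

   import java.io.File;

   import org.semanticweb.owlapi.apibinding.OWLManager;
   import org.semanticweb.owlapi.model.*;
   import org.semanticweb.owlapi.reasoner.OWLReasoner;
   import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;

   public class CheckTypes {
      public static void main(String[] args) throws Exception {
         OWLOntologyManager man = OWLManager.createOWLOntologyManager();
         // "output.owl" and the IRIs below are placeholders.
         OWLOntology ont = man.loadOntologyFromOntologyDocument(new File("output.owl"));
         OWLDataFactory df = man.getOWLDataFactory();

         OWLReasonerFactory rf = new org.semanticweb.HermiT.Reasoner.ReasonerFactory();
         OWLReasoner reasoner = rf.createReasoner(ont);

         OWLNamedIndividual p1 =
            df.getOWLNamedIndividual(IRI.create("http://example.org/test#Process_1"));
         // The inferred types of Process_1 should include the class 'process',
         // because of the SameIndividual link to the anonymous individual.
         System.out.println(reasoner.getTypes(p1, false));
      }
   }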

Now, to avoid the clutter caused by anonymous individuals, the ACE->OWL
translator treats single-sentence inputs a bit differently: if you translate
(1) and (2) separately and then merge the results, you get only the two class
assertions and no anonymous individuals. This is how ACE View does it: all
sentences are parsed separately and the translation outputs are merged.
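
If you script this merging yourself, it could look roughly like the following
OWL API sketch (just an illustration; the file names and the ontology IRI are
placeholders, and this is not the actual ACE View code):

   import java.io.File;
   import java.io.FileOutputStream;

   import org.semanticweb.owlapi.apibinding.OWLManager;
   import org.semanticweb.owlapi.model.*;

   public class MergeTranslations {
      public static void main(String[] args) throws Exception {
         // Load each per-sentence ACE->OWL output with its own manager,
         // so that identical ontology IRIs do not clash.
         OWLOntology a = OWLManager.createOWLOntologyManager()
            .loadOntologyFromOntologyDocument(new File("sentence1.owl"));
         OWLOntology b = OWLManager.createOWLOntologyManager()
            .loadOntologyFromOntologyDocument(new File("sentence2.owl"));

         // Pour all axioms into one merged ontology and save it.
         OWLOntologyManager man = OWLManager.createOWLOntologyManager();
         OWLOntology merged = man.createOntology(IRI.create("http://example.org/merged"));
         man.addAxioms(merged, a.getAxioms());
         man.addAxioms(merged, b.getAxioms());
         man.saveOntology(merged, new FileOutputStream("merged.owl"));
      }
   }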

Of course, the best would be if ACE->OWL avoided creating anonymous
individuals whenever possible. (Note that there are cases where it is not
possible to have a semantically equivalent ontology without anonymous
individuals.)
So, I'm still working on it. The next version of APE will already be a
bit better in that respect.


One (hackish) solution that preserves the semantics but gets rid of the
anonymous individuals is to prefix sentences like (1) and (2) with
"Everything that is", like this:

(1') Everything that is Process_1 is a process.
(2') Everything that is Software_1 is a technology.

I.e., you could let the user enter (1) but then convert it automatically
to (1'), so that all your sentences start with "Every". In this case you are
guaranteed to get no anonymous individuals in the output, no matter how many
sentences you translate at the same time.
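
The rewriting itself is trivial; e.g. a rough Java helper like the following
would do (my own illustration, not part of APE, and it only makes sense for
sentences of the form "ProperName is a noun."):

   public class EveryRewriter {

      // Prefix a sentence like "Process_1 is a process." with "Everything that is",
      // giving "Everything that is Process_1 is a process."
      public static String toEverySentence(String aceSentence) {
         String s = aceSentence.trim();
         if (s.startsWith("Every")) {
            return s; // already of the "Every..." form, leave it alone
         }
         return "Everything that is " + s;
      }

      public static void main(String[] args) {
         System.out.println(toEverySentence("Process_1 is a process."));
         System.out.println(toEverySentence("Software_1 is a technology."));
      }
   }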

I'm not sure, though, that the visualization tool can handle the output in
this case, since the left-hand side of the SubClassOf axiom would now contain
a singleton class. This is what ACE->OWL would produce:

   SubClassOf(
      ObjectIntersectionOf(
         Class(owl:Thing)
         ObjectOneOf(
            Individual(:Process_1)
         )
      )
      Class(:process)
   )

But this is easy to map to (the semantically equivalent):

   ClassAssertion(
      Class(:process)
      Individual(:Process_1)
   )

Maybe I should do it already in ACE->OWL...
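
In case you want to do the mapping yourself as a post-processing step, here is
a rough OWL API sketch (again only an illustration under my own assumptions,
not something ACE->OWL or ACE View currently does):

   import java.util.HashSet;
   import java.util.Set;

   import org.semanticweb.owlapi.model.*;

   public class SingletonSubClassRewriter {

      // Rewrite SubClassOf(ObjectIntersectionOf(owl:Thing ObjectOneOf(i)) C)
      // into the semantically equivalent ClassAssertion(C i).
      public static void rewrite(OWLOntologyManager man, OWLOntology ont) {
         OWLDataFactory df = man.getOWLDataFactory();
         for (OWLSubClassOfAxiom ax :
               new HashSet<OWLSubClassOfAxiom>(ont.getAxioms(AxiomType.SUBCLASS_OF))) {
            OWLClassExpression sub = ax.getSubClass();
            if (!(sub instanceof OWLObjectIntersectionOf)) {
               continue;
            }
            OWLIndividual ind = null;
            boolean ok = true;
            for (OWLClassExpression op : ((OWLObjectIntersectionOf) sub).getOperands()) {
               if (op.isOWLThing()) {
                  continue; // the owl:Thing conjunct adds nothing
               }
               if (op instanceof OWLObjectOneOf && ind == null) {
                  Set<OWLIndividual> inds = ((OWLObjectOneOf) op).getIndividuals();
                  if (inds.size() == 1) {
                     ind = inds.iterator().next();
                     continue;
                  }
               }
               ok = false; // some other conjunct, leave the axiom as it is
            }
            if (ok && ind != null) {
               man.removeAxiom(ont, ax);
               man.addAxiom(ont, df.getOWLClassAssertionAxiom(ax.getSuperClass(), ind));
            }
         }
      }
   }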

Hope this helped,
Kaarel

