Automated Natural Language Evaluators - (ANLE)

dc.contributor.author: Kaikhah, Khosrow
dc.date.accessioned: 2012-02-24T10:17:47Z
dc.date.available: 2012-02-24T10:17:47Z
dc.date.issued: 1993-12
dc.description.abstract: By the turn of the century, it is expected that most computer applications will include a natural language processing component. Both developers and consumers of NLP systems have expressed a genuine need for standard natural language system evaluators. Automated natural language evaluators appear to be the only logical solution to the overwhelming number of NLP systems that have been produced, are being produced, and will be produced. The system developed here is based on the Benchmark Evaluation Tool [7] and is the first attempt to fully automate the evaluation process. This effort was accomplished in two phases. In phase one, we identified a subset of the Benchmark Evaluation Tool for each class of NLP systems. In phase two, we designed and implemented a natural language generation system to generate non-causal, semantically meaningful test sentences. The generation system can be queued for each class of NLP systems. We followed an Object-Oriented Design (OOD) strategy, in which all concepts, including semantic and syntactic rules, are defined as objects. Each test sentence is generated as a chain of words satisfying a number of semantic, syntactic, pragmatic, and contextual constraints. The constraints imposed on the generation process increase dynamically while the sentence is being generated. This strategy guarantees semantic cohesiveness while maintaining syntactic integrity. In this approach, syntactic and semantic knowledge are utilized concurrently in word-objects. Each word-object is an independent knowledge source that, when called upon by the sentence-generator to join the chain, uses its local knowledge to decide whether it can become part of the sentence being generated.
dc.description.department: Computer Science
dc.description.sponsorship: Air Force Office of Scientific Research, Bolling Air Force Base, Washington, D.C., and Southwest Texas State University.
dc.description.sponsorship: Air Force Office of Scientific Research, Bolling Air Force Base
dc.description.sponsorship: Southwest Texas State University
dc.format: Text
dc.format.extent: 40 pages
dc.format.medium: 1 file (.pdf)
dc.identifier.citation: Kaikhah, K. (1993). Automated natural language evaluators - (ANLE). Final Report for: Research Initiation Program, Rome Laboratory.
dc.identifier.uri: https://hdl.handle.net/10877/3808
dc.language.iso: en
dc.source: Originally published as "Automated Natural Language Evaluators (ANLE)" Technical Report, Air Force Office of Scientific Research (AFOSR), December 1993.
dc.subject: natural language processing
dc.subject: intelligent systems
dc.subject: automated evaluation systems
dc.subject: Computer Science
dc.title: Automated Natural Language Evaluators - (ANLE)
dc.type: Report
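
The generation strategy described in the abstract can be pictured with a small sketch. The Python fragment below is only an illustration under assumed names and rules (WordObject, can_join, the part-of-speech pattern, and the semantic-tag compatibility test are all hypothetical, not taken from the report): the sentence-generator walks a sequence of syntactic slots, each word-object uses its local knowledge to decide whether it may join the chain, and every chosen content word adds a semantic constraint that later words must satisfy, so the constraints grow dynamically as the sentence is built.

# Hypothetical sketch of the word-object chain strategy; names and rules
# are assumptions for illustration, not the report's implementation.
class WordObject:
    def __init__(self, surface, pos, tags):
        self.surface = surface   # spelled-out word
        self.pos = pos           # syntactic category
        self.tags = set(tags)    # crude semantic features

    def can_join(self, chain, constraints):
        # Local decision: join the chain only if every constraint
        # imposed so far by the partial sentence is satisfied.
        return all(c(self, chain) for c in constraints)


def generate(lexicon, pos_pattern):
    """Build a sentence whose words follow pos_pattern and remain
    semantically compatible with the words already chosen."""
    chain, constraints = [], []
    for pos in pos_pattern:
        # Syntactic constraint for the current slot, plus all
        # semantic constraints accumulated so far.
        slot = [lambda w, c, p=pos: w.pos == p] + constraints
        word = next((w for w in lexicon if w.can_join(chain, slot)), None)
        if word is None:
            return None  # no word-object satisfies the constraints
        chain.append(word)
        if word.tags:
            # The chosen content word imposes a new constraint:
            # later content words must share at least one feature.
            constraints.append(
                lambda w, c, t=word.tags: not w.tags or w.tags & t
            )
    return " ".join(w.surface for w in chain)


lexicon = [
    WordObject("the", "det", []),
    WordObject("dog", "noun", ["animate"]),
    WordObject("idea", "noun", ["abstract"]),
    WordObject("barks", "verb", ["animate"]),
]
print(generate(lexicon, ["det", "noun", "verb"]))  # -> "the dog barks"

In this toy version, "the idea barks" can never be produced: once "idea" joins the chain, its semantic tag rules out "barks", which is a rough analogue of the semantic cohesiveness the abstract describes.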

Files

Original bundle

Name: fulltext.pdf
Size: 436.16 KB
Format: Adobe Portable Document Format