ENGINEERING INFERENCE ENGINE
WHAT DOES IT DO?
The Engineering Inference Engine is an experimental forward-chaining inference engine used for model-based design. Public cloud-based models are available at models.parkinresearch.com, some of which have tutorials in the ‘Models’ menu above.
Cloud-based solutions can be saved and shared with others via their URL, which can be pasted into e-mails or embedded into reports. Recipients may verify and build upon your results by clicking the URL and using the model via a browser – no need to install new software! URLs that encode the model and your particular input values may be obtained by typing ‘help’ (or by copying the URL in the browser’s address bar if the model has its own window).
FEATURES
Speed
- The Engineering Inference Engine is a super-fast forward-chaining inference engine that specializes in propagating engineering quantities.
- Each input value triggers a cascade of inferences; the engine prefers the simplest and quickest functions at each step while respecting how mathematical branches change the solution procedure (see the sketch after this list).
- Iterative solutions are deduced automatically with minimal computational cost by omitting anything that does not affect the value being solved for. This matters because nested iterations are where poorly customized routines burn through CPU time, especially in complex engineering designs where iterations can run 5-10 levels deep.
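As a minimal sketch of the cascade idea (hypothetical Python, not the engine's actual API or names), each relation is declared by its input and output quantities; whenever all inputs of a relation become known, it fires and its output joins the cascade:

    # Illustrative forward-chaining sketch; quantities and relations are invented for this example.
    relations = [
        # (input quantities, output quantity, black-box function)
        (("mass", "volume"), "density", lambda m, V: m / V),
        (("density", "velocity"), "dynamic_pressure", lambda rho, v: 0.5 * rho * v**2),
    ]

    def propagate(known):
        """Fire every relation whose inputs are all known until nothing new can be inferred."""
        changed = True
        while changed:
            changed = False
            for inputs, output, fn in relations:
                if output not in known and all(name in known for name in inputs):
                    known[output] = fn(*(known[name] for name in inputs))
                    changed = True
        return known

    # Supplying mass, volume and velocity yields density and then dynamic pressure,
    # with no call sequence specified by the user.
    print(propagate({"mass": 2.0, "volume": 0.5, "velocity": 10.0}))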
Scalability
- Model size is limited only by available memory.
- Data types are objects within the Engineering Inference Engine, so they can be inherited and combined to make composite types. New data types are built from booleans, integers, doubles, complex doubles, and vectors and matrices thereof; there are also character, string and struct types. For example, there is a data type with maximum and minimum limits, formed from three doubles encapsulated in a single object and used in connection with iterative functions (see the sketch after this list).
- The Engineering Inference Engine is generic: any problem that can be represented by known functions can be solved. Functions are black boxes: simply function handles with lists of parameters. The user defines the fundamental problem and decomposes it for the inference engine, and the engine then solves it by inferring which function handles to call in what sequence.
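The two points above might be sketched as follows (hypothetical Python, not the engine's actual types or API): a composite data type built from three doubles, read here as a value plus its two limits, and a relation registered purely as a function handle with a list of parameter names, which is all the engine needs in order to place it in a solution sequence:

    from dataclasses import dataclass

    # Hypothetical composite type: a double with maximum and minimum limits,
    # encapsulating three doubles in a single object.
    @dataclass
    class BoundedDouble:
        value: float
        minimum: float
        maximum: float

        def clamped(self) -> "BoundedDouble":
            """Return a copy with the value forced inside its limits (useful during iteration)."""
            v = min(max(self.value, self.minimum), self.maximum)
            return BoundedDouble(v, self.minimum, self.maximum)

    # Hypothetical registration of a black-box function: the engine sees only
    # the handle and its parameter list, never the body.
    def ideal_gas_density(p: float, R: float, T: float) -> float:
        return p / (R * T)

    registry = {
        "density": {
            "handle": ideal_gas_density,
            "parameters": ("pressure", "gas_constant", "temperature"),
        },
    }

    # The engine would call the handle once all of its parameters are known.
    print(registry["density"]["handle"](101325.0, 287.0, 288.15))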
Reusability
- Models are object-oriented. Objects and the sub-objects they are built from are addressable using dot notation, e.g. Root.Sub.Coordinates.z (see the sketch after this list).
- A standardized engineering/physics ontology mirrors the categorizations used in mathematics and engineering courses taught in schools and universities worldwide. For example, there are objects representing the thermodynamics of a simple compressible substance and the state of a fluid at rest, in motion, and within a duct. Each builds upon the last, forming a highly compact and reusable body of knowledge.
- Knowledge held within the Engineering Inference Engine is inherently more compact and reusable than conventional code. Metacognition research distinguishes three types of knowledge: declarative, procedural and conditional. Traditional code is mostly procedural, and that is why it fails: it is a brittle, incomplete representation that is fractured by even the slightest change in how a problem is described. Humans don’t work this way; we perceive problems declaratively. Inference engines represent models as declarative and conditional knowledge, approaching an unchanging standard as the ontology becomes fully decomposed. The user is freed from the endless cycle of writing and debugging subroutines and can instead focus on drilling down to the essence of each problem and exploring its solutions.
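As a sketch of the dot-notation addressing mentioned above (hypothetical Python, not the engine's object model), objects are composed of sub-objects and any quantity can be resolved from a path string:

    from types import SimpleNamespace

    # Hypothetical object tree; the engine's ontology objects are richer,
    # but the addressing idea is the same.
    Root = SimpleNamespace(
        Sub=SimpleNamespace(
            Coordinates=SimpleNamespace(x=0.0, y=1.5, z=3.2),
        ),
    )

    def resolve(obj, path):
        """Resolve a dot-notation address such as 'Sub.Coordinates.z' against an object tree."""
        for name in path.split("."):
            obj = getattr(obj, name)
        return obj

    print(resolve(Root, "Sub.Coordinates.z"))  # prints 3.2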