When designing a traditional architecture with a relational database near or at the bottom of your tech stack, the relational model is constructed to deliver precisely the requirements and use cases of the target system. Control is retained within the organisation, with its technical and data architects. The use cases and business requirements inform the model, and conversely the data model informs the developers building services upon it to meet those use cases. Control of this data model is therefore important in ensuring the quality and robustness of the software architecture, and in the ability to react efficiently and with agility to changing requirements.

With an ontology-driven data architecture, where the model is pervasive throughout the tech stack, it is equally important, if not more so, that this control is retained. Modelling purely from public-domain upper ontologies used as building blocks, or adopting a cohesive third-party ontology from a similar domain, may indeed deliver your business case, but there is some loss of control. A model constructed purely from upper-ontology Lego will be less cohesive, harder to validate domain-object data operations against (as the RDF will be more generic), and more prone to bugs creeping into the software that binds to the ontology, because the model is less domain-specific. When adopting a cohesive third-party, domain-specific ontology that is a close fit to your own domain, control is lost simply because the model is not your own to change at will (physically, of course, you can change it, but divergence then brings its own issues). Unless your domain is a perfect match, it is likely you will need to diverge from the model.
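
A minimal sketch, not part of the quoted passage, of the point about validation: when the model is built only from generic upper-ontology terms, the code that binds to it has to rely on conventions the ontology does not enforce. The namespaces, class names, and classifier values below are hypothetical, and the example assumes the Python rdflib library.

```python
# Sketch: domain-specific vs upper-ontology-only modelling of the same record.
# All URIs and class names here are illustrative assumptions, not a real ontology.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/domain#")     # hypothetical domain ontology
UPPER = Namespace("http://example.org/upper#")   # hypothetical generic upper ontology

# Domain-specific modelling: the class itself carries the business meaning,
# so a service can validate the object with a single, unambiguous check.
domain_g = Graph()
domain_g.add((EX.policy42, RDF.type, EX.InsurancePolicy))
domain_g.add((EX.policy42, EX.premium, Literal(1200)))

def is_policy_domain(g: Graph, node) -> bool:
    return (node, RDF.type, EX.InsurancePolicy) in g

# Upper-ontology-only modelling: the record is just a generic "Agreement"
# plus a classifier literal, so validation depends on string conventions
# that the model does not enforce -- an easy place for bugs to creep in.
generic_g = Graph()
generic_g.add((EX.policy42, RDF.type, UPPER.Agreement))
generic_g.add((EX.policy42, UPPER.classifier, Literal("insurance-policy")))
generic_g.add((EX.policy42, EX.premium, Literal(1200)))

def is_policy_generic(g: Graph, node) -> bool:
    # The binding code must know the agreed classifier string; a typo or a
    # differently spelled value ("InsurancePolicy", "policy") silently fails.
    return (node, RDF.type, UPPER.Agreement) in g and \
           (node, UPPER.classifier, Literal("insurance-policy")) in g

print(is_policy_domain(domain_g, EX.policy42))    # True
print(is_policy_generic(generic_g, EX.policy42))  # True, but brittle
```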





A quote saved on Oct. 15, 2014.

#ontology
#use-cases
#data-model
#architecture
#requirements
#domain-specific-ontology

