Content Integration

DataPorts

Product content and content quality are key to commerce, and as supply chains accelerate, the impact of errors escalates. When the wrong product is shipped to a consumer because of errors in an online catalogue, it costs the consumer, retailer, wholesaler, and manufacturer in satisfaction, shipping, restocking, lost sales, and reputation. When content errors occur upstream of the consumer, the impact worsens, as multiple parties see losses on entire shipments of products. Bad demand and supply planning drives loss at scale. Content errors have the power to break not just individual transactions but longer-term trust and relationships, anywhere in the value chain.
The industry is asking for more efficient, accurate ways to move product content through the value chain - and the DataPorts project has emerged to address this need.

Concept

Today, when a retailer needs to assemble a complete view of information about a product for an online product catalogue, that retailer will typically go to multiple web-based sources, search for information, transform the information obtained into a format fit for purpose, and assemble the information elements for publication to the catalogue.
That content integration process is difficult today because the steps involved, and the technology required to search for and retrieve information, vary from source to source, and even from item to item, making the process hard to fully automate. The content integration problem blocks operational efficiency, introduces product information errors, and slows speed to market, resulting in increased costs and lost sales.

One of the approaches traditionally proposed for solving this content integration problem is “data federation”. Data federation works by standardising on a common federated data model and mapping all data sources to that standardised model. Content might be mapped in real time in response to requests, or it might be stored using the common data model in an intermediate data store ready for consumption.
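To make the federation idea concrete, here is a minimal sketch in Python of mapping two differently shaped sources onto one common model. The class and field names (CommonProduct, weight_g, ean, and so on) are illustrative assumptions, not a published standard.

from dataclasses import dataclass


@dataclass
class CommonProduct:
    # The agreed, federated view of a product.
    gtin: str
    name: str
    net_weight_grams: float


def map_source_a(record: dict) -> CommonProduct:
    # Source A already reports weight in grams.
    return CommonProduct(
        gtin=record["gtin"],
        name=record["title"],
        net_weight_grams=record["weight_g"],
    )


def map_source_b(record: dict) -> CommonProduct:
    # Source B uses different field names and reports weight in kilograms.
    return CommonProduct(
        gtin=record["ean"],
        name=record["product_name"],
        net_weight_grams=record["weight_kg"] * 1000,
    )

Anything a source provides that the common model cannot express has to be dropped or forced into a generic field, which is exactly where the exceptions described below come from.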
The challenge is that agreeing on a standard model is difficult. Despite big efforts and real successes in standardisation across the value chain, there always seem to be exceptions, and in most cases some data remains that does not fit the model. We’ve been asked to find a more general way to share information, one that lets partners choose when and where to apply standards and adapt dynamically to changing needs.

Building Blocks

Role-Specific Data Model

Dynamic diversity will always outperform static unification. A Role-Specific Data Model is a deliberately limited data model that communicates only the information needed to support a specific task or set of tasks. Because each model is scoped to its own environment and business relationship, it fits that context precisely. DataPorts do the job of connecting these diverse, innovative data models, which businesses and sectors will create, test, implement, maintain, and improve over time.
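As an illustration, the sketch below contrasts two hypothetical role-specific models: one scoped to logistics planning and one scoped to publishing an online catalogue. Both schemas, and every field in them, are assumptions made for the example.

from dataclasses import dataclass


@dataclass
class LogisticsItem:
    # Only the fields needed to plan transport and warehousing.
    gtin: str
    gross_weight_kg: float
    pallets_per_truck: int


@dataclass
class OnlineCatalogueItem:
    # Only the fields needed to publish a product page.
    gtin: str
    display_name: str
    description: str
    image_url: str
    price: float

Each model can evolve independently as the task or the business relationship changes, without renegotiating a value-chain-wide standard.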

DataPorts

These are data virtualisation servers which share role-specific data schemas and enable content integration. They work by virtualising the participant data sources, optimising queries, transforming query results inline, and assembling an aggregate query response. They enable autonomous machine-to-machine (M2M) communication and process integration in which decisions are taken in real time using AI/ML: your systems always receive the freshest data, where and when they need it. Stored data gets stale fast. By making data available at its source, DataPorts avoid duplication and the intermediate central hubs whose data quality decays over time under data maintenance bullwhip effects.
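The sketch below shows one way such a server could answer a query. The names (SourceAdapter, DataPort, query_product) are hypothetical assumptions, not an existing API, and a production DataPort would add query optimisation, access control, and inline transformation; the point here is simply that every source is queried live at request time and the partial answers are aggregated into one response.

from typing import Protocol


class SourceAdapter(Protocol):
    # Anything that can answer for one virtualised data source.
    def fetch(self, gtin: str) -> dict:
        ...


class DataPort:
    def __init__(self, adapters: list[SourceAdapter]):
        self.adapters = adapters

    def query_product(self, gtin: str) -> dict:
        # Data is fetched at the source on every request, so the answer is
        # as fresh as the sources themselves: no intermediate copy to keep in sync.
        response: dict = {"gtin": gtin}
        for adapter in self.adapters:
            response.update(adapter.fetch(gtin))
        return response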

DataPorts | Outbound - Inbound - Peer-to-Peer

DataPorts are specialised web servers which make peer-to-peer content integration transparent.

No matter where data sources actually reside, DataPorts make data sources accessible together, through a single, common programming model.
They neutralise the differences in location, programming interfaces and formats, to allow content integration to be automated efficiently. Automation glues together event-driven processes and micro-services into intelligent value networks.

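As a hedged illustration of that glue, the fragment below builds on the hypothetical DataPort sketched above: an event-driven micro-service reacts to a "new listing" event with a single DataPort call, without knowing or caring where the underlying sources live. The event shape and the publishing step are assumptions made for the example.

def on_new_listing(event: dict, port: DataPort) -> None:
    # One call, one programming model, regardless of where the
    # underlying sources for this product actually reside.
    content = port.query_product(event["gtin"])
    publish_to_catalogue(content)


def publish_to_catalogue(content: dict) -> None:
    # Hypothetical downstream step, e.g. pushing the assembled
    # content to the online catalogue.
    print("publishing", content["gtin"])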

The Anatomy of a DataPort

DataPorts are built from three processing layers: Abstract, Transform, and Compose. Together these layers provide the capabilities needed for content integration across the value chain: a single query interface, optimised across diverse and distributed data sources, delivering content aligned to requirements for scope, units of measurement, file formats, consistency, and standardisation.

Abstract

Access any data source efficiently through a common interface, independent of the source implementation details. This is the abstraction layer, which virtualises sources and ensures interoperability inside and outside of the enterprise, from the simplest data sources all the way to full-featured partner data hubs.
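For example, the two hypothetical adapters below expose a plain CSV file and a partner data hub through the same fetch interface assumed in the earlier DataPort sketch; the DataPort can treat both identically. The file layout and the hub endpoint are illustrative assumptions.

import csv
import json
import urllib.request


class CsvFileAdapter:
    # One of the simplest possible sources: a local CSV file keyed by GTIN.
    def __init__(self, path: str):
        self.path = path

    def fetch(self, gtin: str) -> dict:
        with open(self.path, newline="") as handle:
            for row in csv.DictReader(handle):
                if row.get("gtin") == gtin:
                    return row
        return {}


class PartnerHubAdapter:
    # A full-featured partner data hub reached over HTTP.
    def __init__(self, base_url: str):
        self.base_url = base_url

    def fetch(self, gtin: str) -> dict:
        with urllib.request.urlopen(f"{self.base_url}/products/{gtin}") as response:
            return json.load(response)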

Transform

Perform operations on the data retrieved from the different sources, such as unit conversion, image format conversion, and deriving or inferring new information through advanced steps such as machine-learning-based labelling or the extraction of new insights from the data.
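A minimal sketch of such transformations, with assumed field names and a stubbed labelling function standing in for a real machine-learning model:

OUNCES_TO_GRAMS = 28.3495


def classify_category(description: str) -> str:
    # Stand-in for a machine-learning labelling step; a real DataPort
    # might call a trained model here.
    return "beverage" if "drink" in description.lower() else "unclassified"


def transform(raw: dict) -> dict:
    result = dict(raw)
    # Unit conversion: normalise a weight reported in ounces to grams.
    if "weight_oz" in result:
        result["net_weight_grams"] = float(result.pop("weight_oz")) * OUNCES_TO_GRAMS
    # Derived information: infer a category label when the source does not supply one.
    if "description" in result and "category" not in result:
        result["category"] = classify_category(result["description"])
    return result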

Compose

Combine the query result components from each source into a single response, respecting the relationships between components according to the role-specific data model.
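A minimal composition sketch, assuming the partial per-source results and the hypothetical online-catalogue fields used in the earlier examples:

def compose(gtin: str, partials: list[dict]) -> dict:
    # Merge partial results source by source; earlier sources stay
    # authoritative for any field they have already supplied.
    merged: dict = {"gtin": gtin}
    for partial in partials:
        for key, value in partial.items():
            merged.setdefault(key, value)
    # Keep only the fields defined by the role-specific model.
    wanted = ("gtin", "display_name", "description", "image_url", "price")
    return {key: merged[key] for key in wanted if key in merged}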

Do you want to invest?

Contact us, and we will send you more information.