What CodeStrap Said:
There Are 5 Main Principles To Understand The Data Mesh:
Why Do We Need A Data Mesh?
What Are The Issues With The Conventional Model?
Palantir Data Mesh:
What CodeStrap Said:
In a recent interview with CEO Amit Kukreja, CodeStrap explained that the data mesh is a major paradigm shift in the software realm. The prevailing model is a centralised data warehouse from which everyone gets their analytics data; the data mesh, by contrast, enables true autonomy for teams.
This data mesh solution is brand new and "cutting edge", said CodeStrap.
“Palantir built Foundry for this purpose” CodeStrap mentioned.
CodeStrap explained that the central data warehouse is collapsing and that the SaaS model is stuck in silos that are hard to control, which points to the paradigm shift that will occur in the near future. The current experience is disjointed: individuals have to jump between 15 applications to derive any utility, which causes huge friction for developers.
CodeStrap explained that the industry needs Palantir Foundry, although organisations are currently unaware of this fact.
In addition, the idea of a data mesh calls for a fundamental shift in our assumptions, architecture, technical solutions, and the social structure of our organisations in how we manage, use, and own analytical data.
The main objective of data mesh is to eliminate the challenges of data availability and accessibility at scale. Data mesh allows business users and data scientists alike to access, analyze, and operationalize business insights from virtually any data source, in any location, without intervention from expert data teams.
Simply put, data mesh makes data accessible, available, discoverable, secure, and interoperable. This equates to faster time-to-value.

There Are 5 Main Principles To Understand The Data Mesh:
The five main principles of the data mesh are split across organisational, architectural, technological, operational, and principal changes, in comparison to the current conventional methodology.
- Organizationally, it shifts from centralized ownership of the data by specialists who run the data platform technologies, to a decentralized data ownership model pushing ownership and accountability of the data back to the business domains where it originates from or is used.
- Architecturally, it shifts from collecting data into monolithic warehouses and lakes to connecting data through a distributed mesh of data accessed through standardized protocols.
- Technologically, it shifts from technology solutions that treat data as a by-product of running pipeline code, to solutions that treat data and code that maintains it as one lively autonomous unit.
- Operationally, it shifts data governance from a top-down centralized operational model with human interventions, to a federated model with computational policies embedded in the nodes on the mesh.
- Principally, it shifts our value system from data as an asset to be collected, to data as a product to serve and delight the users.
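The principles above can be made concrete with a small, hypothetical sketch (the class and field names are illustrative, not Palantir's or any real mesh API): a "data product" bundles the data, the pipeline code that maintains it, its owning domain, and machine-enforceable access policies, so governance is evaluated at the node itself rather than by a central team.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DataProduct:
    """Hypothetical data product: data and the code that maintains it, as one unit."""
    name: str
    owner_domain: str                       # decentralised ownership by the originating domain
    pipeline: Callable[[], List[Dict]]      # the code that produces/refreshes the data
    # Federated, computational governance: policies live on the node itself.
    policies: List[Callable[[str], bool]] = field(default_factory=list)

    def read(self, requester_domain: str) -> List[Dict]:
        # Every embedded policy is checked here -- no central human gatekeeper.
        if not all(policy(requester_domain) for policy in self.policies):
            raise PermissionError(f"{requester_domain} may not read {self.name}")
        return self.pipeline()

# Example: the sales domain owns and serves its own analytical data as a product.
orders = DataProduct(
    name="sales.orders",
    owner_domain="sales",
    pipeline=lambda: [{"order_id": 1, "total": 99.0}],
    policies=[lambda requester: requester in {"sales", "finance"}],
)

print(orders.read("finance"))
```

The key design point mirrored from the principles: the policy check and the pipeline travel with the data product, so each domain serves consumers autonomously while governance remains computational rather than manual.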
Why Do We Need A Data Mesh?
From a macro perspective, there are evident drivers that have pushed society towards the data mesh: the increase in business complexity, combined with uncertainty, the proliferation of data, and the variability of data arriving from a range of different sources.
These sit alongside the need for business agility, the importance of extracting value from data, and the pressure on organisations to change and become resilient.
Society therefore has a vital decision to make. If we continue with the current conventional approach, there is an evident danger of plateauing and failing to capitalise on the utility of data.
If we adopt the data mesh approach instead, organisations can capitalise on the data trend and gain value from data at scale.

What Are The Issues With The Conventional Model?
1) Organisations today use a centralised strategy to process extensive amounts of data with various sources, types, and use cases. However, centralisation requires users to transport data from edge locations to a central data lake before it can be queried for analytics. This is inherently time-consuming and very expensive.
A data mesh solves this issue because its decentralised ownership model reduces time-to-insight and time-to-value by empowering teams to access and analyse non-core data quickly and easily.
2) In addition, as data grows exponentially, the query methodology used in a centralised management model requires changes to scale. In the conventional model, response times slow down as the number of data sources increases. This is damaging for a business that wants the agility to derive the most value from its data.
Once again, the data mesh solves this issue by delegating ownership from a central location to individual teams or users, which enables agility. Businesses can then make decisions in real time by closing the gap in time and space between an event occurring and its analysis.
3) Another issue with the current model is data residency: regulations can prohibit data migration when data is stored in a certain geopolitical region, for example when data stored in the EU needs to be accessed by a user in another nation. Abiding by data governance regulations can be costly and time-consuming, delaying how quickly businesses can analyse their data.
In a data mesh, data management is decentralised: domains are responsible for the quality, security, and transfer of their data products. A connectivity layer enables direct access to data sets where they reside, avoiding costly data transfer issues and residency concerns.
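A minimal sketch of such a connectivity layer, under stated assumptions (the `MeshRegistry` class and the in-process endpoints are hypothetical stand-ins; in practice each endpoint would be a domain-owned service in its own region): consumers resolve a data product by name and query it where it resides, instead of copying everything into a central lake first.

```python
from typing import Callable, Dict, List

class MeshRegistry:
    """Hypothetical connectivity layer: routes queries to domain-owned endpoints."""

    def __init__(self) -> None:
        self._endpoints: Dict[str, Callable[[], List[dict]]] = {}

    def register(self, product_name: str, endpoint: Callable[[], List[dict]]) -> None:
        # Each domain registers its own data product; ownership stays with the domain.
        self._endpoints[product_name] = endpoint

    def query(self, product_name: str) -> List[dict]:
        # The data is served in place by the owning domain -- nothing is migrated,
        # which sidesteps residency restrictions on moving the underlying data.
        return self._endpoints[product_name]()

mesh = MeshRegistry()
# The EU domain serves its customer data from where it resides.
mesh.register("eu.customers", lambda: [{"id": 7, "region": "EU"}])

print(mesh.query("eu.customers"))
```

The design choice this illustrates: the registry holds only routing information, never the data itself, so access crosses regions while the data stays put.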
Palantir Data Mesh:
Palantir explains: “Palantir Data Mesh can upgrade historical investments in data warehouses, data lakes, and other legacy infrastructure to unlock additional value. By meeting the enterprise wherever they may be along the digital modernization journey, Palantir Data Mesh seamlessly integrates the enterprise’s legacy systems through its interoperable architecture and automated deployment.”
Alternative platforms charge customers to put their data in and to get their data out, and the solutions themselves are insufficient. This includes data warehouses and data lakes:
1) Data warehouses rely on known schema, neat hierarchies, and rigid formats to store structured data, leaving little room for flexibility.
2) Data lakes are designed for basic query and analysis, but data legibility and usability are crippled by exponential data volume.
“Palantir Data Mesh comes with data connections to hundreds of source systems. It allows domains to combine this source data, along with models of the business, to form an operational digital twin of the enterprise. The Mesh automatically deploys and manages the infrastructure required to build and refresh these domain-produced data objects.”
Furthermore, the product automatically generates and stores pipeline code and lineage as an integrated part of each dataset — and automatically builds, deploys, and runs the code for data products. In addition, data objects contain instructions on how they are to be used and combined, enabling each domain to use data independently while all data in the Mesh remains wholly in control of the enterprise.
Palantir Data Mesh provides multiple planes of infrastructure to support any type of user — including software-driven data integration, templates for common data pipeline workflows, auto-scaling of computation nodes, and more. The Mesh empowers data product developers to independently release and globally federate new data products. To keep this tightly organized and easily accessible, the Mesh also automatically forms a graph of globally connected data products, each still owned by individual domains.
Palantir Data Mesh automatically standardizes interoperability across domain objects, while also federating updates back to legacy / source systems. To address the automated decision execution requirement of this principle, the Mesh also enables ML, AI, or other logic-based decision automation that not only captures the full context surrounding those decisions, but also enables global scenario evaluation and cross-domain optimization.
READ HERE FOR MORE: https://www.palantir.com/japan/data-mesh/