Bridgestone is the world’s largest manufacturer of tyres and a global leader in mobility solutions. To support their dealer network, they wanted to implement a data-driven application, built on millions of data points, that can predict tyre demand down to the micro-market level. The goal is to increase a dealer’s inventory turns and overall sales by finding gaps in product assortments within specific trade areas; on Bridgestone’s side, the application helps optimize production planning.
To support the application, Bridgestone was looking to set up a Data Lake with the following specifications:
• The Data Lake must offer centralized active archival and a centralized suite of tools to process any kind of data.
• Cloud-based, elastic and parallel distributed processing capabilities on demand.
• Safeguards to help protect against data breach in support of GDPR obligations.
To meet the customer’s requirements, Big Industries designed and deployed an Azure-based Data Lake. We were responsible for the overall solution design, cross-domain technology governance and alignment of the solution with the continued evolution of technology and trends in data processing. We developed data ingestion pipelines in which data was joined, combined, cleansed and enriched into the format and structure required for further processing and analysis. Once our prototype was accepted by the business, we industrialized the solution by orchestrating the analytics pipelines, promoting features to production, and automating quality controls, system and process monitoring, and service stability.
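The cleanse-and-enrich stage of such an ingestion pipeline can be sketched in a few lines of Python. This is a minimal illustration only; the field names, reference data and rules below are hypothetical and do not reflect Bridgestone's actual schema or the production implementation.

```python
# Hypothetical sketch of a cleanse-and-enrich ingestion step.
# All field names and values are illustrative, not the real schema.

raw_sales = [
    {"dealer_id": "D1", "sku": "T100", "qty": "12", "trade_area": "be-antwerp"},
    {"dealer_id": "D2", "sku": "T100", "qty": None, "trade_area": "be-ghent"},  # missing qty
]

# Reference data the sales records are joined against.
product_ref = {"T100": {"brand": "Bridgestone", "segment": "passenger"}}

def cleanse(record):
    """Drop records with missing quantities and normalise types."""
    if record["qty"] is None:
        return None
    return {**record, "qty": int(record["qty"])}

def enrich(record):
    """Join each sale with product reference data (brand, segment)."""
    ref = product_ref.get(record["sku"], {})
    return {**record, **ref}

# Cleanse first, filter out rejected records, then enrich the survivors.
prepared = [enrich(r) for r in (cleanse(s) for s in raw_sales) if r is not None]
```

In a production Data Lake the same join/cleanse/enrich logic would typically run as a distributed job rather than in-memory lists, but the shape of the transformation is the same.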
The Data Lake will be used as a shared service for multiple programs within Bridgestone, and will help the organization in the following ways:
• Answer questions, build applications, and unlock projects that have hitherto been too expensive to tackle, that Bridgestone might not have thought of yet, or that could not be solved with data silos.
• Unlock innovation by granting secure, controlled and flexible access to tools and data.
• Handle bigger data: data volumes will explode in the coming years, and continuing to use yesterday's systems will not be cost-effective.