Case Study: Intel Accelerated Time-to-Information using Data Virtualization

September 23rd, 2015

Prerequisite: None

Saji Mathew

Data Virtualization Program Manager


Anil Varhadkar

Enterprise Architect


For many companies, the data landscape is fairly diverse, with data residing in heterogeneous systems such as traditional and legacy databases, big data platforms, NoSQL databases, in-memory databases, and so on. This can be especially true of customer data. In addition to the variety of data sources, there is a range of protocols for consuming data from those systems. While ODBC and JDBC continue to be widely used, XML, JSON, and several other proprietary and web service-based methods are growing. The emergence of Cloud/SaaS and new data consumption channels like mobility offers flexibility to business customers, but it also presents increased integration complexity to IT.

As data becomes more distributed (inside and outside the enterprise), and as data sources continue to change or be added, the data integration needed to enable business decisions becomes more complex and challenging. Difficulty in addressing this problem can affect how quickly a company responds to changes in customer demand. Traditional methods of copying data from one source to another, or into a common data container such as a spreadsheet or an enterprise data warehouse (EDW), are no longer scalable from the perspectives of business velocity and adaptability. An agile data integration method that simplifies information access is needed.

This session will discuss how to address this data integration challenge using data virtualization, along with best practices for accelerating time-to-information for the business. It is geared to business or IT managers, executive sponsors, enterprise architects, and program/project managers.

You Will Learn

  • Decoupling data consumers from data sources/providers
  • Merging structured and unstructured data
  • Easing external (Cloud/SaaS) data integration
  • Simplifying the enterprise architecture and capability stack
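The decoupling and merging ideas above can be illustrated with a minimal sketch (all names hypothetical; a toy federated view in Python's standard library, not any particular data virtualization product): consumers query one virtual view, while the virtualization layer resolves queries against a relational source and a semi-structured JSON source without copying data into a warehouse.

```python
import json
import sqlite3

# Source 1: a relational table (stand-in for a traditional RDBMS reached via ODBC/JDBC).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE crm_customers (id INTEGER, name TEXT, region TEXT)")
db.execute("INSERT INTO crm_customers VALUES (1, 'Acme', 'US'), (2, 'Globex', 'EU')")

# Source 2: a JSON payload (stand-in for a Cloud/SaaS or NoSQL source).
saas_payload = json.loads('[{"id": 3, "name": "Initech", "region": "APAC"}]')

def virtual_customers():
    """Unified 'customers' view: consumers see one schema, not the sources."""
    rows = [dict(zip(("id", "name", "region"), r))
            for r in db.execute("SELECT id, name, region FROM crm_customers")]
    rows += [{"id": d["id"], "name": d["name"], "region": d["region"]}
             for d in saas_payload]
    return rows

# The consumer is decoupled: replacing or adding a source changes only the view.
print([c["name"] for c in virtual_customers()])  # → ['Acme', 'Globex', 'Initech']
```

The point of the sketch is the seam: business consumers call `virtual_customers()` (in practice, a SQL view exposed by the virtualization layer), so sources can be swapped, moved to the cloud, or added without touching consuming applications.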