Digital seminar, 12 April
Data Virtualization: Technology and Use Cases

Data virtualization is a new and agile technology for integrating data from all kinds of systems. This seminar discusses the benefits and characteristics of data virtualization technology; products are compared and use cases are discussed. In addition, we look into its relationship with topics such as MDM, Data Governance, and the IoT.

NB: The seminar is held in English.

Organized by Faggruppen BI & Analytics (the BI & Analytics specialist group) as part of the Webinars series.

Member price: NOK 3,000.00 excl. VAT (NOK 3,750 incl. VAT). Regular price: NOK 3,700.00 excl. VAT (NOK 4,625 incl. VAT).

Abstract: Data virtualization is the new data integration technology. It allows for more agile data integration through decoupling data consumers from data stores.
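Before turning to the why, a minimal sketch of what that decoupling means in practice may help. It is plain Python rather than any product's actual API, and every name in it (the registry, register, query, the customers.csv file) is invented for the illustration: consumers only know a virtual table name, while the binding of that name to a physical source can be swapped without touching consumer code.

    import csv
    from typing import Callable, Dict, Iterable, List

    Row = Dict[str, str]

    # Registry of virtual tables: name -> a function that yields rows from some physical source.
    _virtual_tables: Dict[str, Callable[[], Iterable[Row]]] = {}

    def register(name: str, source: Callable[[], Iterable[Row]]) -> None:
        """Bind a virtual table name to a physical source."""
        _virtual_tables[name] = source

    def query(name: str, predicate: Callable[[Row], bool] = lambda r: True) -> List[Row]:
        """Consumers ask for a virtual table by name; they never see where the data lives."""
        return [row for row in _virtual_tables[name]() if predicate(row)]

    def customers_from_csv() -> Iterable[Row]:
        # Hypothetical physical source; it could be swapped for a REST call or a
        # database query without changing any consumer code.
        with open("customers.csv", newline="") as f:
            yield from csv.DictReader(f)

    register("customer", customers_from_csv)

    # Consumer code depends only on the virtual table name "customer".
    norwegian_customers = query("customer", lambda r: r.get("country") == "NO")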

But why do we need a new technology? Data is increasingly becoming a crucial asset for organizations to survive in today’s fast-moving business world. In addition, data becomes more valuable when enriched and/or fused with other data. Unfortunately, in most organizations enterprise data is dispersed over numerous systems, all using different technologies. Bringing all that data together is, and has always been, a major technological challenge.

In addition, more and more data is available outside the traditional enterprise systems and databases. It’s stored in big data platforms, cloud applications, spreadsheets, simple file systems, weblogs, social media systems, and so on. For each system that requires data from several other systems, a different integration solution is deployed. In other words, integration silos have been developed that over time have led to a complex integration labyrinth. The disadvantages are clear:

  • Inconsistent integration specifications
  • Inconsistent results
  • Increased time to market
  • Increased development costs
  • Increased maintenance costs

The bar for integration tools and technology has been raised: the integration labyrinth has to disappear. It must become easier to integrate data from multiple systems, and integration solutions should be easier to design and maintain to keep up with the fast-changing business world.

All these new demands are changing the rules of the integration game: they require that integration solutions be developed in a more agile way. One of the technologies making this possible today is Data Virtualization.

This seminar focuses on Data Virtualization. The technology is explained, advantages and disadvantages are discussed, products are compared, design guidelines are given, and use cases are presented.

What you will learn:

  • How Data Virtualization can be used to integrate data in a more agile way
  • How to embed Data Virtualization in Business Intelligence systems
  • How Data Virtualization can be used for integrating on-premises and Cloud applications
  • How to migrate to a more agile integration system
  • How Data Virtualization products work
  • How to avoid well-known pitfalls
  • Lessons learned from real-life experiences with Data Virtualization

Main Topics:

  • Introduction to Data Virtualization
  • The changing world of data and application integration
  • Under the hood of a Data Virtualization server
  • Caching for performance and scalability
  • Query optimization techniques
  • Data Virtualization and the Logical Data Warehouse Architecture
  • Data Virtualization and Big Data
  • Data Virtualization and Master Data Management
  • Data Virtualization, Information Management and Data Governance
  • The future of Data Virtualization

Topics:

1. Introduction to Data Virtualization

  • What is data virtualization?
  • Use cases of data virtualization: business intelligence, data science, democratization of data, master data management, distributed data
  • Differences between data abstraction, data federation, and data integration
  • Open versus closed data virtualization servers
  • Market overview: AtScale, Cirro Data Hub, Data Virtuality, Denodo Platform, Dremio, FraXses, IBM Data Virtualization Manager for z/OS, RedHat JBoss Data Virtualization, Stone Bond Enterprise Enabler, and Tibco Data Virtualization

2. How Do Data Virtualization Servers Work?

  • The key building block: the virtual table (a minimal sketch follows this list)
  • Integrating data sources via virtual tables
  • Implementing transformation rules in virtual tables
  • Stacking virtual tables
  • Impact analysis and lineage
  • Running transactions – updating data
  • Securing access to data in virtual tables
  • Importing non-relational data, such as XML and JSON documents, web services, NoSQL, and Hadoop data
  • The importance of an integrated business glossary and centralization of metadata specifications
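To make the building block concrete, here is a small, hypothetical Python sketch, not any vendor's actual API: virtual tables wrap sources, apply transformation rules, and are stacked on top of each other to present one integrated view. All class names, sources, and data values are invented for the illustration.

    from typing import Callable, Dict, Iterable, List, Optional

    Row = Dict[str, object]

    class VirtualTable:
        """A virtual table wraps a source (a physical system or another virtual table)
        and optionally applies a transformation rule to every row it returns."""
        def __init__(self,
                     source: Callable[[], Iterable[Row]],
                     transform: Optional[Callable[[Row], Row]] = None):
            self._source = source
            self._transform = transform or (lambda row: row)

        def rows(self) -> List[Row]:
            return [self._transform(row) for row in self._source()]

    # Two "physical" sources with different conventions (in-memory stand-ins for,
    # say, a CRM system and an ERP system).
    crm_rows = [{"cust_id": 1, "name": "Kari Nordmann", "country": "no"}]
    erp_rows = [{"id": 1, "revenue": 1200.0}]

    # Lowest layer: virtual tables over the sources, with a transformation rule on one.
    crm = VirtualTable(lambda: crm_rows,
                       transform=lambda r: {**r, "country": str(r["country"]).upper()})
    erp = VirtualTable(lambda: erp_rows)

    # Stacked layer: a virtual table defined on top of the two lower virtual tables,
    # joining them by key to give consumers one integrated view.
    def joined() -> Iterable[Row]:
        revenue_by_id = {r["id"]: r["revenue"] for r in erp.rows()}
        for r in crm.rows():
            yield {**r, "revenue": revenue_by_id.get(r["cust_id"])}

    customer_360 = VirtualTable(joined)
    print(customer_360.rows())
    # [{'cust_id': 1, 'name': 'Kari Nordmann', 'country': 'NO', 'revenue': 1200.0}]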

3. Performance Improving Features

  • Caching of a virtual table for improving query performance, creating consistent report results, or minimizing interference on source systems (a minimal sketch follows this list)
  • Different styles of refreshing caches: full, incremental, live, query-based, trigger-based, and offline refreshing
  • Different query optimization techniques, including query substitution, pushdown, query expansion, ship joins, sort-merge joins, statistical data, and SQL override
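As a rough illustration of the caching mechanics only, and not of any specific product, here is a hypothetical Python sketch of a cached virtual table with a full, time-based refresh; the richer refresh styles and the query optimization techniques listed above are beyond what this sketch shows.

    import time
    from typing import Callable, Iterable, List, Optional

    class CachedVirtualTable:
        """Cache the result set of a virtual table: serve from the cache while fresh,
        and do a full refresh from the source when the cache has expired."""
        def __init__(self, source: Callable[[], Iterable[dict]], ttl_seconds: float = 300.0):
            self._source = source            # the underlying (possibly slow) source query
            self._ttl = ttl_seconds          # how long a cached result counts as fresh
            self._cache: Optional[List[dict]] = None
            self._loaded_at: float = 0.0

        def rows(self) -> List[dict]:
            if self._cache is None or (time.time() - self._loaded_at) > self._ttl:
                self.refresh_full()
            return self._cache

        def refresh_full(self) -> None:
            """Full refresh: re-read everything from the source and replace the cache."""
            self._cache = list(self._source())
            self._loaded_at = time.time()

    # Repeated queries within the TTL are served from the cache instead of hitting the
    # source system, which reduces interference on that system and keeps report results
    # consistent between the two calls.
    sales = CachedVirtualTable(lambda: [{"region": "North", "sales": 100}], ttl_seconds=60)
    print(sales.rows())   # first call: full refresh from the source
    print(sales.rows())   # second call: served from the cache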

4. Use Case 1: The Logical Data Warehouse Architecture

  • The limitations of the classic data warehouse architecture
  • On-demand versus scheduled integration and transformation
  • Making a BI system more agile with data virtualization
  • The advantages of virtual data marts
  • Strategies for adopting data virtualization
  • The need for powerful analytical database servers
  • Migrating to a data virtualization-based BI system

5. Use Case 2: Data Virtualization and Master Data Management

  • How data virtualization can help create a 360° view of business objects
  • Developing MDM with a data virtualization server – from a stored to a virtual solution
  • On-demand data profiling and data cleansing

6. Use Case 3: From the Physical Data Lake to the Logical Data Lake

  • Practical limitations of developing one physical data lake
  • Shortening the data preparation phase of data science with data virtualization
  • Sharing metadata specifications between data scientists
  • Implementing analytical models inside a data virtualization server

7. Use Case 4: Democratizing Enterprise Data

  • Increasing the business value of the data asset by making all the data available to a larger group of users within the organization
  • The business value of consistent data integration
  • Using lean data integration to make data available for analytics and reporting faster
  • One consistent data view for the entire organization
  • How the business glossary and search features help business users
  • The coming of the data marketplace

8. Use Case 5: Dealing with Big Data

  • Big data can be too big to move – data can’t be transported to the place of integration
  • Data virtualization pushes data processing to where the data is produced
  • Hiding the physical location of the data
  • With data virtualization, the network becomes the database

9. Closing Remarks

  • The Future of Data Virtualization
  • Data virtualization as a driving force for data integration
  • Potential new product features

Target audience:

IT architects, Business Intelligence specialists, data analysts, data warehouse designers, business analysts, technical architects, systems analysts, IT consultants, database developers, solution architects, and many others.

Thanks to our sponsor:

Speakers: