
BI Implementation: How Reporting Transforms from Request to Product

Koray Çetintaş 10 February 2026 13 min read

BI Architecture: Layers and Components

Business intelligence architecture and data flow

Modern BI architecture moves data from source to decision

A business intelligence reporting system consists of layers that feed into one another. Each layer has a distinct responsibility, and an issue in one layer affects the entire system.

1. Data Source Layer

Where data is born. Systems such as the ERP platform, CRM, MES, spreadsheets, IoT sensors, and web analytics reside in this layer.

  • Operational systems: ERP, CRM, SCM, HRM
  • File sources: Spreadsheets, CSV, XML, JSON
  • External sources: APIs, web services, market data
  • IoT and sensor data: Machine data, SCADA systems

2. Data Integration Layer

The layer where ETL/ELT processes live. Data is extracted, cleaned, transformed, and loaded into the target system.

  • Extract: Reading data from source systems
  • Transform: Cleaning, merging, calculating
  • Load: Writing to the target system

3. Data Storage Layer

Where transformed data is stored. This can be a data warehouse, data mart, or data lake.

  • Data Warehouse: Corporate, structured data repository
  • Data Mart: Department-based subset (sales, finance, production)
  • Data Lake: Raw data repository (for ML and advanced analytics)

4. Semantic Layer

Translates technical data structures into business language. Business users say “customer,” while the system understands the “DIM_CUSTOMER” table.

  • Business definitions: Metric and dimension definitions
  • Calculations: KPI formulas, derived metrics
  • Relationships: Connections between tables
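At its core, a semantic layer can be sketched as a mapping from business terms to physical warehouse objects. The table, key, and metric names below (DIM_CUSTOMER, FACT_SALES, "Revenue") are illustrative, not taken from any specific BI tool:

```python
# A minimal semantic-layer sketch: business terms map to physical
# warehouse objects, so users never see table names directly.
# All object names here are illustrative.

SEMANTIC_MODEL = {
    "dimensions": {
        "Customer": {"table": "DIM_CUSTOMER", "key": "customer_id"},
        "Date":     {"table": "DIM_DATE",     "key": "date_id"},
    },
    "metrics": {
        "Revenue": "SUM(FACT_SALES.amount)",
        "Average Order Value":
            "SUM(FACT_SALES.amount) / COUNT(DISTINCT FACT_SALES.order_id)",
    },
}

def resolve_metric(name: str) -> str:
    """Translate a business metric name into its physical SQL expression."""
    return SEMANTIC_MODEL["metrics"][name]

print(resolve_metric("Revenue"))  # SUM(FACT_SALES.amount)
```

Real semantic layers add security, relationships, and caching on top, but the translation step is the essence: the user asks for "Revenue," the layer emits the SQL.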

5. Presentation Layer

The layer where the end-user interacts. Dashboards, reports, and ad-hoc query tools.

  • Dashboards: Visual, interactive summary panels
  • Reports: Standard, scheduled outputs
  • Ad-hoc analysis: User-generated queries
  • Mobile access: Tablet and phone compatibility

Architectural Principle

Each layer should be independently replaceable. Changing the ETL tool should not affect the dashboard; changing the BI tool should not require redesigning the data warehouse. The principle of loose coupling is essential.


Data Warehouse Design

Data warehouse architecture

The data warehouse is the foundation of business intelligence reporting

A data warehouse is a subject-oriented, integrated, time-variant, and non-volatile data store optimized for business intelligence reporting. Unlike operational systems, it is designed for analytical queries.

Dimensional Modeling

The approach popularized by Ralph Kimball divides data into “fact” and “dimension” tables.

Fact Tables

Contain measurable, numerical business events. These are generally large, narrow tables (many rows, few columns).

  • Example: Sales fact table – date, customer, product, quantity, amount
  • Granularity: The lowest level of detail (e.g., order line item)
  • Measurements: Aggregatable metrics (quantity, amount, duration)

Dimension Tables

Descriptive tables that give facts context and are used for filtering and grouping. They have a wide (many columns), short (few rows) structure.

  • Example: Customer dimension – customer ID, name, segment, region, industry
  • Hierarchy: Parent-child relationships (e.g., country > city > district)
  • Attributes: Characteristics used for filtering and reporting

Star Schema vs. Snowflake Schema

Star Schema

The fact table is at the center, surrounded by dimension tables. It is simple, clear, and offers high query performance.

  • Dimension tables are denormalized
  • Fewer joins, faster queries
  • The preferred approach for BI reporting
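The star-schema pattern above can be sketched with an in-memory SQLite database: one central fact table joined to denormalized dimension tables, queried with a single GROUP BY. The schema and figures are illustrative:

```python
# Star-schema sketch: a fact table surrounded by denormalized dimensions.
# Tables, columns, and values are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
    CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales (
        date_id INTEGER, customer_id INTEGER, product_id INTEGER,
        quantity INTEGER, amount REAL
    );
    INSERT INTO dim_customer VALUES (1, 'Acme', 'EMEA'), (2, 'Globex', 'APAC');
    INSERT INTO dim_product  VALUES (10, 'Widget', 'Hardware');
    INSERT INTO fact_sales VALUES
        (20260101, 1, 10, 5, 500.0),
        (20260102, 2, 10, 3, 300.0);
""")

# A typical BI query: aggregate the fact, grouped by a dimension attribute.
rows = conn.execute("""
    SELECT c.region, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_id = f.customer_id
    GROUP BY c.region
    ORDER BY c.region
""").fetchall()
print(rows)  # [('APAC', 300.0), ('EMEA', 500.0)]
```

Note that one join per dimension is all the query needs; in a snowflake schema the customer dimension itself might require further joins (e.g., to a separate region table).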

Snowflake Schema

Dimension tables are normalized and split into sub-tables. This saves storage space but makes queries more complex.

  • More tables, more joins
  • Storage space savings
  • More difficult to manage

Slowly Changing Dimensions (SCD)

Dimension data changes over time (customer address, product price). Managing these changes is critical.

SCD Type 1: Overwrite

The old value is deleted, and the new value is written. History is lost. Simple, but historical analysis is impossible.

SCD Type 2: Add New Row

A new row is added for every change. Historical tracking is possible with validity dates. This is the most common approach.
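A minimal sketch of the Type 2 mechanics, assuming an in-memory history list with valid_from/valid_to fields (the field names are illustrative, not from any particular tool):

```python
# SCD Type 2 sketch: on change, close the current row and append a new
# one with validity dates. Field names are illustrative.
from datetime import date

def apply_scd2(history: list, key: str, new_attrs: dict, today: date) -> None:
    """Close the open row for `key` if attributes changed, then append a new row."""
    current = next(
        (r for r in history if r["key"] == key and r["valid_to"] is None), None
    )
    if current and all(current[k] == v for k, v in new_attrs.items()):
        return  # no change, nothing to do
    if current:
        current["valid_to"] = today  # close the old version
    history.append({"key": key, **new_attrs, "valid_from": today, "valid_to": None})

customers = []
apply_scd2(customers, "C001", {"segment": "SMB"}, date(2026, 1, 1))
apply_scd2(customers, "C001", {"segment": "Enterprise"}, date(2026, 2, 1))
# Two rows now exist: the closed "SMB" version and the open "Enterprise" one.
print(len(customers))  # 2
```

Queries that join facts to this dimension on key plus validity date range reproduce history correctly; joining on the open row alone gives the current view.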

SCD Type 3: Add New Column

Previous and current values are kept in separate columns. Limited history (usually only the previous value).


ETL/ELT Processes

Data integration and ETL processes

ETL transforms raw data into analyzable information

ETL (Extract-Transform-Load) is the heart of business intelligence reporting systems. It turns scattered, dirty data from source systems into clean, consistent, and analyzable information.

Extract Phase

The process of reading from data sources. Keeping the load on the source system to a minimum is essential.

Extraction Methods

  • Full extraction: All data is pulled. Simple but slow and resource-intensive
  • Incremental extraction: Only changed data is pulled. Efficient but requires a tracking mechanism
  • CDC (Change Data Capture): Captures changes via database logs. Real-time with minimum impact
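Incremental extraction is typically implemented with a watermark: remember the highest change timestamp seen so far and pull only rows beyond it. A minimal sketch, with a plain list standing in for the source table:

```python
# Incremental-extraction sketch using a watermark timestamp.
# The source rows and timestamps are illustrative.
from datetime import datetime

source_rows = [
    {"id": 1, "updated_at": datetime(2026, 2, 9, 8, 0)},
    {"id": 2, "updated_at": datetime(2026, 2, 10, 6, 30)},
    {"id": 3, "updated_at": datetime(2026, 2, 10, 7, 15)},
]

def extract_incremental(rows, watermark):
    """Return rows changed after `watermark`, plus the new watermark to persist."""
    changed = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in changed), default=watermark)
    return changed, new_watermark

# Last run finished at midnight; only rows 2 and 3 changed since then.
batch, wm = extract_incremental(source_rows, datetime(2026, 2, 10, 0, 0))
print(len(batch))  # 2
```

The watermark must be persisted between runs (this is the "tracking mechanism" mentioned above); CDC removes that burden by reading changes from the database log instead.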

Key Considerations

  • Scheduling during off-hours to avoid impacting source system performance
  • Network bandwidth and latency
  • Error management against source system outages

Transform Phase

The process of processing raw data through business rules. This is the most complex and time-consuming stage.

Common Transformation Operations

  • Cleaning: Missing value management, format correction, duplicate removal
  • Standardization: Unit conversions, code mapping, naming consistency
  • Merging: Matching data coming from different sources
  • Derivation: Calculated fields, categorization, grouping
  • Filtering: Sorting out unnecessary data
  • Aggregation: Creating summary tables
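Three of these operations sketched in a single function: cleaning (trimming and casing), standardization (code mapping), and derivation (a calculated field). The mapping table and record fields are illustrative:

```python
# Transformation sketch: cleaning, standardization, derivation.
# The code-mapping table and fields are illustrative.

COUNTRY_CODES = {"TR": "Turkey", "DE": "Germany"}  # standardization lookup

def transform(record: dict) -> dict:
    name = record["customer_name"].strip().title()                   # cleaning
    country = COUNTRY_CODES.get(record["country_code"], "Unknown")   # standardization
    total = round(record["quantity"] * record["unit_price"], 2)      # derivation
    return {"customer_name": name, "country": country, "line_total": total}

raw = {"customer_name": "  acme corp ", "country_code": "TR",
       "quantity": 4, "unit_price": 12.5}
print(transform(raw))
# {'customer_name': 'Acme Corp', 'country': 'Turkey', 'line_total': 50.0}
```

In practice these rules live in an ETL tool or SQL models, but the shape is the same: deterministic record-in, record-out functions that can be tested in isolation.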

Data Quality Controls

  • Completeness: Ratio of missing values
  • Accuracy: Correctness check (comparison with reference data)
  • Consistency: Uniformity across different sources
  • Timeliness: Data freshness
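Completeness, the first of these checks, reduces to a simple ratio of non-missing values. A sketch with illustrative records:

```python
# Data-quality sketch: completeness score per column
# (share of rows where the value is present and non-empty).

def completeness(rows: list, column: str) -> float:
    """Ratio of rows where `column` is present and non-empty."""
    filled = sum(1 for r in rows if r.get(column) not in (None, ""))
    return filled / len(rows)

rows = [
    {"email": "a@example.com", "phone": "555-0100"},
    {"email": "",              "phone": "555-0101"},
    {"email": "c@example.com", "phone": None},
    {"email": "d@example.com", "phone": "555-0103"},
]
print(completeness(rows, "email"))  # 0.75
print(completeness(rows, "phone"))  # 0.75
```

Scores like these, computed on every load and compared against thresholds, are what turn "data quality" from a slogan into an alert.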

Load Phase

Writing the transformed data to the target system.

Loading Strategies

  • Full load: The target table is completely refreshed. Simple but slow
  • Incremental load: Only new/changed records are added. Efficient
  • Upsert (Merge): Update if it exists, insert if it doesn’t. The most flexible approach
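The upsert strategy maps directly onto merge-style SQL. A sketch using SQLite's INSERT ... ON CONFLICT (the schema is illustrative; other databases use MERGE or ON DUPLICATE KEY):

```python
# Upsert (merge) sketch: update the row if the key exists, insert otherwise.
# Table and columns are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, price REAL)"
)

def upsert(product_id: int, name: str, price: float) -> None:
    conn.execute("""
        INSERT INTO dim_product (product_id, name, price) VALUES (?, ?, ?)
        ON CONFLICT(product_id) DO UPDATE SET name = excluded.name, price = excluded.price
    """, (product_id, name, price))

upsert(1, "Widget", 9.90)   # first run: insert
upsert(1, "Widget", 11.50)  # second run: update in place, still one row
count, price = conn.execute("SELECT COUNT(*), MAX(price) FROM dim_product").fetchone()
print(count, price)  # 1 11.5
```

Because the same statement handles both cases, upsert loads are idempotent: re-running a batch after a failure does not create duplicates.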

ETL vs. ELT

With modern cloud data warehouses, ELT (Extract-Load-Transform) has become popular.

  Feature                  ETL                                    ELT
  Transformation location  In the ETL tool (middle layer)         In the target system (data warehouse)
  Data volume              Suitable for small-to-medium scales    Ideal for big data
  Flexibility              Predefined transformations             Raw data is stored, then transformed
  Speed                    Transformation can be a bottleneck     High parallel processing power
  Cost                     ETL tool licensing                     Cloud processing cost

Dashboard Development Lifecycle

Dashboard development process

A successful dashboard is the product of a systematic development process

Transforming a business intelligence reporting request into a dashboard requires a systematic lifecycle. A structured process ensures quality and continuity over ad-hoc requests.

1. Requirements Gathering

Understanding the real needs of business users. A request like “make me a dashboard” requires deep questioning.

Questions to Ask

  • What decisions will you make with this dashboard?
  • To whom will you report, and who will use it?
  • How often do you need to view it?
  • Which metrics do you want to track?
  • How do you currently access this information?
  • How will success be measured?

Output

Requirements document: Business questions, metric definitions, user profiles, data sources, update frequency.

2. Data Discovery and Profiling

Verifying the existence and quality of the data required to calculate the requested metrics.

Actions

  • Identification of data sources
  • Data quality analysis (missing data, inconsistency, format issues)
  • Verification of business rules and calculation logic
  • Data volume and performance expectations

Critical Question

“Can the requested metric be calculated?” Sometimes the data the business user wants does not exist in the systems, or the calculation logic is unclear. This stage is critical for expectation management.

3. Data Modeling

Designing the data structure that will feed the dashboard: which tables and relationships in the data warehouse will serve it?

Steps

  • Determining fact and dimension tables
  • Deciding on granularity (lowest level of detail)
  • Definitions of calculated metrics
  • Filtering and slicing dimensions

4. ETL Development

Creating the data flow from source systems to the data warehouse.

Steps

  • Setting up source connections
  • Coding transformation rules
  • Data quality controls
  • Scheduling and orchestration
  • Error management and logging mechanisms

5. Dashboard Design and Development

Creating the visual interface. User experience and information architecture are at the forefront.

Design Principles

  • Visual hierarchy: Important metrics first
  • Context: Targets, trends, comparisons
  • Interactivity: Filters, drill-down, hover details
  • Simplicity: No unnecessary visual clutter
  • Consistency: Color coding, terminology

6. Testing and Validation

Verifying that the dashboard is accurate, reliable, and performant.

Test Types

  • Data accuracy: Are the dashboard figures consistent with the source system?
  • Calculation accuracy: Are metrics calculated correctly?
  • Performance: Is the loading time acceptable?
  • Usability: Can business users use it easily?
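Data accuracy checks are often automated as reconciliation tests: compare a dashboard aggregate against the source system within a tolerance. A sketch with illustrative totals:

```python
# Reconciliation sketch for the testing phase: is the dashboard total
# within a relative tolerance of the source-system total?

def reconcile(source_total: float, dashboard_total: float,
              tolerance: float = 0.001) -> bool:
    """True if the dashboard figure is within `tolerance` (relative) of the source."""
    if source_total == 0:
        return dashboard_total == 0
    return abs(dashboard_total - source_total) / abs(source_total) <= tolerance

print(reconcile(1_250_000.0, 1_250_000.0))  # True
print(reconcile(1_250_000.0, 1_200_000.0))  # False: a 4% gap needs investigation
```

Running such checks after every ETL cycle, rather than only at go-live, is what keeps the dashboard trustworthy over time.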

7. Deployment and Training

Moving the dashboard to the live environment and training the users.

Training Content

  • Purpose and scope of the dashboard
  • Metric definitions and calculation logic
  • Filtering and navigation
  • Data update times
  • Support and feedback channels

8. Monitoring and Improvement

Continuous improvement of the live dashboard.

What to Monitor

  • Usage metrics (who, how often, which sections)
  • Performance trends
  • User feedback
  • Data quality issues

Self-Service BI Approach

Self-service BI and user authorization

Self-service BI connects business users with data

Self-service BI allows business users to create their own reports and analyses without IT support. When implemented correctly, it reduces IT bottlenecks; when implemented incorrectly, it creates data chaos.

The Promise of Self-Service BI

  • Speed: Instant reports independent of the IT queue
  • Flexibility: Users analyze what they want, when they want
  • Business-IT Collaboration: IT prepares the data, the business user consumes it
  • Innovation: Business users discover new insights

Risks of Self-Service BI

  • Data inconsistency: Everyone calculates metrics differently; the “single source of truth” is lost
  • Quality issues: Incorrect analyses due to lack of training
  • Security risks: Unauthorized access to sensitive data
  • Performance issues: Uncontrolled queries strain the system

Requirements for Successful Self-Service BI

1. Reliable Data Model (Certified Data Sets)

IT-approved, documented data sets. Users utilize these “golden sources” and do not access raw tables directly.

2. Data Literacy Training

Basic data analysis, statistics, and visualization skills for users. Without training, self-service produces incorrect results.

3. Governance Rules

  • Who can access which data?
  • Which metrics are considered “official”?
  • How is user-generated content shared?
  • Version control and change tracking

4. IT and Business Collaboration Model

IT provides the data infrastructure and certified data sets. Business users perform analysis on this foundation. There must be a clear distribution of roles and responsibilities between the two.

Self-Service Maturity Levels

  Level                 Description                      User Capability
  Level 1: Consumption  Viewing ready-made dashboards    Using filters and drill-downs
  Level 2: Exploration  Ad-hoc queries on existing data  Pivoting, simple calculations
  Level 3: Creation     New visuals and dashboards       Dashboard design, sharing
  Level 4: Modeling     Creating new data models         Data merging, advanced calculations

In most organizations, Levels 1-2 are sufficient. Levels 3-4 require investment in data maturity and training.


Field Example: BI Implementation Project

Real Case (Unbranded)

Production facility and data analysis

Status

A medium-sized manufacturing firm (representative: 220 employees, 3 production lines). Current status: Exporting from the ERP to spreadsheets, manual merging, weekly presentations. The IT team of 2 spends 2 days every week preparing reports. Management wants “real-time data,” but the current infrastructure is insufficient.

BI Implementation Roadmap (representative duration: 16 weeks)

  1. Weeks 1-2 – Discovery: Current reporting needs were inventoried. 47 different report requests were identified, and 12 core metrics were determined.
  2. Weeks 3-4 – Architectural Design: Data warehouse architecture (star schema), ETL tool selection, BI platform evaluation.
  3. Weeks 5-8 – Infrastructure Setup: Data warehouse environment setup, ERP-DW connection, core ETL processes.
  4. Weeks 9-12 – Pilot Dashboard: Production efficiency (OEE) dashboard developed, tested, and validated.
  5. Weeks 13-14 – Rollout: Sales and finance dashboards added, user training conducted.
  6. Weeks 15-16 – Stabilization: Performance optimization, documentation, support process.

Results (observed after 6 months)

  • Report preparation time: Automated updates instead of 2 days per week.
  • Data freshness: Daily instead of weekly (hourly for operational KPIs).
  • IT report load: 75% reduction (freeing up time for new requests).
  • Management satisfaction: Feedback stating, “For the first time, we are speaking with numbers.”
  • OEE improvement: Visibility through the dashboard led to an 8-point increase in 6 months.

7 Most Common Mistakes in BI Projects

1. Ignoring Data Quality

The “garbage in, garbage out” principle. Data quality issues emerge after the BI system is established, and credibility is lost. Data quality analysis and a cleaning plan must be created before ETL.

2. Skipping Business Requirements

A technology-focused approach: “Let’s buy the tool, then we’ll see what we can do.” Dashboards are built without defining business questions, and no one uses them. The question “which decisions are we supporting?” must be answered first.

3. Trying to Do Everything at Once

The “big bang” approach: All departments, all reports, all at once. This leads to scope creep, delays, and failure. An iterative approach starting with pilot projects is safer.

4. Neglecting Performance

If a dashboard takes 30 seconds to open, no one will use it. Performance is the biggest determinant of user adoption. Data model optimization, indexing, and aggregation tables are critical.

5. Skipping Training

The assumption that “the dashboard is built, everyone will use it.” Without training, users return to old habits and retreat to spreadsheets. A training plan before and after launch is essential.

6. Not Defining Governance

Unclear data definitions (e.g., “revenue” is calculated differently in every department), scattered access permissions, and no update responsibility. A data governance framework must be established.

7. Forgetting Continuous Improvement

Thinking a BI project is “finished.” Business needs change, and new questions arise. Dashboards should evolve like living organisms. A periodic review and feedback loop must be created.

BI project mistakes and solutions

Industry analyses have long reported high failure rates for BI projects; learn from these mistakes


BI System Success Metrics

Use the following metrics to evaluate the effectiveness of your BI system (representative values):

  Metric                           Baseline   Target         Measurement Method
  Dashboard usage rate             20%        80%+           Active users / total authorized users
  Dashboard load time              15+ sec    <5 sec         Average page load time
  Data update lag                  7+ days    <24 hours      ETL completion to user presentation time
  Report request fulfillment time  2+ weeks   <3 days        Request to delivery time (for new reports)
  Data quality score               70%        95%+           Error-free records / total records
  Self-service ratio               10%        50%+           User-generated reports / total reports
  IT report load reduction         Baseline   60% reduction  IT team report preparation hours / week

These metrics allow you to measure the return on your BI investment. We recommend monthly or quarterly monitoring.
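Several of these metrics reduce to simple ratios that can be computed directly from usage and load logs. A sketch with illustrative counts:

```python
# Success-metric sketch: two of the ratios from the table above,
# computed from raw counts. Input numbers are illustrative.

def usage_rate(active_users: int, authorized_users: int) -> float:
    """Dashboard usage rate: active users / total authorized users."""
    return active_users / authorized_users

def quality_score(error_free: int, total: int) -> float:
    """Data quality score: error-free records / total records."""
    return error_free / total

print(f"{usage_rate(164, 205):.0%}")        # 80%
print(f"{quality_score(9620, 10000):.0%}")  # 96%
```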


BI Implementation Checklist

Check the following items when setting up or evaluating your BI system:

A. Strategy and Planning

  • BI vision and business goals documented
  • Management sponsorship and budget approved
  • Priority use cases identified
  • Success criteria and KPIs defined

B. Data Infrastructure

  • Data source inventory created
  • Data quality analysis performed
  • Data warehouse architecture designed
  • ETL/ELT processes developed and tested
  • Data update scheduling determined

C. Dashboard Development

  • Business requirements documented
  • Metric definitions and calculation logic approved
  • Dashboard wireframe/mockup approved
  • Data accuracy tests completed
  • Performance tests successful

D. User Experience

  • User access permissions defined
  • Training materials prepared
  • Tested with a pilot user group
  • Feedback mechanism established

E. Governance and Continuity

  • Data dictionary created
  • Data ownership and responsibilities determined
  • Change management process defined
  • Support and maintenance plan created
  • Periodic review schedule determined

Frequently Asked Questions (FAQ)

How long does a BI implementation take?

For a basic BI infrastructure (data warehouse + first set of dashboards), it takes an average of 3-6 months. Scope, the number of data sources, data quality, and organizational readiness affect this duration. With an iterative approach, a pilot dashboard can go live in the first 6-8 weeks, followed by gradual expansion.

What is the difference between a data warehouse and a data lake?

A data warehouse stores structured data and is optimized for analytical queries. Data is cleaned beforehand, and a schema-on-write approach is used. A data lake, on the other hand, stores all types of data (structured, semi-structured, unstructured) in its raw form and adopts a schema-on-read approach. A data warehouse is generally preferred for BI reporting, while a data lake is suitable for data science and ML.

What is the difference between ETL and ELT?

ETL (Extract-Transform-Load) transforms data first and then loads it into the target system; this is the traditional data warehouse approach. ELT (Extract-Load-Transform) loads data in its raw form first, and the transformation is done in the target system. ELT has become popular with the power of modern cloud data warehouses. ETL is sufficient for small-to-medium scale BI projects, while ELT may be preferred for big data and cloud environments.

What is self-service BI, and is it suitable for every organization?

Self-service BI is an approach that allows business users to create their own reports and analyses without IT support. Its advantage is reducing IT bottlenecks, while its disadvantage is data inconsistency and the weakening of the 'single source of truth' principle. Successful self-service BI requires a reliable data model, data literacy training, and governance rules. It is not suitable for every organization; it depends on the level of data maturity.

Why do BI projects fail?

The most common reasons are: 1) Inadequate data quality – the 'garbage in, garbage out' principle, 2) Unclear definition of business requirements – a technology-focused approach, 3) Neglecting user adoption – the dashboard is built but no one uses it, 4) Performance issues – slow reports reduce usage, 5) Lack of governance – unclear data definitions, access permissions, and update responsibilities. For success, technical and organizational dimensions must be handled together.

How is BI integrated with an ERP system?

ERP-BI integration is generally done with these steps: 1) Extracting data from the ERP database via ETL (direct DB connection or API), 2) Data cleaning and transformation in a staging area, 3) Loading into the data warehouse (dimension and fact tables), 4) Connecting the BI tool to the data warehouse. An important point: do not run BI queries directly on the live ERP database, as this leads to performance issues. Creating a separate reporting layer is a best practice.

About the Author

Koray Çetintaş is an expert consultant in digital transformation, ERP architecture, process engineering, and strategic technology leadership. He applies a “Strategy + People + Technology” approach with field experience in business intelligence, data governance, and corporate reporting.


Get Support for Your Project

I can help guide your digital transformation initiative. Book a free preliminary call to discuss your priorities.