Guide

AI Use Cases: Forecasting and Anomaly Detection

Koray Çetintaş · 10 February 2026 · 9 min read


AI Scenario Selection: Where to Start?


Choosing the right scenario is the most critical step in determining the success of an AI project

AI use cases require different model architectures and data structures depending on the nature of the business problem. Selecting the wrong scenario leads to projects that may be technically successful but fail to generate business value.

When selecting a scenario, you should ask the following questions:

  • What is the output? A number (regression), a category (classification), or normal/abnormal (anomaly)?
  • What is the data structure? Tabular, text, image, or time series?
  • Is there labeled data? Supervised or unsupervised?
  • Real-time or batch? Instant decision or periodic analysis?

5 Core AI Scenario Categories

Operational AI applications fall into five main categories:

  1. Forecasting (Regression): Predicting future numerical values
  2. Classification: Categorizing input into predefined classes
  3. Anomaly Detection: Identifying non-normal data points
  4. Natural Language Processing (NLP): Extracting meaning from text data
  5. Computer Vision: Extracting information from visual data

Tip

Select a single scenario for your first AI project. Attempting to implement multiple scenarios simultaneously leads to scattered resources and a lack of depth in any of them.


Forecasting Models


Forecasting models enable proactive decision-making by predicting the future from historical data

Forecasting models learn from historical data to predict future numerical values, and forecasting is one of the most common AI use cases.

Types of Forecasting Models

Time Series Forecasting

Predicts future periods in time-dependent data:

  • ARIMA/SARIMA: Statistical approach with seasonality support
  • Prophet: Optimized for holiday effects and trend changes
  • LSTM: Complex patterns via deep learning
  • XGBoost/LightGBM: Enhanced forecasting with feature engineering
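Before reaching for any of the models above, it helps to fix a naive baseline that a production model must beat. A minimal sketch (the sales figures and season length are illustrative):

```python
# Naive seasonal baseline: forecast each future period with the value
# observed one full season earlier. Any ARIMA/Prophet/LSTM model should
# outperform this before going to production.

def seasonal_naive_forecast(history, season_length, horizon):
    """Repeat the most recent full season of observations over the horizon."""
    last_season = history[-season_length:]
    return [last_season[i % season_length] for i in range(horizon)]

monthly_sales = [100, 120, 90, 110, 105, 125, 95, 115]  # two "seasons" of 4
forecast = seasonal_naive_forecast(monthly_sales, season_length=4, horizon=4)
print(forecast)  # repeats the most recent season: [105, 125, 95, 115]
```

If a trained model cannot beat this baseline on held-out data, the added complexity is not paying for itself.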

Regression (Numerical Prediction)

Predicts a dependent variable from independent variables:

  • Linear Regression: Simple, interpretable, baseline model
  • Random Forest Regression: Capturing non-linear relationships
  • Gradient Boosting: High accuracy, complex data structures
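To make the "simple, interpretable baseline" concrete, here is ordinary least squares for a single feature written from scratch (the ad-spend/sales numbers are made up for illustration):

```python
# Minimal ordinary least squares for one feature: the slope and
# intercept that minimize squared error.

def fit_linear(xs, ys):
    """Return (slope, intercept) of the least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# e.g. ad spend (x, in $1000) vs. units sold (y)
slope, intercept = fit_linear([1, 2, 3, 4], [30, 50, 70, 90])
print(slope, intercept)  # 20.0 10.0 — each extra $1000 adds ~20 units
```

The interpretability is exactly this: one number per feature with a direct business reading.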

Forecasting Use Cases

Use Cases

Demand Forecasting

  • Product-based sales forecasting (units to be sold next month)
  • Inventory replenishment optimization (when and how much to order)
  • Capacity planning (production line, warehouse capacity)

Financial Forecasting

  • Cash flow forecasting (collections/payments for the next quarter)
  • Revenue projection (supporting budget planning)
  • Cost estimation (raw materials, energy, labor)

Operational Forecasting

  • Predictive maintenance (when equipment will fail)
  • Energy consumption forecasting (peak hour optimization)
  • Staffing requirements (call center, field team)

Data Requirements

  • Minimum data duration: 12-24 months (2 years recommended for seasonality)
  • Data frequency: Daily, weekly, or monthly (problem-dependent)
  • Additional variables: Promotion calendar, holidays, economic indicators
  • Data quality: Missing values should be below 5%

Success Metrics

  • RMSE (Root Mean Square Error): Average error magnitude, penalizing large errors
  • MAE (Mean Absolute Error): Mean of absolute deviations
  • MAPE (Mean Absolute Percentage Error): Percentage error (target: below 10-15%)
  • Forecast Bias: Tendency toward systematic over- or under-forecasting
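The four metrics above can be computed from scratch so the definitions are unambiguous (the actual/predicted values are illustrative):

```python
import math

def forecast_metrics(actual, predicted):
    """RMSE, MAE, MAPE (%), and bias for paired actual/predicted series."""
    errors = [p - a for a, p in zip(actual, predicted)]
    n = len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mape = 100 * sum(abs(e) / abs(a) for e, a in zip(errors, actual)) / n
    bias = sum(errors) / n  # > 0: systematic over-forecast, < 0: under-forecast
    return {"rmse": rmse, "mae": mae, "mape": mape, "bias": bias}

m = forecast_metrics(actual=[100, 200, 100], predicted=[110, 190, 100])
print(m)  # mae ≈ 6.67, mape = 5.0, bias = 0.0
```

Note that bias can be zero while RMSE and MAE are large: over- and under-forecasts cancel in bias but not in the error metrics, which is why all four are tracked together.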

Classification


Classification models accelerate decision-making processes by automatically categorizing data

Classification models assign inputs to predefined categories (classes). The output is discrete and categorical (A/B/C, Yes/No, Low/Medium/High).

Types of Classification Models

Binary Classification

Distinguishes between two classes:

  • Churn prediction: Will stay / Will leave
  • Credit risk assessment: Approve / Reject
  • Spam detection: Spam / Not Spam

Multi-class Classification

Distinguishes between more than two classes:

  • Customer segmentation: A / B / C / D segment
  • Product categorization: Automatic assignment to 10+ categories
  • Support ticket routing: Technical / Billing / Return / Other

Multi-label Classification

An input can belong to multiple classes:

  • Document tagging: A document can be both “Contract” and “Confidential”
  • Product features: A product carries multiple attributes

Classification Use Cases


Customer Analytics

  • Customer segmentation (RFM + behavioral features)
  • Churn prediction
  • Lifetime value (LTV) category
  • Purchase probability scoring

Risk Management

  • Credit risk (low/medium/high)
  • Supplier risk scoring
  • Fraud detection
  • Insurance claim estimation

Operational Classification

  • Request prioritization (urgent/normal/low)
  • Quality classification (A/B/C quality)
  • Work order routing (department assignment)
  • Automatic document classification

Common Algorithms

  • Logistic Regression: Simple, interpretable, baseline
  • Random Forest: Balanced performance, resistant to overfitting
  • XGBoost/LightGBM: High accuracy; frequent winners of ML competitions
  • SVM (Support Vector Machine): Effective for high-dimensional data
  • Neural Networks: Complex patterns, large datasets

Success Metrics

  • Accuracy: Correct prediction rate (for balanced classes)
  • Precision: Ratio of positive predictions that are truly positive
  • Recall: Ratio of actual positives captured
  • F1 Score: Balance between Precision and Recall (target: >0.75)
  • AUC-ROC: Model discriminative power
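Precision, recall, and F1 follow directly from the confusion counts; a from-scratch sketch with illustrative labels (1 = positive class):

```python
def prf1(actual, predicted):
    """Precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 4 actual positives; the model catches 3 of them plus 1 false alarm
p, r, f = prf1(actual=[1, 1, 1, 1, 0, 0], predicted=[1, 1, 1, 0, 1, 0])
print(p, r, f)  # 0.75 0.75 0.75
```

The asymmetry matters in practice: in fraud detection a false negative (missed fraud) usually costs more than a false positive, so recall is weighted more heavily than precision.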

Caution: Imbalanced Classes

If one class has significantly more samples than others (e.g., 95% normal, 5% fraud), the model will tend to predict the majority class. Balancing should be performed using SMOTE, class weights, or undersampling techniques.


Anomaly Detection


Anomaly detection identifies risks early by automatically capturing irregularities

Anomaly detection is the process of automatically identifying non-normal points (outliers, anomalies) within data. It profiles normal behavior and flags data that does not fit this profile.

Anomaly Detection Approaches

Statistical Methods

  • Z-Score: Deviation measurement based on standard deviation
  • IQR (Interquartile Range): Outlier detection based on box plots
  • Moving Average: Trend deviations in time series
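The Z-score and IQR methods above fit in a few lines of standard-library Python. A sketch with an illustrative invoice series where one amount is roughly 10x the others:

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    std = statistics.pstdev(values)
    return [v for v in values if std and abs(v - mean) / std > threshold]

def iqr_outliers(values, k=1.5):
    """Values outside [Q1 - k*IQR, Q3 + k*IQR], the box-plot rule."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

invoices = [100, 105, 98, 102, 99, 101, 1000]
print(iqr_outliers(invoices))                   # [1000]
print(zscore_outliers(invoices, threshold=2))   # [1000]
```

Note how a single extreme value inflates the mean and standard deviation, which is why the IQR rule (robust to outliers) often flags more reliably than the Z-score on small samples.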

Machine Learning Methods

  • Isolation Forest: Isolation-based, fast and scalable
  • One-Class SVM: Training with only normal data
  • DBSCAN: Outlier detection via density-based clustering
  • Autoencoder: Deep learning, complex patterns
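A minimal Isolation Forest sketch on a toy two-feature dataset, assuming scikit-learn and NumPy are installed (the data and the 1% contamination setting are illustrative):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 50], scale=5, size=(200, 2))  # normal operations
anomalies = np.array([[120.0, 10.0], [5.0, 140.0]])        # two planted outliers
X = np.vstack([normal, anomalies])

# contamination = expected fraction of anomalies in the data
model = IsolationForest(contamination=0.01, random_state=42).fit(X)
labels = model.predict(X)          # +1 = normal, -1 = anomaly
print(np.where(labels == -1)[0])   # indices flagged as anomalous
```

Isolation Forest needs no labels: it scores how few random splits are needed to isolate a point, so the two planted outliers receive the lowest scores and are flagged.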

Rule-Based Methods

  • Business rules: “Invoice amount 10x average is an anomaly”
  • Threshold alerts: Predefined threshold values
  • Hybrid approach: ML + business rules combined

Anomaly Detection Use Cases


Financial Anomalies

  • Invoice discrepancies (amount, date, supplier mismatch)
  • Fraud detection (credit card, payment irregularities)
  • Expense report violations
  • Revenue recognition anomalies

Production and Operations

  • Quality deviations (size, weight, color out of tolerance)
  • Equipment behavior anomalies (vibration, temperature, pressure)
  • Sudden increase in scrap rate
  • Energy consumption anomalies

IT and Security

  • Cyberattack detection (abnormal network traffic)
  • Unauthorized access attempts
  • System performance anomalies
  • Event detection via log analysis

Supply Chain

  • Delivery time anomalies
  • Changes in ordering patterns
  • Supplier performance deviations
  • Inventory level irregularities

Data Requirements

  • Mostly normal data: Anomalies typically make up only 1-5% of samples
  • Labeled data (for supervised): Known anomaly examples
  • Timestamp: Critical for temporal anomalies
  • Multivariate view: Values that are normal individually may be anomalous in combination

Success Metrics

  • Precision: Rate of anomaly alarms that are truly anomalies
  • Recall: Rate of actual anomalies captured
  • False Positive Rate: False alarm rate (should be kept low)
  • Detection Latency: Delay time in anomaly detection

NLP Applications (Natural Language Processing)


NLP provides automation by extracting meaning from unstructured text data

Natural Language Processing (NLP) covers extracting meaning from, analyzing, and generating text and speech. It makes unstructured data processable in business workflows.

NLP Task Types

Text Classification

  • Sentiment analysis: Positive / Negative / Neutral
  • Topic classification: Email, support ticket routing
  • Spam/ham detection
  • Intent classification

Information Extraction

  • Named Entity Recognition (NER): Person, company, date, amount
  • Relationship extraction: Connections between entities
  • Keyword extraction
  • Summarization

Text Generation

  • Automatic response suggestions
  • Drafting reports
  • Translations

NLP Use Cases


Customer Experience

  • Complaint analysis and prioritization (via sentiment score)
  • Chatbot and virtual assistant (customer support automation)
  • Social media monitoring (brand perception analysis)
  • Survey response analysis (open-ended questions)

Document Processing

  • Automatic parsing of invoices, contracts, orders
  • Contract clause analysis (risk detection)
  • Email classification and routing
  • Document search and matching

Knowledge Management

  • Technical document indexing
  • FAQ automation (similar question matching)
  • Meeting notes summarization
  • Research and report analysis

NLP Technical Infrastructure

  • Tokenization: Splitting text into words/sub-words
  • Word Embeddings: Word2Vec, GloVe, FastText
  • Transformer Models: BERT, GPT, T5 family
  • Pre-trained Models: BERTurk, mBERT for Turkish
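The first of these steps can be shown in miniature: a naive tokenizer plus a bag-of-words count vector. Real pipelines use sub-word tokenizers (e.g. WordPiece for BERT); this sketch only illustrates the idea, and the vocabulary and sentence are made up:

```python
import re
from collections import Counter

def tokenize(text):
    """Naive tokenizer: lowercase words (including Turkish letters)."""
    return re.findall(r"[a-zçğıöşü]+", text.lower())

def bag_of_words(text, vocabulary):
    """Count vector over a fixed vocabulary; unseen words are dropped."""
    counts = Counter(tokenize(text))
    return [counts[word] for word in vocabulary]

vocab = ["invoice", "late", "refund"]
vec = bag_of_words("The invoice was late, the refund is late too.", vocab)
print(vec)  # [1, 2, 1]
```

Embeddings and transformers replace these sparse count vectors with dense vectors that capture meaning, but the pipeline shape (text in, fixed-size vector out) stays the same.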

Challenges

  • Turkish NLP: Agglutinative language structure, resource limitations
  • Domain-specific vocabulary: Adaptation to industry jargon
  • Ambiguity: Different meanings of the same word
  • Data privacy: Texts containing personal data

Computer Vision


Computer vision digitizes operations by automatically extracting information from visual data

Computer Vision provides the ability to extract information by analyzing visual data (photos, video). It is widely used in production, logistics, and quality control.

Computer Vision Task Types

Image Classification

  • Determining which category an image belongs to
  • Example: Is the product defective or flawless?
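As a deliberately simple stand-in for the defective/flawless decision, the sketch below compares a sample image (a grayscale grid as nested lists) to a golden reference by mean absolute pixel difference. Real systems use CNNs; the images and the threshold here are invented purely to make the task's input and output concrete:

```python
def mean_abs_diff(reference, sample):
    """Mean absolute per-pixel difference between two same-size images."""
    total, count = 0, 0
    for ref_row, smp_row in zip(reference, sample):
        for r, s in zip(ref_row, smp_row):
            total += abs(r - s)
            count += 1
    return total / count

golden = [[10, 10], [10, 10]]     # reference "good" part
scratched = [[10, 10], [10, 90]]  # one bright scratch pixel

score = mean_abs_diff(golden, scratched)
print("defective" if score > 5 else "flawless")  # defective (score = 20.0)
```

This rule breaks as soon as lighting or alignment varies, which is exactly why learned models are used for production quality control.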

Object Detection

  • Detecting objects in an image along with their locations
  • Example: How many products are on the shelf, where are they?
  • Algorithms: YOLO, Faster R-CNN, SSD

Segmentation

  • Classifying every pixel in an image
  • Semantic segmentation: Class-based
  • Instance segmentation: Object-based distinction

OCR (Optical Character Recognition)

  • Digitizing text content in images
  • Reading invoices, labels, license plates

Operational Use Cases


Quality Control

  • Defect detection (scratches, stains, deformation)
  • Dimension and measurement verification
  • Color consistency check
  • Assembly verification (missing part detection)

Inventory and Logistics

  • Shelf counting (planogram compliance)
  • Package sizing (volumetric measurement)
  • Damaged product detection (during shipping)
  • Barcode/QR code reading

Safety and Compliance

  • PPE (Personal Protective Equipment) compliance
  • Unauthorized area entry detection
  • Crowd density analysis
  • License plate recognition (parking, fleet management)

Document Processing

  • Invoice/waybill digitization
  • Identity document verification
  • Handwriting recognition
  • Form data extraction

Technical Infrastructure

  • CNN (Convolutional Neural Networks): Basic image processing architecture
  • Transfer Learning: Pre-trained models (ResNet, VGG, EfficientNet)
  • Edge Computing: Processing near the camera (low latency)
  • GPU Infrastructure: Required for training and inference

Data Requirements

  • Image quality: Sufficient resolution and lighting
  • Labeled data: 100-1000+ images per class
  • Diversity: Different angles, lighting, background conditions
  • Balance: Balance of sample counts between classes

Scenario Comparison Table

Review the comparison table below to understand which AI scenario is appropriate for which situation:

Scenario          | Output Type       | Data Structure       | Typical Usage                 | Min. Data
Forecasting       | Continuous number | Time series, tabular | Demand forecasting, cash flow | 12-24 months
Classification    | Category          | Tabular, text, image | Segmentation, risk scoring    | 100+ samples/class
Anomaly Detection | Normal/Abnormal   | Tabular, time series | Fraud, quality deviation      | Mostly normal data
NLP               | Text analysis     | Unstructured text    | Sentiment analysis, chatbot   | 1000+ documents
Computer Vision   | Visual analysis   | Image, video         | Quality control, counting     | 100-1000+ images/class

Scenario Selection Flow

Follow these steps to select the right scenario:

  1. Define the business problem: “Which decision do I want to automate?”
  2. Determine the output type: Number, category, or normal/abnormal?
  3. Evaluate the data structure: Tabular, text, image, or time series?
  4. Check the status of labeled data: Supervised or unsupervised?
  5. Select the scenario based on the table: Refer to the table above
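The steps above can be sketched as a tiny decision helper. The rules mirror the comparison table; the input strings and category names are simplifications for illustration:

```python
def select_scenario(output_type, data_structure):
    """Map output type and data structure to one of the five scenarios."""
    if output_type == "number":
        return "Forecasting"
    if output_type == "normal/abnormal":
        return "Anomaly Detection"
    if output_type == "category":
        if data_structure == "text":
            return "NLP"
        if data_structure == "image":
            return "Computer Vision"
        return "Classification"
    raise ValueError(f"unknown output type: {output_type}")

print(select_scenario("number", "time series"))       # Forecasting
print(select_scenario("category", "tabular"))         # Classification
print(select_scenario("category", "image"))           # Computer Vision
```

A real selection also weighs labeled-data availability and latency requirements (steps 4 and the real-time/batch question), which this sketch leaves out.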

Frequently Asked Questions (FAQ)

What are the main operational AI use cases?

AI use cases fall into five main categories: (1) Forecasting models (demand forecasting, cash flow forecasting, maintenance forecasting), (2) Classification (customer segmentation, risk scoring, prioritization), (3) Anomaly detection (invoice errors, production deviations, security breaches), (4) NLP (text analysis, sentiment analysis, automatic categorization), (5) Computer vision (quality control, counting, visual inspection). Each scenario requires different data structures and model architectures.

How does a demand forecasting model work?

A demand forecasting model predicts future demand using historical sales data, seasonality, promotion calendars, and external factors (holidays, economic indicators). Time series models (ARIMA, Prophet) or machine learning algorithms (XGBoost, LSTM) are used. The model is trained with a minimum of 12-24 months of data, targeting 80-90% average accuracy.

What is the difference between classification and regression?

Classification sorts input into predefined categories (e.g., customer A/B/C segment, risk low/medium/high). The output is discrete and categorical. Regression predicts a continuous numerical value (e.g., next month's sales of 4,200 units). Accuracy and F1 score metrics are used for classification, while RMSE and MAE are used for regression.

Where is anomaly detection used?

Anomaly detection is widely used in: Finance (invoice discrepancies, fraud detection), production (quality deviations, pre-failure equipment signs), IT/security (cyberattack detection, abnormal access behavior), and supply chain (delivery delay predictions, inventory anomalies). The model extracts a statistical profile of normal behavior and identifies deviations.

What are common NLP applications?

NLP applications include: Customer complaint analysis (prioritization via sentiment score), document classification (automatic parsing of invoices, contracts, orders), chatbot and virtual assistant (customer support automation), text summarization (report and email summaries), and keyword extraction (demand trend analysis). NLP extracts meaning from unstructured text data.

Where is computer vision used in operations?

Computer vision operational use cases: Quality control (defect detection, dimension measurement), inventory management (shelf counting, stock level tracking), security (unauthorized access, PPE compliance), and logistics (package sizing, damaged product detection). Convolutional Neural Networks (CNN) and object detection models (YOLO, Faster R-CNN) are used.

About the Author

Koray Çetintaş is an advisor specializing in digital transformation, ERP architecture, process engineering, and strategic technology leadership. He applies a "Strategy + People + Technology" approach shaped by hands-on experience in AI, IoT ecosystems, and industrial automation.

Get Support for Your Project

I can help guide your digital transformation initiative. Book a free preliminary call to discuss your priorities.