From Forecast to Factory — Building the AIS-DPP™ Demand-to-Production Predictor

In every manufacturing project, planners face the same question: “How do we keep production aligned when demand keeps changing?”

SAP Joule gives us a conversational window into enterprise data. You can ask questions like:

“What’s our current backlog?”

“Which finished goods are trending up in demand?”

Joule answers well, but it stops at describing what’s happening. The next step is to predict what will happen and to recommend an actionable plan. That’s where AIS-DPP™ (Demand-to-Production Predictor) comes in.

It’s part of my Archive Intelligence Suite™, designed to extend SAP Joule by connecting SAP transactional data to a lightweight Python forecasting and optimization engine. It forecasts demand, proposes feasible production schedules, and lets planners run what-if scenarios, all using data they already have in SAP.


How AIS-DPP™ Works

Architecture:

SAP ECC / S/4HANA
     ↓
ABAP RFC (Z_DTP_GET_DATA)
     ↓
Python Forecast & Optimizer
     ↓
Streamlit Dashboard  ←→  SAP Joule (via REST API)

It doesn’t need a data lake or new infrastructure — just an RFC connection, a few Python libraries, and one ABAP extractor.


Step 1 — ABAP Extractor (Z_DTP_GET_DATA)

The extractor gathers billing quantities, stock levels, capacity, and a flat BOM view — all in weekly buckets.

FUNCTION Z_DTP_GET_DATA.
*"----------------------------------------------------------------------
*"* IMPORTING
*"*   VALUE(I_BEGDA) TYPE DATS
*"*   VALUE(I_ENDDA) TYPE DATS
*"*   VALUE(I_WERKS) TYPE WERKS_D OPTIONAL
*"* TABLES
*"*   T_DEMAND STRUCTURE ZDTP_DEMAND
*"*   T_STOCK  STRUCTURE ZDTP_STOCK
*"*   T_CAP    STRUCTURE ZDTP_CAP
*"*   T_BOM    STRUCTURE ZDTP_BOM
*"----------------------------------------------------------------------
  DATA: lv_week TYPE scal-week, lv_calweek TYPE char7.

* Demand proxy from Billing (VBRP/VBRK)
  SELECT vbrp~matnr, vbrp~fkimg AS qty, vbrk~fkdat
    FROM vbrp INNER JOIN vbrk ON vbrk~vbeln = vbrp~vbeln
    INTO TABLE @DATA(lt_bil)
    WHERE vbrk~fkdat BETWEEN @I_BEGDA AND @I_ENDDA
      AND vbrp~matnr IS NOT INITIAL.

  DATA: lt_dem TYPE HASHED TABLE OF zdtp_demand WITH UNIQUE KEY matnr calweek.
  LOOP AT lt_bil ASSIGNING FIELD-SYMBOL(<b>).
    CALL FUNCTION 'DATE_GET_WEEK'
      EXPORTING date = <b>-fkdat
      IMPORTING week = lv_week.
    lv_calweek = |{ lv_week(4) }-{ lv_week+4(2) }|.
    READ TABLE lt_dem ASSIGNING FIELD-SYMBOL(<d>) WITH TABLE KEY matnr = <b>-matnr calweek = lv_calweek.
    IF sy-subrc <> 0.
      INSERT VALUE zdtp_demand( matnr = <b>-matnr calweek = lv_calweek qty = <b>-qty ) INTO TABLE lt_dem.
    ELSE.
      <d>-qty += <b>-qty.
    ENDIF.
  ENDLOOP.

  LOOP AT lt_dem ASSIGNING FIELD-SYMBOL(<dd>).
    APPEND <dd> TO T_DEMAND.
  ENDLOOP.

* Stock (MARD) - optional plant filter via a range table
  DATA lr_werks TYPE RANGE OF werks_d.
  IF i_werks IS NOT INITIAL.
    lr_werks = VALUE #( ( sign = 'I' option = 'EQ' low = i_werks ) ).
  ENDIF.
  SELECT matnr, werks AS plant, SUM( labst ) AS qty
    FROM mard
    WHERE labst > 0
      AND werks IN @lr_werks
    GROUP BY matnr, werks
    INTO TABLE @T_STOCK.

* Capacity (Z table ZRCP_CAP_WEEK)
  DATA(lv_wk_from) = |{ i_begda(4) }-01|.
  DATA(lv_wk_to)   = |{ i_endda(4) }-53|.
  SELECT line_id, calweek, cap_hrs
    FROM zrcp_cap_week
    WHERE calweek BETWEEN @lv_wk_from AND @lv_wk_to
    INTO TABLE @T_CAP.

* Flat BOM (material-to-BOM link via MAST)
  SELECT m~matnr AS parent, b~idnrk AS component, b~mngko AS qty_per
    FROM mast AS m
    INNER JOIN stpo AS b ON b~stlnr = m~stlnr
    INTO TABLE @T_BOM.

ENDFUNCTION.

This creates four small export tables: demand, stock, capacity, and BOM — everything the Python engine needs.
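For intuition, the weekly bucketing the extractor performs can be sketched in plain Python. This uses ISO weeks as a stand-in for SAP’s DATE_GET_WEEK (the two can differ around year boundaries), and the billing rows are made up:

```python
from datetime import date
from collections import defaultdict

def to_calweek(d: date) -> str:
    # ISO year-week, e.g. "2025-36"; DATE_GET_WEEK may differ at year boundaries
    iso = d.isocalendar()
    return f"{iso[0]}-{iso[1]:02d}"

def bucket_demand(billing_rows):
    # billing_rows: iterable of (matnr, qty, billing_date) tuples,
    # mirroring the VBRP/VBRK fields the extractor reads
    buckets = defaultdict(float)
    for matnr, qty, fkdat in billing_rows:
        buckets[(matnr, to_calweek(fkdat))] += qty
    return [{"matnr": m, "calweek": w, "qty": q}
            for (m, w), q in sorted(buckets.items())]

rows = [("FG-100", 120, date(2025, 9, 1)),
        ("FG-100", 20, date(2025, 9, 3)),
        ("FG-200", 80, date(2025, 9, 2))]
print(bucket_demand(rows))
```

Two FG-100 billings in the same week collapse into one weekly bucket, which is exactly the aggregation the ABAP LOOP performs with its hashed table.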


Step 2 — Python Forecasting and Optimization

All logic sits in a lightweight Python project:

ais_dpp/
├─ app.py                ← Streamlit dashboard
├─ api.py                ← Optional Joule REST API
├─ data_loader.py
├─ demand_forecast.py
├─ optimizer.py
├─ routing.py
└─ sample_data/*.csv

data_loader.py

Handles RFC extraction or CSV fallback.

import os, pandas as pd
from pyrfc import Connection

def load_data(begda, endda, werks=None):
    if os.getenv("DTP_USE_CSV", "1") == "1":
        base="sample_data"
        return (pd.read_csv(f"{base}/demand.csv"),
                pd.read_csv(f"{base}/stock.csv"),
                pd.read_csv(f"{base}/capacity.csv"),
                pd.read_csv(f"{base}/routing.csv"))
    conn = Connection(
        ashost=os.getenv("SAP_ASHOST"),
        sysnr=os.getenv("SAP_SYSNR"),
        client=os.getenv("SAP_CLIENT"),
        user=os.getenv("SAP_USER"),
        passwd=os.getenv("SAP_PASSWD"),
    )
    result = conn.call("Z_DTP_GET_DATA", I_BEGDA=begda, I_ENDDA=endda, I_WERKS=werks or "")
    return (pd.DataFrame(result["T_DEMAND"]),
            pd.DataFrame(result["T_STOCK"]),
            pd.DataFrame(result["T_CAP"]),
            pd.DataFrame(result["T_BOM"]))

demand_forecast.py

Uses RandomForestRegressor to project demand by week.

import pandas as pd, numpy as np
from sklearn.ensemble import RandomForestRegressor

def forecast_weekly(demand_df, horizon_weeks=8):
    if demand_df.empty:
        return pd.DataFrame(columns=["matnr","calweek","yhat"])
    out=[]
    for mat,g in demand_df.groupby("matnr"):
        g=g.sort_values("calweek")
        s=pd.Series(g["qty"].values)
        if len(s)<=12: continue  # the 12-week lag below needs at least 13 observations
        X=pd.DataFrame({f"lag{i}":s.shift(i) for i in [1,2,3,4,8,12]}).dropna()
        y=s.loc[X.index]
        model=RandomForestRegressor(n_estimators=300,random_state=42)
        model.fit(X,y)
        hist=list(s)
        for _ in range(horizon_weeks):
            feats=np.array([hist[-i] if len(hist)>=i else 0 for i in [1,2,3,4,8,12]]).reshape(1,-1)
            yhat=float(model.predict(feats)[0]); hist.append(max(0,yhat))
        out+=[{"matnr":mat,"calweek":f"F{w+1}","yhat":v} for w,v in enumerate(hist[-horizon_weeks:])]
    return pd.DataFrame(out)
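The recursive multi-step idea above (predict one week ahead, append the prediction to the history, repeat) is easier to see with a toy model. This sketch swaps the RandomForest for a simple mean of recent lags, but the recursion is the same:

```python
def recursive_forecast(history, horizon, lags=(1, 2, 3, 4)):
    # Toy stand-in for the RandomForest: predict the next week as the mean
    # of recent lags, then feed the prediction back in as new history.
    hist = list(history)
    out = []
    for _ in range(horizon):
        feats = [hist[-i] for i in lags if len(hist) >= i]
        yhat = max(0.0, sum(feats) / len(feats))
        hist.append(yhat)
        out.append(yhat)
    return out

print(recursive_forecast([100, 120, 110, 130], 3))
# → [115.0, 118.75, 118.4375]
```

Because each prediction becomes an input to the next one, forecast errors compound with the horizon, which is why AIS-DPP keeps the default horizon short (8 weeks).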

optimizer.py

A mixed-integer linear model that respects capacity, overtime, and backlog.

import pandas as pd, pulp

def optimize_plan(forecast, stock, capacity, routing,
                  lot_size=50, backlog_penalty=5.0, overtime_penalty=2.0):
    if forecast.empty:
        return pd.DataFrame(columns=["matnr","line_id","calweek","prod_qty","backlog"])

    weeks=sorted(forecast["calweek"].unique())
    mats=sorted(forecast["matnr"].unique())
    lines=sorted(capacity["line_id"].unique())
    cap_map={(r.line_id,r.calweek):r.cap_hrs for r in capacity.itertuples()}
    r_map={r.matnr:(r.line_id,r.hours_per_unit) for r in routing.itertuples()}
    mats=[m for m in mats if m in r_map]  # skip materials without a routing entry
    stock_qty=stock.groupby("matnr")["qty"].sum().to_dict()
    dem={(r.matnr,r.calweek):r.yhat for r in forecast.itertuples()}

    prob=pulp.LpProblem("AIS_DPP", pulp.LpMinimize)
    k=pulp.LpVariable.dicts("lots",(mats,weeks),0,cat=pulp.LpInteger)
    b=pulp.LpVariable.dicts("back",(mats,weeks),0)
    ot=pulp.LpVariable.dicts("ot",(lines,weeks),0)

    prob+=pulp.lpSum(b[m][w]*backlog_penalty for m in mats for w in weeks)+\
           pulp.lpSum(ot[l][w]*overtime_penalty for l in lines for w in weeks)

    for l in lines:
        for w in weeks:
            prob+=pulp.lpSum(k[m][w]*lot_size*r_map[m][1] for m in mats if r_map[m][0]==l)\
                   <=cap_map.get((l,w),0)+ot[l][w]

    for m in mats:
        for i,w in enumerate(weeks):
            demd=dem.get((m,w),0); prod=k[m][w]*lot_size
            # opening stock is credited only in the first week (no inventory carryover)
            avail=prod+stock_qty.get(m,0) if i==0 else prod
            prev=weeks[i-1] if i>0 else w
            # backlog rolls forward: this week's unmet demand plus last week's backlog
            prob+=b[m][w]>=demd-avail+(b[m][prev] if i>0 else 0)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pd.DataFrame([{"matnr":m,"line_id":r_map[m][0],"calweek":w,
                          "prod_qty":int((pulp.value(k[m][w]) or 0)*lot_size),
                          "backlog":float(pulp.value(b[m][w]) or 0)}
                         for m in mats for w in weeks])
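As a sanity check on the capacity constraint, here is the same arithmetic by hand using the sample routing numbers (the one-week demand figures are invented):

```python
# Capacity constraint by hand: lots * lot_size * hours_per_unit <= cap_hrs + overtime
lot_size = 50
hours_per_unit = {"FG-100": 0.05, "FG-200": 0.06}   # from routing.csv
demand = {"FG-100": 150, "FG-200": 90}              # hypothetical one-week forecast
lots = {m: -(-q // lot_size) for m, q in demand.items()}  # ceiling division to whole lots
hours = sum(lots[m] * lot_size * hours_per_unit[m] for m in demand)
print(hours)  # 3*50*0.05 + 2*50*0.06 = 13.5 h, well under the 120 h weekly line capacity
```

With load this light the solver would use no overtime; overtime variables only become positive when required hours exceed `cap_hrs` and the overtime penalty is cheaper than the backlog penalty.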

app.py — Streamlit Dashboard

A simple visual layer for planners.

import streamlit as st, pandas as pd
from datetime import date
from data_loader import load_data
from demand_forecast import forecast_weekly
from optimizer import optimize_plan

st.set_page_config(page_title="AIS-DPP — Demand-to-Production Predictor", layout="wide")
st.title("AIS-DPP™ — Demand-to-Production Predictor")

with st.sidebar:
    plant=st.text_input("Plant","US01")
    from_date=st.date_input("From Date",date(2025,1,1))
    to_date=st.date_input("To Date",date(2025,10,1))
    horizon=st.slider("Forecast Horizon (weeks)",4,16,8)
    lot_size=st.number_input("Lot Size",1,1000,50,step=10)
    backlog_pen=st.number_input("Backlog Penalty",0.0,20.0,5.0,0.5)
    overtime_pen=st.number_input("Overtime Penalty",0.0,10.0,2.0,0.5)
    cap_mult=st.slider("Capacity Multiplier (%)",50,200,100,5)
    run=st.button("🚀 Run")

if not run:
    st.info("Configure settings and click Run.")
    st.stop()

with st.spinner("Loading SAP data and computing plan…"):
    demand, stock, cap, routing=load_data(from_date.strftime("%Y%m%d"),to_date.strftime("%Y%m%d"),plant)
    cap=cap.assign(cap_hrs=cap["cap_hrs"]*cap_mult/100)  # apply the capacity multiplier
    fc=forecast_weekly(demand,horizon)
    plan=optimize_plan(fc,stock,cap,routing,lot_size,backlog_pen,overtime_pen)

st.subheader("Forecast")
st.dataframe(fc)
st.subheader("Optimized Plan")
st.dataframe(plan)
kpi=plan.groupby("calweek").agg(prod=("prod_qty","sum"),back=("backlog","sum"))
st.bar_chart(kpi)

api.py — Joule REST Interface

So SAP Joule can call it directly:

from flask import Flask, request, jsonify
from data_loader import load_data
from demand_forecast import forecast_weekly
from optimizer import optimize_plan
from datetime import date

app=Flask(__name__)

@app.post("/plan")
def plan():
    p=request.get_json(force=True)
    beg=p.get("from_date","2025-01-01").replace("-","")  # DATS expects YYYYMMDD
    end=p.get("to_date",str(date.today())).replace("-","")
    plant=p.get("plant","US01")
    horizon=int(p.get("horizon_weeks",8))
    demand,stock,cap,routing=load_data(beg,end,plant)
    fc=forecast_weekly(demand,horizon)
    plan=optimize_plan(fc,stock,cap,routing)
    return jsonify(plan.to_dict(orient="records"))

if __name__=="__main__":
    app.run(host="0.0.0.0",port=8080)
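Once api.py is running, any HTTP client can drive it. A stdlib sketch of the call Joule would make (field names are taken from the code above; the host and port are placeholders):

```python
import json, urllib.request

# Request body matching what /plan reads via request.get_json()
payload = {"plant": "US01", "from_date": "2025-01-01",
           "to_date": "2025-10-01", "horizon_weeks": 8}
req = urllib.request.Request("http://localhost:8080/plan",
                             data=json.dumps(payload).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:        # uncomment with the server up
#     plan = json.load(resp)                       # list of {matnr, line_id, calweek, ...}
```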

Sample CSVs (for testing)

demand.csv

matnr,calweek,qty
FG-100,2025-35,120
FG-100,2025-36,140
FG-100,2025-37,100
FG-200,2025-35,80
FG-200,2025-36,60
FG-200,2025-37,90

stock.csv

matnr,plant,qty
FG-100,US01,300
FG-200,US01,120

capacity.csv

line_id,calweek,cap_hrs
LINE-1,2025-38,120
LINE-1,2025-39,120
LINE-1,2025-40,120
LINE-1,2025-41,120
LINE-1,2025-42,120

routing.csv

matnr,line_id,hours_per_unit
FG-100,LINE-1,0.05
FG-200,LINE-1,0.06

What Planners See

The Streamlit dashboard shows:

  • Forecast by week and material

  • Optimized production plan

  • Backlog and capacity charts

  • CSV export for SAP re-upload


Joule in Action

In SAP Joule, a planner can simply ask:

“Plan production for plant US01 for the next eight weeks.”


Joule calls the /plan API and replies:

“Forecasted demand for FG-100 rises 15% in week 42. AIS-DPP recommends producing 300 extra units; 10 hours of overtime reduce backlog from 120 to 15 units.”


Why It Works

This solution doesn’t reinvent SAP PP or APO. It just uses what’s already there — your billing data, stock, and routing times — and gives it predictive insight.

AIS-DPP adds a forecasting brain to SAP Joule’s conversational interface, turning a descriptive assistant into a prescriptive one.

Joule explains the present. AIS-DPP plans the future.

 
 
 
