Production Ready

The Core of the ML SaaS Application

Series Part 2: building the business logic, data validation, the Machine Learning service, and the API endpoints with FastAPI.

5. Database Models (SQLAlchemy ORM)

These models are the Python representations of the SQL tables we created in Part 1. They let us interact with the database without writing raw SQL queries.

File: /www/wwwroot/app-collection/ao/core/models.py
from sqlalchemy import Column, Integer, String, Boolean, DateTime, Text, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.sql import func
from core.db import Base

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True, index=True)
    username = Column(String(50), unique=True, nullable=False, index=True)
    email = Column(String(100), unique=True, nullable=False, index=True)
    api_key = Column(String(255), unique=True, nullable=False, index=True)
    is_active = Column(Boolean, default=True)
    created_at = Column(DateTime(timezone=True), server_default=func.now())

    # Relationship for browsing a user's job history
    # ml_jobs = relationship("MLJob", back_populates="owner")

class MLJob(Base):
    __tablename__ = "ml_jobs"

    id = Column(Integer, primary_key=True, index=True)
    user_id = Column(Integer, ForeignKey("users.id", ondelete="CASCADE"), nullable=False)
    job_type = Column(String(50), nullable=False)
    status = Column(String(20), default="pending", index=True)
    input_params = Column(Text, nullable=True) # Stored as a JSON string; switch to JSONB on PostgreSQL if you need to query it
    result_path = Column(String(255), nullable=True)
    error_message = Column(Text, nullable=True)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    updated_at = Column(DateTime(timezone=True), onupdate=func.now())

    # owner = relationship("User", back_populates="ml_jobs")
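As a quick sanity check, the models can be exercised against an in-memory SQLite database. This is a hedged sketch: the real app uses the PostgreSQL engine from core.db, and trimmed-down copies of the models are redefined here only to keep the snippet self-contained.

```python
# Standalone sketch: simplified copies of the models above, wired to an
# in-memory SQLite DB purely to demonstrate defaults and the foreign key.
from sqlalchemy import (Boolean, Column, DateTime, ForeignKey, Integer,
                        String, create_engine)
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.sql import func

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    username = Column(String(50), unique=True, nullable=False)
    api_key = Column(String(255), unique=True, nullable=False)
    is_active = Column(Boolean, default=True)

class MLJob(Base):
    __tablename__ = "ml_jobs"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id", ondelete="CASCADE"), nullable=False)
    job_type = Column(String(50), nullable=False)
    status = Column(String(20), default="pending")
    created_at = Column(DateTime(timezone=True), server_default=func.now())

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)  # issues the CREATE TABLE statements

Session = sessionmaker(bind=engine)
db = Session()

# Insert a user, then a job owned by that user
db.add(User(username="alice", api_key="secret-key-123"))
db.commit()
user = db.query(User).filter(User.username == "alice").first()

db.add(MLJob(user_id=user.id, job_type="prediction"))
db.commit()
job = db.query(MLJob).first()

print(user.is_active, job.status)  # column defaults applied on commit
```

The same session-and-commit flow is what the API endpoint performs later against PostgreSQL.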

6. Pydantic Schemas (Validasi Data)

Schemas act as the gatekeeper: they validate incoming data (requests) and shape outgoing data (responses) so both conform to the API contract.

File: /www/wwwroot/app-collection/ao/app/schemas/ml_schemas.py
from pydantic import BaseModel, ConfigDict, Field
from datetime import datetime
from typing import Optional

# --- Request Schemas ---

class PredictionRequest(BaseModel):
    """
    Schema for a prediction request.
    Adjust the 'features' field to match your ML model's input.
    This example targets a simple regression model.
    """
    # Pydantic v2 reserves the "model_" prefix; allow it for the model_version field
    model_config = ConfigDict(protected_namespaces=())

    # Example: accept a list of numbers (or switch to a dictionary)
    features: list[float] = Field(..., description="Input features for the ML model")
    model_version: Optional[str] = "v1.0"

# --- Response Schemas ---

class MLJobResponse(BaseModel):
    id: int
    job_type: str
    status: str
    result_path: Optional[str] = None
    error_message: Optional[str] = None
    created_at: datetime
    
    class Config:
        from_attributes = True # Allows mapping from SQLAlchemy objects
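To see the gatekeeping in action, the sketch below (schemas redefined inline to stay self-contained, with the request schema reduced to its features field) shows Pydantic coercing valid input, rejecting malformed input, and mapping a plain attribute-bearing object into MLJobResponse the same way a SQLAlchemy row would be mapped:

```python
from datetime import datetime
from types import SimpleNamespace
from typing import Optional

from pydantic import BaseModel, ConfigDict, Field, ValidationError

class PredictionRequest(BaseModel):
    features: list[float] = Field(..., description="Input features for the ML model")

class MLJobResponse(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    id: int
    job_type: str
    status: str
    result_path: Optional[str] = None
    created_at: datetime

# Valid request: numeric strings and ints are coerced to float
req = PredictionRequest(features=["1.5", 2, 3.0])
print(req.features)  # [1.5, 2.0, 3.0]

# Invalid request: non-numeric input raises ValidationError before
# it ever reaches the endpoint logic
try:
    PredictionRequest(features=["not-a-number"])
except ValidationError as exc:
    print(exc.error_count(), "validation error")

# from_attributes reads any object exposing the right attributes,
# which is how an MLJob row becomes an API response
row = SimpleNamespace(id=1, job_type="prediction", status="completed",
                      result_path=None, created_at=datetime(2024, 1, 1))
resp = MLJobResponse.model_validate(row)
print(resp.status)  # completed
```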

7. ML Inference Service

This service is the "brain" of the application. It loads the model (`.pkl`) and runs predictions. The code is modular, so you can swap in a different model without changing the API.

Note: make sure you have a model file (e.g. model_v1.pkl) in /www/wwwroot/app-collection/ao/ml_models/. The code below is a production template for loading it.
File: /www/wwwroot/app-collection/ao/app/services/inference_service.py
import joblib
import pandas as pd
from pathlib import Path
from datetime import datetime, timezone
import logging

from core.config import get_settings

settings = get_settings()
logger = logging.getLogger(__name__)

class MLInferenceService:
    def __init__(self):
        self.model = None
        self.model_version = "v1.0"
        self._load_model()

    def _load_model(self):
        """Load the model from disk when the service is initialized."""
        try:
            # Look for the .pkl file in the model directory
            model_path = Path(settings.MODEL_DIR) / "model_v1.pkl"

            if model_path.exists():
                logger.info(f"Loading model from {model_path}")
                self.model = joblib.load(model_path)
                logger.info("Model loaded successfully.")
            else:
                logger.warning(f"Model file not found at {model_path}. Inference will fail.")
        except Exception as e:
            logger.error(f"Failed to load model: {e}")
            raise

    def predict(self, input_features: list[float]) -> dict:
        """
        Run a prediction.
        Returns a dictionary containing the prediction result.
        """
        if self.model is None:
            raise ValueError("Model is not loaded.")

        try:
            # Simple preprocessing:
            # turn the feature list into a DataFrame (adapt to your model's preprocessing)
            df_input = pd.DataFrame([input_features])

            # Predict
            prediction = self.model.predict(df_input)

            # Post-processing (e.g. convert the numpy array to native Python types)
            result = {
                "prediction": float(prediction[0]),
                "model_version": self.model_version,
                "timestamp": datetime.now(timezone.utc).isoformat()
            }
            return result

        except Exception as e:
            logger.error(f"Inference error: {e}")
            raise

# Singleton instance so the model is loaded only once
inference_service = MLInferenceService()
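If you don't have a trained model yet, you can generate a placeholder model_v1.pkl to smoke-test the pipeline. This is a sketch under stated assumptions: the MeanModel stub is a hypothetical stand-in that only mimics the predict() interface the service relies on, and it uses the stdlib pickle module to stay dependency-free. In production you would dump a real trained estimator with joblib (e.g. joblib.dump(trained_model, model_path)) so that joblib.load in the service picks it up.

```python
# Sketch: create a placeholder "model" file so MLInferenceService can be
# smoke-tested before a real scikit-learn model is trained.
import pickle
from pathlib import Path
from tempfile import TemporaryDirectory

class MeanModel:
    """Toy stand-in for a trained estimator: predicts the mean of each row."""
    def predict(self, rows):
        # rows is a 2D structure: one feature list per sample
        return [sum(row) / len(row) for row in rows]

with TemporaryDirectory() as model_dir:  # in production: settings.MODEL_DIR
    model_path = Path(model_dir) / "model_v1.pkl"

    # Production would call joblib.dump(trained_model, model_path);
    # plain pickle is used here to avoid extra dependencies.
    with open(model_path, "wb") as fh:
        pickle.dump(MeanModel(), fh)

    # Load it back and run a prediction, as the service would
    with open(model_path, "rb") as fh:
        model = pickle.load(fh)

    prediction = model.predict([[1.0, 2.0, 3.0]])
    print(prediction)  # [2.0]
```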

8. API Routes & Logic

These API endpoints receive a request from the user, validate it, store a job record in the database, call the ML service, and return the result.

File: /www/wwwroot/app-collection/ao/app/api/v1/endpoints/ml_jobs.py
from fastapi import APIRouter, Depends, HTTPException, Header
from sqlalchemy.orm import Session
from typing import Annotated

import app.schemas.ml_schemas as schemas
import core.models as models
from core.db import get_db
from app.services.inference_service import inference_service
from core.config import get_settings

settings = get_settings()

router = APIRouter()

# --- Simple authentication helper ---
def verify_api_key(
    x_api_key: Annotated[str, Header()],
    db: Session = Depends(get_db),
) -> models.User:
    """
    Verifies the API key from the X-API-Key header.
    In production, store and compare hashed API keys for security.
    The DB session is injected via Depends(get_db) so FastAPI closes it
    after the request (calling next(get_db()) manually would leak it).
    """
    user = db.query(models.User).filter(models.User.api_key == x_api_key).first()

    if not user:
        raise HTTPException(status_code=403, detail="Invalid API Key")

    if not user.is_active:
        raise HTTPException(status_code=403, detail="User account is inactive")

    return user

@router.post("/predict", response_model=schemas.MLJobResponse)
def create_prediction_job(
    request: schemas.PredictionRequest,
    current_user: models.User = Depends(verify_api_key),
    db: Session = Depends(get_db)
):
    """
    Endpoint that runs an ML prediction.
    1. Receive data -> 2. Store job (processing) -> 3. Run model -> 4. Update job (completed)

    Declared as a plain `def` so FastAPI runs this blocking handler in a
    threadpool instead of blocking the event loop during inference.
    """

    # 1. Create a new job record in the DB
    new_job = models.MLJob(
        user_id=current_user.id,
        job_type="prediction",
        status="processing",
        input_params=request.model_dump_json() # Store the input as a JSON string
    )
    db.add(new_job)
    db.commit()
    db.refresh(new_job)

    try:
        # 2. Run inference
        result_data = inference_service.predict(request.features)

        # 3. Optionally persist the result to a file (useful for images / large outputs)
        # Here we only store a dummy path as an example
        result_filename = f"result_{new_job.id}.json"
        # result_full_path = f"{settings.PUBLIC_ROOT}/results/{result_filename}"

        # 4. Mark the job as successfully completed
        new_job.status = "completed"
        new_job.result_path = f"/results/{result_filename}"
        new_job.error_message = None
        
        db.commit()
        db.refresh(new_job)
        
        return new_job

    except ValueError as ve:
        new_job.status = "failed"
        new_job.error_message = str(ve)
        db.commit()
        raise HTTPException(status_code=500, detail=str(ve))
        
    except Exception as e:
        new_job.status = "failed"
        new_job.error_message = "Internal Server Error during inference"
        db.commit()
        raise HTTPException(status_code=500, detail="Internal Server Error")

9. Main Entry (App)

The main entry point, main.py, wires together all the components: config, database, and the API router.

File: /www/wwwroot/app-collection/ao/main.py
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

# Import Routers
from app.api.v1.endpoints import ml_jobs

# Import Config
from core.config import get_settings

settings = get_settings()

# Initialize the application
app = FastAPI(
    title=settings.APP_NAME,
    description="AI SaaS Production API",
    version="1.0.0",
    docs_url="/docs" # Expose Swagger UI only when needed (e.g. behind a VPN)
)

# --- Middleware ---
# Configure CORS so the frontend (ao.baktimakmur.com) can call the API
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://ao.baktimakmur.com"], # Frontend domain
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# --- Include Routers ---
app.include_router(
    ml_jobs.router, 
    prefix="/api/v1", 
    tags=["Machine Learning"]
)

# --- Health Check Endpoint ---
@app.get("/health")
def health_check():
    return {
        "status": "healthy",
        "app_name": settings.APP_NAME,
        "environment": settings.APP_ENV
    }

# To run manually for development testing:
# if __name__ == "__main__":
#     import uvicorn
#     uvicorn.run(app, host="0.0.0.0", port=8000)

Part 2 Complete

The business logic, API, and ML service are now in place. Files created in Part 2:

  • core/models.py (database mapping)
  • app/schemas/ml_schemas.py (validation)
  • app/services/inference_service.py (ML logic)
  • app/api/v1/endpoints/ml_jobs.py (API router)
  • main.py (entry point)
Smoke Test (Manual):
Before moving on to Supervisor, run the application manually once to make sure there are no import errors:
cd /www/wwwroot/app-collection/ao
source venv/bin/activate
uvicorn main:app --host 0.0.0.0 --port 8000

If it starts cleanly, continue to Part 3.

Next Steps (Part 3):
The final part will cover:
  1. Supervisor configuration (process manager).
  2. Nginx configuration (reverse proxy).
  3. Log management and automatic restarts.
