SOCAL Lab · Kyungpook National University
Research program
We develop statistically principled methods where standard pipelines degrade: federated optimization under distribution shift; calibration and generative augmentation under imbalance; and privacy-aware evaluation frameworks for LLM deployment in accountable settings.
Federated learning & privacy
“Data is never identical.”
Real federated learning breaks when clients' local data are heavily non-IID. We build statistical tools that keep the global model representative and secure, without ever pooling raw private data in one place.
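The core mechanism can be sketched with federated averaging: clients train locally and send only parameter updates, which the server combines weighted by local dataset size. This is a minimal illustrative sketch with toy NumPy vectors, not the lab's actual method; `fedavg` and its arguments are hypothetical names.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style).

    Only the parameter vectors leave the clients; the raw private
    data never needs to be gathered in one place.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()        # weight each client by its data size
    stacked = np.stack(client_weights)  # shape: (n_clients, n_params)
    return coeffs @ stacked             # global parameter vector

# Two toy clients with unequal (non-IID-sized) local datasets
w_global = fedavg([np.array([1.0, 2.0]), np.array([3.0, 6.0])], [10, 30])
# → array([2.5, 5. ])
```

The larger client pulls the global model toward its parameters, which is exactly why heavy distribution shift across clients makes plain averaging fragile and motivates more robust aggregation.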
Imbalanced data
“The tail is not noise.”
Classical models chase the bulk of the distribution. We use calibration, mixup, and high-fidelity synthesis so imbalanced regression stays faithful to rare but high-impact regions of the distribution.
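As one concrete ingredient, vanilla mixup for regression takes convex combinations of both inputs and targets, densifying sparse regions of the label distribution. This is an illustrative sketch, assuming a plain NumPy batch; `mixup_batch` is a hypothetical name, not the lab's implementation.

```python
import numpy as np

def mixup_batch(X, y, alpha=0.4, seed=0):
    """Vanilla mixup for regression: convex combinations of
    inputs AND targets, with mixing weights drawn from Beta(alpha, alpha).
    """
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha, size=len(X))  # per-example mixing weight
    perm = rng.permutation(len(X))             # random partner for each example
    X_mix = lam[:, None] * X + (1 - lam[:, None]) * X[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return X_mix, y_mix

X = np.arange(10, dtype=float).reshape(5, 2)
y = np.array([0.0, 1.0, 2.0, 3.0, 100.0])  # one rare, high-impact target
X_mix, y_mix = mixup_batch(X, y)
```

Because every mixed target is a convex combination, synthetic examples interpolate toward the rare tail value rather than inventing labels outside the observed range.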
AI alignment (LLM)
“Trust has to scale with capability.”
Policy and procurement move slowly; models do not. We study auditing for public-sector AI and privacy-aware evaluation for LLMs so organizations can ship powerful systems without losing accountability.