
SOCAL Lab · Kyungpook National University

Research program

We develop statistically principled methods where standard pipelines degrade: federated optimization under distribution shift; calibration and generative augmentation under imbalance; and privacy-aware evaluation frameworks for LLM deployment in accountable settings.

Figure: Federated learning is bidirectional. The central server distributes global model weights to clients; clients train on private local data and send model updates back for secure aggregation.
Theme 01

Federated learning & privacy

“Data is never identical.”

Federated learning breaks in practice when clients' local data are wildly non-IID. We build statistical tools so global models stay representative and secure, without ever requiring raw private data in one place.
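The server-side aggregation described above can be sketched in a few lines. This is a minimal, illustrative FedAvg-style loop under our own assumptions (linear model, helper names like `local_step` are ours, not the lab's code): clients run local gradient steps on private data with different input distributions, and only parameter vectors travel to the server for a sample-weighted average.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.05, epochs=5):
    """A few epochs of gradient descent on one client's private data
    for a linear model y ~ X @ w."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three non-IID clients: each sees a different slice of the input space.
w_true = np.array([2.0, -1.0])
clients = []
for shift in (-2.0, 0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ w_true + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    # Each client trains locally; raw data never leaves the client.
    updates = [local_step(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregates updates weighted by client sample count.
    w_global = np.average(updates, axis=0, weights=sizes)

print(np.round(w_global, 2))
```

Here all clients share one underlying signal, so plain averaging recovers it; the hard non-IID regimes the lab studies are precisely those where local optima diverge and weighted averaging alone is no longer enough.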

Theme 02

Imbalanced data

“The tail is not noise.”

Classical models chase the majority. We use calibration, mixup, and high-fidelity synthesis so imbalanced regression stays faithful to rare but high-impact regions of the distribution.

Figure: Histogram of a skewed response with a heavy right tail; translucent bars show synthetic data adding mass to rare bins, illustrating how mixup and synthesis improve tail coverage. Axes: empirical frequency vs. outcome / target value.
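A mixup-style augmentation for a skewed regression target can be sketched as follows. This is our illustration, not the lab's implementation; the function name `tail_mixup` and the quantile cutoff are assumptions. Each synthetic point is a convex combination of a random sample and a partner drawn from above a high target quantile, so the augmented set adds mass in the sparse right tail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Skewed response: most mass near zero, heavy right tail.
y = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
X = y[:, None] + rng.normal(0, 0.3, size=(1000, 1))  # one noisy feature

def tail_mixup(X, y, n_new=300, alpha=0.4, q=0.9):
    """Convex-combine random samples with partners drawn from above
    the q-th target quantile (the rare, high-impact region)."""
    thresh = np.quantile(y, q)
    tail_idx = np.flatnonzero(y >= thresh)
    i = rng.integers(0, len(y), size=n_new)   # any sample
    j = rng.choice(tail_idx, size=n_new)      # tail partner
    lam = rng.beta(alpha, alpha, size=n_new)  # mixing weights
    X_new = lam[:, None] * X[i] + (1 - lam[:, None]) * X[j]
    y_new = lam * y[i] + (1 - lam) * y[j]
    return X_new, y_new

X_aug, y_aug = tail_mixup(X, y)
# Synthetic targets land in the upper tail far more often than the
# original 10%, which is the coverage gain the histogram depicts.
print((y_aug >= np.quantile(y, 0.9)).mean())
```

Because every synthetic target stays inside the convex hull of observed targets, this boosts tail density without extrapolating beyond the data.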
Figure: AI alignment for LLMs. Evaluation and privacy-aware auditing alongside accountable deployment in public institutions: safety controls, risk and compliance, auditable records. Policy · procurement · post-deployment accountability.
Theme 03

AI alignment (LLM)

“Trust has to scale with capability.”

Policy and procurement move slowly; models do not. We study auditing for public-sector AI and privacy-aware evaluation for LLMs so organizations can ship powerful systems without losing accountability.
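One concrete shape privacy-aware evaluation can take is releasing audit metrics with calibrated noise. The toy sketch below is our own illustration under stated assumptions (bounded per-user scores, a Laplace mechanism, the name `dp_mean`); it is not a specific deployed protocol. The released mean of per-user error scores satisfies epsilon-differential privacy with respect to any single record.

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_mean(values, lo, hi, epsilon):
    """Release the mean of bounded per-user scores under epsilon-DP.
    The sensitivity of a mean of n values in [lo, hi] is (hi - lo) / n,
    so Laplace noise with scale sensitivity / epsilon suffices."""
    v = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / len(v)
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return v.mean() + noise

# Placeholder audit scores standing in for per-user model error rates.
per_user_error = rng.uniform(0, 1, size=10_000)
release = dp_mean(per_user_error, lo=0.0, hi=1.0, epsilon=1.0)
print(round(release, 3))
```

At this scale the noise is negligible for the auditor's headline number, yet the release carries a formal guarantee an oversight body can cite, which is the accountability trade the text describes.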

Public auditing · LLM privacy · Deployment risk