research

Notes, preprints, and ongoing work on how intelligent systems should reason — not just generate.

My research sits at the intersection of large language models, cloud-native infrastructure, and system architecture. The questions I keep returning to:

  • How do we make LLMs reason about systems, not just describe them?
  • What does a trustworthy autonomous control plane look like?
  • Can neural networks generate, verify, and operate the systems they propose?

Publications & preprints

2026

  1. Preprint
    Your paper title goes here
    Andrii Petruk
    arXiv preprint, 2026

Ongoing research

  • LLM architectural reasoning. Probing whether language models can produce internally consistent system designs under realistic constraints (latency, blast radius, failure modes).
  • Self-operating infrastructure. Closed-loop agents that propose, simulate, and apply Kubernetes-level changes — with verifiable rollbacks.
  • Reliability lessons across scales. What distributed systems teach us about training, inference, and deployment of frontier models.
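The closed-loop pattern in the second bullet — propose, simulate, apply, and roll back verifiably — can be sketched as a single control-loop iteration. This is a minimal illustration, not a description of any actual system; all names (`Change`, `control_loop`, the `propose`/`simulate`/`verify` hooks) are hypothetical, and a real agent would patch live Kubernetes objects rather than an in-memory dict.

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A proposed change paired with the patch that undoes it."""
    target: str    # hypothetical target, e.g. a Deployment name
    patch: dict    # proposed spec delta
    inverse: dict  # inverse patch enabling a verifiable rollback

def control_loop(propose, simulate, apply, verify):
    """One iteration: propose a change, gate it on simulation,
    apply it, and roll back if post-apply verification fails."""
    change = propose()
    if not simulate(change):
        return "rejected-in-simulation"   # never reached the cluster
    apply(change.patch, change.target)
    if not verify(change.target):
        apply(change.inverse, change.target)  # undo via the inverse patch
        return "rolled-back"
    return "applied"

# Toy usage: the "cluster" is a dict, and verification rejects >3 replicas.
state = {"replicas": 2}
outcome = control_loop(
    propose=lambda: Change("web", {"replicas": 4}, {"replicas": 2}),
    simulate=lambda c: True,
    apply=lambda patch, target: state.update(patch),
    verify=lambda target: state["replicas"] <= 3,
)
```

Here the proposed scale-up passes simulation but fails verification, so the loop applies the inverse patch and the state returns to its original value. The point of carrying `inverse` alongside `patch` is that rollback becomes a checkable artifact of the proposal itself, not an afterthought.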