Asking questions. Finding answers. Sometimes.
Our team won the DLR Überflieger 2 competition with ADDONISS, an experiment studying how microgravity affects neuron cell cultures relevant to Alzheimer’s research. As Software Lead, I built the autonomous control system: a Raspberry Pi orchestrating microscopy, temperature regulation, and telemetry inside a 2U CubeLab from Space Tango. The entire experiment ran unattended for over a month, 400 km above Earth.
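Not the flight software, just a minimal sketch of the kind of scheduled control loop this involves: sensor reads, simple bang-bang temperature regulation, periodic imaging, and telemetry logging. All function names, intervals, and the temperature setpoint below are illustrative placeholders.

```python
import time

# Illustrative intervals and setpoint, not the flight values
IMAGING_PERIOD_S = 3600      # capture a microscopy frame every hour
TELEMETRY_PERIOD_S = 60      # log housekeeping data every minute
TARGET_TEMP_C = 37.0         # neuron cultures want a stable ~37 °C

def read_temperature() -> float:
    """Stand-in for the real sensor read (e.g. over I2C)."""
    return 36.8

def set_heater(enabled: bool) -> None:
    """Stand-in for switching the heater via a GPIO pin."""

def capture_frame() -> None:
    """Stand-in for triggering the microscope camera."""

def log_telemetry(temp: float) -> None:
    """Stand-in for appending a timestamped record for downlink."""
    print(f"{time.time():.0f},{temp:.2f}")

def control_loop() -> None:
    last_frame = last_telemetry = 0.0
    while True:
        now = time.monotonic()
        temp = read_temperature()
        set_heater(temp < TARGET_TEMP_C)          # simple bang-bang regulation
        if now - last_telemetry >= TELEMETRY_PERIOD_S:
            log_telemetry(temp)
            last_telemetry = now
        if now - last_frame >= IMAGING_PERIOD_S:
            capture_frame()
            last_frame = now
        time.sleep(1.0)
```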
A week at Kennedy Space Center. Cleanroom protocols, Space Tango hardware, and the awareness that every solder joint, every line of code, has to survive launch on a Falcon 9.
SpaceX CRS-27 · Falcon 9 · March 15, 2023
Can a parameterized quantum circuit learn to sample Bayesian neural network weights that capture meaningful uncertainty? That was the question behind my CS thesis at Fraunhofer IKS. The setup: a classical CNN handles feature extraction, while a PQC generates continuous stochastic convolutional weights trained via an adversarial learning loop. The test bed was BreastMNIST, a clinical ultrasound classification dataset where uncertainty quantification actually matters.
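A minimal sketch of what "PQC as a weight sampler" means, written with PennyLane purely for illustration (the actual thesis circuits and software stack differ): latent noise is angle-encoded, trainable entangling layers follow, and Pauli-Z expectation values give continuous outputs in [-1, 1] that can be reshaped into convolutional filter weights.

```python
import numpy as np
import pennylane as qml

N_QUBITS = 4   # illustrative sizes, not the thesis configuration
N_LAYERS = 3

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def weight_sampler(z, theta):
    qml.AngleEmbedding(z, wires=range(N_QUBITS))                 # inject latent noise
    qml.StronglyEntanglingLayers(theta, wires=range(N_QUBITS))   # trainable part
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

theta = np.random.uniform(0, 2 * np.pi, size=(N_LAYERS, N_QUBITS, 3))
z = np.random.uniform(0, 2 * np.pi, size=N_QUBITS)

weights = np.array(weight_sampler(z, theta))   # one stochastic draw of 4 weights
print(weights)
```

In the adversarial loop, the circuit parameters are what gets trained, so that repeated noise draws yield a useful distribution over weights.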
My contributions included the quantum sampler implementation, multiple PQC architectures, a Wasserstein-style loss variant, uncertainty metrics including the “average certainty difference between correct and wrong predictions” (ACD), and automation for running large experiment sweeps. The main findings: architecture choices in PQCs make or break training stability, and the best quantum samplers outperformed matched classical samplers, particularly on the uncertainty metric.
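For the ACD metric specifically, a rough sketch of the idea, using the maximum class probability as the certainty measure (the precise definition in the thesis may differ):

```python
import numpy as np

def average_certainty_difference(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean predictive certainty on correctly classified samples minus the
    mean certainty on misclassified ones. probs has shape (n_samples,
    n_classes); certainty here is the max class probability."""
    certainty = probs.max(axis=1)
    correct = probs.argmax(axis=1) == labels
    return float(certainty[correct].mean() - certainty[~correct].mean())

# A well-calibrated model should be noticeably more certain when it is right
probs = np.array([[0.90, 0.10], [0.60, 0.40], [0.20, 0.80], [0.55, 0.45]])
labels = np.array([0, 1, 1, 1])
print(average_certainty_difference(probs, labels))   # 0.85 - 0.575 = 0.275
```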
These results led to the paper “Building Continuous Quantum-Classical Bayesian Neural Networks for a Classical Clinical Dataset”, published at ACM ReAQCT ’24 in Budapest. The paper’s stated contribution: enabling continuous quantum-sampled weights for application datasets and a systematic PQC architecture study linking circuit design to predictive and uncertainty metrics.
For my physics thesis I built a scalable noise characterization workflow for superconducting quantum processors, using the 127-qubit IBM Osaka device. Instead of exponentially expensive full process tomography, the approach learns Sparse Pauli-Lindblad (SPL) models: compact Pauli channel descriptions that capture single-qubit and nearest-neighbor pair error terms. The method uses Cycle Benchmarking with Pauli twirling to extract SPAM-free fidelity estimates, then solves a non-negative least-squares fit to obtain an interpretable “noise spectrum” per qubit and qubit pair.
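Sketched below with made-up numbers: under the SPL model, each measured Pauli fidelity decays as f_P = exp(-2 Σ_k λ_k) over the model generators P_k that anticommute with P, so taking logs gives a linear system that non-negative least squares solves for the rates λ_k. The matrix and fidelities here are purely illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# M[P, k] = 1 if measured Pauli P anticommutes with model generator P_k, else 0
M = np.array([
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
    [1, 1, 1],
], dtype=float)

# SPAM-free Pauli fidelities from cycle benchmarking (made-up values)
f_measured = np.array([0.97, 0.95, 0.96, 0.93])

# Linearize f_P = exp(-2 * M @ lam):  -ln(f_P) / 2 = M @ lam
b = -0.5 * np.log(f_measured)
rates, residual = nnls(M, b)   # non-negative generator rates = the "noise spectrum"

print("lambda =", rates, "residual =", residual)
```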
Key findings: noise drifts significantly within hours on cloud-accessed hardware; gate choice matters (CX vs. native ECR produces measurably different error profiles, with hints of a dynamical decoupling effect from the extra single-qubit gates in the CX compilation path); and the noise captured by the SPL model was stronger than what the standard backend noise model predicts, suggesting that effects like crosstalk are underrepresented in default simulators.
Standard cross-entropy training treats number tokens as purely categorical symbols. Predicting “5” when the target is “4” is penalized exactly as much as predicting “9”, which ignores numerical proximity entirely. In “Regress, Don’t Guess,” we introduce Number Token Loss (NTL): a lightweight, token-level add-on to cross-entropy that injects a regression-like inductive bias for number tokens. NTL is model-agnostic, integrates into existing LM training pipelines, and improves quantitative reasoning without hurting standard text capabilities.
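A rough sketch of the MSE flavour of the idea (function and argument names are illustrative, not the released implementation): restrict the softmax to number tokens, compute the expected numeric value, and regress it against the target token's value at positions where the target is a number.

```python
import torch
import torch.nn.functional as F

def number_token_loss_mse(logits, targets, number_token_ids, number_values):
    """Illustrative NTL-style add-on. logits: (batch, seq, vocab),
    targets: (batch, seq) token ids, number_token_ids: 1-D LongTensor of
    vocab ids for e.g. "0".."9", number_values: matching FloatTensor of
    their numeric values."""
    probs = F.softmax(logits[..., number_token_ids], dim=-1)   # only number tokens
    expected_value = (probs * number_values).sum(dim=-1)       # (batch, seq)

    # Numeric value of the target token at each position (0 where not a number)
    id_to_value = logits.new_zeros(logits.shape[-1])
    id_to_value[number_token_ids] = number_values
    target_value = id_to_value[targets]

    is_number = torch.isin(targets, number_token_ids)
    if not is_number.any():
        return logits.new_zeros(())
    return F.mse_loss(expected_value[is_number], target_value[is_number])

# Added on top of the usual cross-entropy with a small weight, e.g.:
# loss = F.cross_entropy(logits.transpose(1, 2), targets) + 0.3 * ntl_term
```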
Results: on DeepMind Math Q&A with T5, accuracy improved from 0.64 to 0.75 (interpolation) and from 0.367 to 0.432 (extrapolation). Scaled to 3B parameters on GSM8K, NTL-WAS pushed top-1 accuracy from 13.5% to 17.7%.
My contribution centered on the benchmarking infrastructure. I designed and implemented the full runtime evaluation pipeline: measuring loss computation overhead across NTL variants (MSE, Huber, Wasserstein) against baseline cross-entropy, profiling both isolated loss steps and end-to-end training iterations, running sweeps across batch sizes, sequence lengths, and vocabulary sizes, and building the analysis and visualization code that produced the benchmarking figures in the paper. This work demonstrated that NTL-WAS is practical at scale, adding negligible overhead to real training runs.
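A toy version of the isolated loss-step timing (shapes, warm-up count, and the loss functions below are illustrative; the actual sweeps covered many batch-size/sequence-length/vocabulary configurations plus full training iterations):

```python
import time
import torch
import torch.nn.functional as F

def time_loss_fn(loss_fn, logits, targets, n_warmup=10, n_iters=100):
    """Average wall-clock time per loss call after a short warm-up."""
    for _ in range(n_warmup):
        loss_fn(logits, targets)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        loss_fn(logits, targets)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters

# Compare plain cross-entropy against a CE + NTL variant on random data
batch, seq, vocab = 32, 128, 32128
logits = torch.randn(batch, seq, vocab)
targets = torch.randint(0, vocab, (batch, seq))
ce = lambda lg, tg: F.cross_entropy(lg.transpose(1, 2), tg)
print(f"cross-entropy: {time_loss_fn(ce, logits, targets) * 1e3:.2f} ms/call")
```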
A novel quantum error mitigation procedure, developed jointly with IBM and Universität der Bundeswehr München. The goal: reducing the quantum resource overhead needed to extract useful results from noisy near-term hardware.