I study how states of the world map to states of the mind, and I build systems to explore that mapping.

I'm a neuroscientist with a PhD from UCLA, where I built a distributed research platform from scratch to study flexible spatial decision-making — how the brain adapts and chooses when the environment shifts beneath it. That question doesn't stop at neuroscience. Understanding how biological agents generalize to novel situations is how I think about building artificial agents that can do the same. I write distributed software, build ML pipelines, and design experiments. Before grad school, I spent a decade in precision manufacturing, where I taught myself to code by building a full ERP system. The throughline is the same: understand the system, then build what it needs.

Ryan Grgurich

About

Who I Am

I grew up around manufacturing. My dad was a machinist who moved into quality assurance, and as a kid I'd tag along on weekend side jobs — free run of the shop floor, sometimes checking parts on a die fixture. By high school I was running parts on a lathe in the garage for extra money. There's a little bit of oil and coolant in my blood.

I ended up running operations at a precision machine shop for six years. We made parts for aerospace, automotive, and industrial clients — Haskel high-pressure systems, Borla exhaust components. I handled everything from CNC programming and process optimization to client relationships and scheduling. When I realized our paper-based workflow was the bottleneck, I taught myself FileMaker Pro and built a full ERP/MRP system from scratch — purchase orders, inspection reports, job routing, document scanning. Users preferred it over the commercial software they'd seen at other shops. The buyer of the business wanted to turn it into a product.

But I'd been pulled toward science for years. I went back to school, starting from the lowest math classes at a community college and working my way through linear algebra and differential equations. Transferred to UCLA, majored in Computational & Systems Biology, and eventually earned a PhD in Computational & Behavioral Neuroscience.

What I Built During the PhD

My dissertation asked a specific question: how does the brain make flexible spatial decisions when the environment changes? Rats can learn to navigate a maze, but what happens when the rules shift — doors rearrange, landmarks disappear, reward locations move? Do they fall back on memorized routes, or can they adapt on the fly? And what kinds of information — self-motion cues, visual landmarks, or some combination — make that flexibility possible?

To answer that, I needed an experimental platform that didn't exist. The Corner Maze is a fully automated, closed-loop behavioral rig I designed and built from scratch: 12 actuated doors, 4 stimulus display monitors, stepper-driven reward pumps, real-time markerless video tracking, and a control interface that lets researchers program entire experimental protocols without touching code. Under the hood, it's a multi-node distributed system — an Ubuntu workstation running ~8,200 lines of Python orchestrating 3 Raspberry Pis and 21 Arduino microcontrollers over ZeroMQ and MODBUS RS-485 networks. I designed all the physical hardware in Fusion 360, built all the custom circuitry, wrote all the firmware, and built the GUI.

Then I asked the same question computationally. I built a virtual replica of the maze as a reinforcement learning environment, trained PPO agents to navigate it, and designed a dual-stream CNN encoder to process synthetic visual input — testing whether the decision-making strategies I observed in real animals could emerge from a simple learning algorithm given similar sensory constraints. I also developed spike-train simulation and decoding frameworks for a paper on how the brain encodes position and velocity simultaneously — work that formalized how neural populations represent the world through parallel information channels.

What we found: rats can form flexible spatial representations and rapidly adapt to environmental changes when they have reliable self-motion cues — but when the relationship between self-motion and landmarks breaks down, that flexibility collapses. The results say something fundamental about how biological agents generalize, and I think they have direct implications for how we build artificial agents that need to do the same.

How I Work

A few principles I keep coming back to, whether I'm writing software, building hardware, or designing experiments:

Start with the question, not the tool.

The maze platform exists because I needed a specific experimental capability, not because I wanted to build a distributed system. The RL simulations exist because I needed to test a hypothesis about what information drives flexible behavior. The best engineering decisions I've made started with clarity about what I was actually trying to learn. Tools are in service of understanding — when that relationship flips, you end up building things that are impressive but don't answer anything.

Other people need to use what you build.

This is the lesson manufacturing drilled into me. The maze GUI wasn't built for me — it was built so other researchers could run complex experiments independently. The ERP system wasn't built for my workflow — it was built so everyone in the shop could track jobs without paper. If only you can operate it, it's a prototype, not a system.

Don't just fix the problem — prevent the next one.

Root-cause thinking. In a machine shop, you don't just remake a bad part; you figure out why the process produced it and put a system in place so it doesn't happen again. Same principle applies to software reliability and experimental design.

Calibrate your tools to the job.

I use AI-assisted development as a daily part of my workflow, and I think it's a genuine productivity multiplier. But the value depends on matching the level of oversight to what's at stake. A quick exploratory analysis gets different treatment than a distributed control system running live experiments with real data on the line. I built thousands of lines of production code before these tools existed — that depth is what lets me evaluate and architect around AI-generated output, not just accept it.

What I'm Looking For

I'm looking for roles where understanding how intelligent systems work — biological or artificial — drives what gets built. AI safety, research engineering, ML infrastructure, computational science. I want to be in a room where the questions are hard and the systems that answer them need to be built well.

Based in LA, open to remote.

Projects

Corner Maze — Distributed Behavioral Control Platform

A fully automated, closed-loop rodent navigation rig built from scratch — custom hardware, distributed control software, and a GUI that lets researchers run experiments without writing code. Still in active use at UCLA.

Research context

How does the brain make flexible decisions when the world shifts beneath it? My dissertation studied this through spatial navigation — a domain where you can precisely control what information an animal has access to and measure how it adapts when conditions change. The Corner Maze platform is the tool I built to run those experiments. No commercial system could do what I needed, so I designed one from scratch.

What it is

An automated behavioral neuroscience platform for studying spatial navigation and decision-making. The system runs experiments without human intervention — doors open and close, visual cues appear on monitors, rewards are delivered, and every event is logged with millisecond precision.

The hardware

I modeled the physical maze in Fusion 360 and produced technical drawings with GD&T tolerances for vendor fabrication. I designed all the custom circuitry controlling 12 linear actuators (doors) and 4 stepper-driven syringe pumps (reward delivery). The maze has 4 wall-mounted monitors for visual cue presentation and an overhead IR camera for real-time markerless position tracking.

The software architecture

The system is a multi-node distributed application spanning 25 devices (1 Ubuntu workstation, 3 Raspberry Pis, 21 Arduinos):

  • Master control node (Ubuntu workstation): ~8,200 lines of Python running a multi-threaded PyQt application. Handles session control, device orchestration, real-time video display, SQLite metadata storage, and synchronized event logging.
  • Camera node (Raspberry Pi): Multi-process Python application capturing 480x480 frames, running OpenCV-based markerless tracking (background subtraction, morphological filtering, contour detection, zone classification), and streaming frames over ZeroMQ.
  • Stimulus display nodes (2x Raspberry Pi): Pygame fullscreen applications receiving cue commands over ZeroMQ. Each Pi drives two monitors for four total stimulus displays.
  • Actuator controllers (16x Arduino): MODBUS RTU slaves controlling linear actuators with acceleration/deceleration ramping and EEPROM-based cycle counting for maintenance tracking.
  • Syringe pump controllers (4x Arduino): MODBUS RTU slaves driving stepper motors with step-forward-then-retreat motion to prevent liquid seepage between deliveries.
  • Light controller (1x Arduino): MODBUS RTU slave for PWM-controlled room and IR illumination.
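The camera node's zone-classification step can be sketched as a simple lookup from a tracked centroid to a named region of the maze. This is an illustrative pure-Python sketch; the zone names and pixel bounds are hypothetical, not the rig's actual layout, and the real pipeline feeds this from OpenCV contour detection.

```python
# Hypothetical zone map: (x_min, y_min, x_max, y_max) in pixels of the
# 480x480 camera frame. Names and coordinates are illustrative only.
ZONES = {
    "corner_nw": (0, 0, 240, 240),
    "corner_ne": (240, 0, 480, 240),
    "corner_sw": (0, 240, 240, 480),
    "corner_se": (240, 240, 480, 480),
}

def classify_zone(cx, cy):
    """Return the zone containing a tracked centroid, or None if outside all zones."""
    for name, (x0, y0, x1, y1) in ZONES.items():
        if x0 <= cx < x1 and y0 <= cy < y1:
            return name
    return None
```

In the real system a transition between zones is what triggers session events (door actuation, cue changes, reward), so this classification sits at the boundary between tracking and trial logic.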

Network topology: MODBUS RTU over RS-485 for all Arduino communication; ZeroMQ TCP for Raspberry Pi nodes (REQ/REP for control, PUB/SUB for video streaming).
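For a flavor of what MODBUS RTU framing involves on that RS-485 bus, here is the standard CRC-16/MODBUS check that terminates every frame. This is a minimal pure-Python sketch for illustration; the actual system uses mature MODBUS libraries on both the Python and Arduino sides rather than hand-rolled framing.

```python
def crc16_modbus(frame: bytes) -> int:
    """CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF.

    Every MODBUS RTU frame (slave address, function code, payload) is
    followed by this 16-bit CRC, transmitted low byte first.
    """
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc
```

The standard check value for this CRC variant is 0x4B37 over the ASCII bytes of "123456789", which makes hand-rolled implementations easy to verify against the spec.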

Key design decisions

Protocol selection — MODBUS vs CAN bus: I evaluated both for the Arduino communication bus. MODBUS won because mature libraries existed for both Arduino and Python, and TTL-to-RS485 converters were significantly cheaper than CAN bus equivalents. Practical engineering trade-offs, not theoretical preference.

ZeroMQ for video and display control: Started with raw TCP for video streaming, hit reliability issues, discovered ZeroMQ and switched. Would have used ZeroMQ for the Arduinos too, but no good Arduino library existed and I didn't have time to write one.

GUI designed for multi-user operation: The PyQt interface wasn't built for me — it was built so other researchers could run experiments independently. The Action Vector Table is essentially a domain-specific session programming API: users parameterize entire experimental protocols (trial phases, zone triggers, cue configurations, reward delivery, performance criteria) through the interface without modifying code. Research assistants were trained to run sessions independently using step-by-step documentation I created.
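The idea of a table-driven session spec can be sketched as follows. The field names ("phase", "zone_trigger", and so on) are hypothetical stand-ins, not the Action Vector Table's actual schema; the point is that each row parameterizes a trial phase, so researchers edit data, not code.

```python
# Hypothetical two-phase protocol: field names are illustrative only.
ACTION_VECTOR_TABLE = [
    {"phase": "start",  "zone_trigger": "corner_nw", "open_doors": [1, 2], "reward": False},
    {"phase": "choice", "zone_trigger": "corner_se", "open_doors": [],     "reward": True},
]

def on_zone_entry(zone, phase_index):
    """Advance the session when the animal enters the current phase's trigger zone.

    Returns (next_phase_index, actions_to_execute).
    """
    row = ACTION_VECTOR_TABLE[phase_index]
    if zone != row["zone_trigger"]:
        return phase_index, {}  # not the trigger zone: no transition
    actions = {"open_doors": row["open_doors"], "reward": row["reward"]}
    next_index = (phase_index + 1) % len(ACTION_VECTOR_TABLE)
    return next_index, actions
```

A dispatcher like this is what lets zone entries from the tracking pipeline drive doors, cues, and reward without any protocol-specific code.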

Outcomes

The platform has been in continuous use for years. Other graduate students designed and ran their own experiments on it; one built entirely new protocols on top of the session API and general system architecture. I ran 100+ animals through various experimental protocols, refining trial logic and quality control criteria across iterative pilots.


Corner Maze RL Simulation & Dual-Stream CNN Encoder

A virtual replica of the Corner Maze built as a reinforcement learning environment, with PPO agents trained to navigate it using synthetic visual input processed through a custom dual-stream CNN encoder.

Research context

If the flexible decision-making I observed in real animals depends on specific types of sensory information, could the same strategies emerge in a reinforcement learning agent given similar constraints? That's the question this project was built to answer. I created a virtual replica of the physical Corner Maze as a Gymnasium environment and trained PPO agents to navigate it — not as an exercise in RL engineering, but as a computational model of the biological behavior I was studying.

The simulation

Built on Gymnasium/MiniGrid with a configurable session framework that dynamically mirrors real experimental protocols. The environment generates analysis-ready trajectory and event data in the same format as the real behavioral data, enabling direct model-animal comparison.

Synthetic visual input pipeline

I modeled the maze environment in Fusion 360 and rendered dual left/right perspective views to approximate rodent visual input. Dataset engineering in PyTorch included preprocessing, deduplication of near-identical images across adjacent positions, and structured data storage for classification training.

CNN encoder design

I prototyped multiple single-stream CNN architectures, evaluating each with UMAP projections and cosine-similarity correlograms to assess how cleanly the learned embeddings separated spatial positions and orientations. The final dual-stream design emerged from this systematic evaluation — two input streams (left eye, right eye) feeding into a shared embedding space. The approach was principled model selection, not architectural innovation.
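The cosine-similarity correlogram used in that evaluation can be sketched in a few lines. This is a pure-Python illustration (the actual analysis ran on framework tensors alongside UMAP projections): a pairwise similarity matrix over learned embeddings, where a clean encoder shows high similarity within a position/orientation class and low similarity between classes.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def correlogram(embeddings):
    """Pairwise cosine-similarity matrix over a list of embeddings."""
    return [[cosine(u, v) for v in embeddings] for u in embeddings]
```

Visualized as a heatmap with embeddings sorted by position, block structure along the diagonal is the signature of well-separated spatial representations.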

RL agents

Trained PPO agents using Stable-Baselines3 with state-dependent action masking to enforce physical constraints (can't walk through closed doors) and task constraints (must follow trial phase rules). Benchmarked trained agent policies against real animal trajectory data using standardized performance metrics.
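State-dependent action masking reduces to a function from the current state to a boolean vector over the action space, consumed by the policy before sampling. The sketch below is illustrative (action names and state fields are hypothetical, and the real agents used Stable-Baselines3's masking machinery), but it shows both kinds of constraint described above.

```python
# Hypothetical discrete action space; the real environment's actions differ.
ACTIONS = ["forward", "left", "right"]

def action_mask(state):
    """Return a boolean mask over ACTIONS: True = allowed in this state."""
    mask = [True] * len(ACTIONS)
    # Physical constraint: can't walk through a closed door.
    if state.get("door_ahead_closed"):
        mask[ACTIONS.index("forward")] = False
    # Task constraint: no movement during an (illustrative) hold phase.
    if state.get("phase") == "hold":
        mask = [False] * len(ACTIONS)
    return mask
```

Masking invalid actions at the policy level, rather than penalizing them with negative reward, keeps the learning problem focused on the decisions the real animals actually face.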

An Uncertainty Principle for Neural Coding

How does the brain encode two things at once? We showed that neural populations embed position and velocity through separate information channels — firing rates and co-firing rates — subject to a fundamental trade-off analogous to the uncertainty principle in physics.

The question

A neuron's firing rate can encode where an animal is — but the brain also needs to know how fast it's moving and in what direction. How does a single population of neurons encode both position and velocity at the same time? And is there a cost to carrying both signals?

What we found

Neural populations carry two conjugate codes simultaneously. Individual firing rates (what we call the sigma channel) encode position — head direction, location on a track, spatial phase. But the timing relationships between neurons (the sigma-chi channel — co-firing rates across cell pairs) encode velocity. Increasing the precision of one channel degrades the other, analogous to the position-momentum uncertainty principle in physics.

This isn't a loose metaphor. The math formalizes the trade-off: the same spiking activity that gives you a clean position readout necessarily limits how much velocity information you can extract from the population's temporal structure, and vice versa.

My contributions

This project started as my capstone thesis in UCLA's Computational & Systems Biology major — a genuinely underappreciated program that used to be the cybernetics department. I built the computational groundwork:

  • Sigma-chi decoder: I implemented the core decoder framework under my advisor's guidance — the linear readout that separates position information (in firing rates) from velocity information (in co-firing rates) using exponential integration kernels and pseudoinverse regression.
  • Head direction to velocity decoding: I built the simulations showing that a ring of head direction cells encodes angular position in their firing rates and angular velocity in their co-firing patterns. Populations of 12-32 simulated neurons with von Mises tuning, Poisson spike generation, and temporal optimization across ±250 ms latency windows.
  • Grid cells to speed cells: I showed that the same principle applies to a different circuit — grid cells encode spatial position in their firing rates, and sigma-chi units computed from their co-firing rates behave as speed cells. This wasn't assumed from the head direction result; it had to be demonstrated with different tuning geometry and real behavioral speed data.

My advisor Tad Blair extended the framework to theta-modulated phase coding, where the conjugate relationship inverts — position moves into the co-firing channel and velocity into firing rates. That work completed the paper's argument that the uncertainty principle is a general property of neural population codes, not specific to one cell type.

Methods

All simulations in MATLAB. The pipeline runs from behavioral data (real head direction and position recordings from rats) through spike generation (von Mises-tuned Poisson and regular-interval generators), exponential decay kernels that convert spike trains to firing rates and co-firing rates, and linear decoders trained via pseudoinverse regression. Evaluation used circular distance metrics, latency-accuracy trade-off curves, and hold-out validation.
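The kernel step of that pipeline can be sketched in Python (the paper's implementation is MATLAB). The exact kernel form and normalization here are illustrative: a causal exponential filter converts a spike train to a firing-rate signal, and the product of two cells' filtered rates gives a pairwise co-firing rate of the kind the sigma-chi channel is built from.

```python
import math

def filtered_rate(spike_times, t, tau=0.1):
    """Exponentially filtered firing rate at time t (causal kernel, tau in seconds).

    Each past spike contributes exp(-(t - s) / tau); dividing by tau gives
    units of spikes per second. Normalization is illustrative.
    """
    return sum(math.exp(-(t - s) / tau) for s in spike_times if s <= t) / tau

def cofiring_rate(spikes_a, spikes_b, t, tau=0.1):
    """Co-firing rate of a cell pair: product of the two filtered rates."""
    return filtered_rate(spikes_a, t, tau) * filtered_rate(spikes_b, t, tau)
```

Stacking firing rates and co-firing rates into feature vectors is what the linear (pseudoinverse-regression) decoders then read out, with position recoverable from the former and velocity from the latter.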

Why it matters

This work formalized something that had been intuited but never proven: that neural populations face a fundamental information-theoretic constraint when encoding multiple variables simultaneously. It connects to active questions in computational neuroscience about efficient coding, population geometry, and how the brain represents continuous variables — and to questions in AI about how artificial networks can encode multiple factors in shared representations without interference.

Publication

Grgurich, R. & Blair, H.T. (2020). An uncertainty principle for neural coding: Conjugate representations of position and velocity are mapped onto firing rates and co-firing rates of neural spike trains. Hippocampus, 30(4), 396-421. DOI: 10.1002/hipo.23197

IntervalsWellnessSync — iOS + watchOS Health Data App

A dual-platform Apple app that syncs HealthKit wellness data to the Intervals.icu training platform, featuring a custom overnight HRV capture pipeline on Apple Watch. Built with AI-assisted development. Currently in TestFlight.

What it is

An iOS and watchOS app that bridges Apple HealthKit with Intervals.icu, a training analysis platform popular with endurance athletes. It syncs 32 wellness metrics daily and includes a custom overnight heart rate variability (HRV) capture system that runs on Apple Watch during sleep.

Why I built it

I'm an avid road cyclist and I use Intervals.icu to track training load and recovery. The platform has a wellness feature, but there was no good way to automatically populate it with Apple Health data. I saw a gap, and I built the tool.

How I built it — AI-assisted development

This project is the clearest demonstration of how I use AI coding tools in practice. I don't write Swift — I directed the entire project using AI-assisted development, making all architectural and design decisions while the AI handled implementation in a language I hadn't written in before.

This isn't "I prompted an agent and shipped whatever it produced." I designed the service-oriented architecture, specified the OAuth 2.0 flow, defined the HRV pipeline stages based on sports science literature, identified the data model patterns, and debugged platform-level issues that required understanding what the code was actually doing. The AI wrote the Swift; I engineered the product.

Architecture

  • Service-oriented design with clear separation between SwiftUI views, singleton services, and value-type models
  • OAuth 2.0 authorization with a Cloudflare Worker handling server-side token exchange (client secret stays off-device, credentials stored in iOS Keychain)
  • SwiftData with App Group shared containers for cross-platform persistence between iOS and watchOS
  • Background sync via BGProcessingTask with smart sleep detection — the system checks HealthKit for recent sleep samples and reschedules if the user is still asleep

The HRV pipeline

The overnight HRV system uses HKWorkoutSession and HKHeartbeatSeriesBuilder to record raw beat-to-beat RR intervals during sleep. A duty-cycle manager alternates between 5-minute capture windows and rest periods tuned by sleep stage, balancing data quality against battery life.

Raw RR intervals go through a three-stage artifact correction pipeline: physiological range filtering (300-2000 ms), successive-difference ectopic beat removal using the Plews method, and IQR-based outlier rejection. Epoch-level quality gating uses established thresholds from sports science literature. The final nightly metric is median Ln(rMSSD) across valid epochs — a standard metric in athletic recovery monitoring.
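The three-stage pipeline and nightly metric can be sketched in Python for illustration (the app implements this in Swift against HealthKit data). The 25% successive-difference cutoff in stage 2 is an assumption standing in for the Plews-method parameters, and the IQR fence is the conventional 1.5x.

```python
import math
import statistics

def clean_rr(rr_ms):
    """Three-stage RR-interval artifact correction (thresholds illustrative)."""
    # Stage 1: physiological range filter (300-2000 ms).
    rr = [x for x in rr_ms if 300 <= x <= 2000]
    # Stage 2: ectopic removal - drop beats differing >25% from the previous kept beat.
    kept = rr[:1]
    for x in rr[1:]:
        if abs(x - kept[-1]) / kept[-1] <= 0.25:
            kept.append(x)
    # Stage 3: IQR-based outlier rejection.
    if len(kept) >= 4:
        q1, _, q3 = statistics.quantiles(kept, n=4)
        iqr = q3 - q1
        kept = [x for x in kept if q1 - 1.5 * iqr <= x <= q3 + 1.5 * iqr]
    return kept

def ln_rmssd(rr_ms):
    """Ln(rMSSD) over one epoch of cleaned RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return math.log(rmssd)
```

Taking the median of per-epoch Ln(rMSSD) values across the night, rather than a single long computation, is what makes the metric robust to isolated bad capture windows.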

Platform debugging war stories

  • Discovered an undocumented HealthKit constraint: writing heartbeat series data requires read permission for .heartRateVariabilitySDNN, and missing it causes an uncatchable Objective-C exception with no documentation trail.
  • Found that Cloudflare Worker's Response.redirect() silently rejects custom URL schemes — had to build manual response construction to complete the OAuth callback.
  • Replaced an unconstrained SwiftData query that was loading all records into memory with targeted fetch descriptors, eliminating memory issues during historical backfill.

Publications

Peer-Reviewed

An uncertainty principle for neural coding: Conjugate representations of position and velocity are mapped onto firing rates and co-firing rates of neural spike trains

R. Grgurich & H.T. Blair — Hippocampus, February 2019 — DOI: 10.1002/hipo.23197

We showed that position and velocity information are simultaneously encoded in neural populations through two separate channels — individual firing rates carry position, while correlated firing between neuron pairs carries velocity. Increasing the precision of one channel reduces accuracy in the other, analogous to the uncertainty principle in physics. I built the spike-train simulation and decoding frameworks (Poisson generators, exponential integration kernels, population vector readout) and ran all computational analyses.

Preprints / Forthcoming

Path Integration Promotes Flexible Decision Making During Navigation

R. Grgurich, S. Wang, J. Pimenta, K. Delafraz, H.T. Blair — forthcoming April 2026; preprint on bioRxiv

First-author paper from my dissertation. We showed that rats can form flexible spatial representations and rapidly adapt to changes (reversal learning, novel routes) when they have reliable self-motion cues, even without stable external landmarks. When the relationship between self-motion and landmarks is disrupted, flexible learning collapses. I built the maze platform, designed the experiments, collected all behavioral data, and performed the analyses.

Habit and the hippocampus: Model-based representations without outcome-sensitive control in spatial navigation

S. Wang, R. Grgurich, S. Dong, H.T. Blair — submitted March 2026; preprint on bioRxiv

Collaborative paper investigating the relationship between hippocampal representations and habitual versus goal-directed navigation strategies. I contributed the behavioral platform and experimental data collection.

Contact

Want to talk? I'm looking for roles where understanding how intelligent systems work — biological or artificial — drives what gets built.