Technology Developer · AI Specialist · Project Manager
These are the areas I've spent years working in - documented in white papers, prototyped and tested across real projects.
I've developed an AI architecture across multiple white papers and several major iterations - covering persona theory, dialectical cognition models, ethical scaffolding, multi-sampled consensus, and persistent memory systems. It draws on evolutionary psychology, psychoanalytic theory, and formal language design to build AI systems with interpretive coherence - not just fluency. Prototyped and tested across healthcare, legal, and game AI projects.
The core problem I've been working on: LLMs produce plausible but ungrounded output. I've developed architectural mechanisms that make AI systems structurally resistant to sycophancy and hallucination - not through surface-level guardrails, but through deep cognitive architecture that makes truthful output the path of least resistance. Written up with case studies from real projects. Currently collaborating with a clinical researcher on AI implementation safety to empirically ground this work.
I've built AI in domains where getting it wrong has real legal and ethical consequences: systems handling forensic litigation data (Azure SaaS), healthcare worker training (adaptive learning), and social-sector documentation. GDPR and data-sensitivity compliance learned by building systems that handle this data in production.
Co-founded DataInFlow, building an AI-native collective data network. AI-driven ontology alignment that reconciles fragmented data across heterogeneous sources using multi-sample techniques - solving the problem that the same entity gets described differently across different systems.
Built enterprise infrastructure from the ground up on bare-metal hardware (Dell R740xd): hypervisor provisioning (Proxmox VE), infrastructure-as-code (Terraform), containerized workloads (Docker-in-LXC), intelligent API routing, and message bus architecture. The full stack, from racking hardware to running workloads.
PRINCE2-certified. I've managed delivery across cloud SaaS (Azure Container Apps), desktop applications (Electron), and bare-metal infrastructure - 8+ phased implementations with quality gates at each stage. Experienced working in multilingual teams across timezones, including as CMO and stakeholder liaison at Aalberg Audio and through international collaboration at DataInFlow.
How this experience connects to what Mai Life is building.
In the psychiatric and child welfare sector, a hallucinated summary is not a glitch - it is a clinical liability and a potential lawsuit. This is the core problem I've spent years on: making truthful output the path of least resistance through deep cognitive architecture, tested across multiple real-world systems.
Currently collaborating with a clinical researcher on AI implementation safety - working to empirically ground the architectural approaches described above.
I've built systems that ingest, structure, and analyze complex documents - from forensic litigation (LegalHawk) to persistent knowledge compilation (DAG-Vault).
LegalHawk's compliance architecture handles sensitive legal data on Azure. The DAG framework's ethical scaffolding is designed so that AI systems refuse to produce ungrounded output.
Years of direct experience at Dynamisk Helse and building SkillAid for healthcare worker training. Deep understanding of how health professionals work, what they need from AI tools, and the regulatory landscape they operate in.
DataInFlow and ONAIA demonstrate AI-driven reconciliation of fragmented data across heterogeneous systems and formats.
Matrix_Infra is the full infrastructure stack I built from prototype to deployment, including intelligent API routing and message bus architecture. Not just ideas - secure orchestration from bare metal through to running workloads.
PRINCE2 certification combined with a track record across multiple deployment patterns. Formal quality gates and phase coupling contracts ensure disciplined delivery through every stage. Used to working across borders and languages.
A selection of projects across healthcare, legal, infrastructure, data, and game AI.
Startup building a seamless system-to-system exchange for product data, replacing manual, centralized data collection with AI agents that retrieve, harmonize, and distribute data across heterogeneous sources.
Prototype for AI-based data harmonization using multi-sample techniques to reconcile product data from heterogeneous sources into a collective ontology. Solves the fundamental problem that the same entity gets described differently across different systems and formats.
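The reconciliation problem can be made concrete with a small Python sketch. The field aliases and product records below are hypothetical (the real alignment is AI-driven rather than a lookup table), but the shape of the problem - the same entity described under different field names and formats - is exactly this:

```python
# Illustrative field aliases: in the real system this mapping is
# produced by AI-driven ontology alignment, not hand-written.
FIELD_ALIASES = {
    "gtin": "gtin", "ean": "gtin", "barcode": "gtin",
    "name": "name", "title": "name", "product_name": "name",
    "weight_g": "weight_g", "weight": "weight_g",
}

def normalize(record: dict) -> dict:
    """Map one source's heterogeneous field names onto the shared ontology."""
    out = {}
    for key, value in record.items():
        canonical = FIELD_ALIASES.get(key.lower().strip())
        if canonical:
            out[canonical] = value
    return out

def reconcile(sources: list[dict]) -> dict:
    """Merge records that describe the same entity in different systems."""
    merged: dict = {}
    for record in sources:
        for key, value in normalize(record).items():
            merged.setdefault(key, value)  # first source wins on conflict
    return merged

# The same product, described differently by two systems:
a = {"EAN": "7038010001642", "title": "Tine Helmelk 1L"}
b = {"barcode": "7038010001642", "product_name": "Helmelk 1 liter", "weight": 1040}
print(reconcile([a, b]))
# {'gtin': '7038010001642', 'name': 'Tine Helmelk 1L', 'weight_g': 1040}
```

The hard part, which this sketch elides, is producing the alias mapping itself; that is where the multi-sample techniques come in.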
Built from the ground up on an enterprise server (Dell R740xd). Three-tier topology: Proxmox VE hypervisor, Terraform provisioning, Docker orchestration in LXC containers, intelligent API routing (LiteLLM), and message bus architecture (NATS). Enables rapid prototyping-to-deploy pipelines.
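"Intelligent API routing" here means policy-based dispatch of requests to different model backends. A minimal Python sketch of such a policy follows - the rules and backend names are assumptions for illustration (the actual routing layer is LiteLLM), but the pattern is representative:

```python
# Illustrative routing policy: sensitive data stays on-prem, long
# prompts go to a large-context backend, everything else to default.
# Backend names and rules are hypothetical; LiteLLM plays this role
# in the real stack.
ROUTES = [
    {"match": lambda req: bool(req.get("sensitive")), "backend": "local-llm"},
    {"match": lambda req: len(req.get("prompt", "")) > 4000, "backend": "large-context-model"},
    {"match": lambda req: True, "backend": "default-model"},  # fallback rule
]

def route(request: dict) -> str:
    """Return the first backend whose rule matches the request."""
    for rule in ROUTES:
        if rule["match"](request):
            return rule["backend"]
    return "default-model"

print(route({"prompt": "hi", "sensitive": True}))  # local-llm
print(route({"prompt": "x" * 5000}))               # large-context-model
print(route({"prompt": "summarize this"}))         # default-model
```

Centralizing this policy in one routing layer is what lets the same workloads move between prototype and deployment without changing application code.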
Handles highly sensitive legal documents with complete regulatory compliance. Deployed on Azure Container Apps with PostgreSQL backend, Next.js frontend, and a phased migration strategy spanning 8+ phases with formal quality gates at each stage.
Integrates AI with pedagogy for scalable, user-centered training in the healthcare sector. Designed to fit how healthcare professionals actually learn and work, with adaptive content delivery and progress tracking.
AI system that generates adaptive NPC personalities in real-time, responding to in-game events and player decisions. Desktop application built with Electron and TypeScript, featuring automatic installation and update systems.
A comprehensive framework for building AI systems that are structurally resistant to hallucination and sycophantic behavior. Developed through multiple years of research and hands-on work across healthcare, legal, and game AI. Documented across multiple white papers spanning cognitive architecture theory, ethical scaffolding, and empirical validation.
Infrastructure for persistent knowledge management where AI compiles and maintains a structured knowledge graph over time. Version-controlled with git, quality-assured with CI/CD. Distributable as a git submodule into any project, enabling compounding institutional memory across workstreams.