Experience Systems Neural Stack PT Connect
CORE_ENGINEER // V2.0.4

Engineering High-Performance Backend & AI Systems.

Architecting mission-critical infrastructure where logic meets scalability. Specialized in large-scale data processing, autonomous agents, and low-latency neural architectures.

-80%
RAM Usage Optimized
-40%
Export Time Reduced
-25%
Stock Divergence
+30%
Operational Efficiency
DEPLOYMENT_HISTORY

Professional Experience

ZNAP Technologies

Senior Backend Engineer // 2022 — PRESENT
Node.js · Go · Python · SAP

Sonova Integration Architecture

Engineered complex SAP data synchronization pipelines for global hearing care leader Sonova, ensuring 99.9% data consistency across multi-region deployments.

Venturus / Nissin ERP Optimization

Optimized complex ERP integrations for Nissin via Venturus, achieving an 80% reduction in RAM overhead for batch processing and a 25% drop in stock data divergence through improved transactional logic.
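The RAM reduction described above typically comes from streaming records in fixed-size chunks instead of materializing the whole dataset. A minimal sketch of that pattern, using invented SKU records rather than Nissin's actual ERP schema:

```python
from typing import Dict, Iterable, Iterator, List


def chunked(records: Iterable[dict], size: int) -> Iterator[List[dict]]:
    """Yield fixed-size batches so only one chunk lives in memory at a time."""
    batch: List[dict] = []
    for rec in records:
        batch.append(rec)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch


def reconcile_stock(batches: Iterator[List[dict]]) -> Dict[str, int]:
    """Aggregate stock deltas per SKU without holding the full export in RAM."""
    totals: Dict[str, int] = {}
    for batch in batches:
        for rec in batch:
            totals[rec["sku"]] = totals.get(rec["sku"], 0) + rec["delta"]
    return totals


# Hypothetical movement feed standing in for an ERP export (a generator,
# so records are produced lazily rather than loaded up front).
movements = ({"sku": f"SKU-{i % 3}", "delta": 1} for i in range(10))
print(reconcile_stock(chunked(movements, size=4)))
# {'SKU-0': 4, 'SKU-1': 3, 'SKU-2': 3}
```

Because the input is consumed lazily, peak memory is bounded by the chunk size rather than the dataset size.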

Unified Call Center Ecosystem

Architected a distributed microservices environment connecting legacy call center platforms with modern AI-driven analytics, reducing latency by 45ms per request.

Reag Asset Management Platform

Built mission-critical investment tracking systems for Reag, processing millions of transaction points daily with high-integrity auditing.

NEURAL_OPERATIONS

AI & Research


AI Curriculum Analysis

42 SCHOOL INNOVATION LAB

Development of an automated semantic analysis engine for 42 School curricula. Leveraging LLMs to map skill acquisition clusters and optimize student learning paths through pattern recognition and vector embeddings.

NLP · Vector DB · Python
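A toy illustration of the embedding-similarity step such a pipeline relies on. The three-dimensional vectors and module names here are invented stand-ins; a real system would use high-dimensional embeddings from an actual model:

```python
import math
from typing import List


def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


# Hypothetical pre-computed embeddings for three curriculum modules.
modules = {
    "libft": [0.9, 0.1, 0.0],
    "get_next_line": [0.8, 0.2, 0.1],
    "webserv": [0.1, 0.9, 0.3],
}


def nearest(name: str) -> str:
    """Most similar other module: a stand-in for skill-cluster grouping."""
    others = [(m, cosine(modules[name], v)) for m, v in modules.items() if m != name]
    return max(others, key=lambda t: t[1])[0]


print(nearest("libft"))  # get_next_line
```

In production this lookup would run against a vector database index rather than a Python dict, but the similarity metric is the same.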

Autonomous Agents & MCP

PROPRIETARY R&D

Implementation of Model Context Protocol (MCP) servers to bridge local dev tools with large-scale agentic workflows. Focus on multi-agent collaboration and long-term memory persistence for engineering automation.

MCP · LangChain · TypeScript
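A toy sketch of the tool-registration and dispatch shape such servers expose. This is not the official MCP SDK (whose transport, schemas, and capability negotiation are deliberately omitted here); the tool name and payloads are invented for illustration:

```python
import json
from typing import Callable, Dict

# Illustrative tool registry: MCP-style servers map named tools to handlers
# and answer JSON-RPC-like requests from an agent.
TOOLS: Dict[str, Callable] = {}


def tool(name: str):
    """Decorator that registers a function as a callable tool."""
    def register(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return register


@tool("read_version")
def read_version() -> str:
    return "2.0.4"


def handle(raw: str) -> str:
    """Dispatch a JSON tool-call request to the registered handler."""
    req = json.loads(raw)
    fn = TOOLS.get(req["tool"])
    if fn is None:
        return json.dumps({"error": f"unknown tool: {req['tool']}"})
    return json.dumps({"result": fn(**req.get("args", {}))})


print(handle('{"tool": "read_version"}'))  # {"result": "2.0.4"}
```

The real protocol layers a transport (stdio or HTTP) and declared input schemas on top of this dispatch core.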

AI Video Upscaling Pipeline

COST-OPTIMIZED GPU INFRASTRUCTURE

Built a dynamic video upscaling service that spins up GPU instances on RunPod on-demand to process legacy content. Optimized costs from $10 to $0.20 per 20-minute video — a 98% reduction — while achieving 12-minute processing time through intelligent resource scheduling and auto-scaling.

RunPod · GPU · Python · FFmpeg
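The cost figures above follow from simple on-demand arithmetic: pay for GPU-minutes only while a job runs. A sketch assuming a $1/hr GPU rate (the actual RunPod tier is not stated above):

```python
def on_demand_cost(gpu_hourly_usd: float, minutes: float) -> float:
    """Cost of renting a GPU instance only for the minutes a job runs."""
    return round(gpu_hourly_usd * minutes / 60, 2)


# Hypothetical figures: a $1/hr GPU finishing one 20-minute video in
# 12 minutes bills 0.2 GPU-hours, versus a standing instance billed
# regardless of load.
per_video = on_demand_cost(gpu_hourly_usd=1.0, minutes=12)
print(per_video)  # 0.2

# Reduction relative to the $10-per-video always-on baseline.
reduction = 1 - per_video / 10.0
print(f"{reduction:.0%}")  # 98%
```

The scheduling side (spinning pods up per job and tearing them down on completion) is what converts the hourly rate into a per-video cost.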
SYSTEM_CAPABILITIES

Technical Stack

AI & Machine Learning

TensorFlow
PyTorch
NLP
LLMs
LangChain
Vector Embeddings

Languages

  • Go: Expert
  • Node.js: Expert
  • Python: Advanced
  • TypeScript: Advanced

Frameworks

FastAPI
NestJS
Gin-Gonic
Express

Architecture & Databases

Backend Core

Microservices
Event-Driven (Kafka)
gRPC / GraphQL

Persistence

PostgreSQL
MongoDB
Redis
Pinecone (Vector)

Cloud & DevOps

AWS · GCP · Docker · Kubernetes · Terraform · GitHub Actions

Ready to architect your next-gen core?

I'm currently open to selective consulting and high-impact engineering roles.

COMMUNICATION_PROTOCOL

Initialize Connection

// Availability_Check
Standard Response Latency: < 24h
Active Timezone: BRT (UTC-3)