My focus on data architecture complements my engineering and ML skills, enabling me to translate business requirements into resilient, high-performance data platforms and to select the design patterns and architectural approach best suited to the problem at hand.
Core capabilities:
- System design & architecture: blueprinting end‑to‑end solutions that balance scalability, cost, and maintainability
- Data modeling & lake‑house patterns: crafting relational and dimensional models, plus lake architectures (e.g., S3 + Iceberg) that support both analytics and ML workloads
- ML-ready pipelines: integrating feature stores, model-training jobs, and real-time and batch inference into unified workflows
- Cloud infrastructure on AWS: provisioning secure, elastic stacks with services like S3, Glue, Athena, EMR, Lambda, and Redshift (see the sketch after this list)
- Docker containers: containerizing applications for consistent, repeatable deployments
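As an illustration of the S3-backed lake querying mentioned above, here is a minimal Python sketch using boto3's Athena client. The region, database, table, and result-bucket names are illustrative assumptions for the example, not a description of any particular platform.

```python
# Minimal sketch: querying an S3-backed lake table through Athena with boto3.
# Region, database, table, and bucket names below are illustrative placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="eu-west-1")


def run_query(sql: str, database: str, output_s3: str) -> list[dict]:
    """Submit an Athena query, poll until it finishes, then return the rows."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    query_id = execution["QueryExecutionId"]

    # Athena queries are asynchronous, so poll the execution state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Query {query_id} ended in state {state}")

    results = athena.get_query_results(QueryExecutionId=query_id)
    return results["ResultSet"]["Rows"]


rows = run_query(
    "SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type",
    database="analytics_lake",
    output_s3="s3://my-athena-results/queries/",
)
```

In practice this pattern pairs Glue (catalog and ETL) with Athena for serverless querying over S3, keeping compute and storage decoupled.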
Continuous improvement mindset:
- Expanding DevOps proficiency with Terraform/IaC, Jenkins CI/CD, and Kubernetes for container orchestration
- Evaluating emerging frameworks such as Apache Iceberg, Delta Lake, dbt, and data-contract tooling to adopt best-in-class patterns
- Applying proven patterns and principles (e.g., the medallion architecture, domain-driven design, event sourcing) to ensure each architecture fits the problem rather than forcing a one-size-fits-all approach (the medallion layering is sketched after this list)
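As a small illustration of the medallion layering mentioned above, here is a minimal pandas sketch of bronze, silver, and gold stages. The file paths and column names are assumptions made for the example; a real platform would typically run these stages on Spark, a warehouse engine, or an orchestrated pipeline rather than a single script.

```python
# Minimal sketch of a medallion-style flow with pandas: raw data lands as-is in
# bronze, is cleaned into silver, then aggregated into an analysis-ready gold
# table. Paths and column names are illustrative placeholders.
from pathlib import Path

import pandas as pd

LAKE = Path("lake")


def bronze_ingest(source_csv: str) -> Path:
    """Land the raw file untouched in the bronze layer."""
    raw = pd.read_csv(source_csv)
    out = LAKE / "bronze" / "orders.parquet"
    out.parent.mkdir(parents=True, exist_ok=True)
    raw.to_parquet(out, index=False)
    return out


def silver_clean(bronze_path: Path) -> Path:
    """Deduplicate and type-cast into the silver layer."""
    df = pd.read_parquet(bronze_path)
    df = df.drop_duplicates(subset=["order_id"])
    df["order_ts"] = pd.to_datetime(df["order_ts"], errors="coerce")
    df = df.dropna(subset=["order_ts", "amount"])
    out = LAKE / "silver" / "orders.parquet"
    out.parent.mkdir(parents=True, exist_ok=True)
    df.to_parquet(out, index=False)
    return out


def gold_aggregate(silver_path: Path) -> Path:
    """Build a daily revenue table in the gold layer."""
    df = pd.read_parquet(silver_path)
    daily = (
        df.assign(order_date=df["order_ts"].dt.date)
          .groupby("order_date", as_index=False)["amount"].sum()
          .rename(columns={"amount": "daily_revenue"})
    )
    out = LAKE / "gold" / "daily_revenue.parquet"
    out.parent.mkdir(parents=True, exist_ok=True)
    daily.to_parquet(out, index=False)
    return out


if __name__ == "__main__":
    gold_aggregate(silver_clean(bronze_ingest("orders.csv")))
```

The point of the layering is separation of concerns: the bronze layer preserves the raw record for replay, the silver layer enforces quality and typing, and the gold layer serves consumers directly.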
I actively pursue new technologies and architectural patterns, always aiming to deliver data platforms that are reliable today and adaptable for tomorrow’s demands.
Data Architecture
Designing scalable, cloud-native data platforms and lake-house architectures.