Chelsoft Solutions
Senior Data Platform Engineer (Kafka + AWS + Observability) | Remote | W2 only
Chelsoft Solutions, New York, New York, United States
Overview
We are seeking a Senior Data Platform Engineer to join our team in building the next-generation Enterprise Integration Data-as-a-Platform (TaaP) initiative. This role is a conversion from a Solution Architect position to a hands-on senior engineering role, emphasizing deep technical delivery and system-level thinking. The ideal candidate will have strong Kafka experience, deep knowledge of AWS (RDS & DynamoDB), hands-on use of observability tools, and a proven background in platform automation. In this role, you'll contribute directly to the development of scalable, high-performance enterprise APIs and events, enabling reliable and transparent data integration pipelines across the organization.

Key Responsibilities
- Design and develop core software aligned with enterprise integration goals
- Architect and implement data streaming, caching, and processing pipelines using Kafka and AWS (see the sketch after this list)
- Build APIs and events that are highly available, resilient, and scalable
- Lead and contribute to the delivery of outcomes using modern DevOps practices
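Purely as a hedged illustration of the kind of Kafka pipeline work described above (the broker address, topic name, consumer group, and payload shape are all assumptions, not details from this posting), a minimal produce/consume sketch in Python with confluent-kafka might look like:

```python
# Minimal Kafka produce/consume sketch using confluent-kafka.
# Broker address, topic name, and payload shape are illustrative assumptions.
import json

from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"   # placeholder; a real cluster uses a broker list
TOPIC = "customer-events"   # hypothetical topic name


def produce_event(event: dict) -> None:
    """Publish one JSON-encoded event and block until delivery."""
    producer = Producer({"bootstrap.servers": BROKER})
    producer.produce(TOPIC, value=json.dumps(event).encode("utf-8"))
    producer.flush()  # wait for broker acknowledgement


def consume_events() -> None:
    """Poll the topic and print each event; Ctrl+C to stop."""
    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": "taap-demo",        # hypothetical consumer group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None or msg.error():
                continue
            print(json.loads(msg.value()))
    finally:
        consumer.close()


if __name__ == "__main__":
    produce_event({"order_id": 42, "status": "created"})
```

A production pipeline would add schemas, retries, and delivery callbacks; the sketch only shows the basic produce/poll loop.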
Cloud Infrastructure & Automation
- Manage cloud-native resources including AWS RDS, DynamoDB, and networking components
- Automate infrastructure with Terraform and integrate deployments with CI/CD pipelines (an illustrative provisioning sketch follows this list)
- Apply Infrastructure-as-Code and GitOps principles to daily operations
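Terraform is the tool actually named above; as a language-consistent stand-in that illustrates the same idempotent, code-driven provisioning idea, here is a hypothetical boto3 sketch (table name, key schema, and region are assumptions):

```python
# Idempotent DynamoDB provisioning in Python via boto3, standing in for the
# Terraform resource this would normally be; names and region are assumptions.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb", region_name="us-east-1")


def ensure_table(name: str = "taap-entity-cache") -> None:
    """Create the table only if it does not already exist (idempotent)."""
    try:
        dynamodb.create_table(
            TableName=name,
            AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
            KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
            BillingMode="PAY_PER_REQUEST",  # on-demand capacity
        )
        dynamodb.get_waiter("table_exists").wait(TableName=name)
    except ClientError as err:
        # An already-existing table is fine; anything else is a real failure.
        if err.response["Error"]["Code"] != "ResourceInUseException":
            raise
```

Terraform achieves the same effect declaratively through its plan/apply cycle and state file; the point of the sketch is only the "safe to run twice" property that IaC workflows depend on.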
Observability & Resilience
- Implement observability tooling using Prometheus, Grafana, CloudWatch, etc. (see the instrumentation sketch after this list)
- Ensure systems are measurable, debuggable, and maintainable
- Use logs, metrics, and traces to proactively monitor platform health and performance
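As a small, hypothetical example of what "measurable and debuggable" means in practice (metric names, labels, and port are illustrative, not from this posting), a prometheus_client instrumentation sketch:

```python
# Expose request count and latency for Prometheus scraping.
# Metric names, labels, and port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("taap_requests_total", "Total API requests", ["endpoint"])
LATENCY = Histogram("taap_request_seconds", "Request latency in seconds")


@LATENCY.time()  # records the duration of each call into the histogram
def handle_request() -> None:
    REQUESTS.labels(endpoint="/events").inc()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```

A Prometheus server would scrape the /metrics endpoint, with Grafana dashboards and alerts sitting on top of those series.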
Collaboration & Coaching
- Participate in design discussions, product planning, and cross-team initiatives
- Share technical expertise and coach junior developers
- Collaborate with platform, API, and product teams to integrate TaaP capabilities
Engineering Culture
- Contribute to defining engineering standards, patterns, and practices
- Participate in hiring processes, mentoring, and continuous improvement
- Drive a strong DevOps mindset and platform ownership within the team
Required Skills & Experience
- 7+ years in engineering roles with enterprise-grade system delivery
- Expert-level experience with Apache Kafka (setup, scaling, and operations)
- Deep hands-on expertise in AWS RDS and DynamoDB
- Proficiency in observability tools (Grafana, Prometheus, CloudWatch, ELK, etc.)
- Strong experience in platform automation (Terraform, CI/CD with GitHub Actions or Azure DevOps)
- Solid grasp of microservices architecture and event-driven systems
- Familiarity with caching strategies, data hydration pipelines, and enterprise integration concepts (illustrated in the sketch after this list)
- Demonstrated ability to work independently and drive solutions from concept to implementation
- Sound knowledge of SDLC, version control, system design, and secure architecture
- Strong analytical, debugging, and performance tuning skills
- Bachelor's degree in Computer Science or equivalent work experience
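To make the "caching strategies and data hydration" requirement concrete, a minimal cache-aside sketch; a plain dict stands in for a real cache such as Redis or DynamoDB, and fetch_from_db is a hypothetical loader:

```python
# Cache-aside: check the cache first, hydrate from the source of record on a
# miss, then populate the cache so later reads are cheap. The dict stands in
# for a real cache; fetch_from_db is a hypothetical loader.
from typing import Any, Callable

_cache: dict[str, Any] = {}


def get_entity(key: str, fetch_from_db: Callable[[str], Any]) -> Any:
    """Return the entity, hydrating the cache on first access."""
    if key in _cache:
        return _cache[key]       # cache hit
    value = fetch_from_db(key)   # cache miss: load from source of record
    _cache[key] = value          # hydrate the cache
    return value


# Usage with a stubbed loader:
print(get_entity("order:42", lambda k: {"id": k, "status": "created"}))
```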
Bonus Skills
- Experience with enterprise data caching frameworks
- Familiarity with data mesh or distributed data ownership models
- Knowledge of Azure infrastructure and Databricks (not required)

Seniority level
Mid-Senior level

Employment type
Contract

Job function
Engineering and Information Technology

Industries
IT Services and IT Consulting