Eames Consulting
Location: Manhattan, NY (five days onsite, full-time)
Rate: $70 an hour, C2C or 1099

Overview
Our client is seeking a highly experienced Data Engineer to drive technical execution for enterprise-scale data engineering solutions. The successful candidate will bring deep expertise in big data architecture, hands-on development, and team leadership, along with a strong commitment to onsite collaboration at the client's New York City office.

The Position
In this role, the engineer guides end-to-end technical design, solution development, and quality assurance across mission-critical projects. As the technical lead, the candidate will coordinate with business stakeholders, lead design reviews, and oversee the delivery of robust big data solutions in a fast-paced, collaborative environment. The position requires strong proficiency in Spark, Scala, AWS services, and enterprise data platforms.

Your Profile
Over 10 years of experience architecting, building, and deploying enterprise big data solutions
Proven hands-on development expertise with Spark, Scala, Python, AWS Glue, Lambda, SNS/SQS, and CloudWatch
Deep SQL skills, preferably with Redshift; experience with Snowflake data warehousing
Advanced knowledge of ETL/ELT processes, frameworks, and workflow optimization
Skilled at technical documentation, including integration and application designs
Track record of effective communication with management and business stakeholders
Familiarity with version control (Git), CI/CD pipelines, and production support best practices
Strong commitment to onsite team culture and knowledge sharing

What You'll Do
Lead technical design and development of scalable big data solutions
Create and review integration/application documentation and participate in peer design reviews
Develop, configure, test, and validate solution components through unit, performance, and standards-based testing
Perform advanced code review and enforce technical best practices throughout the development lifecycle
Troubleshoot and resolve complex production issues; tune and optimize environments for robust operation
Support deployment processes and production stability through effective use of Git, CI/CD, and monitoring tools
Uphold rigorous technical standards and champion continuous improvement across project phases

Qualifications
Minimum 10 years of enterprise data engineering experience, with a focus on Scala, Spark, and AWS technologies
Strong hands-on skills with SQL (preferably Redshift) and Snowflake data platforms
Advanced Python development and scripting capabilities
Demonstrated success in designing and scaling ETL/ELT solutions
Deep understanding of modern data architecture, integration, and performance optimization
Proven ability to lead technical reviews, mentor peers, and deliver under tight deadlines
Commitment to a five-day onsite work model in New York

Seniority level: Mid-Senior level
Employment type: Contract
Job function: Information Technology
Industries: Technology, Information and Media