Tap Growth ai
Senior Data Engineer / AWS Data Engineer / Lead Data Engineer (REMOTE)
Tap Growth ai, Falls Church, Virginia, United States, 22042
We're Hiring: Programmer Analyst Principal!
We are seeking a skilled and experienced Programmer Analyst Principal to join our dynamic team. The ideal candidate will possess extensive knowledge in software development, system analysis, and project management, contributing to innovative solutions that drive business success.
Location: Falls Church, VA
Work Mode: 100% remote, or hybrid with occasional office visits (to be determined during the interview)
Role: Programmer Analyst Principal
Contract Duration: Approximately 6 months, with the possibility of extension or conversion to a permanent role
What You'll Do:
Design and implement data pipelines to ingest, extract, transform, and load (ETL) large datasets from multiple sources (a minimal sketch follows this list)
Build and maintain data warehouses, including data modeling, governance, and quality control
Ensure data integrity, accuracy, and security through validation and cleansing processes
Optimize data systems for scalability, performance, and reliability
Collaborate with customers to understand technical requirements and provide best practice guidance on Amazon Redshift usage
Partner with cross-functional teams, including analysts, data scientists, and business stakeholders, to define and deliver data solutions
Provide technical support for Amazon Redshift, including troubleshooting and performance tuning
Identify and resolve data-related issues, such as pipeline failures and quality concerns
Develop technical documentation and knowledge articles to support internal teams and clients
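To make the list above concrete, here is a minimal sketch of the kind of PySpark ETL job this role involves. It is illustrative only, assuming a Glue- or Spark-style runtime; the S3 paths, column names, and schema are hypothetical, not details from this posting.

    # Minimal PySpark ETL sketch. Paths, columns, and schema are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl-example").getOrCreate()

    # Extract: read raw CSV files from a hypothetical S3 landing zone
    raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

    # Transform: enforce types, drop rows missing the primary key, and
    # deduplicate -- the validation and cleansing work described above
    cleaned = (
        raw.withColumn("order_total", F.col("order_total").cast("double"))
           .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
           .dropna(subset=["order_id"])
           .dropDuplicates(["order_id"])
    )

    # Load: write partitioned Parquet for downstream warehouse ingestion
    cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/orders/"
    )

In a real AWS Glue job the same logic would typically live in a Glue script with a GlueContext, but plain PySpark keeps the sketch self-contained.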
Skills Required:
Bachelor’s or Master’s degree in Computer Science or a related field, with at least 6 years of experience in Information Technology
8+ years of experience in data engineering and large-scale data system design
5+ years of hands-on experience writing optimized SQL queries for Oracle, SQL Server, and Redshift
5+ years of experience using AWS Glue and Python/PySpark to build production ETL pipelines
Proficiency in one or more programming languages (Python, Java, Scala)
Strong understanding of database design, data modeling, and governance principles
Expertise in SQL query optimization, indexing, and performance tuning
Familiarity with data warehousing concepts (star and snowflake schemas)
Strong analytical and problem‑solving skills
Experience with data frameworks such as Apache Kafka and Fivetran
Hands-on experience building ETL pipelines using AWS Glue, Apache Airflow, Python, and PySpark (see the orchestration sketch after this list)
Experience with agile development methodologies (Scrum or Kanban)
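Because the list above names both AWS Glue and Apache Airflow, here is a hedged sketch of how an Airflow DAG might orchestrate a Glue job like the one sketched earlier. The DAG id, job name, region, and schedule are assumptions for illustration; GlueJobOperator comes from the Airflow Amazon provider package.

    # Hypothetical Airflow DAG that triggers an AWS Glue ETL job daily.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

    with DAG(
        dag_id="orders_etl_example",          # illustrative DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                    # Airflow 2.4+ "schedule" argument
        catchup=False,
    ) as dag:
        run_glue_etl = GlueJobOperator(
            task_id="run_orders_glue_job",
            job_name="orders-etl-example",    # hypothetical Glue job name
            region_name="us-east-1",
        )

Keeping the transformation logic in the Glue job and only the scheduling in Airflow is one common split, though teams vary.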
Skills Preferred:
Experience with Dataiku, Power BI, Tableau, or Alteryx
Relevant AWS certifications (e.g., AWS Certified Data Analytics – Specialty)
Experience implementing AWS best practices for data management
Experience Required:
8+ years of data engineering experience focused on large-scale system design
5+ years of experience with Oracle, SQL Server, and Redshift query optimization (an illustrative query-plan sketch follows this list)
5+ years of experience using AWS Glue and Python/PySpark for ETL pipeline development
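As a purely illustrative example of the Redshift query-optimization work mentioned above, the snippet below fetches a query plan from Python with the redshift_connector library; reading EXPLAIN output for costly scan or redistribution steps is a typical first tuning step. The cluster endpoint, credentials, and table are placeholders, not details from this posting.

    # Hypothetical example: inspect a Redshift query plan from Python.
    import redshift_connector

    conn = redshift_connector.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
        database="dev",
        user="example_user",
        password="example_password",
    )
    cur = conn.cursor()
    # EXPLAIN shows the planned execution steps without running the query
    cur.execute(
        "EXPLAIN SELECT order_date, SUM(order_total) "
        "FROM orders GROUP BY order_date;"
    )
    for row in cur.fetchall():
        print(row[0])
    cur.close()
    conn.close()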
Benefits:
Health Insurance
401(k)
Ready to make an impact?
Apply now and let’s grow together!