Veracity Software Inc
Location:
Ridgefield, Connecticut, United States
Work Arrangement:
Flexible work-from-home days (onsite 2-3x per week)
Openings:
2
Step into the future with our Enterprise Data, AI & Platforms (EDP) team! At our company, we harness Data & AI to transform healthcare, positively impacting the lives of millions of patients and animals. As part of the EDP team, you will contribute to building a strong data-driven culture, drive key data transformation initiatives, and shape the future of decision-making across our global organization.
We are seeking a highly skilled and experienced Data Engineer to design, build, and maintain scalable data infrastructure on a cloud platform. You will be responsible for data pipelines, ETL processes, and overall data architecture strategy, ensuring data availability, quality, and integrity for business stakeholders and analytics teams.
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL/ELT processes.
Collaborate with data architects, modelers, IT, and business stakeholders to define and evolve cloud-based data architecture.
Optimize data storage solutions (e.g., S3, Snowflake, Redshift), ensuring data integrity, security, and accessibility.
Implement data quality, validation processes, and monitoring frameworks.
Maintain documentation for data workflows, architecture, and pipeline processes.
Troubleshoot and optimize data pipeline performance.
Engage with clients and stakeholders to analyze requirements and recommend data solutions.
Stay current with emerging technologies and industry trends in cloud and data engineering.
Requirements
Data Engineer
Associate degree in Computer Science/MIS (4+ years of experience), Bachelor's (2+ years), or Master's (1+ year) in a related field.
Hands‑on experience with AWS services (Glue, Lambda, Athena, Step Functions, Lake Formation).
Proficiency in Python and SQL.
Familiarity with DevOps/CI/CD principles and project lifecycle methodologies.
Moderate knowledge of cloud platforms (AWS, Azure, GCP) and data integration concepts.
Senior Data Engineer
Associate degree (8+ years of experience), Bachelor's (4+ years), or Master's (2+ years) in a relevant field.
Expert‑level experience in cloud platforms, preferably AWS.
Advanced SQL skills, data modeling, and data warehousing concepts (Kimball, star/snowflake schemas).
Experience with big data frameworks (Spark, Hadoop, Flink) and relational/NoSQL databases.
Hands‑on experience with ETL/ELT tools (Airflow, dbt, AWS Glue).
Knowledge of DevOps/CI/CD for data solutions.
Desired Skills & Abilities
4+ years of progressive data engineering experience with cloud-based data platforms.
Understanding of data governance, data quality, and metadata management.
Familiarity with Snowflake and dbt (data build tool).
Strong problem‑solving skills in pipeline troubleshooting and optimization.
AWS Solutions Architect certification is a plus.
Technical Skills (Required)
AWS services: Glue, Lambda, Athena, Step Functions, Lake Formation
Programming Languages: Python, SQL