Brooksource
We are seeking a skilled Data Engineer to design, build, and optimize data pipelines and architectures that enable analytics, reporting, and data-driven decision-making across the organization. The ideal candidate is passionate about data, has strong engineering fundamentals, and enjoys solving complex data challenges in a fast-paced environment.
Key Responsibilities
Design, develop, and maintain scalable ETL/ELT data pipelines and integrations across multiple data sources and systems.
Build and optimize data architectures (data lakes, warehouses, and pipelines) to support business intelligence, analytics, and machine learning use cases.
Collaborate with data analysts, data scientists, and business stakeholders to understand data needs and ensure high data quality and availability.
Implement best practices for data governance, security, and compliance.
Monitor and troubleshoot data pipelines for performance, reliability, and accuracy.
Automate workflows and improve the efficiency of data processing and delivery.
Maintain documentation for data models, data flows, and systems.
Required Qualifications
Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
3+ years of experience in data engineering, data warehousing, or software engineering roles.
Proficiency in SQL and experience with relational and NoSQL databases (e.g., PostgreSQL, Snowflake, BigQuery, Redshift, MongoDB).
Strong experience with ETL/ELT tools and frameworks (e.g., Apache Airflow, dbt, Informatica, or AWS Glue).
Programming skills in Python, Scala, or Java.
Experience with cloud data platforms (AWS, Azure, GCP).
Understanding of data modeling, data governance, and data security best practices.
Preferred Qualifications
Experience with streaming technologies (Kafka, Kinesis, Spark Streaming).
Familiarity with containerization and orchestration tools (Docker, Kubernetes).
Exposure to CI/CD practices for data pipelines.
Experience supporting data science and machine learning workflows.
Seniority Level: Mid-Senior level
Employment Type: Contract
Job Function: Information Technology