SpaceCoast AV Consultants
Location: Remote (U.S. Only)
Job Type: Full-time
Job Summary:
We are seeking a talented Data Engineer to build and optimize scalable data pipelines and infrastructure. This role is 100% remote, but applicants must be based in the U.S. You will work closely with data scientists, analysts, and software engineers to ensure seamless data integration and support business intelligence efforts.

Key Responsibilities:
- Develop, optimize, and maintain data pipelines and ETL processes.
- Design and manage scalable data storage solutions, including data lakes and warehouses.
- Implement data governance, security, and compliance best practices.
- Monitor, troubleshoot, and improve data pipeline performance.
- Collaborate with cross-functional teams to support data-driven decision-making.
- Work with cloud platforms (AWS, GCP, Azure) to manage large-scale data processing.
- Automate data ingestion, transformation, and validation tasks.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Strong proficiency in SQL, Python, or Scala.
- Hands-on experience with Apache Spark, Hadoop, or Airflow.
- Solid understanding of relational (SQL) and NoSQL databases.
- Experience with cloud data platforms (AWS Redshift, Google BigQuery, Azure Synapse).
- Familiarity with CI/CD and DevOps practices for data engineering.
- Strong analytical and problem-solving skills.

Preferred Qualifications:
- Experience with real-time data streaming technologies (Kafka, Flink).
- Knowledge of machine learning pipelines.
- Understanding of data privacy regulations (GDPR, CCPA).

Benefits:
- Competitive salary and performance-based bonuses.
- Flexible work hours with a fully remote setup.
- Health, dental, and vision insurance.
- 401(k) with company matching.
- Generous paid time off and parental leave.