System One
About the Role
We’re seeking a skilled and forward-thinking Data Engineer with hands‑on experience in cloud platforms to join a growing data team. You’ll play a critical role in designing, building, and maintaining scalable data pipelines and infrastructure that power analytics, machine learning, and business intelligence across the organization.
This is a hybrid/onsite role in Tulsa, Oklahoma.
Successful candidates must be able to provide proof of eligibility to work in the U.S. without sponsorship. This position is not open to corp‑to‑corp, subcontractor, or independent consulting arrangements.
Key Responsibilities
Design, develop, and optimize robust data pipelines using cloud‑native tools (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow)
Architect and maintain scalable data lake and warehouse solutions (e.g., Snowflake, BigQuery, Redshift)
Collaborate with data scientists, analysts, and software engineers to deliver clean, reliable, and well‑documented datasets
Implement data governance, security, and compliance best practices across cloud environments
Monitor and troubleshoot data workflows, ensuring high availability and performance
Automate data ingestion, transformation, and validation processes
Stay current with emerging cloud technologies and recommend improvements to existing systems
Required Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
3+ years of experience in data engineering or software development
Strong proficiency in SQL and Python or Scala
Proven experience with at least one major cloud platform (AWS, Azure, GCP)
Familiarity with cloud‑based data tools (e.g., Databricks, Snowflake, Airflow)
Experience with CI/CD pipelines and infrastructure‑as‑code (e.g., Terraform, CloudFormation)
Solid understanding of ETL/ELT concepts and data modeling
Preferred Qualifications
Certifications in cloud technologies (e.g., AWS Certified Data Analytics, Azure Data Engineer Associate)
Experience with real‑time data processing (e.g., Kafka, Spark Streaming)
Knowledge of DevOps practices and containerization (Docker, Kubernetes)
Exposure to machine learning workflows and MLOps
What the Company Offers
Competitive salary and performance bonuses
Flexible work arrangements and remote options
Comprehensive health, dental, and vision insurance
Professional development budget and cloud certification support
A collaborative, innovative, and inclusive work culture
System One is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, age, national origin, disability, family care or medical leave status, genetic information, veteran status, marital status, or any other characteristic protected by applicable federal, state, or local law.