Vaco Recruiter Services
Vaco is hiring a GCP Cloud Engineer for a direct-hire opportunity in the greater Cincinnati area.
This role requires candidates to be local to the Cincinnati area, as regular in-office meetings are required; only local candidates will be considered.
No C2C solicitations.
Our client is in the midst of migrating data from SQL Server to GCP.
As their GCP Cloud Engineer, you will architect and build robust ingestion frameworks, establish scalable CI/CD and DataOps practices, and develop high-performance data pipelines using Python and Apache Spark.
Key Responsibilities
- Design, build, and optimize scalable data pipelines and ingestion frameworks on Google Cloud Platform (GCP).
- Migrate existing SQL Server-based data infrastructure to GCP services such as BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, and Cloud Functions.
- Develop, deploy, and monitor robust CI/CD pipelines for data workflows using Terraform, Cloud Build, or similar tools.
- Champion best practices in DataOps to automate, monitor, and validate data pipelines for reliability and performance.
- Collaborate with product, analytics, and engineering teams to ensure availability, reliability, and accuracy of data used in critical customer-facing solutions and internal operations.
- Work with structured and unstructured data sources and apply data transformation techniques using Python, PySpark, and SQL (a minimal sketch follows this list).
- Support and improve existing data infrastructure, monitor data quality and integrity, and troubleshoot data-related issues.
- Document architecture, processes, and pipelines clearly and comprehensively.
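By way of illustration, here is a minimal sketch of the kind of PySpark transformation this role involves: reading a raw SQL Server extract landed in Cloud Storage, applying light cleanup, and writing to BigQuery. All bucket, dataset, and column names are hypothetical placeholders, and the job assumes the Spark-BigQuery connector is available on the cluster (as it is on Dataproc).

```python
# Illustrative sketch only; paths, tables, and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read a raw extract landed in Cloud Storage (hypothetical path).
raw = spark.read.parquet("gs://example-raw-zone/sqlserver/orders/")

# Basic cleanup: drop duplicates, normalize types, stamp load time.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("_loaded_at", F.current_timestamp())
)

# Write to BigQuery via the Spark-BigQuery connector (placeholder names).
(clean.write.format("bigquery")
      .option("table", "example_dataset.orders")
      .option("temporaryGcsBucket", "example-staging-bucket")
      .mode("append")
      .save())
```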
Required Qualifications
- 5+ years of hands-on experience in data engineering with a focus on cloud migration and modern cloud-native architecture.
- Deep expertise in Google Cloud Platform (GCP), particularly with:
  - BigQuery
  - Cloud Storage
  - Dataflow / Apache Beam
  - Pub/Sub
  - Cloud Composer (Airflow)
  - Cloud Functions
- Strong experience with Python and Apache Spark / PySpark for large-scale data processing.
- Proficiency in SQL, especially with SQL Server and BigQuery dialects.
- Demonstrated experience building CI/CD pipelines for data workflows using tools such as Cloud Build, GitHub Actions, Terraform, dbt, etc.
- Familiarity with DataOps practices and tools for orchestration, testing, and monitoring.
- Experience migrating and transforming legacy data sources (e.g., SQL Server) into cloud data warehouses (see the load-job sketch after this list).
- Strong understanding of data modeling, data quality, governance, and security best practices in the cloud.
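For illustration, a minimal sketch of one common migration step the qualifications describe: loading a SQL Server extract that has already been exported to Cloud Storage as Parquet into BigQuery, using the official google-cloud-bigquery Python client. The project, dataset, and URI names are hypothetical placeholders.

```python
# Illustrative sketch only; project, dataset, and paths are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://example-raw-zone/sqlserver/customers/*.parquet",  # hypothetical path
    "example-project.example_dataset.customers",            # hypothetical table
    job_config=job_config,
)
load_job.result()  # block until the load completes, raising on failure

table = client.get_table("example-project.example_dataset.customers")
print(f"Loaded {table.num_rows} rows.")
```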
Preferred Qualifications
- Experience with real-time streaming data using Kafka or Pub/Sub (see the subscriber sketch after this list).
- Familiarity with dbt (data build tool) and Looker or other BI tools.
- Experience working in a SaaS or product-based company with customer-facing analytics or features.
- Knowledge of infrastructure-as-code and containerization using Docker/Kubernetes.
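And for the streaming qualification, a minimal subscriber sketch using the google-cloud-pubsub Python client. The project and subscription IDs are hypothetical placeholders, and a production pipeline would hand messages to a sink rather than print them.

```python
# Illustrative sketch only; project and subscription IDs are placeholders.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("example-project", "orders-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # A real pipeline would validate and route the payload; here we
    # just print the data and acknowledge receipt.
    print(f"Received: {message.data!r}")
    message.ack()

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)

with subscriber:
    try:
        streaming_pull_future.result(timeout=30)  # listen for 30 seconds
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()
```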