Strategic Staffing Solutions
STRATEGIC STAFFING SOLUTIONS HAS AN OPENING!
Job Title:
Data Engineer
Location:
Charlotte, NC (On-Site)
Duration:
12 Months (Possible Extension)
Role Type:
W2 Contract Engagement
Job Summary:
We are seeking a Data Engineer with strong experience in Python, PySpark, and Airflow, and hands-on knowledge of Cloudera Hadoop environments. You'll be responsible for building and maintaining scalable data pipelines, enabling high-quality data flow and transformation across the organization. InfoSec experience and Google Cloud Platform (GCP) familiarity are strong pluses.
Key Responsibilities:
- Design, build, and maintain robust, scalable, and efficient data pipelines using PySpark and Airflow (a minimal illustrative sketch follows this list).
- Manage and optimize large-scale data workflows on Cloudera Hadoop.
- Develop reusable code in Python for data ingestion, transformation, and delivery.
- Ensure data quality, integrity, security, and performance across all pipelines.
- Work cross-functionally with data analysts, data scientists, InfoSec, and DevOps teams.
- Support data integration projects involving on-premise and cloud systems, particularly GCP.
- Participate in incident resolution and root cause analysis, and implement process improvements.
- Implement best practices for data security, governance, and access control, especially in regulated environments.
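As a rough illustration of the pipeline work described above (not a prescribed implementation), the sketch below shows a minimal Airflow 2.x DAG that submits a PySpark job to a cluster. It assumes the Apache Spark provider package is installed and a Spark connection is configured; the DAG id, script path, and connection id are hypothetical placeholders.

```python
# Illustrative sketch only: a minimal Airflow 2.x DAG that submits a PySpark job.
# Assumes apache-airflow-providers-apache-spark is installed and a "spark_default"
# connection points at the cluster. All names and paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_ingest_pipeline",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit a PySpark script that reads raw data, applies transformations,
    # and writes a curated output table.
    transform = SparkSubmitOperator(
        task_id="transform_raw_to_curated",
        application="/opt/jobs/transform_raw_to_curated.py",  # hypothetical path
        conn_id="spark_default",
    )
```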
Required Qualifications:
- 3-6 years of professional experience in data engineering or related fields.
- Strong coding skills in Python and experience using PySpark for big data processing.
- Hands-on experience with Apache Airflow for orchestrating ETL workflows.
- Deep understanding of Cloudera Hadoop (HDFS, Hive, Impala, etc.).
- Solid understanding of data pipeline design, performance optimization, and data lake/data warehouse concepts.
Preferred Qualifications:
- Google Cloud Platform (GCP) experience, particularly with BigQuery, Dataflow, or Cloud Storage.
- Exposure to data security practices, InfoSec principles, or experience working in a regulated data environment.
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
- Experience working in agile environments and using Jira/Confluence.
Soft Skills:
- Strong problem-solving and analytical thinking skills.
- Excellent communication skills; able to collaborate with technical and non-technical stakeholders.
- Detail-oriented, with a focus on performance, reliability, and maintainability.

The four pillars of our company are to:
- Set the bar high for what a company should do
- Create jobs
- Offer people an opportunity to succeed and change their station in life
- Improve the communities where we live and work through volunteering and charitable giving