Strategic Staffing Solutions
Senior Data Engineer
Strategic Staffing Solutions, Charlotte, North Carolina, United States, 28245
Strategic Staffing Solutions is currently looking for a Data Engineer IV for one of its clients.
Job Title: Senior Data Engineer
Location: Charlotte, NC
Setting: Hybrid/remote
Duration: 12 months
Candidate Requirements: Candidates must be willing to work on our W2 only. C2C/1099 arrangements and Glider assessments are not considered.
Top Skills
Apache Flink
AWS Lake Formation
Kubernetes
Python
Terraform
AWS Glue
Job Summary
We are specifically looking for individuals with at least 5 years of experience in Data Engineering and/or Software Engineering roles who can provide knowledge and support to our existing engineers.
Platform Experience
Building/optimizing Data Lake House with Open Table formats
Kubernetes deployments/cluster administration
Transitioning on-premises big data platforms to scalable cloud-based platforms such as AWS
Distributed Systems, Microservice architecture, and containers
Cloud Streaming use cases in Big Data Ecosystems (e.g., EMR, EKS, Hadoop, Spark, Hudi, Kafka/Kinesis)
Tech Stack
GitHub and GitHub Actions
AWS
IAM
API Gateway
Lambda
Step Functions
Lake Formation
EKS & Kubernetes
Glue: Catalog, ETL, Crawler
Athena
S3 (strong foundational concepts such as object storage vs. block storage, encryption/decryption, storage tiers, etc.)
Apache Hudi
Apache Flink
PostgreSQL and SQL
RDS (Relational Database Service)
Python
Java
Terraform Enterprise (must be able to write and debug TF, understand modules, providers, functions)
Helpful Tech Stack
Helm
Kafka and Kafka Schema Registry
AWS services: CloudTrail, SNS, SQS, CloudWatch, Step Functions, Aurora, EMR, Redshift; Apache Iceberg
Secrets Management Platform: Vault, AWS Secrets Manager
Core Responsibilities
Provides technical direction, engages the team in discussion on how best to guide and build features on key technical aspects, and is responsible for product tech delivery
Works closely with the Product Owner and team to align on delivery goals and timing
Collaborates with architects on key technical decisions for data and overall solution
Leads design and implementation of data quality check methods
Ensures data security and permissions solutions, including data encryption, user access controls, and logging
Thinks unconventionally to find the best way to solve a defined use case with fuzzy requirements
Does their own research to solve problems, clearly presents findings, and engages in conversation on what makes one solution better than another
Thrives in a fail-fast environment involving mini PoCs and participates in an inspect-and-adapt process
Questioning and Improvement mindset
Must be ready to ask why something is currently done the way it is and to suggest alternative solutions
Customer-facing skills
Interfacing with stakeholders and other product teams via pairing, troubleshooting support, and debugging issues they encounter with our products