Lorvenk Technologies
Data Engineer (Ex-Capital One)
Lorvenk Technologies, Richmond, Virginia, United States, 23214
Title: Data Engineer
Location: Richmond, VA
Experience: 5+ years
Duration: 12+ Months
Contract: W2
Note: Prior Capital One experience is required.
Job Summary
We are looking for a skilled Data Engineer with hands-on experience in AWS cloud services to design, build, and maintain scalable data pipelines and architectures. The ideal candidate will be responsible for ensuring reliable data flow, optimizing performance, and supporting data-driven decision-making across the organization.
Key Responsibilities
Design, develop, and maintain data pipelines and ETL workflows using AWS services.
Build and optimize data lakes, data warehouses, and real-time streaming solutions.
Work closely with data scientists, analysts, and business stakeholders to deliver high-quality, structured data.
Implement and maintain data quality, security, and governance best practices.
Develop and deploy data workflows built on AWS Lambda, Glue, Step Functions, and Kinesis.
Integrate data from multiple sources (RDBMS, APIs, logs, etc.) into centralized platforms.
Monitor and troubleshoot data pipeline issues for performance and reliability.
Automate data workflows and optimize resource usage for cost efficiency.
Required Skills
Strong proficiency in Python, SQL, and ETL development.
Hands-on experience with key AWS data services, including AWS Glue, Lambda, Athena, Redshift, Kinesis, S3, EMR, and Step Functions.
Solid understanding of data modeling, data warehousing, and data lake architectures.
Experience with CI/CD, Git, and Infrastructure as Code tools such as Terraform or CloudFormation.
Strong problem-solving skills and attention to detail.
Excellent understanding of data security, encryption, and IAM configurations.