Anblicks
JOB DUTIES:
Architect, design, build, integrate, and implement databases, data structures, applications, and solutions using AWS S3, EMR, EC2, Glue, Lambda, Athena, and Redshift, BI Analytics, PySpark, Power BI, PostgreSQL, Python, SQL, Java, Git, Apache Spark, and Hive. Design and build robust data pipelines using Apache Kafka for real-time data streaming, ensuring high availability and fault tolerance. Utilize Power BI to create insightful and interactive dashboards and reports. Monitor and optimize data pipelines, queries, and data processing jobs for performance, scalability, and cost efficiency. Orchestrate data migration from Hadoop and HDFS to implement ETL processes using Apache Spark or AWS, version-controlling code with Bitbucket and GitHub for collaborative development and maintenance. Architect and manage data warehousing solutions on PostgreSQL, leveraging its scalability and performance for storing and analyzing large volumes of structured and semi-structured data. Establish data governance policies and implement security measures using Azure services such as Azure Key Vault and Azure Active Directory to ensure compliance with regulations and protect sensitive data. Provide technical support by mentoring junior engineers, collaborating with cross-functional teams, and staying updated on emerging technologies and best practices in data engineering. Attend daily scrum status meetings and iteration planning meetings, and create tasks in JIRA.
JOB REQUIREMENTS:
Bachelor's degree in Computer Science, Computer Information Systems, or an Engineering-related or Technical-related field, plus 5 years of progressively responsible post-baccalaureate experience. A foreign degree equivalent is acceptable. We will also accept any suitable combination of education, training, and/or experience. In lieu of the above, we will also accept a Master's degree in Computer Science, Computer Information Systems, or an Engineering-related or Technical-related field, plus 2 years of experience. Experience should include at least 2 years of experience working with Hadoop, Bitbucket, JIRA, PySpark, Apache Spark, Python, PostgreSQL, Hive, HBase, Apache Kafka, Power BI, AWS S3, EMR, EC2, Glue, Lambda, Athena, and Redshift.
HOURS: M-F, 8:00 a.m. - 5:00 p.m.
JOB LOCATION: Dallas, Texas. Travel is not required, but candidates must be willing to relocate to unanticipated locations across the country per contract demand.
CONTACT: Email resume referencing job code #SDE07252025ANB to Maruthi Technologies Inc. DBA Anblicks at hr@anblicks.com