Innovare Solutions, Inc.

Senior Data Engineer

Innovare Solutions, Inc., Snowflake, Arizona, United States, 85937


Innovare Solutions, Inc. is currently seeking a highly skilled and experienced Senior Data Engineer to join our team on a contract basis. The Senior Data Engineer will play a key role in designing, building, and optimizing data pipelines and data warehousing solutions that enable us to make data-driven decisions at scale. This individual will work closely with data scientists, developers, and other cross-functional teams to ensure the efficient processing, storage, and analysis of large datasets.

Responsibilities:

- Design, implement, and optimize scalable data pipelines to support real-time and batch data processing.
- Develop and maintain data infrastructure, ensuring high availability, scalability, and performance.
- Collaborate with data scientists and analysts to integrate machine learning models and analytics into production systems.
- Ensure data quality, integrity, and governance across all data assets.
- Create and maintain robust ETL (Extract, Transform, Load) processes for seamless data integration from various sources.
- Lead data modeling efforts for structured and unstructured data.
- Build and maintain data warehouses, databases, and big data solutions in cloud environments (AWS, GCP, or Azure).
- Develop and enforce data management best practices and automation to reduce manual processes.
- Participate in code reviews, provide mentorship to junior data engineers, and contribute to team knowledge sharing.
- Troubleshoot and resolve data-related issues in production systems.

Requirements:

- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
- 5+ years of experience in data engineering, with a focus on large-scale data architectures and processing pipelines.
- Strong proficiency in SQL, Python, and at least one big data technology (e.g., Spark, Hadoop, Kafka).
- Experience with cloud platforms (AWS, GCP, Azure) and cloud-native data services.
- Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
- Strong experience in designing and optimizing ETL processes and data pipelines.
- In-depth knowledge of data modeling, data governance, and data quality standards.
- Experience with version control systems (Git) and CI/CD pipelines.
- Ability to work independently and in a team-oriented, collaborative environment.
- Excellent problem-solving skills and the ability to adapt to rapidly evolving technologies.

Preferred:

- Experience in FinTech, financial services, or similar domains.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes).
- Knowledge of machine learning frameworks and data science workflows.

Duration:

6-12 months, with the possibility of extension based on performance.
