Infinite Computer Solutions Inc
**Apply only if you can work on our company Infinite's W2. USC and GC holders preferred.**

**Job Summary:** We are looking for an experienced Data Engineer with strong hands-on expertise in Databricks, Apache Spark, and BigQuery (BQ) to build and optimize scalable data pipelines across multi-cloud environments (GCP and AWS). The ideal candidate will work closely with data scientists, analysts, and cloud architects to design, develop, and maintain robust data infrastructure supporting analytics and business intelligence initiatives.

**Key Responsibilities:**
- Design, develop, and maintain data pipelines and ETL/ELT workflows using Databricks and Apache Spark.
- Build scalable and efficient data solutions integrating both AWS and GCP platforms.
- Develop and optimize data processing frameworks for structured and unstructured data.
- Work extensively with BigQuery (BQ) for data warehousing, query optimization, and performance tuning.
- Implement data lake and data warehouse architectures using S3, AWS Glue, Google Cloud Storage, and BigQuery.
- Collaborate with cross-functional teams to ensure data integrity, security, and governance.
- Automate data workflows and monitor data quality and performance.
- Participate in code reviews, architecture discussions, and DevOps/data operations processes.

**Required Skills & Qualifications:**
- 5–8 years of experience as a Data Engineer or in a similar data-centric role.
- Strong proficiency in Databricks and Apache Spark (PySpark or Scala).
- Hands-on experience with GCP (Google Cloud Platform) and AWS (Amazon Web Services).
- Expertise in BigQuery (BQ) for data warehouse design, query performance, and analytics.
- Solid understanding of ETL/ELT design patterns, data modeling, and data lake architectures.
- Proficiency in SQL and at least one programming language (Python, Scala, or Java).
- Experience with CI/CD, version control (Git), and infrastructure-as-code tools (Terraform, CloudFormation, etc.).
- Knowledge of data security, governance, and best practices for cloud data solutions.

**Preferred Skills:**
- Experience with Airflow, AWS Glue, or Dataflow for workflow orchestration.
- Exposure to machine learning pipelines within Databricks.
- Familiarity with Delta Lake, dbt (data build tool), and Kubernetes.
- GCP Professional Data Engineer or AWS Big Data Specialty certification is a plus.

**Education:** Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.

Looking forward to hearing from you. Have a wonderful day!

Best Wishes,
Manasa Pochampally
Email: manasa.pochampally@infinite.com
Infinite | Platformization™ Company
www.infinite.com