NEURONES IT ASIA PTE. LTD.
As a Python Data Engineer, you will be part of a highly qualified team leading the delivery of modern data platforms built on Azure services and Databricks.
Your role includes:
Designing and delivering software using an agile and iterative approach based on Scrum or Kanban.
Following and contributing to improvements in the client's software engineering practices, such as source code management, test-driven development, automated testing, and CI/CD.
Supporting the Lead Data Engineer in technical architecture and design.
Understanding how data is used within the client's commercial activities and collaborating with Business Analysts or users to identify system requirements.
Analyzing and estimating IT changes, providing input on technical opportunities, constraints, and trade-offs.
Creating documentation and presenting to both technical and non-technical audiences.
Handing over to support teams and providing third-line support after releases.
Owning your continuous learning to remain a technical subject matter expert.
Technical Skills:
7+ years of experience in data modeling, data warehousing, and SQL Server database design and development (SQL, relational and dimensional modeling); analytical querying using technologies such as SSAS cubes or columnar data stores; and Python.
Experience with ETL tools such as SSIS, Alteryx, or Azure Data Factory.
Building data pipelines in code, e.g., PySpark (see the illustrative sketch after this list).
Knowledge of modern cloud-based data architectures and technologies such as data lakes, Delta Lake, and the lakehouse pattern (e.g., Azure Data Lake Storage Gen2 and lakehouse architectures on Databricks using Delta Lake, Hudi, or Iceberg).
Experience with NoSQL databases such as Azure Cosmos DB, MongoDB, or DynamoDB.
Proficiency in Python scripting, including data engineering and machine learning libraries.
Familiarity with scripting and expression languages such as PowerShell, Bash, DAX, and Power Query M, with a primary focus on SQL, PySpark, and Python.
Experience with distributed computing on Spark and big data file formats such as Parquet, Avro, and ORC.
Knowledge of big data technologies such as Hadoop, Hive, and MapReduce, as well as performance tuning in Spark.
Experience with reporting and visualization tools such as SSRS, Tableau, Power BI, or Excel.
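Several of the items above (building pipelines in PySpark, the lakehouse pattern, and big data file formats) typically come together in a single Databricks job: read raw files from Azure Data Lake Storage Gen2, transform them in PySpark, and publish the result as a Delta table. The sketch below illustrates that pattern; the storage account, container paths, column names, and table layout are illustrative assumptions, not details from this posting.

# Minimal PySpark sketch of a raw-to-curated lakehouse pipeline.
# Storage paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw_to_curated_orders").getOrCreate()

# Read raw Parquet files landed by an upstream ingestion process
# (e.g. Azure Data Factory) into the raw zone of the data lake.
raw_orders = spark.read.parquet(
    "abfss://raw@examplestorage.dfs.core.windows.net/sales/orders/"
)

# Apply basic cleansing and enrichment before publishing.
curated_orders = (
    raw_orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_timestamp"))
    .withColumn("net_amount", F.col("gross_amount") - F.col("discount_amount"))
)

# Write the result as a partitioned Delta table in the curated zone.
(
    curated_orders.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@examplestorage.dfs.core.windows.net/sales/orders/")
)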