PGS Software
While Xebia is a global tech company, in Poland, our roots came from two teams – PGS Software, known for world-class cloud and software solutions, and GetInData, a pioneer in Big Data. Today, we’re a team of 1,000+ experts delivering top-notch work across cloud, data, and software. And we’re just getting started.
What We Do
We work on projects that matter – and that make a difference. From fintech and e-commerce to aviation, logistics, media, and fashion, we help our clients build scalable platforms, data-driven solutions, and next-gen apps using ML, LLMs, and Generative AI. Our clients include Spotify, Disney, ING, UPS, Tesco, Truecaller, AllSaints, Volotea, Schmitz Cargobull, Allegro, and InPost.
We value smart tech, real ownership, and continuous growth. We use modern, open-source stacks, and we’re proud to be trusted partners of Databricks, dbt, Snowflake, Azure, GCP, and AWS. Fun fact: we were the first AWS Premier Partner in Poland!
Beyond Projects
What makes Xebia special? Our community. We run events like the Data&AI Warsaw Summit, organize meetups (Software Talks, Data Tech Talks), and have a culture that actively supports your growth via Guilds, Labs, and personal development budgets — for both tech and soft skills. It’s not just a job. It’s a place to grow.
What sets us apart?
Our mindset. Our vibe. Our people. And while that’s hard to capture in text – come visit us and see for yourself.
You will be:
- responsible for at-scale infrastructure design, build, and deployment with a focus on distributed systems,
- building and maintaining architecture patterns for data processing, workflow definitions, and system-to-system integrations using Big Data and Cloud technologies,
- evaluating and translating technical designs into workable technical solutions/code and technical specifications on par with industry standards,
- driving the creation of re-usable artifacts,
- establishing scalable, efficient, automated processes for data analysis, data model development, validation, and implementation,
- working closely with analysts/data scientists to understand the impact on downstream data models,
- writing efficient and well-organized software to ship products in an iterative, continual-release environment,
- contributing to and promoting good software engineering practices across the team,
- communicating clearly and effectively to technical and non-technical audiences,
- defining data retention policies,
- monitoring performance and advising on any necessary infrastructure changes.
Your profile:
- ready to start immediately,
- 3+ years’ experience with Azure (Data Factory, SQL, Data Lake, Power BI, DevOps, Delta Lake, Cosmos DB),
- 5+ years’ experience with data engineering or backend/full-stack software development,
- experience with data transformation tools – Databricks and Spark,
- experience with data manipulation libraries (such as Pandas, NumPy, PySpark),
- experience in structuring and modelling data in both relational and non-relational forms; ability to elaborate on and propose a relational/non-relational approach,
- knowledge of normalization/denormalization and data warehousing concepts (star and snowflake schemas), designing for transactional and analytical operations,
- experience with CI/CD tooling (GitHub, Azure DevOps, Harness, etc.),
- working knowledge of Git; Databricks will be a benefit,
- good verbal and written communication skills in English.
Work from the European Union region and a work permit are required.
Nice to have:
- experience with Azure Event Hubs, Azure Blob Storage, Azure Synapse, Spark Streaming,
- experience with data modelling tools, preferably dbt,
- experience with Enterprise Data Warehouse solutions, preferably Snowflake,
- familiarity with ETL tools (such as Informatica, Talend, DataStage, Stitch, Fivetran, etc.),
- experience in containerization and orchestration (Docker, Kubernetes, etc.).
Recruitment Process
CV review – HR call – Interview (with live coding) – Client Interview (with live coding) – Hiring Manager Interview – Decision
Development:
- development budgets of up to €1,500,
- we fund certifications, e.g. AWS, Azure, ISTQB, PSM,
- access to Udemy and O'Reilly (formerly Safari Books Online).