eSimplicity
Overview
Staff Software Engineer

About Us:
eSimplicity is a modern digital services company that partners with government agencies to improve the lives and protect the well-being of all Americans, from veterans and service members to children, families, and seniors. Our engineers, designers, and strategists cut through complexity to create intuitive products and services that equip federal agencies with solutions to courageously transform today for a better tomorrow.

Job Type: Full-time

Description
We are looking for a seasoned Staff Software Engineer with deep experience in large-scale Databricks ecosystems. This role involves building tools to move, manage, and govern large-scale data across interconnected platforms. You'll build web interfaces, backend services, and automated workflows that power internal helper tools, support data mesh strategies, and manage authenticated access to distributed data environments. You'll collaborate with engineers, DevOps, product owners, and data architects to rapidly prototype, build, and scale data-aware applications and infrastructure for secure and efficient data movement and integration.

Responsibilities
- Leads and mentors all other data roles in the program.
- Identifies and owns all technical solution requirements in developing enterprise-wide data architecture.
- Creates project-specific technical designs, product and vendor selections, and application and technical architectures.
- Provides subject matter expertise on data and data pipeline architecture and leads the decision process to identify the best options.
- Owns complex data architectures, with an eye toward continual reengineering and refactoring to keep each system as simple and elegant as the need allows.
- Ensures strategic alignment of technical design and architecture with business growth and direction, and stays on top of emerging technologies.
- Develops and manages product roadmaps, backlogs, and measurable success criteria, and writes user stories.
- Expands and optimizes our data and data pipeline architecture, as well as data flow and collection, for cross-functional teams.
- Supports software developers, database architects, data analysts, and data scientists on data initiatives, and ensures the optimal data delivery architecture is consistent across ongoing projects.
- Develops new pipelines and maintains existing ones; updates ETL processes; builds new ETL features; builds PoCs with Redshift Spectrum, Databricks, etc.
- Implements large-dataset engineering: data augmentation, data quality analysis, data analytics (anomalies and trends), data profiling, data algorithms, and data maturity models; develops data strategy recommendations.
- Assembles large, complex data sets that meet functional and non-functional business requirements.
- Identifies, designs, and implements internal process improvements, including redesigning data infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
- Builds infrastructure for optimal extraction, transformation, and loading of data from various data sources using AWS and SQL technologies.
- Builds analytical tools that utilize the data pipeline, providing actionable insight into key business performance metrics.
- Works with stakeholders, including data, design, product, and government stakeholders, and assists them with data-related technical issues.
- Writes unit and integration tests for data processing code.
- Works with DevOps engineers on CI, CD, and IaC.
- Reads specs and translates them into code and design documents.
- Performs code reviews and develops processes for improving code quality.

Requirements
Minimum Requirements:
- All candidates must pass a public trust clearance through the U.S. Federal Government. Candidates must either be U.S. citizens or pass clearance through the Foreign National Government System, which requires having lived in the United States for at least 3 of the previous 5 years and holding a valid, non-expired passport from their country of birth along with appropriate visa/work permit documentation.
- Minimum 10 years of relevant experience in software engineering.
- Minimum 2 years working in a large-scale Databricks implementation.
- Proficiency in at least one of the following languages: TypeScript, JavaScript, Python.
- Proven experience with large-scale system architectures and petabyte-level data systems.
- Proficiency with automated testing frameworks (PyTest, Jest, Cypress, Playwright) and testing best practices.
- Experience developing, testing, and securing RESTful and GraphQL APIs.
- Proven track record with AWS cloud architecture, including networking, security, and service orchestration.
- Experience with Docker and infrastructure automation with Kubernetes and Terraform.
- Familiarity with Redis for caching or message queuing.
- Knowledge of performance monitoring tools such as Grafana, Prometheus, and Sentry.
- Familiarity with Git, Git-based workflows, and release pipelines using GitHub Actions and CI/CD platforms.
- Comfortable working in a tightly integrated Agile team (15 or fewer people).
- Strong written and verbal communication skills, including the ability to explain technical concepts to non-technical stakeholders.

Preferred Qualifications
- Strong experience with modern frameworks such as React.js, Next.js, Node.js, and Flask.
- Deep knowledge of relational and NoSQL databases (PostgreSQL, MySQL, MongoDB).
- Experience with authentication/authorization frameworks such as OAuth, SAML, Okta, Active Directory, and AWS IAM (ABAC).
- Familiarity with data mesh principles and domain-oriented architectures with secure data domain connections.
- Knowledge of event-driven architectures and systems such as Kafka, Kinesis, RabbitMQ, or NATS.
- Experience exploring or building ETL pipelines and data ingestion workflows.
- Strong grasp of access control, identity management, and federated data governance.
- CMS and healthcare expertise: knowledge of CMS regulations and experience with complex healthcare or data infrastructure projects.
- Experience with CMS OIT data systems (e.g., IDR-C, CCW, EDM).

Working Environment
eSimplicity supports a hybrid work environment operating within the Eastern time zone so we can work with and respond to our government clients. Expected hours are 9:00 AM to 5:00 PM Eastern unless otherwise directed by your manager. Occasional travel for training and project meetings is expected, estimated at less than 5% per year.

Benefits
We offer highly competitive salaries and full healthcare benefits.

Equal Employment Opportunity
eSimplicity is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender, age, status as a protected veteran, sexual orientation, gender identity, or status as a qualified individual with a disability.

Salary Description
$112,500 - $150,000

Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Engineering and Information Technology
Industries: IT Services and IT Consulting