eSimplicity
About Us
eSimplicity is a modern digital services company that partners with government agencies to improve the lives and protect the well-being of all Americans, from veterans and service members to children, families, and seniors. Our engineers, designers, and strategists cut through complexity to create intuitive products and services that equip federal agencies with solutions to courageously transform today for a better tomorrow.
Description
We’re seeking a Staff Software Engineer experienced in building scalable, resilient data pipelines that ingest, validate, and transform data rapidly and accurately. This person will emphasize observability and reliability while supporting the ongoing operation and re-architecture of our data ingestion capability, which routinely processes large volumes of Medicare and Medicaid data.
Responsibilities
Leads and mentors all other data roles in the program.
Identifies and owns all technical solution requirements in developing enterprise-wide data architecture.
Creates project-specific technical design, product and vendor selection, application, and technical architectures.
Provides subject matter expertise on data and data pipeline architecture and leads the decision process to identify the best options.
Serves as the owner of complex data architectures, with an eye toward continual reengineering and refactoring to keep each system as simple and elegant as possible while still meeting the need.
Ensures technical design and architecture align strategically with business growth and direction, and stays on top of emerging technologies.
Develops and manages product roadmaps, backlogs, and measurable success criteria and writes user stories.
Expands and optimizes data and data pipeline architecture, as well as data flow and collection, for cross-functional teams.
Supports software developers, database architects, data analysts, and data scientists on data initiatives, and ensures that the optimal data delivery architecture is consistent across ongoing projects.
Develops new pipelines and maintains existing ones; updates ETL processes; builds new ETL features; and builds proofs of concept (PoCs) with Redshift Spectrum, Databricks, and similar tools.
Engineers large datasets with project data specialists, including data augmentation, data quality analysis, data analytics, data profiling, data algorithms, and data maturity models.
Assembles large, complex data sets that meet functional and non-functional business requirements.
Identifies, designs, and implements internal process improvements, including redesigning data infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
Builds infrastructure for optimal extraction, transformation, and loading of data from a variety of sources using AWS and SQL technologies.
Builds analytical tools that utilize the data pipeline to provide actionable insight into key business performance metrics.
Works with data, design, product, and government stakeholders to resolve data-related technical issues.
Writes unit and integration tests for data processing code, and collaborates with DevOps on CI/CD and infrastructure as code (IaC).
Qualifications
All candidates must pass a Public Trust clearance through the U.S. Federal Government. This requires candidates either to be U.S. citizens or to pass clearance through the Foreign National Government System.
Minimum of 10 years of related experience, including hands-on software development.
Bachelor’s degree in Computer Science, Information Systems, Engineering, Business, or a related field. With 10 years of general IT experience and at least 8 years of specialized experience, a degree is not required.
Extensive data pipeline experience using Python, Java, and cloud technologies.
Experience designing data architecture for shared services, scalability, and performance, as well as data services including APIs, metadata, and data catalogs.
Experience with data governance processes to ingest (batch, stream), curate, and share data with upstream and downstream data users.
Ability to build and optimize data sets, big data pipelines, and architectures.
Ability to perform root cause analysis and identify opportunities for improvement.
Excellent analytic skills for working with unstructured datasets.
Familiarity with software and tools including Kafka, Spark, Hadoop; relational and NoSQL databases; workflow tools such as Airflow; AWS services like Redshift, RDS, EMR, EC2; and programming languages including Scala, C++, Java, and Python.
Flexible and willing to adapt to changing priorities in a fast-paced, team-oriented environment.
Experience with Agile methodology and test-driven development.
Experience with Atlassian Jira/Confluence.
Excellent written and spoken English.
Ability to obtain and maintain a Public Trust clearance; must reside in the United States.
Desired qualifications: federal government contracting experience; data engineering certifications; CMS/healthcare industry experience.
Working Environment
eSimplicity supports a remote work environment operating within the Eastern time zone. Expected hours are 9:00 AM to 5:00 PM Eastern unless otherwise directed. Occasional travel is expected to be less than 5% per year.
Benefits
We offer highly competitive salaries and full healthcare benefits.
Equal Employment Opportunity
eSimplicity is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender, age, status as a protected veteran, sexual orientation, gender identity, or status as a qualified individual with a disability.
Salary Description
$116,200 - $155,000