INSPYR
Overview
Title: Data Operations Engineer
Location: Washington D.C. - Hybrid
Duration: 6+ Month Contract (with extension)
Pay Range: $55/hr - $62.50/hr
Position Summary:
The Data Operations Engineer oversees the development, management, and optimization of a cloud data platform on Google Cloud Platform (GCP). This role ensures secure, efficient, and scalable data solutions and is instrumental in establishing and maintaining the data infrastructure that supports advanced analytics and research on workforce development, higher education, labor market, and economic data for state partners. The Data Operations Engineer supports the Education Analytics and Technical Services team within the Employer Alignment focus area, ensuring that data infrastructure and practices meet the needs of these strategic initiatives.

Results done right.
Our client provides equal employment opportunities to all employees and applicants for employment without regard to race, color, creed, ancestry, national origin, citizenship, sex or gender (including pregnancy, childbirth, and pregnancy-related conditions), gender identity or expression (including transgender status), sexual orientation, marital status, religion, age, disability, genetic information, service in the military, or any other characteristic protected by applicable federal, state, or local laws and ordinances.
Key responsibilities
The Data Operations Engineer has four core responsibility areas, with approximate time allocations:

Data Management (40%)
- Cloud Data Platform Administration: Manage, maintain, and optimize the Google Cloud-based data warehouse and storage environment (e.g., BigQuery, Cloud Storage) to ensure secure, efficient, and scalable data solutions.
- ETL/ELT Pipeline Development: Develop and orchestrate scalable ETL/ELT data pipelines for data ingestion and transformation, using GCP services (such as Cloud Run or Cloud Dataflow) to handle large-scale data processing.
- Data Quality & Governance: Ensure data integrity, quality, and governance compliance across all workflows, establishing best practices for data security, access control, and regulatory compliance.
- Third-Party Data Integration: Collaborate with internal research teams and state agency partners to integrate third-party data sources (e.g., via APIs and data marketplaces) into the platform. Identify and onboard new data sources and technology to expand the workforce and education data model.
- Performance Optimization: Optimize data structures, partitioning, and query strategies for performance and cost efficiency in BigQuery. Monitor and tune resource usage to ensure cost-effective operations.

System Administration and Performance Optimization (30%)
- Monitoring & Troubleshooting: Monitor system performance and troubleshoot issues across GCP data services. Ensure high availability and responsiveness of databases, pipelines, and applications.
- Resource & Cost Management: Manage storage resources, query performance, and workload scheduling in the GCP environment (BigQuery, Cloud Run, etc.). Implement cost-effective strategies for managing cloud data warehouse expenditures, including rightsizing storage and compute resources.
- Automation & DevOps: Automate data workflows, pipeline scheduling, and deployments (leveraging Infrastructure-as-Code and CI/CD where possible) to streamline data processing and reporting. Maintain comprehensive documentation for data models, system configurations, and integration processes to support maintainability and knowledge sharing.

Data Collaboration and Reporting Support (20%)
- Analytics Enablement: Provide technical data access and support to the Education Analytics and Technical Services team working within the data platform. Ensure that analysts and researchers can easily retrieve and analyze data needed for workforce and education insights.
- Data Quality Collaboration: Drive data quality improvements through close collaboration with stakeholders and contractors, supporting effective analytics and reporting outcomes. Establish feedback loops to continually refine data definitions and accuracy.
- Dashboard and App Support: Assist in the development and maintenance of analytics dashboards and applications by ensuring data is accessible, well-structured, and up to date. This includes supporting business intelligence tools like Tableau and custom analytics apps (e.g., Streamlit) by provisioning data and optimizing queries for front-end use.
- Alignment with Analytics Needs: Ensure ongoing alignment between the data infrastructure and the analytic capabilities of the team. Work closely with analysts to understand their data needs and adjust data models or pipelines to enable new metrics, visualizations, and insights.

Team Leadership & DEI Commitment (10%)
- Technical Leadership: Provide guidance and training to research and analytics team members on best practices in data warehousing, data engineering, and governance. Foster data literacy and efficient use of the data platform across the team.
- Diversity, Equity & Inclusion: Partner with Human Resources and DEI leadership to promote equitable workplace practices and embrace diverse perspectives in data operations.
- Collaborative Culture: Foster a collaborative, inclusive environment that encourages innovation and cross-functional teamwork. Model transparency, respect, and inclusion in all professional interactions, ensuring that all team members feel valued and heard as we develop data solutions.
Qualifications and Experience
- Education: Bachelor’s degree in computer science, data engineering, information systems, or a related field (or equivalent work experience). A master’s degree is a plus.
- Cloud Data Platform Expertise: 5+ years of experience managing and operating cloud-based data platforms, preferably on Google Cloud Platform (BigQuery, Cloud Storage, etc.), in enterprise environments with multiple internal and external stakeholders. Experience with other data warehouse ecosystems such as Snowflake, Amazon Redshift, or Databricks is highly valued, with an expectation of willingness to pivot and learn GCP tools.
- Data Modeling & Architecture: Proven experience designing and implementing data models that support continuous research, dashboarding, reporting, and advanced analytics. Ability to optimize data workflows and performance for large-scale datasets.
- Programming & Scripting: Strong proficiency in SQL for data manipulation and query optimization. Experience with Python (or similar languages) for data engineering tasks and script automation (e.g., using Pandas, Apache Beam/Dataflow).
- ETL/ELT & Integration: Hands-on experience with ETL/ELT pipeline development and cloud-based data integration processes. Experience integrating third-party data from external APIs and data marketplaces to enrich internal datasets.
- CI/CD: Familiarity with tools such as Terraform, Cloud Build, GitHub Actions, or Jenkins for infrastructure provisioning and deployment automation.
- Data Governance & Security: Demonstrated expertise managing security, permissions, and controls for a large organization. Knowledge of key U.S. data privacy regulations (e.g., FERPA, CCPA) and cloud compliance frameworks (e.g., SOC 2, ISO 27001) for data handling in education and labor contexts is essential.
- Version Control: Proficient with Git for version control and collaborative development.
- Analytical Tools: Familiarity with business intelligence and data visualization tools such as Tableau (preferred) or Power BI, and exposure to building simple analytics applications or dashboards to support end users.
- Domain Experience: Experience working with state-level workforce, education, and/or economic datasets is a strong plus. An understanding of labor market data or higher education data conventions will help in contextualizing and validating data.
- Soft Skills: Excellent problem-solving and troubleshooting skills in a data-centric environment. Strong communication and collaboration abilities, with experience working in cross-functional teams and explaining technical concepts to non-technical stakeholders. Experience with agile project management methodologies for data infrastructure projects is beneficial.
- Desired Certifications: Current Google Cloud Professional Data Engineer and/or Cloud Architect certification.

Our benefits
- Comprehensive medical benefits
- Competitive pay
- 401(k) retirement plan
- …and much more!

About INSPYR Solutions
Technology is our focus and quality is our commitment. As a national expert in delivering flexible technology and talent solutions, we strategically align industry and technical expertise with our clients’ business objectives and cultural needs. Our solutions are tailored to each client and include a wide variety of professional services, project, and talent solutions. By always striving for excellence and focusing on the human aspect of our business, we work seamlessly with our talent and clients to match the right solutions to the right opportunities. Learn more about us at inspyrsolutions.com.

INSPYR Solutions provides Equal Employment Opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, INSPYR Solutions complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.

Information collected and processed through your application with INSPYR Solutions (including any job applications you choose to submit) is subject to INSPYR Solutions’ Privacy Policy and INSPYR Solutions’ AI and Automated Employment Decision Tool Policy.
By submitting an application, you are consenting to being contacted by INSPYR Solutions through phone, email, or text.