Virginia Staffing
Sr. Data Engineer (Technical) Hybrid
Virginia Staffing, Washington, District of Columbia, us, 20022
Job Posting: Sr. Data Engineer
Job Summary:
Title: Sr. Data Engineer (Technical) Hybrid
Location: Arlington, VA, USA
Length and Terms: Long term - W2 Only
Position Created On: 09/04/2025 05:49 pm

Job Description:
Webcam interview; long-term project; hybrid (3 days per week onsite in VA); W2 only.

Mandatory Skills:
High proficiency in Python or Scala, Spark, Hadoop platforms and tools (Hive, Airflow, NiFi, Sqoop), and SQL to build Big Data products and platforms.

Required Skills (8+ years minimum):
- Ability to move easily between business, data management, and technical teams; ability to quickly grasp the business use case and identify technical solutions to enable it.
- Experience building and deploying production data-driven applications and data processing workflows/pipelines.
- High proficiency in Python or Scala, Spark, Hadoop platforms and tools (Hive, Airflow, NiFi, Sqoop), and SQL to build Big Data products and platforms.
- Experience building robust, efficient end-to-end data pipelines with a strong focus on data quality.
- Experience implementing machine learning systems at scale in Java, Scala, or Python and delivering analytics across all phases: data ingestion, feature engineering, modeling, tuning, evaluation, monitoring, and presentation.
- Cloud knowledge (Databricks or AWS ecosystem) is a plus but not required.