Abbott
Overview
Staff Data Engineer
Abbott is a global healthcare leader that helps people live more fully at all stages of life. Our portfolio spans diagnostics, medical devices, nutritionals, and branded generic medicines. Our 114,000 colleagues serve people in more than 160 countries. We’re focused on helping people with diabetes manage their health with life-changing products that provide accurate data to drive better-informed decisions, and we’re revolutionizing the way people monitor their glucose levels with our new sensing technology.
This Staff Data Engineer position can work remotely within the U.S.
What You’ll Work On
You will build big data collection and analytics capabilities to uncover customer, product and operational insights. You will lead cloud-based big data engineering efforts, including data wrangling, analysis, and pipeline development. You will help define and implement the organization’s Big Data strategy, working closely with data engineers, analysts, and scientists to solve complex business problems using data science and machine learning. You will work in a technology-driven environment utilizing tools such as Databricks, Redshift, S3, Lambda, DynamoDB, Spark and Python.
Responsibilities
Design and implement data pipelines that enable data to be processed and visualized across a variety of projects and initiatives
Develop and maintain optimal data pipeline architecture by designing and implementing data ingestion solutions on AWS using AWS native services
Design and optimize data models on AWS Cloud using Databricks and AWS data stores such as Redshift, RDS, S3
Integrate and assemble large, complex data sets that meet a broad range of business requirements
Read, extract, transform, stage and load data to selected tools and frameworks as required
Customize and manage integration tools, databases, warehouses, and analytical systems
Process unstructured data into a form suitable for analysis and assist in analysis of the processed data
Work directly with technology and engineering teams to integrate data processing and business objectives
Monitor and optimize data performance, uptime, and scale; maintain high standards of code quality and thoughtful design
Create software architecture and design documentation for supported solutions and best practices
Support the team with technical planning, design, and code reviews including peer reviews
Provide architecture and technical knowledge training and support for solution groups
Develop good working relationships with other solution teams and groups (Engineering, Marketing, Product, Test, QA)
Stay current with emerging trends and make recommendations to help the organization innovate
Plan complex projects from scope/timeline development through technical design and execution
Demonstrate leadership through mentoring other team members
Required Qualifications
Bachelor's degree in Computer Science, Information Technology, or another relevant field
5 to 10 years of recent experience in Software Engineering, Data Engineering, or Big Data
Ability to work effectively within a team in a fast-paced, changing environment
Knowledge of or direct experience with Databricks and/or Spark
Software development experience, ideally with Python, PySpark, Kafka, or Go, and a willingness to learn new languages
Knowledge of strategies for processing large amounts of structured and unstructured data
Knowledge of data cleaning, wrangling, visualization and reporting
Ability to explore new alternatives to solve data mining issues using best practices and data innovations
Familiarity with databases, BI applications, data quality and performance tuning
Excellent written, verbal and listening communication skills
Comfortable working asynchronously with a distributed team
Preferred Qualifications
Knowledge of or direct experience with AWS services such as S3, RDS, Redshift, DynamoDB, EMR, Glue, and Lambda
Experience working in an agile environment
Practical knowledge of Linux
Learn more about health and wellness benefits: www.abbottbenefits.com
Abbott is an Equal Opportunity Employer, committed to employee diversity.
Connect with us at www.abbott.com, on Facebook at www.facebook.com/Abbott and on Twitter @AbbottNews and @AbbottGlobal
The base pay for this position is $97,300.00 – $194,700.00. In some locations, the pay range may vary from the posted range.