Intelliswift
Machine Learning/Artificial Intelligence Engineer
Intelliswift, Pleasanton, California, United States, 94566
Additional Note: The successful candidate will need to relocate to the Bay Area prior to the remote start date.
Must Haves
Machine Learning, Deep Learning, NLP
TensorFlow, SciPy, PyTorch, OpenCV, OCR
NumPy, scikit-learn, Pandas
Python, Spark, Hive, SQL
Experience productionizing models
Technical Knowledge and Skills
Consultant resources shall possess most of the following technical knowledge and experience:
Provide technical leadership, develop vision, gather requirements, and translate client user requirements into technical architecture.
Strong background in statistical modeling, NLP, and machine learning.
Expertise in various facets of ML and NLP, such as classification, feature engineering, information extraction, clustering, semi-supervised learning, topic modeling, and ranking.
Strong hands-on experience building, deploying, and productionizing ML models using software such as Spark MLlib, TensorFlow, PyTorch, and Python scikit-learn is mandatory.
Ability to evaluate and choose the best-suited ML algorithms, perform feature engineering, and optimize machine learning models is mandatory.
Strong fundamentals in algorithms, data structures, statistics, predictive modeling, and distributed systems are a must.
Strong experience with data science notebooks and IDEs such as RStudio, Jupyter, Zeppelin, and PyCharm.
Design and implement an integrated Big Data platform and analytics solution.
Design and implement data collectors to collect and transport data to the Big Data platform.
Good to have but not mandatory: 4+ years of hands-on development, deployment, and production support experience in a Hadoop environment.
4-5 years of programming experience in Java, Scala, or Python.
Proficient in SQL, relational database design, and methods for data retrieval.
Good to have but not mandatory: experience building data pipelines using Hadoop components such as Sqoop, Hive, Spark, Spark SQL, and HBase.
Good to have but not mandatory: experience developing HiveQL and UDFs for analyzing semi-structured and structured datasets.
Good to have but not mandatory: experience ingesting and processing various file formats such as Avro, Parquet, Sequence Files, and text files.
Hands-on experience with real-time analytics tools such as Spark, Kafka, or Storm.
Must have working experience with data warehousing and Business Intelligence systems.
Expertise in Unix/Linux environments, including writing scripts and scheduling/executing jobs.
Successful track record of building automation scripts/code using Java, Bash, Python, etc., and experience with the production support issue resolution process.