TALENT Software Services
AI Engineer - Prompt (Redmond)
TALENT Software Services, Redmond, Washington, United States, 98052
Are you an experienced AI Engineer - Prompt with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced AI Engineer - Prompt to work at their company in Redmond, WA.
Position Summary:
The main function of an AI Engineer - Prompt is to design and craft conversational prompts, messages, and responses for AI chatbots, virtual assistants, or customer service applications. The role revolves around creating prompts that align with organizational objectives, provide a seamless user experience, and ensure effective interactions between users and AI systems. This entails developing a library of pre-defined prompts tailored to various scenarios, industries, and user needs.
Primary Responsibilities/Accountabilities:
- Fine-tune and improve a variety of sophisticated software implementation projects
- Gather and analyze system requirements, document specifications, and develop software solutions to meet client needs
- Analyze and review enhancement requests and specifications
- Implement system software and customize it to client requirements
- Prepare detailed software specifications and test plans
- Code new programs to the client's specifications and create test data for testing
- Modify existing programs to new standards and conduct unit testing of developed programs
- Create migration packages for system testing, user testing, and implementation
- Provide quality assurance reviews
- Perform post-implementation validation of software and resolve any bugs found during testing
- Support evaluation ("Eval") for Office services, with a primary focus on evaluating Suggested User Actions (SUAs) across Word, Excel, PowerPoint, and other supported host applications

Examples of work:
- Set up synthetic tenant data and data ingestion; perform testing across starter, consumer, and premium accounts; generate grounding data; and create configurations as code
- Maintain, validate, and automate the creation of test datasets for an LLM evaluation system
- Integrate evaluation quality checks into the build and deployment pipeline, ensuring performance, efficiency, and scalability
- Perform a mixture of hands-on and hands-off validation, leveraging toolsets such as Seval

The role will consist of creating evaluation test sets, running them, inspecting results, and iterating with partner teams; maintaining and validating test datasets as configuration-as-code; and building pipelines to report results and automate pieces of the evaluation process.

Purpose of the Team: This team builds and operates the middle services that power AI experiences in Word, Excel, and PowerPoint, with a focus on prompt evaluation and related automation for those experiences.
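To make the evaluation workflow above concrete, here is a minimal sketch of a test set maintained as configuration-as-code, validated, run against a model, and scored. Everything in it is illustrative: the dataset schema, the `run_eval` harness, and the `fake_model` stand-in are assumptions for this sketch, not the team's actual tooling (which, per the posting, includes internal toolsets such as Seval).

```python
import json

# Hypothetical test dataset maintained as configuration-as-code:
# each case pairs a prompt with substrings an acceptable answer must contain.
TEST_SET = json.loads("""
[
  {"id": "sua-001", "prompt": "Summarize this worksheet", "must_contain": ["summary"]},
  {"id": "sua-002", "prompt": "Suggest a chart for these sales figures", "must_contain": ["chart"]}
]
""")

def validate_test_set(cases):
    """Basic dataset validation: unique ids and non-empty prompts."""
    ids = [c["id"] for c in cases]
    assert len(ids) == len(set(ids)), "duplicate case ids"
    assert all(c["prompt"].strip() for c in cases), "empty prompt"
    return True

def score_response(case, response):
    """Score one model response: fraction of required substrings present."""
    hits = sum(s.lower() in response.lower() for s in case["must_contain"])
    return hits / len(case["must_contain"])

def run_eval(cases, model_fn):
    """Validate the dataset, run every case through a model callable, and report per-case scores."""
    validate_test_set(cases)
    return {c["id"]: score_response(c, model_fn(c["prompt"])) for c in cases}

# Stand-in for a real LLM call, used only to make the sketch runnable.
def fake_model(prompt):
    return "Here is a summary and a suggested chart."

results = run_eval(TEST_SET, fake_model)
```

In a pipeline, `run_eval` would be wired into build and deployment checks so that a regression in scores fails the build, matching the "integrate evaluation quality checks" responsibility above.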
Key projects: This role will contribute to AI within Office apps, including evaluations for prompts, manual/automated testing, test environment setup, and coding activities needed to support these evaluations.
Qualifications:
- Bachelor's degree in a technical field, such as computer science, computer engineering, or a related field (required)
- 2-4 years of experience (required)
- A solid foundation in computer science, with strong competencies in data structures, algorithms, and software design
- Experience designing and developing large software systems
- Experience performing in-depth troubleshooting and unit testing on both new and legacy production systems
- Programming experience, including problem diagnosis and resolution
- Prior LLM evaluation experience plus a data science/experimentation background is ideal; Python experience is a strong plus
- Experience with prompt engineering, LLM prompt evaluation, synthetic data, and coding/scripting
- Ability to set up synthetic tenant data and data ingestion (test accounts), generate grounding data, and manage configuration as code
- Ability to maintain, validate, and automate the creation of test datasets for an LLM evaluation system
- Ability to integrate evaluation quality checks into the build and deployment pipeline, ensuring performance and scalability