About Etched
Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep chain-of-thought reasoning.
Software, LLM Compilation
Software sells chips. Etched ASICs are no exception. While our first chip, Sohu, only runs transformer models, we still need production-grade software to map existing LLMs onto it.
You will help make this happen. You will write optimized kernels for the operations that make up a transformer, like attention, normalization, and model parallelism, and package them into components that developers can use (e.g. in the way that vLLM has its fused MergedColumnParallelLinear component). You will work with the hardware team to debug issues that hurt performance.
You will work with the software team to build integrations with existing libraries like vLLM and HuggingFace Transformers, so that our software can be drop-in compatible. You will not build a PyTorch compiler stack; instead, we will build a small set of highly optimized fused kernels that can be used to implement transformer models.
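For a concrete flavor of what such a component might look like, here is a minimal sketch in plain PyTorch of the gate/up-projection merge that vLLM's MergedColumnParallelLinear performs for SwiGLU MLPs. The class name and interface are hypothetical illustrations and do not reflect Etched's actual API or kernels.

# Illustrative sketch only: a fused "merged" projection in plain PyTorch,
# showing the kind of building block this role would produce. All names here
# are hypothetical; Etched's components and kernels are not public.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MergedGateUpProjection(nn.Module):
    """Fuses the gate and up projections of a SwiGLU MLP into one matmul,
    the same idea behind vLLM's MergedColumnParallelLinear."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        # One weight holds both projections, so a single pass covers what
        # would otherwise be two separate matmuls.
        self.weight = nn.Parameter(torch.empty(2 * intermediate_size, hidden_size))
        nn.init.normal_(self.weight, std=0.02)
        self.intermediate_size = intermediate_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        merged = F.linear(x, self.weight)                 # [..., 2 * intermediate]
        gate, up = merged.split(self.intermediate_size, dim=-1)
        return F.silu(gate) * up                          # SwiGLU activation

if __name__ == "__main__":
    layer = MergedGateUpProjection(hidden_size=64, intermediate_size=128)
    out = layer(torch.randn(2, 16, 64))
    print(out.shape)  # torch.Size([2, 16, 128])

Merging the two projections trades two smaller matmuls for one larger one, which is generally friendlier to both GPU kernels and fixed-function pipelines.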
Representative projects:
Write an optimized kernel to compute a new attention variant on our hardware
Implement HuggingFace’s CohereForCausalLM class using Etched’s transformer building blocks
Implement a synchronization mechanism to coordinate between the host CPU and Etched accelerator
Implement FP8 quantization for FP16 models using the same mechanism as TransformerEngine (see the sketch after this list)
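For the FP8 project above, the standard pattern is to derive a per-tensor scale from the absolute maximum, cast the scaled FP16 weights to an 8-bit float format, and keep the scale for dequantization. The sketch below shows that pattern in plain PyTorch (PyTorch 2.1+ for the float8 dtype); TransformerEngine's actual mechanism adds delayed scaling with an amax history, and none of this reflects Etched's implementation.

# Minimal, illustrative per-tensor FP8 (E4M3) quantization of FP16 weights.
# Not Etched's or TransformerEngine's code; for illustration only.
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_fp8(w_fp16: torch.Tensor):
    """Scale an FP16 tensor into FP8 range, cast, and return (fp8, scale)."""
    amax = w_fp16.abs().max().float().clamp(min=1e-12)
    scale = FP8_E4M3_MAX / amax                  # maps the observed range onto FP8
    w_fp8 = (w_fp16.float() * scale).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize_fp8(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate FP16 tensor from the FP8 payload and its scale."""
    return (w_fp8.float() / scale).half()

if __name__ == "__main__":
    w = torch.randn(1024, 1024, dtype=torch.float16)
    w_fp8, scale = quantize_fp8(w)
    err = (dequantize_fp8(w_fp8, scale) - w).abs().max()
    print(f"max abs round-trip error: {err.item():.4f}")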
You may be a good fit if you:
Have 3+ years of software engineering experience
Have experience working with machine learning operators
Are comfortable doing low-level embedded programming
Pick up slack, even if it goes outside your job description
Are results-oriented and biased towards shipping products
Want to learn more about machine learning research
Strong candidates may also have experience with:
Transformer optimizations, such as FlashAttention
Ongoing research in machine learning
How we’re different:
Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.
We are a fully in-person team in Cupertino, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.
Benefits:
Full medical, dental, and vision packages, with 100% of the premium covered for employees and 90% for dependents
Housing subsidy of $2,000/month for those living within walking distance of the office
Daily lunch and dinner in our office
Relocation support for those moving to Cupertino