Fabric enables retailers and brands to profitably scale their online business with fast fulfillment and a new kind of delivery experience. By leveraging innovative software and robotics and placing flexible micro-fulfillment centers close to where customers are, Fabric helps businesses meet delivery standards as fast as one hour. Better yet, with Fabric’s powerful technology, businesses can deliver an engaging, branded experience that helps strengthen their customer relationships.
Founded in 2015, Fabric has raised $338 million to date and is backed by Aleph, Corner Ventures, Canada Pension Plan Investment Board (CPPIB), Evolv (Kraft Heinz), Innovation Endeavors, La Maison, Playground Ventures, and Temasek. With offices in New York City, Atlanta and Tel Aviv, Fabric is constantly growing with over 200 team members globally and 20 sites under development/contract, including four live micro-fulfillment centers.
Fabric continues its rapid expansion, rolling out operations in key urban locations as it pursues its mission: bringing brands and online shoppers closer together.
Senior Data Engineer
As a Senior Data Engineer, you will design and implement solutions for the company’s data infrastructure, including the data lake, data pipelines, and client libraries.
You will write code and leverage managed solutions, open-source tools, and industry best practices to ensure our data is reliable, standardized, and reusable.
You will work closely with researchers, product managers, and engineers to make sure the data meets business needs, while improving the scalability and robustness of the system.
Responsibilities:
- Make data standardized and reusable, from architecture to production
- Keep our data consistent, reliable, and reproducible
- Manage data pipelines and ETLs that collect and transform data to support ML models, analysis, and reporting
- Build the tools and methodologies to support the scale of our system while increasing development velocity
Requirements:
- 5+ years of experience in software development
- Experience tackling similar challenges in cloud environments (AWS, Azure, or GCP)
- Deep understanding of different database types: relational, document, graph, etc.
- Proficiency in Python, Java, or other high-level languages
- Passion for solving high-scale problems
- Experience building and maintaining data pipelines and ETLs