Our client, an innovative start-up in the IT sector, is seeking a Data Engineer to join their team. This permanent position offers the chance to shape critical data infrastructure for legal research, utilising the latest advancements in generative AI.
Key Responsibilities:
- Architect and manage data infrastructure for a legal research platform
- Develop custom API integrations with law firm document management systems such as iManage, SharePoint, and NetDocuments
- Integrate and manage new legal data sources by collaborating with legal experts
- Design monitoring systems to ensure data freshness, quality, and pipeline health
- Scale and optimise data pipelines as new sources are added and ingestion volume increases
Job Requirements:
- Experience building and scaling production data pipelines
- Strong Python backend development skills with a focus on clean, maintainable code
- Expertise in deploying and managing services on cloud platforms like Azure or AWS
- Hands-on experience with orchestration tools such as Dagster, Airflow, Modal, or Prefect
- Proficiency in database technologies like PostgreSQL, Vespa, or MongoDB
- Experience with infrastructure technologies such as Kubernetes and Docker
Nice-to-Haves:
- Experience with integrating enterprise document management systems
- Knowledge of building AI-powered data pipelines, including embeddings and vector search
- Background in regulated industries where data accuracy and audit trails are crucial
Benefits:
- Competitive salary
- Stock options
- 22 days of paid vacation plus your birthday off
- Flexible working location and schedule
- Health insurance and equipment budget
- Opportunity to own and shape the data architecture at a fast-moving start-up
If you are a skilled Data Engineer looking to make a significant impact in a pioneering start-up environment, apply now to join our client's growing team.